US20240046654A1 - Image fusion for autonomous vehicle operation - Google Patents
Image fusion for autonomous vehicle operation
- Publication number
- US20240046654A1 (U.S. patent application Ser. No. 18/489,306)
- Authority
- US
- United States
- Prior art keywords
- image
- vehicle
- bounding box
- images
- common
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
- G06F18/256—Fusion techniques of classification results, e.g. of results related to same input data of results relating to different input data, e.g. multimodal recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/803—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
- G06V10/811—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/303—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using joined images, e.g. multiple camera images
-
- G05D2201/0213—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/259—Fusion by voting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Definitions
- This document generally relates to image processing to improve autonomous vehicular driving.
- Autonomous vehicle navigation is a technology for sensing the position and movement of a vehicle and, based on the sensing, autonomously controlling the vehicle to navigate towards a destination.
- Autonomous vehicle navigation can have important applications in transportation of people, goods and services.
- One of the components of autonomous driving, which ensures the safety of the vehicle and its passengers, as well as people and property in the vicinity of the vehicle, is the use of multiple cameras and the real-time responsiveness of the driving algorithms for safety and maneuvering.
- the disclosed technology can be used to provide a method for improving perception in an autonomous vehicle.
- This method includes receiving a plurality of cropped images, wherein each of the plurality of cropped images comprises one or more bounding boxes that correspond to one or more objects in a corresponding cropped image; identifying, based on the metadata in the plurality of cropped images, a first bounding box in a first cropped image and a second bounding box in a second cropped image, wherein the first and second bounding boxes correspond to a common object; and fusing the metadata corresponding to the common object from the first cropped image and the second cropped image to generate an output result for the common object.
- the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
- a device that is configured or operable to perform the above-described method.
- the device may include a processor that is programmed to implement this method.
- FIG. 1 shows a block diagram of an exemplary long-distance perception system to perform image processing on images obtained from multiple cameras of an autonomous vehicle.
- FIGS. 2 A and 2 B show an example of the pre-processing that generates inputs for the fusion of different scenes of real-time image feeds.
- FIG. 3 shows an example of the workflow for the fusion of different scenes of real-time image feeds.
- FIGS. 4 A and 4 B show an example of fusing two cropped images based on the detected characteristics of an object.
- FIG. 5 shows another example of fusing two cropped images based on the detected characteristics of an object.
- FIG. 6 shows a flowchart of an example method for improving perception in an autonomous vehicle, in accordance with embodiments of the disclosed technology.
- FIG. 7 shows an example of a hardware platform that can implement some techniques described in the present document.
- Current implementations are in intermediate stages, such as the partially-autonomous operation in some vehicles (e.g., autonomous acceleration and navigation, but with the requirement of a present and attentive driver), the safety-protecting operation of some vehicles (e.g., maintaining a safe following distance and automatic braking), the safety-protecting warnings of some vehicles (e.g., blind-spot indicators in side-view mirrors and proximity sensors), as well as ease-of-use operations (e.g., autonomous parallel parking).
- some vehicles e.g., autonomous acceleration and navigation, but with the requirement of a present and attentive driver
- the safety-protecting operation of some vehicles e.g., maintaining a safe following distance and automatic braking
- the safety-protecting warnings of some vehicles e.g., blind-spot indicators in side-view mirrors and proximity sensors
- ease-of-use operations e.g., autonomous parallel parking.
- Level 4 which is characterized by the vehicle operating without human input or oversight but only under select conditions defined by factors such as road type or geographic area
- Level 5 which is characterized as a driverless car that can operate on any road and in any environment a human driver could negotiate.
- the differing levels of autonomy are typically supported by sensors or cameras that provide data or images of one or more areas surrounding the autonomous vehicle.
- a computer located in a conventional autonomous vehicle performs image processing to determine the presence or absence of objects (e.g., vehicles or pedestrians) within a limited range from the location of the autonomous vehicle. For example, using conventional techniques, a computer in an autonomous vehicle can perceive objects within a 300-meter distance from the location of the autonomous vehicle. However, a limited range of perception may not be sufficient if the autonomous vehicle is a semi-trailer truck. An autonomous semi-trailer truck is designed to drive safely on the road. However, in some cases, a limited range of perception (e.g., up to 300 meters) may not be sufficient to proactively detect an object on the road early enough for the autonomous semi-trailer truck to safely stop before colliding with, or to safely maneuver around, that object.
- objects e.g., vehicles or pedestrians
- FIG. 1 shows a block diagram of an exemplary long-distance perception system 100 to perform image processing on images obtained from one or more cameras 102 in or on an autonomous vehicle 101 , such as an autonomous semi-trailer truck.
- the exemplary image processing techniques described in this patent document can be used to obtain an accurate three-dimensional (3D) position of objects located up to 1000 meters from the location of the autonomous vehicle 101.
- the exemplary image processing techniques can also be used to track and build motion models for each object perceived.
- the exemplary long-distance perception system 100 can be used to enhance safety of an autonomous vehicle 101 driven on the road.
- the long-distance perception system 100 includes one or more cameras 102 installed on or in an autonomous vehicle 101 .
- Each camera 102 can generate high-resolution images in real-time while the autonomous vehicle 101 is in operation, such as driving on the road or stopping at a stop sign.
- the term image can include an image frame from a video feed of a camera 102 .
- the resolution of an image frame from the one or more cameras 102 can be, for example, 1024 ⁇ 576 pixels.
- the one or more cameras 102 can obtain images at a speed of, for example, 20 frames per second (FPS).
- FPS frames per second
- FIG. 1 shows several modules and a database that can perform image processing based on the images received from the one or more cameras 102 .
- the features or operations of the modules 104 , 108 , 110 , 112 and terrain map database 106 are performed by an onboard computer 114 located in an autonomous vehicle 101 .
- the onboard computer 114 located in the autonomous vehicle 101 includes at least one processor and a memory having instructions stored thereupon. The instructions upon execution by the processor configure the onboard computer 114 to perform the operations associated with the modules and/or database as described in this patent document.
- the terrain map database 106 may be stored in the onboard computer 114 and provides coordinates of various points in the spatial region (e.g., road surface or mountain elevation) where or around which the autonomous vehicle 101 is being driven or is located.
- the terrain map database 106 stores the terrain information that can be represented in 3D space or 3D world coordinates, where the coordinate information characterizes various points in the spatial region that surrounds the autonomous vehicle 101 .
- a terrain map database 106 can include 3D world coordinates for one or more points of a road surface on which the autonomous vehicle 101 is being driven.
- a terrain map database 106 can include 3D world coordinates for one or more points in a spatial region towards which or within which the autonomous vehicle 101 is being driven.
- the onboard computer can perform image processing to perceive objects (e.g., vehicles, pedestrians, obstacles) from information provided by sensors such as cameras.
- the picture-in-picture (PIP) module 104 can process the images obtained from the camera(s) 102 to improve perception of objects that can be located far from the location of the autonomous vehicle 101 .
- the images obtained from each camera 102 are sent to a PIP module 104.
- the PIP module 104 obtains an original image from a camera to select and crop one or more regions of interest in the image.
- the PIP module 104 sends the cropped region(s) of interest in the image to one or more downstream modules as shown in FIG. 1 , such as the detection module 110 .
- the PIP module 104 can select and crop one or more regions of interest in an image obtained from a camera 102 .
- the region(s) selected by the PIP module 104 may include area(s) located in front of the autonomous vehicle (e.g., road, highway ramp, or intersection).
- the selected area(s) are either past a pre-determined distance in front of the location of the autonomous vehicle (e.g., past a distance of 500 meters in front) or are within a range of pre-determined distances in front of the location of the autonomous vehicle (e.g., between 500 meters to 1000 meters in front).
- the PIP module 104 may select and crop region(s) of interest in one of several ways.
- the PIP module 104 can obtain information about a road in front of the autonomous vehicle to select its region(s) of interest. For example, if the PIP module 104 determines that the road is straight (e.g., by identifying the curvature or shape of the lane markers), then the PIP module 104 can select and crop a center region of the original image that includes a region of the road or highway ramp or intersection, where the center region has a pre-determined pixel resolution.
- the PIP module 104 can obtain coordinate information of points on the road from a terrain map database 106 so that whether the road is curved or straight the PIP module 104 can select and crop one or more regions of interest that include region(s) of the road or highway ramp or intersection.
- the selected and cropped region(s) are located (i) in front of and either past, or (ii) within a range of pre-determined distances in front of the location of the autonomous vehicle.
- the long-distance perception system includes a canvas algorithm that fuses the vehicle detection outputs (from, for example, detection module 110 in FIG. 1 ) from different cameras or different crops of a single image, forming vehicle detection results on a unified virtual focal plane, which simplifies the processing of downstream modules.
- the vehicle detection algorithm outputs a bounding box, feature points, contour, lighting signals, a vehicle classification and/or a distance estimation for each visible vehicle in the photo.
- Sensors for autonomous driving usually include cameras of different focal lengths pointing in the same direction (e.g., camera(s) 102 in FIG. 1).
- the vehicle detection algorithm will run on each camera photo separately.
- Each camera photo may have different regions of interest, so the vehicle detection algorithm may run on different crops of the original images separately. Therefore, there could be many separate vehicle detection results, and processing their interrelations could be burdensome for downstream systems.
- the captured images will satisfy some simple geometric transformations (e.g., projective transformations).
- the crops of an original image also satisfy the same family of geometric transformations. Therefore, using those geometric transformations, the vehicle detection outputs of different crops from different cameras may be fused, and unified vehicle detection results may be composed on a selected virtual focal plane.
- the visibility of each vehicle attribute in each output is inferred first, and then the unified vehicle attributes are predicted based on visibility, with conflicts properly flagged. Additional examples, which further elucidate embodiments of the disclosed technology, are discussed in the next section.
- FIGS. 2 A and 2 B show an example of the pre-processing that generates inputs for the fusion of different scenes of real-time image feeds.
- a vehicle 200 may include at least a first camera 202 - 1 and a second camera 202 - 2 .
- the first camera 202-1 may capture one or more images from which a first cropped image 221 and a second cropped image 223 may be generated.
- the second camera may capture an image from which a cropped image 227 is generated.
- the first and second cameras may be pointing in substantially the same direction, which may result in the cropped images ( 221 , 223 and 227 ) including a common object, whose information can then be fused.
- the second cropped image 223 may be an inset picture (and thus a subset) of the first cropped image 221 , and will include a common object.
- the output of the fusion of these cropped images will retain the information in the images, but will typically be of a smaller byte size than the sum of the byte sizes of the two cropped images.
- FIG. 3 shows an example of the workflow for the fusion of different scenes of real-time image feeds.
- multiple cameras e.g., 302 - 1 , 302 - 2 and 302 - 3
- forward-facing camera #1 ( 302 - 1 ) captures images that result in cropped images 321 and 323
- forward-facing camera #2 ( 302 - 2 ) corresponds to another cropped image 327 .
- a rear-facing camera #3 (302-3) captures images that are cropped to generate a cropped image 329.
- the cropped images (e.g., 321 , 323 , 327 and 329 ) are processed by a detection module (e.g., detection module 110 in FIG. 1 ), which results in outputs (e.g., 331 , 333 , 337 and 339 , respectively) that include the bounding boxes being used to identify objects in each of the cropped images.
- the objects that are detected may include vehicles (e.g., cars, trucks, motorcycles, etc.), pedestrians, and structures adjacent to roadways (e.g., signposts, fire hydrants, etc.).
- the bounding boxes in a cropped image may be associated with metadata that provides additional information corresponding to the objects detected in the cropped image.
- the metadata may include at least one of 2D or 3D detection results, a vehicle-type classification, a vehicle re-identification (e.g., a feature vector that includes the make, model and/or color of the vehicle), taillight signal detection results and a vehicle segmentation mask (e.g., detailed contours of the vehicle).
- the metadata of the bounding boxes may be used to fuse the detection results for objects that are common amongst the cropped images.
- Embodiments of the disclosed technology provide methods for intra-camera fusion (e.g., fuse results from cropped images generated from images captured by a single camera) and inter-camera fusion (e.g., fuse results from cropped images generated from images captured from two or more cameras that are pointing in substantially the same direction).
- Intra-camera fusion fuses results from cropped images generated from images captured by a single camera, and includes the steps of:
- Inter-camera fusion fuses results from cropped images generated from images captured from two or more cameras facing the same direction (e.g., forward-facing cameras #1 and #2 in FIG. 3 ), and includes the steps of:
- both intra-camera and inter-camera fusion compensate for the different fields-of-view in each of the cropped images, as well as the lack of consistent visibility in any set of cropped images. For example, only a portion of an object may be visible in one cropped image, and a different portion may be visible in another cropped image. Embodiments of the disclosed technology are advantageously able to integrate this information, thereby reducing any redundancy and ensuring real-time autonomous operation of the vehicle.
- different types of detection results which are available in the metadata of the bounding boxes, may be fused.
- the different types of detection results include:
- FIGS. 4 A and 4 B show an example of fusing two cropped images based on the detected characteristics of an object.
- a first cropped image 421 may include the front portion of a vehicle
- a second cropped image 423 may include the rear portion of the vehicle.
- Bounding boxes in the first and second cropped images are matched, as shown in FIG. 4 B , based on, for example, (i) the re-identification vector in the metadata of the bounding boxes, which may include the make, model and color of the detected vehicle, and (ii) the segmentation mask, whose contours can be aligned. Having identified the common vehicle in the two cropped images, the results may be fused to generate the result shown in FIG. 4 B .
- FIG. 5 shows another example of fusing two cropped images based on the detected characteristics of an object.
- a first cropped image 521 includes the rear of a vehicle in which both taillights of the vehicle are visible, whereas the second cropped image 523 includes a portion of the rear of the vehicle, in which only one of the taillights of the vehicle is visible.
- the taillight signal detection results may be as follows:
- the majority vote that is performed as part of the fusion process results in a ⁇ Left: red lighted, Right: red lighted ⁇ output result for the taillight signal detection.
- This example illustrates the efficacy of the disclosed technology in that the information available in both the input cropped images is preserved in the output result, but the amount of data in the output result is less than that of the input.
- Embodiments of the disclosed technology are able to reduce the redundant information in multiple cropped images, thereby reducing the amount of information that needs to be processed by downstream modules, and advantageously improving autonomous vehicle operation.
- FIG. 6 shows a flowchart for an example method 600 for improving perception in an autonomous vehicle.
- the method 600 includes, at step 610 , receiving a plurality of cropped images, wherein each of the plurality of cropped images comprises one or more bounding boxes that correspond to one or more objects in a corresponding cropped image.
- each of the one or more bounding boxes comprises metadata associated with a detection of the one or more objects.
- the method 600 includes, at step 620 , identifying, based on the metadata in the plurality of cropped images, a first bounding box in a first cropped image and a second bounding box in a second cropped image, the first and second bounding boxes corresponding to a common object.
- the method 600 includes, at step 630 , fusing the metadata corresponding to the common object from the first cropped image and the second cropped image to generate an output result for the common object.
- the common object is a vehicle
- the metadata of a bounding box comprises at least one of a vehicle feature vector, a taillight signal detection result or a vehicle segmentation mask corresponding to the vehicle detected in the first or the second bounding box.
- the metadata further comprises at least one of a camera pose, a focal length, a shutter speed or a field-of-view associated with a camera of the plurality of cameras that was a source for the cropped image.
- the vehicle feature vector comprises a color of the vehicle or a make of the vehicle.
- the vehicle segmentation mask comprises one or more contours of the vehicle.
- the plurality of cropped images is generated from one or more images captured by exactly one of the plurality of cameras.
- the plurality of cropped images is generated from one or more images captured by two or more of the plurality of cameras facing towards a substantially similar direction.
- the common object is a common vehicle
- the first cropped image comprises a left taillight and a right taillight of the common vehicle
- the second cropped image comprises exactly one taillight of the common vehicle
- the output result comprises a taillight signal detection that is a majority vote based on the left taillight, the right taillight and the exactly one taillight.
- the common object is a common vehicle
- the first cropped image comprises a first vehicle segmentation mask corresponding to the common vehicle
- the second cropped image comprises a second vehicle segmentation mask corresponding to the common vehicle
- the output result comprises a vehicle segmentation mask based on a convex combination of the first and second vehicle segmentation masks.
- a byte size of the output result is less than a byte size of the metadata corresponding to the common object from both the first and second cropped images.
- FIG. 7 shows an example of a hardware platform 700 that can be used to implement some of the techniques described in the present document.
- the hardware platform 700 may implement the method 600 or may implement the various modules described herein.
- the hardware platform 700 may include a processor 702 that can execute code to implement a method.
- the hardware platform 700 may include a memory 704 that may be used to store processor-executable code and/or store data.
- the hardware platform 700 may further include a communication interface 706 .
- the communication interface 706 may implement one or more of the communication protocols (LTE, Wi-Fi, and so on) described herein.
- the hardware platform may further include one or more cameras 740 , a matching module 785 and a fusion module 795 . In some embodiments, some portion or all of the matching module 785 and/or the fusion module 795 may be implemented in the processor 702 .
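- A minimal wiring sketch of such a hardware platform is shown below; the class and attribute names mirror the reference numerals above, but the structure is an assumption for illustration rather than the actual implementation.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class HardwarePlatform700:
    """Illustrative composition of the hardware platform described above (assumed layout)."""
    processor: Any = None                              # processor 702: executes code implementing the method
    memory: dict = field(default_factory=dict)         # memory 704: processor-executable code and data
    comm_interface: Any = None                         # communication interface 706 (e.g., LTE, Wi-Fi)
    cameras: List[Any] = field(default_factory=list)   # cameras 740
    matching_module: Any = None                        # matching module 785 (may run on the processor)
    fusion_module: Any = None                          # fusion module 795 (may run on the processor)

# Example instantiation; the concrete module objects would be supplied by the system integrator.
platform = HardwarePlatform700(cameras=["front_wide", "front_tele", "rear"])
print(platform.cameras)
```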
- Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus.
- the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
- data processing unit or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
- the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
- a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- a computer program does not necessarily correspond to a file in a file system.
- a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
- a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
- the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
- processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- a processor will receive instructions and data from a read only memory or a random access memory or both.
- the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
- a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
- mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
- a computer need not have such devices.
- Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices.
- semiconductor memory devices e.g., EPROM, EEPROM, and flash memory devices.
- the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Aviation & Aerospace Engineering (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
- Mechanical Engineering (AREA)
- Studio Devices (AREA)
Abstract
Devices, systems and methods for fusing scenes from real-time image feeds from on-vehicle cameras in autonomous vehicles to reduce redundancy of the information processed to enable real-time autonomous operation are described. One example of a method for improving perception in an autonomous vehicle includes receiving a plurality of cropped images, wherein each of the plurality of cropped images comprises one or more bounding boxes that correspond to one or more objects in a corresponding cropped image; identifying, based on the metadata in the plurality of cropped images, a first bounding box in a first cropped image and a second bounding box in a second cropped image, wherein the first and second bounding boxes correspond to a common object; and fusing the metadata corresponding to the common object from the first cropped image and the second cropped image to generate an output result for the common object.
Description
- This patent application is a continuation of U.S. patent application Ser. No. 16/442,182, filed on Jun. 14, 2019. The aforementioned application is incorporated herein by reference in its entirety.
- This document generally relates to image processing to improve autonomous vehicular driving.
- Autonomous vehicle navigation is a technology for sensing the position and movement of a vehicle and, based on the sensing, autonomously controlling the vehicle to navigate towards a destination. Autonomous vehicle navigation can have important applications in the transportation of people, goods and services. One of the components of autonomous driving, which ensures the safety of the vehicle and its passengers, as well as people and property in the vicinity of the vehicle, is the use of multiple cameras and the real-time responsiveness of the driving algorithms for safety and maneuvering.
- Disclosed are devices, systems and methods for fusing scenes from real-time image feeds from on-vehicle cameras in autonomous vehicles to reduce redundancy of the information processed to enable real-time autonomous operation. In one aspect, the disclosed technology can be used to provide a method for improving perception in an autonomous vehicle. This method includes receiving a plurality of cropped images, wherein each of the plurality of cropped images comprises one or more bounding boxes that correspond to one or more objects in a corresponding cropped image; identifying, based on the metadata in the plurality of cropped images, a first bounding box in a first cropped image and a second bounding box in a second cropped image, wherein the first and second bounding boxes correspond to a common object; and fusing the metadata corresponding to the common object from the first cropped image and the second cropped image to generate an output result for the common object.
- In another aspect, the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
- In yet another aspect, a device that is configured or operable to perform the above-described method is disclosed. The device may include a processor that is programmed to implement this method.
- The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.
-
FIG. 1 shows a block diagram of an exemplary long-distance perception system to perform image processing on images obtained from multiple cameras of an autonomous vehicle. -
FIGS. 2A and 2B show an example of the pre-processing that generates inputs for the fusion of different scenes of real-time image feeds. -
FIG. 3 shows an example of the workflow for the fusion of different scenes of real-time image feeds. -
FIGS. 4A and 4B show an example of fusing two cropped images based on the detected characteristics of an object. -
FIG. 5 shows another example of fusing two cropped images based on the detected characteristics of an object. -
FIG. 6 shows a flowchart of an example method for improving perception in an autonomous vehicle, in accordance with embodiments of the disclosed technology. -
FIG. 7 shows an example of a hardware platform that can implement some techniques described in the present document.
- The transportation industry has been undergoing considerable changes in the way technology is used to control the operation of vehicles. As exemplified by the automotive passenger vehicle, there has been a general advancement towards shifting more of the operational and navigational decision making away from the human driver and into on-board computing power. This trend is exemplified in the extreme by the numerous under-development autonomous vehicles. Current implementations are in intermediate stages, such as the partially-autonomous operation of some vehicles (e.g., autonomous acceleration and navigation, but with the requirement of a present and attentive driver), the safety-protecting operation of some vehicles (e.g., maintaining a safe following distance and automatic braking), the safety-protecting warnings of some vehicles (e.g., blind-spot indicators in side-view mirrors and proximity sensors), as well as ease-of-use operations (e.g., autonomous parallel parking).
- These different types of autonomous vehicles have been classified into different levels of automation under SAE International's J3016 standard, ranging from Level 0, in which the vehicle has no automation, to Level 4 (L4), which is characterized by the vehicle operating without human input or oversight but only under select conditions defined by factors such as road type or geographic area, and Level 5 (L5), which is characterized as a driverless car that can operate on any road and in any environment a human driver could negotiate.
- The differing levels of autonomy are typically supported by sensors or cameras that provide data or images of one or more areas surrounding the autonomous vehicle. A computer located in a conventional autonomous vehicle performs image processing to determine the presence or absence of objects (e.g., vehicles or pedestrians) within a limited range from the location of the autonomous vehicle. For example, using conventional techniques, a computer in an autonomous vehicle can perceive objects within a 300-meter distance from the location of the autonomous vehicle. However, a limited range of perception may not be sufficient if the autonomous vehicle is a semi-trailer truck. An autonomous semi-trailer truck is designed to drive safely on the road. However, in some cases, a limited range of perception (e.g., up to 300 meters) may not be sufficient to proactively detect an object on the road early enough for the autonomous semi-trailer truck to safely stop before colliding with, or to safely maneuver around, that object.
- Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
- Examples of a Long-Distance Perception System
-
FIG. 1 shows a block diagram of an exemplary long-distance perception system 100 to perform image processing on images obtained from one or more cameras 102 in or on an autonomous vehicle 101, such as an autonomous semi-trailer truck. The exemplary image processing techniques described in this patent document can be used to obtain an accurate three-dimensional (3D) position of objects located up to 1000 meters from the location of the autonomous vehicle 101. The exemplary image processing techniques can also be used to track and build motion models for each object perceived. Thus, the exemplary long-distance perception system 100 can be used to enhance the safety of an autonomous vehicle 101 driven on the road.
- Cameras, onboard computers and databases. In some embodiments, the long-distance perception system 100 includes one or more cameras 102 installed on or in an autonomous vehicle 101. Each camera 102 can generate high-resolution images in real-time while the autonomous vehicle 101 is in operation, such as driving on the road or stopping at a stop sign. In this patent document, the term image can include an image frame from a video feed of a camera 102. The resolution of an image frame from the one or more cameras 102 can be, for example, 1024×576 pixels. The one or more cameras 102 can obtain images at a speed of, for example, 20 frames per second (FPS).
- FIG. 1 shows several modules and a database that can perform image processing based on the images received from the one or more cameras 102. The features or operations of the modules 104, 108, 110, 112 and terrain map database 106 are performed by an onboard computer 114 located in the autonomous vehicle 101. The onboard computer 114 includes at least one processor and a memory having instructions stored thereupon; the instructions, upon execution by the processor, configure the onboard computer 114 to perform the operations associated with the modules and/or database as described in this patent document.
- In some embodiments, the terrain map database 106 may be stored in the onboard computer 114 and provides coordinates of various points in the spatial region (e.g., road surface or mountain elevation) where or around which the autonomous vehicle 101 is being driven or is located. The terrain map database 106 stores terrain information that can be represented in 3D space or 3D world coordinates, where the coordinate information characterizes various points in the spatial region that surrounds the autonomous vehicle 101. For example, a terrain map database 106 can include 3D world coordinates for one or more points of a road surface on which the autonomous vehicle 101 is being driven. In another example, a terrain map database 106 can include 3D world coordinates for one or more points in a spatial region towards which or within which the autonomous vehicle 101 is being driven.
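- The following is a minimal, illustrative sketch (not the patent's implementation) of how a terrain map of the kind described above could be queried for 3D world coordinates of road-surface points. The class names, storage layout and nearest-point query are assumptions made for clarity.

```python
from dataclasses import dataclass
import math

@dataclass
class TerrainPoint:
    x: float  # east (meters, world frame)
    y: float  # north (meters, world frame)
    z: float  # elevation (meters, world frame)

class TerrainMap:
    def __init__(self, points):
        # points: iterable of TerrainPoint samples of the road surface
        self.points = list(points)

    def nearest(self, x, y):
        """Return the stored 3D road-surface point closest to a 2D query location."""
        return min(self.points, key=lambda p: math.hypot(p.x - x, p.y - y))

# Example: elevation of the road surface roughly 700 m ahead of the vehicle.
terrain = TerrainMap(TerrainPoint(0.0, d, 0.01 * d) for d in range(0, 1001, 10))
print(terrain.nearest(0.0, 700.0).z)
```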
- Image processing by the picture-in-picture module. In some embodiments, and as shown in FIG. 1, the onboard computer 114 on an autonomous vehicle can perform image processing to perceive objects (e.g., vehicles, pedestrians, obstacles) from information provided by sensors such as cameras. The picture-in-picture (PIP) module 104 can process the images obtained from the camera(s) 102 to improve perception of objects that can be located far from the location of the autonomous vehicle 101.
- As shown in FIG. 1, the images obtained from each camera 102 are sent to a PIP module 104. As explained in this section, the PIP module 104 obtains an original image from a camera to select and crop one or more regions of interest in the image. Next, the PIP module 104 sends the cropped region(s) of interest in the image to one or more downstream modules shown in FIG. 1, such as the detection module 110.
- In an example, the PIP module 104 can select and crop one or more regions of interest in an image obtained from a camera 102. The region(s) selected by the PIP module 104 may include area(s) located in front of the autonomous vehicle (e.g., road, highway ramp, or intersection). The selected area(s) are either past a pre-determined distance in front of the location of the autonomous vehicle (e.g., past a distance of 500 meters in front) or within a range of pre-determined distances in front of the location of the autonomous vehicle (e.g., between 500 meters and 1000 meters in front).
- In another example, the PIP module 104 may select and crop region(s) of interest in one of several ways. Typically, the PIP module 104 can obtain information about a road in front of the autonomous vehicle to select its region(s) of interest. For example, if the PIP module 104 determines that the road is straight (e.g., by identifying the curvature or shape of the lane markers), then the PIP module 104 can select and crop a center region of the original image that includes a region of the road or highway ramp or intersection, where the center region has a pre-determined pixel resolution. In another example, the PIP module 104 can obtain coordinate information of points on the road from a terrain map database 106 so that, whether the road is curved or straight, the PIP module 104 can select and crop one or more regions of interest that include region(s) of the road or highway ramp or intersection. In both of these examples, the selected and cropped region(s) are located in front of the autonomous vehicle and are either (i) past a pre-determined distance or (ii) within a range of pre-determined distances from the vehicle's location.
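- Below is a brief, hedged sketch of the two region-of-interest strategies just described: a fixed center crop when the road is straight, and a crop centered on a far road point projected from the terrain map. The function names and the crop size are hypothetical, not the PIP module's actual code.

```python
import numpy as np

CROP_W, CROP_H = 512, 288  # assumed pre-determined pixel resolution of the cropped region

def center_crop(image: np.ndarray) -> np.ndarray:
    """Crop a fixed-size center region, e.g., when lane markers indicate a straight road."""
    h, w = image.shape[:2]
    x0, y0 = (w - CROP_W) // 2, (h - CROP_H) // 2
    return image[y0:y0 + CROP_H, x0:x0 + CROP_W]

def crop_around_point(image: np.ndarray, u: int, v: int) -> np.ndarray:
    """Crop around a pixel (u, v), e.g., the projection of a distant road point
    obtained from the terrain map database."""
    h, w = image.shape[:2]
    x0 = int(np.clip(u - CROP_W // 2, 0, w - CROP_W))
    y0 = int(np.clip(v - CROP_H // 2, 0, h - CROP_H))
    return image[y0:y0 + CROP_H, x0:x0 + CROP_W]

frame = np.zeros((576, 1024, 3), dtype=np.uint8)   # a 1024x576 camera frame
roi_straight = center_crop(frame)                  # straight road
roi_curved = crop_around_point(frame, 512, 200)    # crop centered on a far road point
```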
- Canvas algorithm for perception. In some embodiments, the long-distance perception system includes a canvas algorithm that fuses the vehicle detection outputs (from, for example, detection module 110 in FIG. 1) from different cameras or different crops of a single image, forming vehicle detection results on a unified virtual focal plane, which simplifies the processing of downstream modules. In an example, the vehicle detection algorithm outputs a bounding box, feature points, contour, lighting signals, a vehicle classification and/or a distance estimation for each visible vehicle in the photo.
- Sensors for autonomous driving usually include cameras of different focal lengths pointing in the same direction (e.g., camera(s) 102 in FIG. 1). In an example, the vehicle detection algorithm runs on each camera photo separately. Each camera photo may have different regions of interest, so the vehicle detection algorithm may also run on different crops of the original images separately. As a result, there can be many separate vehicle detection results, and processing their interrelations could be burdensome for downstream systems.
- However, since some of the cameras are pointing in the same general direction, are physically located close to each other, and have synchronized shutters, the captured images will satisfy some simple geometric transformations (e.g., projective transformations). The crops of an original image also satisfy the same family of geometric transformations. Therefore, using those geometric transformations, the vehicle detection outputs from different crops of different cameras may be fused, and unified vehicle detection results may be composed on a selected virtual focal plane.
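- As an illustration of this geometric-transformation idea, the sketch below composes a crop-to-image transform with a camera-to-canvas homography and maps bounding-box corners onto a unified virtual focal plane. It is a simplified, assumed implementation; the actual matrices would come from the crop geometry and camera calibration.

```python
import numpy as np

def crop_to_full(x0: float, y0: float) -> np.ndarray:
    """Homography (here a pure translation) from crop pixel coordinates to the
    original image, for a crop whose top-left corner is (x0, y0)."""
    return np.array([[1.0, 0.0, x0],
                     [0.0, 1.0, y0],
                     [0.0, 0.0, 1.0]])

def apply_h(H: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply a 3x3 projective transform to Nx2 pixel coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    out = pts @ H.T
    return out[:, :2] / out[:, 2:3]

# Example: map a bounding box detected in a crop onto the virtual focal plane.
H_crop = crop_to_full(300.0, 120.0)     # crop -> original camera image
H_cam_to_canvas = np.eye(3)             # camera image -> virtual focal plane (from calibration)
H_total = H_cam_to_canvas @ H_crop
box_corners = np.array([[10.0, 20.0], [90.0, 70.0]])  # (x1, y1), (x2, y2) in crop pixels
print(apply_h(H_total, box_corners))    # corners expressed on the unified canvas
```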
- In an example, when fusing vehicle detection outputs, the field of view of each camera and of each crop of a photo must be considered, because a vehicle could be only partially visible in some photo or crop of a photo. In principle, the visibility of each vehicle attribute in each output is inferred first, and then the unified vehicle attributes are predicted based on visibility, with conflicts properly flagged. Additional examples, which further elucidate embodiments of the disclosed technology, are discussed in the next section.
- Examples of Fusing Real-Time Image Feeds from On-Vehicle Cameras
- As discussed above, crops of one or more images can be fused to eliminate any redundancy in the images, which advantageously ensures efficient processing for real-time autonomous operation.
FIGS. 2A and 2B show an example of the pre-processing that generates inputs for the fusion of different scenes of real-time image feeds. As shown in FIG. 2A, and as described above, a vehicle 200 may include at least a first camera 202-1 and a second camera 202-2. In an example, the first camera 202-1 may capture one or more images from which a first cropped image 221 and a second cropped image 223 may be generated. Similarly, the second camera may capture an image from which a cropped image 227 is generated.
- In an example, the first and second cameras may be pointing in substantially the same direction, which may result in the cropped images (221, 223 and 227) including a common object, whose information can then be fused. For example, and as shown in FIG. 2B, the second cropped image 223 may be an inset picture (and thus a subset) of the first cropped image 221, and will include a common object. The output of the fusion of these cropped images will retain the information in the images, but will typically be of a smaller byte size than the sum of the byte sizes of the two cropped images.
- FIG. 3 shows an example of the workflow for the fusion of different scenes of real-time image feeds. As shown therein, and described in the context of FIG. 2A, multiple cameras (e.g., 302-1, 302-2 and 302-3) may be configured to capture images (not shown in FIG. 3), which are then cropped to generate a plurality of cropped images (e.g., 321, 323, 327, 329). For example, forward-facing camera #1 (302-1) captures images that result in cropped images 321 and 323, whereas forward-facing camera #2 (302-2) corresponds to another cropped image 327. In a similar manner, a rear-facing camera #3 (302-3) captures images that are cropped to generate a cropped image 329.
- In some embodiments, the cropped images (e.g., 321, 323, 327 and 329) are processed by a detection module (e.g., detection module 110 in FIG. 1), which produces outputs (e.g., 331, 333, 337 and 339, respectively) that include the bounding boxes used to identify objects in each of the cropped images. In an example, the detected objects may include vehicles (e.g., cars, trucks, motorcycles, etc.), pedestrians, and structures adjacent to roadways (e.g., signposts, fire hydrants, etc.).
- In some embodiments, The metadata of the bounding boxes may be used to fuse the detection results for objects that are common amongst the cropped images. Embodiments of the disclosed technology provide methods for intra-camera fusion (e.g., fuse results from cropped images generated from images captured by a single camera) and inter-camera fusion (e.g., fuse results from cropped images generated from images captured from two or more cameras that are pointing in substantially the same direction).
- Intra-camera fusion fuses results from cropped images generated from images captured by a single camera, and includes the steps of:
-
- (i) Establishing a photometric correspondence between cropped images using cropping information (e.g., focal length of camera, which portion of image was cropped, etc.);
- (ii) Match detection results for a common/identical vehicle across images; and
- (iii) Fuse detection results of different types to generate an
output 351.
- Inter-camera fusion fuses results from cropped images generated from images captured from two or more cameras facing the same direction (e.g., forward-facing
cameras # 1 and #2 inFIG. 3 ), and includes the steps of: - (i) Establishing a photometric correspondence between cropped images using camera pose (e.g., based on calibrated camera position, focal length, etc.);
- (ii) Match detection results for a common/identical vehicle across images; and
- (iii) Fuse detection results of different types to generate an
output 353. - In some embodiments, both intra-camera and inter-camera fusion compensate for the different fields-of-view in each of the cropped images, as well as the lack of consistent visibility in any set of cropped images. For example, only a portion of an object may be visible in one cropped image, and a different portion may be visible in another cropped image. Embodiments of the disclosed technology are advantageously able to integrate this information, thereby reducing any redundancy and ensuring real-time autonomous operation of the vehicle.
- In some embodiments, different types of detection results, which are available in the metadata of the bounding boxes, may be fused. In an example, the different types of detection results include:
-
- Multiple 2D bounding boxes, wherein the fused result includes the largest bounding box that covers the object of interest that is present in each of the bounding boxes
- Multiple 3D bounding boxes, wherein the fused result includes an average of visible keypoints (e.g., a identifiable location in an image)
- Vehicle type classification, wherein the fused result includes the most common (e.g., a majority vote) vehicle type classification amongst the input cropped images
- Vehicle re-identification features, wherein the fused result includes the largest visible feature amongst the input cropped images
- Taillight signal detection, wherein the fused result includes a majority vote across the input cropped images, accounting for visibility of the taillights
- Vehicle segmentation mask, wherein the fused result includes the largest coverage (e.g., a convex hull) of all the input segmentation masks
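- A rough sketch of how a few of the per-type fusion rules listed above might be written; the covering-box, keypoint-averaging and majority-vote helpers below are one plausible reading of those bullets, with illustrative names and input layouts, rather than the implementation used by the disclosed system.

```python
from collections import Counter

def fuse_2d_boxes(boxes):
    """Smallest axis-aligned box that covers every input box (read here as the 'largest' covering box)."""
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

def fuse_keypoints(keypoint_sets):
    """Average each keypoint over the crops in which it is visible.

    keypoint_sets: one list per crop, all of equal length; None marks an occluded keypoint.
    """
    fused = []
    for pts in zip(*keypoint_sets):
        visible = [p for p in pts if p is not None]
        fused.append(tuple(sum(c) / len(visible) for c in zip(*visible)) if visible else None)
    return fused

def fuse_vehicle_type(labels):
    """Majority vote over the per-crop vehicle-type classifications."""
    return Counter(labels).most_common(1)[0][0]
```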
- FIGS. 4A and 4B show an example of fusing two cropped images based on the detected characteristics of an object. As shown in FIG. 4A, a first cropped image 421 may include the front portion of a vehicle, whereas a second cropped image 423 may include the rear portion of the vehicle. Bounding boxes in the first and second cropped images are matched, as shown in FIG. 4B, based on, for example, (i) the re-identification vector in the metadata of the bounding boxes, which may include the make, model and color of the detected vehicle, and (ii) the segmentation mask, whose contours can be aligned. Having identified the common vehicle in the two cropped images, the results may be fused to generate the result shown in FIG. 4B.
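- The matching described for FIG. 4B can be approximated by comparing re-identification vectors, for example by cosine similarity; the helper and threshold below are assumptions for illustration only, and the complementary contour-alignment check is omitted.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two re-identification feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm > 0 else 0.0

def same_vehicle(reid_a, reid_b, threshold=0.8):
    """Heuristic: treat two detections as the same vehicle if their re-ID vectors are close."""
    return cosine_similarity(reid_a, reid_b) >= threshold
```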
- FIG. 5 shows another example of fusing two cropped images based on the detected characteristics of an object. A first cropped image 521 includes the rear of a vehicle in which both taillights of the vehicle are visible, whereas the second cropped image 523 includes a portion of the rear of the vehicle, in which only one of the taillights of the vehicle is visible. In an example, the taillight signal detection results may be as follows:
- First cropped image 521 taillight detection
  - Left: red lighted
  - Right: red lighted
- Second cropped image 523 taillight detection
  - Left: unknown
  - Right: red lighted
- Given these exemplary detection results, the majority vote that is performed as part of the fusion process results in a {Left: red lighted, Right: red lighted} output result for the taillight signal detection. This example illustrates the efficacy of the disclosed technology in that the information available in both the input cropped images is preserved in the output result, but the amount of data in the output result is less than that of the input. Embodiments of the disclosed technology are able to reduce the redundant information in multiple cropped images, thereby reducing the amount of information that needs to be processed by downstream modules, and advantageously improving autonomous vehicle operation.
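- A sketch of the majority vote in this taillight example, treating "unknown" readings as abstentions; the state labels and tie handling are assumptions made for illustration.

```python
from collections import Counter

def fuse_taillight(readings):
    """Majority vote per taillight across crops, ignoring crops where the light is not visible.

    readings: list of dicts such as {"left": "red lighted", "right": "unknown"}.
    """
    fused = {}
    for side in ("left", "right"):
        votes = [r[side] for r in readings if r.get(side) not in (None, "unknown")]
        fused[side] = Counter(votes).most_common(1)[0][0] if votes else "unknown"
    return fused

# The two-crop example from FIG. 5:
print(fuse_taillight([
    {"left": "red lighted", "right": "red lighted"},  # cropped image 521
    {"left": "unknown", "right": "red lighted"},      # cropped image 523
]))
# -> {'left': 'red lighted', 'right': 'red lighted'}
```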
- Exemplary Embodiments of the Disclosed Technology
- FIG. 6 shows a flowchart for an example method 600 for improving perception in an autonomous vehicle. The method 600 includes, at step 610, receiving a plurality of cropped images, wherein each of the plurality of cropped images comprises one or more bounding boxes that correspond to one or more objects in a corresponding cropped image. In some embodiments, each of the one or more bounding boxes comprises metadata associated with a detection of the one or more objects.
- The method 600 includes, at step 620, identifying, based on the metadata in the plurality of cropped images, a first bounding box in a first cropped image and a second bounding box in a second cropped image, the first and second bounding boxes corresponding to a common object.
- The method 600 includes, at step 630, fusing the metadata corresponding to the common object from the first cropped image and the second cropped image to generate an output result for the common object.
- In some embodiments, the common object is a vehicle, and the metadata of a bounding box comprises at least one of a vehicle feature vector, a taillight signal detection result or a vehicle segmentation mask corresponding to the vehicle detected in the first or the second bounding box. In an example, the metadata further comprises at least one of a camera pose, a focal length, a shutter speed or a field-of-view associated with a camera of the plurality of cameras that was a source for the cropped image. In another example, the vehicle feature vector comprises a color of the vehicle or a make of the vehicle. In yet another example, the vehicle segmentation mask comprises one or more contours of the vehicle.
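- Taken together, steps 610 through 630 form a short pipeline. The sketch below strings them into one function using generic matching and fusion callbacks; the function and argument names are illustrative assumptions, not the modules of the disclosed system.

```python
def run_method_600(cropped_images, match_fn, fuse_fn):
    """Step 610: receive cropped images, each a list of detections with metadata.
    Step 620: find bounding boxes in different crops that describe a common object.
    Step 630: fuse the matched metadata into a single output result per object."""
    outputs = []
    for i in range(len(cropped_images)):
        for j in range(i + 1, len(cropped_images)):
            for a, b in match_fn(cropped_images[i], cropped_images[j]):
                outputs.append(fuse_fn(cropped_images[i][a], cropped_images[j][b]))
    return outputs
```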
- In some embodiments, the plurality of cropped images is generated from one or more images captured by exactly one of the plurality of cameras.
- In some embodiments, the plurality of cropped images is generated from one or more images captured by two or more of the plurality of cameras facing towards a substantially similar direction.
- In some embodiments, the common object is a common vehicle, the first cropped image comprises a left taillight and a right taillight of the common vehicle, the second cropped image comprises exactly one taillight of the common vehicle, and the output result comprises a taillight signal detection that is a majority vote based on the left taillight, the right taillight and the exactly one taillight.
- In some embodiments, the common object is a common vehicle, the first cropped image comprises a first vehicle segmentation mask corresponding to the common vehicle, the second cropped image comprises a second vehicle segmentation mask corresponding to the common vehicle, and the output result comprises a vehicle segmentation mask based on a convex combination of the first and second vehicle segmentation masks.
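- If the "convex combination" of the two masks is read, as suggested by the earlier discussion of the largest coverage, as the convex hull covering both sets of contour points, a sketch might look as follows; this reading and the helper names are assumptions made for illustration.

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull of 2D points given as (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def fuse_segmentation_masks(contour_a, contour_b):
    """Fuse two contour-based masks by taking the convex hull of their combined points."""
    return convex_hull(list(contour_a) + list(contour_b))
```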
- In some embodiments, a byte size of the output result is less than a byte size of the metadata corresponding to the common object from both the first and second cropped images.
- FIG. 7 shows an example of a hardware platform 700 that can be used to implement some of the techniques described in the present document. For example, the hardware platform 700 may implement the method 600 or may implement the various modules described herein. The hardware platform 700 may include a processor 702 that can execute code to implement a method. The hardware platform 700 may include a memory 704 that may be used to store processor-executable code and/or store data. The hardware platform 700 may further include a communication interface 706. For example, the communication interface 706 may implement one or more of the communication protocols (LTE, Wi-Fi, and so on) described herein. The hardware platform may further include one or more cameras 740, a matching module 785 and a fusion module 795. In some embodiments, some portion or all of the matching module 785 and/or the fusion module 795 may be implemented in the processor 702.
- Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
- A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
- Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
- While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
- Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
- Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
Claims (20)
1. A method implemented by a processor disposed in a vehicle, the method comprising:
receiving images from one or more sensors installed in the vehicle;
applying, on the images, a detection algorithm stored in the processor to identify a first bounding box in a first image and a second bounding box in a second image, wherein the first bounding box and the second bounding box are associated with metadata providing information corresponding to a common object; and
generating an output including a fusion of the first image and the second image, the output having a size smaller than a sum of a size of the first image and a size of the second image.
2. The method of claim 1 , further comprising:
fusing the first image and the second image based on the metadata.
3. The method of claim 1 , further comprising, before the applying of the detection algorithm:
selecting and cropping one or more regions of interest in an image.
4. The method of claim 1 , wherein the first image and the second image are from a same sensor.
5. The method of claim 1 , wherein the first image and the second image are from different sensors facing a substantially similar direction.
6. The method of claim 1 , wherein the common object is a vehicle, and wherein the metadata of a bounding box comprises at least one of a vehicle feature vector, a taillight signal detection result or a vehicle segmentation mask corresponding to the vehicle detected in the first bounding box or the second bounding box.
7. The method of claim 1 , wherein the common object is a common vehicle, wherein the first image comprises a left taillight and a right taillight of the common vehicle, wherein the second image comprises exactly one taillight of the common vehicle, and wherein the output comprises a taillight signal detection that is a majority vote based on the left taillight, the right taillight and the exactly one taillight.
8. The method of claim 1 , wherein the common object is a common vehicle, wherein the first image comprises a first vehicle segmentation mask corresponding to the common vehicle, wherein the second image comprises a second vehicle segmentation mask corresponding to the common vehicle, and wherein the output comprises a vehicle segmentation mask based on a convex combination of the first vehicle segmentation mask and the second vehicle segmentation mask.
9. The method of claim 1 , wherein the applying the detection algorithm provides detection outputs having a unified focal plane.
10. An apparatus implemented in a vehicle, comprising:
a plurality of sensors;
a processor; and
a memory with instructions thereon,
wherein images are generated from one or more images captured by at least one of the plurality of sensors,
wherein the instructions upon execution by the processor cause the processor to:
identify, based on metadata in the images, a first bounding box in a first image and a second bounding box in a second image, wherein the first bounding box and the second bounding box correspond to a common object;
generate an output including a fusion of the first image and the second image,
wherein a size of the output is smaller than a sum of a size of the first image and a size of the second image.
11. The apparatus of claim 10 , wherein the metadata includes at least one of 2D or 3D detection results, a vehicle-type classification, a vehicle identification, a taillight signal detection results, or a vehicle segmentation mask.
12. The apparatus of claim 10 , wherein the first image and the second image are from a same sensor.
13. The apparatus of claim 10 , wherein the first image and the second image are from different sensors facing a substantially similar direction.
14. The apparatus of claim 10 , wherein the common object is a vehicle, a pedestrian, or a structure adjacent to roadways.
15. A computer-readable storage medium having code stored thereon, the code, upon execution by one or more processors, causing the one or more processors to implement a method comprising:
receiving, from one or more sensors, images;
applying, on the images, a detection algorithm stored in the one or more processors to identify a first bounding box in a first image and a second bounding box in a second image, wherein the first bounding box and the second bounding box are associated with metadata providing information corresponding to a common object; and
generating an output including a fusion of the first image and the second image, the output having a size smaller than a sum of a size of the first image and a size of the second image.
16. The computer-readable storage medium of claim 15 , wherein the metadata of the first bounding box comprises at least one of an object feature vector, a taillight signal detection result or an object segmentation mask corresponding to the common object detected in the first bounding box.
17. The computer-readable storage medium of claim 16 , wherein the metadata of the first bounding box in the first image further comprises at least one of a camera pose, a focal length, a shutter speed or a field-of-view associated with a sensor that has generated the first image.
18. The computer-readable storage medium of claim 16 , wherein the object feature vector comprises a color of an object or a make of the object.
19. The computer-readable storage medium of claim 16 , wherein the object segmentation mask comprises one or more contours of an object.
20. The computer-readable storage medium of claim 15 , wherein the images are generated by exactly one of the one or more sensors or two or more of the one or more sensors facing towards a substantially similar direction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/489,306 US20240046654A1 (en) | 2019-06-14 | 2023-10-18 | Image fusion for autonomous vehicle operation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/442,182 US11823460B2 (en) | 2019-06-14 | 2019-06-14 | Image fusion for autonomous vehicle operation |
US18/489,306 US20240046654A1 (en) | 2019-06-14 | 2023-10-18 | Image fusion for autonomous vehicle operation |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/442,182 Continuation US11823460B2 (en) | 2019-06-14 | 2019-06-14 | Image fusion for autonomous vehicle operation |
Publications (1)
Publication Number | Publication Date |
---|---|
US20240046654A1 true US20240046654A1 (en) | 2024-02-08 |
Family
ID=70977451
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/442,182 Active 2041-09-07 US11823460B2 (en) | 2019-06-14 | 2019-06-14 | Image fusion for autonomous vehicle operation |
US18/489,306 Pending US20240046654A1 (en) | 2019-06-14 | 2023-10-18 | Image fusion for autonomous vehicle operation |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/442,182 Active 2041-09-07 US11823460B2 (en) | 2019-06-14 | 2019-06-14 | Image fusion for autonomous vehicle operation |
Country Status (4)
Country | Link |
---|---|
US (2) | US11823460B2 (en) |
EP (1) | EP3751455A3 (en) |
CN (1) | CN112085047A (en) |
AU (1) | AU2020203284A1 (en) |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018176000A1 (en) | 2017-03-23 | 2018-09-27 | DeepScale, Inc. | Data synthesis for autonomous control systems |
US10671349B2 (en) | 2017-07-24 | 2020-06-02 | Tesla, Inc. | Accelerated mathematical engine |
US11893393B2 (en) | 2017-07-24 | 2024-02-06 | Tesla, Inc. | Computational array microprocessor system with hardware arbiter managing memory requests |
US11409692B2 (en) | 2017-07-24 | 2022-08-09 | Tesla, Inc. | Vector computational unit |
US11157441B2 (en) | 2017-07-24 | 2021-10-26 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting |
US11561791B2 (en) | 2018-02-01 | 2023-01-24 | Tesla, Inc. | Vector computational unit receiving data elements in parallel from a last row of a computational array |
US11215999B2 (en) | 2018-06-20 | 2022-01-04 | Tesla, Inc. | Data pipeline and deep learning system for autonomous driving |
US11361457B2 (en) | 2018-07-20 | 2022-06-14 | Tesla, Inc. | Annotation cross-labeling for autonomous control systems |
US11636333B2 (en) | 2018-07-26 | 2023-04-25 | Tesla, Inc. | Optimizing neural network structures for embedded systems |
US11562231B2 (en) | 2018-09-03 | 2023-01-24 | Tesla, Inc. | Neural networks for embedded devices |
WO2020077117A1 (en) | 2018-10-11 | 2020-04-16 | Tesla, Inc. | Systems and methods for training machine models with augmented data |
US11196678B2 (en) | 2018-10-25 | 2021-12-07 | Tesla, Inc. | QOS manager for system on a chip communications |
US11816585B2 (en) | 2018-12-03 | 2023-11-14 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles |
US11537811B2 (en) | 2018-12-04 | 2022-12-27 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
US11610117B2 (en) | 2018-12-27 | 2023-03-21 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform |
US11150664B2 (en) | 2019-02-01 | 2021-10-19 | Tesla, Inc. | Predicting three-dimensional features for autonomous driving |
US10997461B2 (en) | 2019-02-01 | 2021-05-04 | Tesla, Inc. | Generating ground truth for machine learning from time series elements |
US11567514B2 (en) | 2019-02-11 | 2023-01-31 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target |
US10956755B2 (en) | 2019-02-19 | 2021-03-23 | Tesla, Inc. | Estimating object properties using visual image data |
US11568100B2 (en) * | 2019-06-28 | 2023-01-31 | Zoox, Inc. | Synthetic scenario simulator based on events |
US11574089B2 (en) * | 2019-06-28 | 2023-02-07 | Zoox, Inc. | Synthetic scenario generator based on attributes |
US11526721B1 (en) | 2020-02-21 | 2022-12-13 | Zoox, Inc. | Synthetic scenario generator using distance-biased confidences for sensor data |
US11195033B2 (en) * | 2020-02-27 | 2021-12-07 | Gm Cruise Holdings Llc | Multi-modal, multi-technique vehicle signal detection |
US11393184B2 (en) * | 2020-11-13 | 2022-07-19 | Denso International America, Inc. | Systems and methods for adaptive bounding box selection |
CN113190031B (en) * | 2021-04-30 | 2023-03-24 | 成都思晗科技股份有限公司 | Forest fire automatic photographing and tracking method, device and system based on unmanned aerial vehicle |
CN114202542B (en) * | 2022-02-18 | 2022-04-19 | 象辑科技(武汉)股份有限公司 | Visibility inversion method and device, computer equipment and storage medium |
Family Cites Families (209)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU642638B2 (en) | 1989-12-11 | 1993-10-28 | Caterpillar Inc. | Integrated vehicle positioning and navigation system, apparatus and method |
US6822563B2 (en) | 1997-09-22 | 2004-11-23 | Donnelly Corporation | Vehicle imaging system with accessory control |
US5877897A (en) | 1993-02-26 | 1999-03-02 | Donnelly Corporation | Automatic rearview mirror, vehicle lighting control and vehicle interior monitoring system using a photosensor array |
US7103460B1 (en) | 1994-05-09 | 2006-09-05 | Automotive Technologies International, Inc. | System and method for vehicle diagnostics |
US7783403B2 (en) | 1994-05-23 | 2010-08-24 | Automotive Technologies International, Inc. | System and method for preventing vehicular accidents |
US7655894B2 (en) | 1996-03-25 | 2010-02-02 | Donnelly Corporation | Vehicular image sensing system |
US6084870A (en) | 1996-07-22 | 2000-07-04 | Qualcomm Incorporated | Method and apparatus for the remote monitoring and configuration of electronic control systems |
US6263088B1 (en) | 1997-06-19 | 2001-07-17 | Ncr Corporation | System and method for tracking movement of objects in a scene |
JP3183501B2 (en) | 1997-07-07 | 2001-07-09 | 本田技研工業株式会社 | Travel control device for vehicles |
US6594821B1 (en) | 2000-03-30 | 2003-07-15 | Transmeta Corporation | Translation consistency checking for modified target instructions by comparing to original copy |
US8711217B2 (en) | 2000-10-24 | 2014-04-29 | Objectvideo, Inc. | Video surveillance system employing video primitives |
US7363149B2 (en) | 2001-12-13 | 2008-04-22 | Robert Bosch Gmbh | Autonomous in-vehicle navigation system and diagnostic system |
US7167519B2 (en) | 2001-12-20 | 2007-01-23 | Siemens Corporate Research, Inc. | Real-time video object generation for smart cameras |
EP1504276B1 (en) | 2002-05-03 | 2012-08-08 | Donnelly Corporation | Object detection system for vehicle |
US9007197B2 (en) | 2002-05-20 | 2015-04-14 | Intelligent Technologies International, Inc. | Vehicular anticipatory sensor system |
US6975923B2 (en) | 2002-10-01 | 2005-12-13 | Roke Manor Research Limited | Autonomous vehicle guidance on or near airports |
US6777904B1 (en) | 2003-02-25 | 2004-08-17 | Ford Global Technologies, Llc | Method and system for controlling a motor |
US8855405B2 (en) | 2003-04-30 | 2014-10-07 | Deere & Company | System and method for detecting and analyzing features in an agricultural field for vehicle guidance |
WO2005098751A1 (en) | 2004-04-08 | 2005-10-20 | Mobileye Technologies Limited | Crowd detection |
US20070230792A1 (en) | 2004-04-08 | 2007-10-04 | Mobileye Technologies Ltd. | Pedestrian Detection |
WO2005098782A1 (en) | 2004-04-08 | 2005-10-20 | Mobileye Technologies Limited | Collision warning system |
US7526103B2 (en) | 2004-04-15 | 2009-04-28 | Donnelly Corporation | Imaging system for vehicle |
US8078338B2 (en) | 2004-10-22 | 2011-12-13 | Irobot Corporation | System and method for behavior based control of an autonomous vehicle |
US7742841B2 (en) | 2005-02-23 | 2010-06-22 | Panasonic Electric Works Co., Ltd. | Autonomous vehicle and planar obstacle recognition method |
KR100802511B1 (en) | 2005-10-11 | 2008-02-13 | 주식회사 코리아 와이즈넛 | System and method for offering searching service based on topics |
EP1790541A2 (en) | 2005-11-23 | 2007-05-30 | MobilEye Technologies, Ltd. | Systems and methods for detecting obstructions in a camera field of view |
US8164628B2 (en) | 2006-01-04 | 2012-04-24 | Mobileye Technologies Ltd. | Estimating distance to an object using a sequence of images recorded by a monocular camera |
US8150155B2 (en) | 2006-02-07 | 2012-04-03 | Qualcomm Incorporated | Multi-mode region-of-interest video object segmentation |
US8265392B2 (en) | 2006-02-07 | 2012-09-11 | Qualcomm Incorporated | Inter-mode region-of-interest video object segmentation |
US7689559B2 (en) | 2006-02-08 | 2010-03-30 | Telenor Asa | Document similarity scoring and ranking method, device and computer program product |
US8050863B2 (en) | 2006-03-16 | 2011-11-01 | Gray & Company, Inc. | Navigation and control system for autonomous vehicles |
US8417060B2 (en) | 2006-03-20 | 2013-04-09 | Arizona Board Of Regents For And On Behalf Of Arizona State University | Methods for multi-point descriptors for image registrations |
US8108092B2 (en) | 2006-07-14 | 2012-01-31 | Irobot Corporation | Autonomous behaviors for a remote vehicle |
US7786898B2 (en) | 2006-05-31 | 2010-08-31 | Mobileye Technologies Ltd. | Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications |
US8064643B2 (en) | 2006-12-06 | 2011-11-22 | Mobileye Technologies Limited | Detecting and recognizing traffic signs |
US20080249667A1 (en) | 2007-04-09 | 2008-10-09 | Microsoft Corporation | Learning and reasoning to enhance energy efficiency in transportation systems |
US7839292B2 (en) | 2007-04-11 | 2010-11-23 | Nec Laboratories America, Inc. | Real-time driving danger level prediction |
US8229163B2 (en) | 2007-08-22 | 2012-07-24 | American Gnc Corporation | 4D GIS based virtual reality for moving target prediction |
US8041111B1 (en) | 2007-10-15 | 2011-10-18 | Adobe Systems Incorporated | Subjective and locatable color theme extraction for images |
US9176006B2 (en) | 2008-01-15 | 2015-11-03 | Mobileye Vision Technologies Ltd. | Detection and classification of light sources using a diffraction grating |
US9117133B2 (en) | 2008-06-18 | 2015-08-25 | Spectral Image, Inc. | Systems and methods for hyperspectral imaging |
US20100049397A1 (en) | 2008-08-22 | 2010-02-25 | Garmin Ltd. | Fuel efficient routing |
US8126642B2 (en) | 2008-10-24 | 2012-02-28 | Gray & Company, Inc. | Control and systems for autonomously driven vehicles |
US8345956B2 (en) | 2008-11-03 | 2013-01-01 | Microsoft Corporation | Converting 2D video into stereo video |
US9459515B2 (en) | 2008-12-05 | 2016-10-04 | Mobileye Vision Technologies Ltd. | Adjustable camera mount for a vehicle windshield |
US8175376B2 (en) | 2009-03-09 | 2012-05-08 | Xerox Corporation | Framework for image thumbnailing based on visual similarity |
RU2011143140A (en) | 2009-03-26 | 2013-05-10 | Конинклейке Филипс Электроникс Н.В. | METHOD AND DEVICE FOR CHANGING THE IMAGE BY USING A CAUTION CARD BASED ON COLOR FREQUENCY |
US8271871B2 (en) | 2009-04-30 | 2012-09-18 | Xerox Corporation | Automated method for alignment of document objects |
US8392117B2 (en) | 2009-05-22 | 2013-03-05 | Toyota Motor Engineering & Manufacturing North America, Inc. | Using topological structure for path planning in semi-structured environments |
US9002632B1 (en) | 2009-07-19 | 2015-04-07 | Aaron T. Emigh | Fuel cost optimized routing |
DE102009046124A1 (en) | 2009-10-28 | 2011-05-05 | Ifm Electronic Gmbh | Method and apparatus for calibrating a 3D TOF camera system |
TWI393074B (en) | 2009-12-10 | 2013-04-11 | Ind Tech Res Inst | Apparatus and method for moving object detection |
JP2011176748A (en) | 2010-02-25 | 2011-09-08 | Sony Corp | Image processing apparatus and method, and program |
US8726305B2 (en) | 2010-04-02 | 2014-05-13 | Yahoo! Inc. | Methods and systems for application rendering and management on internet television enabled displays |
KR101145112B1 (en) | 2010-05-11 | 2012-05-14 | 국방과학연구소 | Steering control device of autonomous vehicle, autonomous vehicle having the same and steering control method of autonomous vehicle |
US9753128B2 (en) | 2010-07-23 | 2017-09-05 | Heptagon Micro Optics Pte. Ltd. | Multi-path compensation using multiple modulation frequencies in time of flight sensor |
US8412406B2 (en) | 2010-08-13 | 2013-04-02 | Deere & Company | Method and system for performing diagnostics or software maintenance for a vehicle |
US9280711B2 (en) | 2010-09-21 | 2016-03-08 | Mobileye Vision Technologies Ltd. | Barrier and guardrail detection using a single camera |
US9118816B2 (en) | 2011-12-06 | 2015-08-25 | Mobileye Vision Technologies Ltd. | Road vertical contour detection |
US8509982B2 (en) | 2010-10-05 | 2013-08-13 | Google Inc. | Zone driving |
EP2448251B1 (en) | 2010-10-31 | 2019-09-25 | Mobileye Vision Technologies Ltd. | Bundling night vision and other driver assistance systems (DAS) using near infra red (NIR) illumination and a rolling shutter |
WO2012068154A1 (en) | 2010-11-15 | 2012-05-24 | Huawei Technologies Co., Ltd. | Method and system for video summarization |
EP2993654B1 (en) | 2010-12-07 | 2017-05-03 | Mobileye Vision Technologies Ltd. | Method and system for forward collision warning |
US9823339B2 (en) | 2010-12-21 | 2017-11-21 | Microsoft Technology Licensing, Llc | Plural anode time-of-flight sensor |
WO2012095658A1 (en) | 2011-01-14 | 2012-07-19 | Bae Systems Plc | Data transfer system and method thereof |
US9323250B2 (en) | 2011-01-28 | 2016-04-26 | Intouch Technologies, Inc. | Time-dependent navigation of telepresence robots |
KR101533905B1 (en) | 2011-02-21 | 2015-07-03 | 스트라테크 시스템즈 리미티드 | A surveillance system and a method for detecting a foreign object, debris, or damage in an airfield |
US8401292B2 (en) | 2011-04-26 | 2013-03-19 | Eastman Kodak Company | Identifying high saliency regions in digital images |
US9233659B2 (en) | 2011-04-27 | 2016-01-12 | Mobileye Vision Technologies Ltd. | Pedestrian collision warning system |
KR101777875B1 (en) | 2011-04-28 | 2017-09-13 | 엘지디스플레이 주식회사 | Stereoscopic image display and method of adjusting stereoscopic image thereof |
US9183447B1 (en) | 2011-06-09 | 2015-11-10 | Mobileye Vision Technologies Ltd. | Object detection using candidate object alignment |
US20120314070A1 (en) | 2011-06-09 | 2012-12-13 | GM Global Technology Operations LLC | Lane sensing enhancement through object vehicle information for lane centering/keeping |
GB2492848A (en) | 2011-07-15 | 2013-01-16 | Softkinetic Sensors Nv | Optical distance measurement |
CN103718427B (en) | 2011-07-28 | 2017-04-12 | 本田技研工业株式会社 | wireless power transmission method |
US8744123B2 (en) | 2011-08-29 | 2014-06-03 | International Business Machines Corporation | Modeling of temporarily static objects in surveillance video data |
DE102011083749B4 (en) | 2011-09-29 | 2015-06-11 | Aktiebolaget Skf | Rotor blade of a wind turbine with a device for detecting a distance value and method for detecting a distance value |
US8891820B2 (en) | 2011-09-29 | 2014-11-18 | The Boeing Company | Multi-modal sensor fusion |
US20140143839A1 (en) | 2011-11-16 | 2014-05-22 | Flextronics Ap, Llc. | On board vehicle remote control module |
US9214084B2 (en) | 2011-12-05 | 2015-12-15 | Brightway Vision Ltd. | Smart traffic sign system and method |
US9297641B2 (en) | 2011-12-12 | 2016-03-29 | Mobileye Vision Technologies Ltd. | Detection of obstacles at night by analysis of shadows |
FR2984254B1 (en) | 2011-12-16 | 2016-07-01 | Renault Sa | CONTROL OF AUTONOMOUS VEHICLES |
US8810666B2 (en) | 2012-01-16 | 2014-08-19 | Google Inc. | Methods and systems for processing a video for stabilization using dynamic crop |
US9317776B1 (en) | 2013-03-13 | 2016-04-19 | Hrl Laboratories, Llc | Robust static and moving object detection system via attentional mechanisms |
JP5605381B2 (en) | 2012-02-13 | 2014-10-15 | 株式会社デンソー | Cruise control equipment |
US9042648B2 (en) | 2012-02-23 | 2015-05-26 | Microsoft Technology Licensing, Llc | Salient object segmentation |
US8457827B1 (en) | 2012-03-15 | 2013-06-04 | Google Inc. | Modifying behavior of autonomous vehicle based on predicted behavior of other vehicles |
US9476970B1 (en) | 2012-03-19 | 2016-10-25 | Google Inc. | Camera based localization |
US8737690B2 (en) | 2012-04-06 | 2014-05-27 | Xerox Corporation | Video-based method for parking angle violation detection |
US8718861B1 (en) | 2012-04-11 | 2014-05-06 | Google Inc. | Determining when to drive autonomously |
US9549158B2 (en) | 2012-04-18 | 2017-01-17 | Brightway Vision Ltd. | Controllable single pixel sensors |
JP6243402B2 (en) | 2012-04-18 | 2017-12-06 | ブライトウェイ ビジョン リミテッド | Multiple gated pixels per readout |
US9723233B2 (en) | 2012-04-18 | 2017-08-01 | Brightway Vision Ltd. | Controllable gated sensor |
WO2013179280A1 (en) | 2012-05-29 | 2013-12-05 | Brightway Vision Ltd. | Gated imaging using an adaptive depth of field |
US9134402B2 (en) | 2012-08-13 | 2015-09-15 | Digital Signal Corporation | System and method for calibrating video and lidar subsystems |
CN104769653B (en) | 2012-08-21 | 2017-08-04 | 布莱特瓦维森有限公司 | The traffic light signals in different range are illuminated simultaneously |
US9025880B2 (en) | 2012-08-29 | 2015-05-05 | Disney Enterprises, Inc. | Visual saliency estimation for images and video |
US9165190B2 (en) | 2012-09-12 | 2015-10-20 | Avigilon Fortress Corporation | 3D human pose and shape modeling |
US9120485B1 (en) | 2012-09-14 | 2015-09-01 | Google Inc. | Methods and systems for smooth trajectory generation for a self-driving vehicle |
US9488492B2 (en) | 2014-03-18 | 2016-11-08 | Sri International | Real-time system for multi-modal 3D geospatial mapping, object recognition, scene annotation and analytics |
US9111444B2 (en) | 2012-10-31 | 2015-08-18 | Raytheon Company | Video and lidar target detection and tracking system and method for segmenting moving targets |
EP2925494B1 (en) | 2012-12-03 | 2020-07-08 | ABB Schweiz AG | Teleoperation of machines having at least one actuated mechanism and one machine controller comprising a program code including instructions for transferring control of the machine from said controller to a remote control station |
WO2014095539A1 (en) | 2012-12-17 | 2014-06-26 | Pmdtechnologies Gmbh | Light propagation time camera with a motion detector |
US9602807B2 (en) | 2012-12-19 | 2017-03-21 | Microsoft Technology Licensing, Llc | Single frequency time of flight de-aliasing |
US9081385B1 (en) | 2012-12-21 | 2015-07-14 | Google Inc. | Lane boundary detection using images |
US9092430B2 (en) | 2013-01-02 | 2015-07-28 | International Business Machines Corporation | Assigning shared catalogs to cache structures in a cluster computing system |
US8788134B1 (en) | 2013-01-04 | 2014-07-22 | GM Global Technology Operations LLC | Autonomous driving merge management system |
WO2014111814A2 (en) | 2013-01-15 | 2014-07-24 | Mobileye Technologies Limited | Stereo assist with rolling shutters |
US9277132B2 (en) | 2013-02-21 | 2016-03-01 | Mobileye Vision Technologies Ltd. | Image distortion correction of a camera with a rolling shutter |
US9147255B1 (en) | 2013-03-14 | 2015-09-29 | Hrl Laboratories, Llc | Rapid object detection by combining structural information from image segmentation with bio-inspired attentional mechanisms |
US9652860B1 (en) | 2013-03-15 | 2017-05-16 | Puretech Systems, Inc. | System and method for autonomous PTZ tracking of aerial targets |
US9342074B2 (en) | 2013-04-05 | 2016-05-17 | Google Inc. | Systems and methods for transitioning control of an autonomous vehicle to a driver |
CN103198128A (en) | 2013-04-11 | 2013-07-10 | 苏州阔地网络科技有限公司 | Method and system for data search of cloud education platform |
AU2013205548A1 (en) | 2013-04-30 | 2014-11-13 | Canon Kabushiki Kaisha | Method, system and apparatus for tracking objects of a scene |
US9438878B2 (en) | 2013-05-01 | 2016-09-06 | Legend3D, Inc. | Method of converting 2D video to 3D video using 3D object models |
US9025825B2 (en) | 2013-05-10 | 2015-05-05 | Palo Alto Research Center Incorporated | System and method for visual motion based object segmentation and tracking |
US9729860B2 (en) | 2013-05-24 | 2017-08-08 | Microsoft Technology Licensing, Llc | Indirect reflection suppression in depth imaging |
CN105659304B (en) | 2013-06-13 | 2020-01-03 | 移动眼视力科技有限公司 | Vehicle, navigation system and method for generating and delivering navigation information |
IL227265A0 (en) | 2013-06-30 | 2013-12-31 | Brightway Vision Ltd | Smart camera flash |
KR102111784B1 (en) | 2013-07-17 | 2020-05-15 | 현대모비스 주식회사 | Apparatus and method for discernmenting position of car |
US9315192B1 (en) | 2013-09-30 | 2016-04-19 | Google Inc. | Methods and systems for pedestrian avoidance using LIDAR |
US9122954B2 (en) | 2013-10-01 | 2015-09-01 | Mobileye Vision Technologies Ltd. | Performing a histogram using an array of addressable registers |
US9738280B2 (en) | 2013-10-03 | 2017-08-22 | Robert Bosch Gmbh | Adaptive cruise control with on-ramp detection |
US9330334B2 (en) | 2013-10-24 | 2016-05-03 | Adobe Systems Incorporated | Iterative saliency map estimation |
US9299004B2 (en) | 2013-10-24 | 2016-03-29 | Adobe Systems Incorporated | Image foreground detection |
US9156473B2 (en) | 2013-12-04 | 2015-10-13 | Mobileye Vision Technologies Ltd. | Multi-threshold reaction zone for autonomous vehicle navigation |
EP2887311B1 (en) | 2013-12-20 | 2016-09-14 | Thomson Licensing | Method and apparatus for performing depth estimation |
WO2015103159A1 (en) | 2013-12-30 | 2015-07-09 | Tieman Craig Arnold | Connected vehicle system with infotainment interface for mobile devices |
EP3100206B1 (en) | 2014-01-30 | 2020-09-09 | Mobileye Vision Technologies Ltd. | Systems and methods for lane end recognition |
WO2015125022A2 (en) | 2014-02-20 | 2015-08-27 | Mobileye Vision Technologies Ltd. | Navigation based on radar-cued visual imaging |
CN103793925B (en) | 2014-02-24 | 2016-05-18 | 北京工业大学 | Merge the video image vision significance degree detection method of space-time characteristic |
US9981389B2 (en) | 2014-03-03 | 2018-05-29 | California Institute Of Technology | Robotics platforms incorporating manipulators having common joint designs |
DE102014205170A1 (en) | 2014-03-20 | 2015-11-26 | Bayerische Motoren Werke Aktiengesellschaft | Method and device for determining a trajectory for a vehicle |
US9739609B1 (en) | 2014-03-25 | 2017-08-22 | Amazon Technologies, Inc. | Time-of-flight sensor with configurable phase delay |
US9471889B2 (en) | 2014-04-24 | 2016-10-18 | Xerox Corporation | Video tracking based method for automatic sequencing of vehicles in drive-thru applications |
CN105100134A (en) | 2014-04-28 | 2015-11-25 | 思科技术公司 | Screen shared cache management |
US9443163B2 (en) | 2014-05-14 | 2016-09-13 | Mobileye Vision Technologies Ltd. | Systems and methods for curb detection and pedestrian hazard assessment |
US9720418B2 (en) | 2014-05-27 | 2017-08-01 | Here Global B.V. | Autonomous vehicle monitoring and control |
US10572744B2 (en) | 2014-06-03 | 2020-02-25 | Mobileye Vision Technologies Ltd. | Systems and methods for detecting an object |
US9457807B2 (en) | 2014-06-05 | 2016-10-04 | GM Global Technology Operations LLC | Unified motion planning algorithm for autonomous driving vehicle in obstacle avoidance maneuver |
IL233356A (en) | 2014-06-24 | 2015-10-29 | Brightway Vision Ltd | Gated sensor based imaging system with minimized delay time between sensor exposures |
US9628565B2 (en) | 2014-07-23 | 2017-04-18 | Here Global B.V. | Highly assisted driving platform |
US20160026787A1 (en) | 2014-07-25 | 2016-01-28 | GM Global Technology Operations LLC | Authenticating messages sent over a vehicle bus that include message authentication codes |
US9766625B2 (en) | 2014-07-25 | 2017-09-19 | Here Global B.V. | Personalized driving of autonomously driven vehicles |
US9554030B2 (en) | 2014-09-29 | 2017-01-24 | Yahoo! Inc. | Mobile device image acquisition using objects of interest recognition |
US9248834B1 (en) | 2014-10-02 | 2016-02-02 | Google Inc. | Predicting trajectories of objects based on contextual information |
US9746550B2 (en) | 2014-10-08 | 2017-08-29 | Ford Global Technologies, Llc | Detecting low-speed close-range vehicle cut-in |
US9779276B2 (en) | 2014-10-10 | 2017-10-03 | Hand Held Products, Inc. | Depth sensor based auto-focus system for an indicia scanner |
US9773155B2 (en) | 2014-10-14 | 2017-09-26 | Microsoft Technology Licensing, Llc | Depth from time of flight camera |
US9959903B2 (en) | 2014-10-23 | 2018-05-01 | Qnap Systems, Inc. | Video playback method |
US20170336203A1 (en) * | 2014-10-26 | 2017-11-23 | Galileo Group, Inc. | Methods and systems for remote sensing with drones and mounted sensor devices |
US9547985B2 (en) | 2014-11-05 | 2017-01-17 | Here Global B.V. | Method and apparatus for providing access to autonomous vehicles based on user context |
KR101664582B1 (en) | 2014-11-12 | 2016-10-10 | 현대자동차주식회사 | Path Planning Apparatus and Method for Autonomous Vehicle |
US9494935B2 (en) | 2014-11-13 | 2016-11-15 | Toyota Motor Engineering & Manufacturing North America, Inc. | Remote operation of autonomous vehicle in unexpected environment |
KR102312273B1 (en) | 2014-11-13 | 2021-10-12 | 삼성전자주식회사 | Camera for depth image measure and method of operating the same |
CN107624155B (en) | 2014-12-05 | 2021-09-28 | 苹果公司 | Autonomous navigation system |
US9347779B1 (en) | 2014-12-10 | 2016-05-24 | Here Global B.V. | Method and apparatus for determining a position of a vehicle based on driving behavior |
US9805294B2 (en) | 2015-02-12 | 2017-10-31 | Mitsubishi Electric Research Laboratories, Inc. | Method for denoising time-of-flight range images |
US10115024B2 (en) | 2015-02-26 | 2018-10-30 | Mobileye Vision Technologies Ltd. | Road vertical contour detection using a stabilized coordinate frame |
JP6421684B2 (en) | 2015-04-17 | 2018-11-14 | 井関農機株式会社 | Riding mower |
US9649999B1 (en) | 2015-04-28 | 2017-05-16 | Sprint Communications Company L.P. | Vehicle remote operations control |
US10635761B2 (en) | 2015-04-29 | 2020-04-28 | Energid Technologies Corporation | System and method for evaluation of object autonomy |
US9483839B1 (en) | 2015-05-06 | 2016-11-01 | The Boeing Company | Occlusion-robust visual object fingerprinting using fusion of multiple sub-region signatures |
US10345809B2 (en) | 2015-05-13 | 2019-07-09 | Uber Technologies, Inc. | Providing remote assistance to an autonomous vehicle |
US9613273B2 (en) | 2015-05-19 | 2017-04-04 | Toyota Motor Engineering & Manufacturing North America, Inc. | Apparatus and method for object tracking |
US9690290B2 (en) | 2015-06-04 | 2017-06-27 | Toyota Motor Engineering & Manufacturing North America, Inc. | Situation-based transfer of vehicle sensor data during remote operation of autonomous vehicles |
DE102015211926A1 (en) | 2015-06-26 | 2016-12-29 | Robert Bosch Gmbh | Method and device for determining or evaluating a desired trajectory of a motor vehicle |
WO2017013875A1 (en) | 2015-07-23 | 2017-01-26 | 日本電気株式会社 | Route switching device, route switching system, and route switching method |
US9989965B2 (en) | 2015-08-20 | 2018-06-05 | Motionloft, Inc. | Object detection and analysis via unmanned aerial vehicle |
US10282591B2 (en) | 2015-08-24 | 2019-05-07 | Qualcomm Incorporated | Systems and methods for depth map sampling |
US9587952B1 (en) | 2015-09-09 | 2017-03-07 | Allstate Insurance Company | Altering autonomous or semi-autonomous vehicle operation based on route traversal values |
WO2017045116A1 (en) | 2015-09-15 | 2017-03-23 | SZ DJI Technology Co., Ltd. | System and method for supporting smooth target following |
US9881219B2 (en) * | 2015-10-07 | 2018-01-30 | Ford Global Technologies, Llc | Self-recognition of autonomous vehicles in mirrored or reflective surfaces |
US9612123B1 (en) | 2015-11-04 | 2017-04-04 | Zoox, Inc. | Adaptive mapping to navigate autonomous vehicles responsive to physical environment changes |
US9507346B1 (en) | 2015-11-04 | 2016-11-29 | Zoox, Inc. | Teleoperation system and method for trajectory modification of autonomous vehicles |
US9754490B2 (en) | 2015-11-04 | 2017-09-05 | Zoox, Inc. | Software application to request and control an autonomous vehicle service |
WO2017079349A1 (en) | 2015-11-04 | 2017-05-11 | Zoox, Inc. | System for implementing an active safety system in an autonomous vehicle |
US9734455B2 (en) | 2015-11-04 | 2017-08-15 | Zoox, Inc. | Automated extraction of semantic information to enhance incremental mapping modifications for robotic vehicles |
US10127685B2 (en) | 2015-12-16 | 2018-11-13 | Objectvideo Labs, Llc | Profile matching of buildings and urban structures |
US10102434B2 (en) | 2015-12-22 | 2018-10-16 | Omnivision Technologies, Inc. | Lane detection system and method |
US9568915B1 (en) | 2016-02-11 | 2017-02-14 | Mitsubishi Electric Research Laboratories, Inc. | System and method for controlling autonomous or semi-autonomous vehicle |
US9760837B1 (en) | 2016-03-13 | 2017-09-12 | Microsoft Technology Licensing, Llc | Depth from time-of-flight using machine learning |
WO2017165627A1 (en) | 2016-03-23 | 2017-09-28 | Netradyne Inc. | Advanced path prediction |
US9535423B1 (en) | 2016-03-29 | 2017-01-03 | Adasworks Kft. | Autonomous vehicle with improved visual detection ability |
US9776638B1 (en) | 2016-04-20 | 2017-10-03 | GM Global Technology Operations LLC | Remote interrogation and override for automated driving system |
US10362429B2 (en) | 2016-04-28 | 2019-07-23 | California Institute Of Technology | Systems and methods for generating spatial sound information relevant to real-world environments |
US9672446B1 (en) | 2016-05-06 | 2017-06-06 | Uber Technologies, Inc. | Object detection for an autonomous vehicle |
CN113140125B (en) | 2016-08-31 | 2022-06-17 | 北京万集科技股份有限公司 | Vehicle-road cooperative auxiliary driving method and road side equipment |
US10261574B2 (en) | 2016-11-30 | 2019-04-16 | University Of Macau | Real-time detection system for parked vehicles |
US11295458B2 (en) | 2016-12-01 | 2022-04-05 | Skydio, Inc. | Object tracking by an unmanned aerial vehicle using visual sensors |
CN106781591A (en) | 2016-12-19 | 2017-05-31 | 吉林大学 | A kind of city vehicle navigation system based on bus or train route collaboration |
US9953236B1 (en) | 2017-03-10 | 2018-04-24 | TuSimple | System and method for semantic segmentation using dense upsampling convolution (DUC) |
US10147193B2 (en) | 2017-03-10 | 2018-12-04 | TuSimple | System and method for semantic segmentation using hybrid dilated convolution (HDC) |
US10209089B2 (en) | 2017-04-03 | 2019-02-19 | Robert Bosch Gmbh | Automated image labeling for vehicles based on maps |
US20180373980A1 (en) | 2017-06-27 | 2018-12-27 | drive.ai Inc. | Method for training and refining an artificial intelligence |
JP2019023787A (en) * | 2017-07-24 | 2019-02-14 | 株式会社デンソー | Image recognition system |
US10565457B2 (en) | 2017-08-23 | 2020-02-18 | Tusimple, Inc. | Feature matching and correspondence refinement and 3D submap position refinement system and method for centimeter precision localization using camera-based submap and LiDAR-based global map |
US10223807B1 (en) | 2017-08-23 | 2019-03-05 | TuSimple | Feature extraction from 3D submap and global map system and method for centimeter precision localization using camera-based submap and lidar-based global map |
US10223806B1 (en) | 2017-08-23 | 2019-03-05 | TuSimple | System and method for centimeter precision localization using camera-based submap and LiDAR-based global map |
US10762673B2 (en) | 2017-08-23 | 2020-09-01 | Tusimple, Inc. | 3D submap reconstruction system and method for centimeter precision localization using camera-based submap and LiDAR-based global map |
US10410055B2 (en) | 2017-10-05 | 2019-09-10 | TuSimple | System and method for aerial video traffic analysis |
US10812589B2 (en) | 2017-10-28 | 2020-10-20 | Tusimple, Inc. | Storage architecture for heterogeneous multimedia data |
US10666730B2 (en) | 2017-10-28 | 2020-05-26 | Tusimple, Inc. | Storage architecture for heterogeneous multimedia data |
CN108010360A (en) | 2017-12-27 | 2018-05-08 | 中电海康集团有限公司 | A kind of automatic Pilot context aware systems based on bus or train route collaboration |
CN112004729B (en) | 2018-01-09 | 2023-12-01 | 图森有限公司 | Real-time remote control of vehicles with high redundancy |
WO2019140277A2 (en) | 2018-01-11 | 2019-07-18 | TuSimple | Monitoring system for autonomous vehicle operation |
CN108182817A (en) | 2018-01-11 | 2018-06-19 | 北京图森未来科技有限公司 | Automatic Pilot auxiliary system, trackside end auxiliary system and vehicle-mounted end auxiliary system |
US10685244B2 (en) | 2018-02-27 | 2020-06-16 | Tusimple, Inc. | System and method for online real-time multi-object tracking |
US10685239B2 (en) * | 2018-03-18 | 2020-06-16 | Tusimple, Inc. | System and method for lateral vehicle detection |
US11126873B2 (en) * | 2018-05-17 | 2021-09-21 | Zoox, Inc. | Vehicle lighting state determination |
-
2019
- 2019-06-14 US US16/442,182 patent/US11823460B2/en active Active
-
2020
- 2020-05-20 AU AU2020203284A patent/AU2020203284A1/en active Pending
- 2020-06-03 EP EP20178069.9A patent/EP3751455A3/en not_active Ceased
- 2020-06-12 CN CN202010536866.XA patent/CN112085047A/en active Pending
-
2023
- 2023-10-18 US US18/489,306 patent/US20240046654A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20200393845A1 (en) | 2020-12-17 |
EP3751455A3 (en) | 2021-03-10 |
CN112085047A (en) | 2020-12-15 |
EP3751455A2 (en) | 2020-12-16 |
US11823460B2 (en) | 2023-11-21 |
AU2020203284A1 (en) | 2021-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20240046654A1 (en) | Image fusion for autonomous vehicle operation | |
US10685246B2 (en) | Systems and methods for curb detection and pedestrian hazard assessment | |
US9568611B2 (en) | Detecting objects obstructing a driver's view of a road | |
JP2019096072A (en) | Object detection device, object detection method and program | |
US11577748B1 (en) | Real-time perception system for small objects at long range for autonomous vehicles | |
US20190135169A1 (en) | Vehicle communication system using projected light | |
CN114066929A (en) | Method of predicting a trajectory of a target vehicle relative to an autonomous vehicle | |
CN107273788A (en) | The imaging system and vehicle imaging systems of lane detection are performed in vehicle | |
JP4951481B2 (en) | Road marking recognition device | |
CN113838060A (en) | Perception system for autonomous vehicle | |
CN116691731A (en) | Fusion of imaging data and lidar data to improve target recognition | |
Li et al. | Pitch angle estimation using a Vehicle-Mounted monocular camera for range measurement | |
CN109070799B (en) | Moving body periphery display method and moving body periphery display device | |
US11681047B2 (en) | Ground surface imaging combining LiDAR and camera data | |
CN117836818A (en) | Information processing device, information processing system, model, and model generation method | |
CN113840079A (en) | Division of images acquired from unmanned vehicle cameras | |
CN113841154A (en) | Obstacle detection method and device | |
US12094144B1 (en) | Real-time confidence-based image hole-filling for depth maps | |
US20240190331A1 (en) | Methods and apparatuses for adaptive high beam control for a vehicle | |
US20220250652A1 (en) | Virtual lane methods and systems | |
US11461922B2 (en) | Depth estimation in images obtained from an autonomous vehicle camera | |
WO2024069689A1 (en) | Driving assistance method and driving assistance device | |
WO2023126097A1 (en) | Method and apparatus for generating ground truth for driving boundaries | |
WO2023126142A1 (en) | Method and apparatus for generating ground truth for other road participant |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TUSIMPLE, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, YIJIE;LIU, SIYUAN;GE, LINGTING;AND OTHERS;SIGNING DATES FROM 20190610 TO 20190614;REEL/FRAME:065267/0064 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |