US10867201B2 - Detecting sensor occlusion with compressed image data - Google Patents

Detecting sensor occlusion with compressed image data

Info

Publication number
US10867201B2
Authority
US
United States
Prior art keywords
file size
imaging sensor
file
compressed image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/248,096
Other versions
US20200226403A1 (en)
Inventor
Ruffin Evans
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Waymo LLC
Original Assignee
Waymo LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Waymo LLC
Priority to US16/248,096
Assigned to Waymo LLC (assignors: EVANS, RUFFIN)
Priority to PCT/US2020/013293
Priority to JP2021532044A
Priority to CN202080009199.8A
Priority to CA3126389A
Priority to IL284592A
Priority to EP20742015.9A
Priority to KR1020217021819A
Publication of US20200226403A1
Priority to US17/098,479
Publication of US10867201B2
Application granted
Legal status: Active
Adjusted expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • G06K9/209
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0007Image acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H04N23/811Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation by dust removal, e.g. from surfaces of the image sensor or processing of the image signal output by the electronic image sensor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N5/23229
    • H04N5/247
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • G01S2007/4975Means for monitoring or calibrating of sensor obstruction by, e.g. dirt- or ice-coating, e.g. by reflection measurement on front-screen
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • G01S2007/4975Means for monitoring or calibrating of sensor obstruction by, e.g. dirt- or ice-coating, e.g. by reflection measurement on front-screen
    • G01S2007/4977Means for monitoring or calibrating of sensor obstruction by, e.g. dirt- or ice-coating, e.g. by reflection measurement on front-screen including means to prevent or remove the obstruction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Definitions

  • Computing devices 110 may also include one or more wireless network connections 156 to facilitate communication with other computing devices, such as the client computing devices and server computing devices described in detail below.
  • the wireless network connections may include short range communication protocols such as Bluetooth, Bluetooth low energy (LE), cellular connections, as well as various configurations and protocols including the Internet, World Wide Web, intranets, virtual private networks, wide area networks, local networks, private networks using communication protocols proprietary to one or more companies, Ethernet, Wi-Fi and HTTP, and various combinations of the foregoing.
  • computing devices 110 may be an autonomous driving computing system incorporated into vehicle 100 .
  • The autonomous driving computing system may be capable of communicating with various components of the vehicle in order to maneuver vehicle 100 in a fully autonomous driving mode and/or semi-autonomous driving mode.
  • computing devices 110 may be in communication with various systems of vehicle 100 , such as deceleration system 160 , acceleration system 162 , steering system 164 , signaling system 166 , navigation system 168 , positioning system 170 , perception system 172 , and power system 174 (for instance, a gasoline or diesel powered motor or electric engine) in order to control the movement, speed, etc. of vehicle 100 in accordance with the instructions 132 of memory 130 .
  • Although these systems are shown as external to computing devices 110, in actuality these systems may also be incorporated into computing devices 110, again as an autonomous driving computing system for controlling vehicle 100.
  • computing devices 110 may interact with deceleration system 160 and acceleration system 162 in order to control the speed of the vehicle.
  • steering system 164 may be used by computing devices 110 in order to control the direction of vehicle 100 .
  • Where vehicle 100 is configured for use on a road, such as a car or truck, the steering system may include components to control the angle of the wheels to turn the vehicle.
  • Signaling system 166 may be used by computing devices 110 in order to signal the vehicle's intent to other drivers or vehicles, for example, by lighting turn signals or brake lights when needed.
  • Navigation system 168 may be used by computing devices 110 in order to determine and follow a route to a location.
  • the navigation system 168 and/or data 134 may store detailed map information, e.g., highly detailed maps identifying the shape and elevation of roadways, lane lines, intersections, crosswalks, speed limits, traffic signals, buildings, signs, real time traffic information, vegetation, or other such objects and information.
  • This detailed map information may define the geometry of the vehicle's expected environment, including roadways as well as speed restrictions (legal speed limits) for those roadways.
  • this map information may include information regarding traffic controls, such as traffic signal lights, stop signs, yield signs, etc., which, in conjunction with real time information received from the perception system 172 , can be used by the computing devices 110 to determine which directions of traffic have the right of way at a given location.
  • the perception system 172 also includes one or more components for detecting objects external to the vehicle such as other vehicles, obstacles in the roadway, traffic signals, signs, trees, etc.
  • the perception system 172 may include one or more imaging sensors including visible-light cameras, thermal imaging systems, laser and radio-frequency detection systems (e.g., LIDAR, RADAR, etc.), sonar devices, microphones, and/or any other detection devices that record data which may be processed by computing devices 110 .
  • the sensors of the perception system may detect objects and their characteristics such as location, orientation, size, shape, type, direction and speed of movement, etc.
  • the raw data from the sensors and/or the aforementioned characteristics can be quantified or arranged into a descriptive function or vector and sent for further processing to the computing devices 110 .
  • computing devices 110 may use the positioning system 170 to determine the vehicle's location and perception system 172 to detect and respond to objects when needed to reach the location safely.
  • FIG. 2 is an example external view of vehicle 100 including aspects of the perception system 172 .
  • roof-top housing 210 and dome housing 212 may include a LIDAR sensor or system as well as various cameras and radar units.
  • housing 220 located at the front end of vehicle 100 and housings 230 , 232 on the driver's and passenger's sides of the vehicle may each store a LIDAR sensor or system.
  • housing 230 is located in front of driver door 260 .
  • Vehicle 100 also includes housings 240 , 242 for radar units and/or cameras also located on the roof of vehicle 100 . Additional radar units and cameras (not shown) may be located at the front and rear ends of vehicle 100 and/or on other positions along the roof or roof-top housing 210 .
  • FIG. 3 is an example functional view of an imaging sensor 300 which may be any of the imaging sensors of the perception system 172 or any other imaging sensor.
  • the imaging sensor 300 may include a lens 310 configured to focus received radiation, such as electromagnetic radiation, towards an image sensor 320 .
  • the image sensor is attached to a controller 330 which can process information received from the image sensor 320 .
  • the controller 330 may include one or more processors, configured similarly to processors 120 , which control the operation of the image sensor 320 , for instance by setting the gain and exposure time.
  • The image sensor 320 may generate sensor data (i.e., image data) representative of the electromagnetic radiation received by the image sensor 320 over a period of time, typically based on the imaging sensor's exposure time.
  • the controller may be configured to send this sensor data, or rather, the image, to the computing devices, such as computing device 110 for further processing.
  • the controller 330 may also control an active illumination source 340 for transmitting electromagnetic radiation into the imaging sensor's external environment.
  • the transmitted electromagnetic radiation may reflect off of objects in the imaging sensor's external environment and be received by the image sensor 320 as received electromagnetic radiation.
  • the imaging sensor 300 may be a camera and the active illumination source 340 may be a flash.
  • the imaging sensor 300 may be a LIDAR sensor and the active illumination source 340 may be one or more lasers configured to generate a pulse or short burst of light.
  • the imaging sensor 300 may be RADAR and the active illumination source 340 may be one or more transducers configured to generate a pulse or short burst of radio waves.
  • the imaging sensor 300 may receive and/or transmit sound waves in lieu of, or in addition to electromagnetic radiation.
  • the active illumination source 340 and image sensor 320 may be replaced or supplemented with one or more transducers.
  • the imaging sensor 300 may be a sonar sensor configured to transmit and receive sound waves with one or more transducers.
  • In operation, as received electromagnetic radiation (or sound waves) hits the image sensor 320 (or transducer), an image is captured and sensor data, representative of the image captured, is generated, as further shown in FIG. 3.
  • The sensor data may be encoded by the image sensor 320 and/or controller 330 into an unprocessed and uncompressed image file. These uncompressed and unprocessed image files are typically referred to as raw image files.
  • Because raw image files are representative of all the sensor data generated by the image sensor 320 over the total exposure time of the imaging sensor, such as a camera, as it captures an image, the file size of raw image files may be large. For instance, the camera may generate raw image files having a size of around 1 megabyte per megapixel, or more or less.
  • A processor, such as a processor in the imaging sensor or processor 120 of computing device 110, may convert the format of the raw image files into compressed image files.
  • the processor may compress the raw image files into a lossy format or lossless format using a lossy compression or a lossless compression algorithm, respectively.
  • a lossy compression algorithm may reduce the size of the resulting image file, but at the expense of irreversibly removing data from the raw image file.
  • the raw image file may be converted into the lossy JPEG format using the JPEG compression algorithm, which irreversibly removes data to convert the raw image file into a compressed JPEG file.
  • a lossless compression algorithm may not reduce the size of the raw image file as much as a lossy compression algorithm, but all of the raw image data may be recovered by reversing the compression.
  • For example, the raw image file may be converted into the lossless TIFF format using the TIFF compression algorithm, which reversibly compresses the data to convert the raw image file into a compressed TIFF file.
  • more general file compression algorithms such as DEFLATE, can be used on images generated from a camera or other imaging sensors, such as LIDAR, radar, sonar, etc.
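To make the size comparison concrete, the following minimal sketch (an illustration, not the patent's code) uses the Pillow imaging library, assumed to be available, to compress the same frame into a lossy JPEG and a deflate-compressed (lossless) TIFF in memory and report the resulting byte counts. The file name frame.png and the quality setting are illustrative assumptions.

```python
import io

from PIL import Image

def compressed_size(img: Image.Image, **save_kwargs) -> int:
    """Compress an image in memory and return the resulting byte count."""
    buf = io.BytesIO()
    img.save(buf, **save_kwargs)
    return buf.getbuffer().nbytes

# Hypothetical captured frame; converted to RGB so JPEG encoding is valid.
frame = Image.open("frame.png").convert("RGB")

jpeg_size = compressed_size(frame, format="JPEG", quality=90)                  # lossy
tiff_size = compressed_size(frame, format="TIFF", compression="tiff_deflate")  # lossless

# The lossy file is typically the smaller of the two, at the cost of
# irreversibly discarding some of the raw data.
print(f"JPEG: {jpeg_size} bytes, TIFF (deflate): {tiff_size} bytes")
```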
  • the file size of a compressed image file may be representative of the amount of repeated and/or similar data within the original, raw image file.
  • The compression algorithms, either lossy or lossless, may generate a smaller file size when the data contained in the raw image file corresponds to an image having repetitive and/or similar features, such as areas having the same or similar colors and/or repeated spatial patterns, than when the data corresponds to an image having irregular and/or dissimilar features. This is because compression algorithms leverage repetition within data to achieve data reduction. As such, data which includes more repetition, such as raw image files corresponding to images having repetitive and/or similar features, may be more compactly compressed than raw image files corresponding to images having irregular and dissimilar features.
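The dependence of compressed size on repetition can be demonstrated with a short sketch (again an illustration, not the patent's code) that runs zlib's DEFLATE over two synthetic frames: one uniform, as an occluded sensor might produce, and one irregular.

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)

# Uniform dark frame: highly repetitive data, as from an occluded sensor.
occluded_frame = np.zeros((480, 640), dtype=np.uint8)

# Irregular frame: random noise stands in for a busy, unoccluded scene.
clear_frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)

occluded_size = len(zlib.compress(occluded_frame.tobytes()))
clear_size = len(zlib.compress(clear_frame.tobytes()))

# The repetitive frame compresses to a small fraction of the irregular one;
# this gap in file size is the signal used to flag possible occlusion.
print(f"occluded: {occluded_size} bytes, clear: {clear_size} bytes")
```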
  • For instance, data within a raw image file may correspond to an image 401, captured by the imaging sensor 300 (implemented in this example as a camera), having irregular and dissimilar features such as trees 403-406, a roadway 407, and hills 408, 409, as shown in FIG. 4A.
  • data within a raw image file may correspond to an image 411 captured by the camera having similar and repetitive features, such as a roadway 417 and a single, large hill 418 .
  • Compressing the raw image file corresponding to the image 401 shown in FIG. 4A may result in the generation of a compressed JPEG file having a file size of around, for example, a factor of two to five less than the raw image file, or more or less.
  • Compressing the raw image file corresponding to image 411 shown in FIG. 4B may result in the generation of a compressed JPEG file having a smaller file size, such as around, for example, a factor of four to fifteen less than the original raw image, or more or less.
  • An image which is captured by an occluded imaging sensor 300 may contain one or more areas having a continual dark/blank (i.e., repetitive and similar) feature.
  • For example, the camera may capture an image 501 having a continual dark/blank feature 511, such as shown in FIG. 5.
  • a compressed image file generated from the raw image file corresponding to the image 501 may be more compact than a compressed image file generated from raw image files corresponding to a similar image not having an occlusion, such as image 401 .
  • As shown in FIG. 6, compressed image files corresponding to images captured from a camera without occlusion may be around 27 MB, compressed image files corresponding to images captured when the camera was occluded may be around 9 MB, and compressed image files corresponding to images captured during the time period when the camera was becoming occluded may be some value between 27 MB and 9 MB.
  • the file sizes of the compressed image files graphed in FIG. 6 are based on images captured by a single camera.
  • the file sizes of compressed image files captured by other cameras and/or other imaging sensors may be more or less than those shown in FIG. 6 .
  • While FIG. 5 illustrates an occlusion 511 which completely, or nearly completely, blocks light from reaching the image sensor of the imaging sensor (i.e., the camera), occlusions may also include other obstructions which block, alter, and/or otherwise obscure light as it reaches and/or passes through an imaging sensor's lens and/or housing.
  • For instance, occlusions caused by water droplets 703-706 on the lens of an imaging sensor, such as lens 310 of imaging sensor 300, may allow nearly all light to pass through, but may introduce blur over part or all of the image 701 captured by the imaging sensor.
  • As such, the image captured by the imaging sensor 300, implemented as a camera in the example shown in FIG. 7, may have a loss of spatial information at the locations where the water droplets 703-706 occluded the camera.
  • the occluded portions of the image may look similar (i.e., have repetitive and similar features).
  • the file size of a compressed image file may be compared to a threshold value by one or more computing devices, such as computing device 110 .
  • the threshold value may represent a threshold file size.
  • Compressed image files which have a file size that meets the threshold value (i.e., is smaller than the threshold value) may be identified as possibly corresponding to an occluded imaging sensor.
  • the threshold value may be determined based on an average file size of compressed image files generated by one or more imaging sensors known to not have an occlusion.
  • the average file size of compressed image files may be determined in advance of detecting occlusions, such as at vehicle startup or during a calibration period.
  • the average file size of the compressed image files may be stored for later use, such as in memory 130 of computing device 110 .
  • the average file size of compressed image files captured by the one or more imaging sensors known to not have an occlusion may be determined by averaging the file size of the compressed image files.
  • The threshold value may be set as the average file size. In some instances, the threshold value may be some value below the average file size, such as within some percentage of the average file size, such as 50%, or more or less. Compressed image files generated by the imaging sensor which fall below the threshold value may be identified as possibly being captured by an occluded imaging sensor. In instances where the average file size is based on image files captured by more than one imaging sensor, the imaging sensors may preferably be the same, or rather, the same make and model. Alternatively, when different imaging sensors are used, the file size of raw image files generated from images captured by the different imaging sensors may be the same or nearly the same to allow for generally accurate threshold values to be determined.
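A minimal sketch of this strategy follows; the sizes and the 50% fraction are the example figures from the text above, not prescribed constants.

```python
from statistics import mean

def fleet_threshold(known_good_sizes: list[float], fraction: float = 0.5) -> float:
    """Threshold set some percentage below the average compressed file size
    observed on imaging sensors known to be unoccluded."""
    return fraction * mean(known_good_sizes)

# Hypothetical sizes (bytes) from three unoccluded cameras of the same model.
threshold = fleet_threshold([27e6, 26.5e6, 28e6])

def possibly_occluded(file_size: float) -> bool:
    return file_size < threshold  # falls below the threshold -> flag it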
  • the threshold value may be determined based on the size of compressed image files previously generated from images captured by the imaging sensor.
  • the file sizes of compressed image files generated from images captured by the imaging sensor may be tracked over a period of time or for a number of frames (e.g., a training period), such as for one minute or 100 frames, or more or less.
  • the smallest file size may be determined from the tracked compressed image files and set as the threshold value.
  • Each newly generated compressed image file generated from an image captured by the imaging sensor may be identified as possibly containing an occlusion if the file size of the newly generated compressed image falls below the smallest file size, or rather, the threshold value.
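A short sketch of the training-period variant, under the stated assumption of a 100-frame window and a list of compressed file sizes collected during that window:

```python
def training_threshold(training_sizes: list[int], num_frames: int = 100) -> int:
    """Smallest compressed file size seen during the training period
    (a predefined number of frames; a time window would work the same way)."""
    return min(training_sizes[:num_frames])

def flag(new_size: int, threshold: int) -> bool:
    # Later frames whose compressed size falls below the training-period
    # minimum are identified as possibly containing an occlusion.
    return new_size < threshold
```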
  • a running average of the file sizes of compressed image files generated from images captured by an imaging sensor may be used to determine the threshold value.
  • In this regard, the average file size of a set of compressed image files previously generated from images captured by the imaging sensor may be determined.
  • the set size of compressed image files may be 100 files, 1,000 files, 10,000 files, etc., or more or less.
  • the threshold value may be based on the average file size of the set, such as within some percentage of the average file size (e.g., within 50% or more or less).
  • For each image captured by the imaging sensor, such as imaging sensor 300, which satisfies the threshold value, the image may be added to the set and the oldest image may be removed from the set. Each image captured by the imaging sensor which fails to satisfy the threshold value may be identified as possibly containing an occlusion.
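A sketch of that running-average bookkeeping; the window size and 50% margin are the example values from the text, not requirements.

```python
from collections import deque

class RunningAverageDetector:
    """Track the last `window` accepted file sizes; flag frames whose
    compressed size falls more than `fraction` below the running average."""

    def __init__(self, window: int = 1000, fraction: float = 0.5):
        self.sizes = deque(maxlen=window)  # appending past maxlen drops the oldest
        self.fraction = fraction

    def check(self, file_size: int) -> bool:
        if self.sizes:
            average = sum(self.sizes) / len(self.sizes)
            if file_size < self.fraction * average:
                return True  # possible occlusion; keep it out of the set
        self.sizes.append(file_size)  # accepted frames update the average
        return False
```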
  • The threshold value may also be determined based on historical data, including previous images captured by imaging sensors at a similar time of day and/or location as a new image being captured by the imaging sensor.
  • a database may store file sizes or average file size for one or more compressed image files in association with locations and/or times of day at which they were captured.
  • The threshold value may be based on the stored average file size or a determination of the average file size of the compressed image files stored in the database. For instance, the file size of a compressed image file generated from an image captured by an imaging sensor, such as imaging sensor 300, at a first location at a particular time of night may be compared to a threshold value based on the stored average file size of compressed image files captured at a similar location and time.
  • In such instances, the imaging sensors may preferably be the same, or rather, the same make and model as the imaging sensor for which possible occlusion is being determined.
  • Alternatively, when different imaging sensors are used, the file size of raw image files generated from images captured by the different imaging sensors may be the same or nearly the same to allow for generally accurate threshold values to be determined for the imaging sensor for which possible occlusion is being determined.
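A toy stand-in for such a database; the keys, values, and lookup granularity here are invented for illustration, and a deployed system would back this with a real database as described above.

```python
# Average compressed file sizes (bytes) keyed by (location, hour of day).
historical_averages = {
    ("downtown_grid_17", 22): 9.5e6,   # dim night-time scene
    ("downtown_grid_17", 12): 24.0e6,  # same location at midday
}

def location_time_threshold(location: str, hour: int, fraction: float = 0.5):
    """Threshold derived from frames captured at a similar time and place."""
    baseline = historical_averages.get((location, hour))
    return None if baseline is None else fraction * baseline
```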
  • the threshold value may be adjusted to avoid generating false positive identifications of possible occlusions. In this regard, if after investigating a possible occlusion it is determined that the imaging sensor or imaging sensors which captured the images having possible occlusions were not occluded at the time the images were captured, the threshold value may be further adjusted to reduce the likelihood of false positives.
  • a statistical time-series analysis of the file size could be used to detect anomalous changes in the file size of the compressed images that could correspond to the presence of an occlusion.
  • Such an analysis could rely on examining the slope (derivative, or rather, rate of change) of the file size, whereby a rapid change in the slope could be indicative of occlusion regardless of the actual value of the signal.
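One minimal way to implement the slope check; the window length and cutoff are assumed tuning parameters, not values from the patent.

```python
def rapid_size_drop(sizes: list[float], window: int = 5,
                    max_slope: float = -1e6) -> bool:
    """Flag a sharp frame-to-frame drop in compressed file size
    (bytes/frame), independent of the absolute size values."""
    recent = sizes[-window:]
    slopes = [b - a for a, b in zip(recent, recent[1:])]  # discrete derivative
    return min(slopes, default=0.0) < max_slope
```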
  • a first-principles or phenomenological model for the file size of the compressed image as a function of time could be used to statistically fit the observed file-size data, where an occlusion is declared whenever a goodness-of-fit statistic passes some threshold value.
  • machine-learning methods including, but not limited to long short-term memory networks, random decision forests, gradient boosting regressor techniques, and time delay neural networks, may be used directly on the time-series data to detect the presence of an occlusion. Any of the above analyses could also be combined with data from one or more other sensors to aid in the detection of an occlusion.
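As one hedged illustration of the machine-learning option, the sketch below trains a random decision forest (via scikit-learn, one of several reasonable tools) on sliding windows of file sizes; the synthetic sizes and labels are placeholders for labels that would come from fleet logs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def make_windows(sizes, labels, width=10):
    """Turn a file-size time series into (window, label) training pairs."""
    X = np.array([sizes[i:i + width] for i in range(len(sizes) - width)])
    y = np.array([labels[i + width] for i in range(len(sizes) - width)])
    return X, y

# Synthetic stand-in data: compressed sizes drop when an occlusion begins.
sizes = [27e6] * 50 + [9e6] * 50
labels = [0] * 50 + [1] * 50  # 1 marks frames captured while occluded

X, y = make_windows(sizes, labels)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

latest_window = np.array(sizes[-10:]).reshape(1, -1)
print(clf.predict(latest_window))  # -> [1], i.e. possible occlusion
```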
  • When an imaging sensor is determined to be possibly occluded, remedial actions may be taken.
  • the computing devices 110 may cause the imaging sensor to be cleaned, taken offline, flagged for maintenance, etc.
  • a signal may be sent to activate a cleaning system in order to clean the imaging sensor.
  • a message may be sent to a human operator indicating that the imaging sensor is occluded.
  • The determination may be used by a control system of the vehicle to control the vehicle in the autonomous driving mode, for instance, by driving slower and/or discarding information captured by the imaging sensor until the imaging sensor is cleaned or no longer occluded.
  • the images captured by the potentially occluded imaging sensor may be compared with images captured by another imaging sensor to determine whether the images captured by the two imaging sensors are the same and/or nearly the same. In the event they are the same or nearly the same, the threshold value for detecting possible occlusion may be reduced and/or no further remedial actions may be taken.
  • FIG. 8 is an example flow diagram 800 for determining whether an imaging sensor is occluded in accordance with some of the aspects described herein.
  • the example flow diagram refers to a system including an imaging sensor, such as imaging sensor 300 and one or more computing devices having one or more processors, such as one or more processors 120 of one or more computing devices 110 .
  • At block 810, first image data is captured using the image sensor of the imaging sensor.
  • At block 820, the first image data is encoded into an uncompressed image file.
  • At block 830, a compressed image file is generated based on the uncompressed image file.
  • At block 840, a file size of the compressed image file is determined.
  • At block 850, based on the file size of the compressed image file, a determination is made that the imaging sensor is possibly occluded.
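The flow of diagram 800 can be summarized in a short sketch; the in-memory JPEG encoding via Pillow and the fixed byte threshold are illustrative choices, not the patent's required implementation.

```python
import io

from PIL import Image

def possibly_occluded(sensor_image: Image.Image, threshold_bytes: int) -> bool:
    """Blocks 820-850 of flow diagram 800; block 810 (capture) is assumed
    to have already produced `sensor_image`."""
    # Blocks 820-830: encode the image data and generate a compressed file.
    buf = io.BytesIO()
    sensor_image.save(buf, format="JPEG")
    # Block 840: determine the file size of the compressed image file.
    file_size = buf.getbuffer().nbytes
    # Block 850: a small file implies large uniform regions, so the
    # imaging sensor is flagged as possibly occluded.
    return file_size < threshold_bytes

# A fully dark frame stands in for an occluded capture.
dark_frame = Image.new("RGB", (640, 480))
print(possibly_occluded(dark_frame, threshold_bytes=50_000))  # -> True
```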

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The technology relates to detecting possible imaging sensor occlusion. In one example, a system including an imaging sensor and one or more processors may be configured to capture first image data using the imaging sensor. The one or more processors may encode the first image data into an uncompressed image file and generate a compressed image file based on the uncompressed image file. The file size of the compressed image file may be determined and, based on the file size of the compressed image file, the system may determine that the imaging sensor is possibly occluded.

Description

BACKGROUND
Autonomous vehicles, such as vehicles that do not require a human driver, can be used to aid in the transport of passengers or items from one location to another. Such vehicles may operate in a fully autonomous driving mode where passengers may provide some initial input, such as a destination, and the vehicle maneuvers itself to that destination. Thus, such vehicles may be largely dependent on systems that are capable of determining the location of the autonomous vehicle at any given time, as well as detecting and identifying objects external to the vehicle, such as other vehicles, stop lights, pedestrians, etc.
While sensors for autonomous vehicles may come in many different configurations, as an example, these sensors may include imaging sensors such as LIDAR sensors, radar units, sonar systems, cameras, etc. In the camera example, in addition to configuration, the cameras have various features such as gain, exposure time, etc. which must be set to particular values in order to obtain useful images in different lighting conditions. However, in some instances, it may be possible for a camera to be unable to capture a useful image because the camera's lens is completely or partially occluded.
BRIEF SUMMARY
One aspect of the disclosure provides a method for detecting possible imaging sensor occlusion, the method includes: capturing first image data using the imaging sensor; encoding, by one or more processors, the first image data into an uncompressed image file; generating, by the one or more processors, a compressed image file based on the uncompressed image file; determining, by the one or more processors, a file size of the compressed image file; and determining, by the one or more processors, based on the file size of the compressed image file, that the imaging sensor is possibly occluded.
In one example, the file size of the compressed image file is compared to a threshold file size, and determining that the imaging sensor is occluded further includes determining that the file size of the compressed image file meets the threshold file size. In some instances, the method includes determining the threshold file size based on an average file size of compressed image files generated by one or more imaging sensors known to not have an occlusion. In some instances, the method includes determining the threshold file size based on the smallest file size of a compressed image captured during a training period by the imaging sensor, wherein the training period corresponds to a predefined number of frames or a predefined time period. In some instances, the method includes determining the threshold file size based on a running average file size of a set of compressed image files generated by the imaging sensor. In some examples, the threshold file size is within a predefined range of the running average file size. In some instances, the method includes determining the threshold file size based on compressed image files corresponding to image data captured at a similar time and/or location as the first image data. In some instances, the method includes, subsequent to determining that the imaging sensor is occluded, adjusting the threshold file size in response to receiving input that the imaging sensor is not occluded.
In some examples, the method further includes generating one or more additional compressed image files based on additional image data captured subsequent to the first image data and determining a file size of the one or more additional compressed image files, wherein determining that the imaging sensor is occluded is further based on the file size of the one or more additional compressed image files. In some instances, the method further includes determining a rate of change between the file size of the compressed image file and the file size of the one or more additional compressed image files, wherein determining that the imaging sensor is occluded further includes determining that the rate of change is below a threshold value.
In some examples, the method includes sending a signal to activate a cleaning system in order to clean the imaging sensor based on the determination that the imaging sensor is occluded.
In some examples, the imaging sensor is attached to a vehicle having an autonomous driving mode, and the method further comprises using the determination to control the vehicle in the autonomous driving mode.
In some examples, the imaging sensor is a LIDAR sensor, radar unit, or camera.
Another aspect of the technology is directed to a system for detecting possible imaging sensor occlusion. The system may include an imaging sensor and one or more processors, wherein the one or more processors are configured to: capture first image data using the imaging sensor, encode the first image data into an uncompressed image file, generate a compressed image file based on the uncompressed image file, determine a file size of the compressed image file, and determine based on the file size of the compressed image file, that the imaging sensor is possibly occluded.
In some examples, the one or more processors are further configured to compare the file size of the compressed image file to a threshold file size, wherein determining that the imaging sensor is occluded further includes determining that the file size of the compressed image file meets the threshold file size. In some instances, the one or more processors are further configured to determine the threshold file size based on an average file size of compressed image files generated by one or more imaging sensors known to not have an occlusion.
In some examples, the one or more processors are further configured to determine the threshold file size based on the smallest file size of a compressed image captured during a training period by the imaging sensor, wherein the training period corresponds to a predefined number of frames or a predefined time period. In some instances, the one or more processors are further configured to determine the threshold file size based on a running average file size of a set of compressed image files generated by the imaging sensor. In some instances, the one or more processors are further configured to determine the threshold file size based on compressed image files corresponding to image data captured at a similar time and/or location as the first image data.
In some examples, the imaging sensor is a LIDAR sensor, RADAR unit, or camera.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a functional diagram of an example vehicle in accordance with aspects of the disclosure.
FIG. 2 is an example representative view of a vehicle in accordance with aspects of the disclosure.
FIG. 3 is an example functional diagram of an imaging sensor in accordance with aspects of the disclosure.
FIGS. 4A and 4B are example images captured by a camera in accordance with aspects of the disclosure.
FIG. 5 is an example image captured by a camera in accordance with aspects of the disclosure.
FIG. 6 is an example graphical representation of data in accordance with aspects of the disclosure.
FIG. 7 is an example image captured by a camera in accordance with aspects of the disclosure.
FIG. 8 is an example flow diagram in accordance with aspects of the disclosure.
DETAILED DESCRIPTION
Overview
This technology relates to tracking the data size of compressed images captured by an imaging sensor, such as LIDAR sensors, radar units, sonar systems, cameras, etc., to detect sensor occlusion. Understanding when an imaging sensor, such as a camera, is occluded may be especially useful in situations where critical decisions are made in real time using such images, such as in the case of autonomous vehicles which use imaging sensors to make driving decisions. However, it can be very difficult to determine occlusion of an imaging sensor based on image data alone. For instance, images captured by one camera may be compared with images captured by another camera to determine whether the images captured by the two cameras are the same and/or nearly the same. In the event the images are different, the difference may be the result of one of the cameras being occluded by an object, such as dirt, mud, dust, rain, detritus (e.g., a plastic bag, napkin, etc.), a leaf, a scratch on a lens and/or housing, etc. However, such comparisons generally require at least one redundant camera capable of capturing a similar image to another camera. In addition, the redundant system may require significant processing resources and time to process the images and determine whether they are the same.
For instance, an imaging sensor may include a lens which focuses light towards an image sensor which is attached to a controller which can process information from the image sensor. The image sensor and/or a processor may generate image data which may be encoded into an uncompressed image file. The uncompressed image file may be converted into a compressed image file by compressing the uncompressed file with a compression algorithm.
The compressed image file may be indicative of whether the imaging sensor which captured the image data corresponding to the compressed image file was occluded. In this regard, compressed image files which have a file size that meets (i.e., is larger than, smaller than, and/or equals) a threshold value may be identified as possibly containing an occlusion. The threshold value may be a threshold file size. Alternatively, or in addition to determining whether the compressed image file meets a threshold value, other techniques, such as statistical time-series analysis and/or machine-learning methods, may be used to detect anomalous changes in the file size of the compressed images that could correspond to the presence of an occlusion of the imaging sensor which captured the images.
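For the statistical route, one simple formulation (an assumed example, not a method specified by the patent) is a rolling z-score over recent compressed file sizes:

```python
from statistics import mean, stdev

def anomalous_size(history: list[float], new_size: float,
                   z_cut: float = -3.0) -> bool:
    """Flag a compressed file size that sits far below the recent norm.
    `history` needs at least two samples for a standard deviation."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (new_size - mu) / sigma < z_cut
```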
The features described herein may allow for the detection of whether an imaging sensor is observing something dark or is simply occluded using a simple yet effective analysis, thereby improving the operation of the imaging sensor. This determination may be made in real time, for instance, by processors of the imaging sensor or remote processing devices. Thus, the functionality of the imaging sensor can be self-assessed or automatically assessed and used to determine whether to clean the imaging sensor and/or notify a human operator. This can be especially useful in certain situations where the occlusion cannot be easily identified from a visual inspection by a human operator, for instance, because the imaging sensor is not easily accessible or no human operator is available. Similarly, as noted above, this technology is also useful in situations where critical decisions are made in real time using such images, such as in the case of autonomous vehicles which use imaging sensor images to make driving decisions. In addition, since many visual processing systems already compress raw image files for storage and processing, the bulk of the processing required to perform the occlusion analysis described herein may already occur. As such, occlusion detection does not require much additional processing beyond what is already performed by these systems. In addition, the techniques discussed herein do not require a particular model or theory for the type of occlusion. As such, these techniques are applicable to many types of occlusions, including those not previously detectable by imaging sensor systems.
Example Systems
As shown in FIG. 1, a vehicle 100 in accordance with one aspect of the disclosure includes various components. While certain aspects of the disclosure are particularly useful in connection with specific types of vehicles, the vehicle may be any type of vehicle including, but not limited to, cars, trucks, motorcycles, buses, recreational vehicles, etc. The vehicle may have one or more computing devices, such as computing devices 110 containing one or more processors 120, memory 130 and other components typically present in general purpose computing devices.
The memory 130 stores information accessible by the one or more processors 120, including instructions 132 and data 134 that may be executed or otherwise used by the processor 120. The memory 130 may be of any type capable of storing information accessible by the processor, including a computing device-readable medium, or other medium that stores data that may be read with the aid of an electronic device, such as a hard-drive, memory card, ROM, RAM, DVD or other optical disks, as well as other write-capable and read-only memories. Systems and methods may include different combinations of the foregoing, whereby different portions of the instructions and data are stored on different types of media.
The instructions 132 may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the processor. For example, the instructions may be stored as computing device code on the computing device-readable medium. In that regard, the terms “instructions” and “programs” may be used interchangeably herein. The instructions may be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods and routines of the instructions are explained in more detail below.
The data 134 may be retrieved, stored or modified by processor 120 in accordance with the instructions 132. For instance, although the claimed subject matter is not limited by any particular data structure, the data may be stored in computing device registers, in a relational database as a table having a plurality of different fields and records, XML documents or flat files. The data may also be formatted in any computing device-readable format.
The one or more processors 120 may be any conventional processors, such as commercially available CPUs. Alternatively, the one or more processors may be a dedicated device such as an ASIC or other hardware-based processor. Although FIG. 1 functionally illustrates the processor, memory, and other elements of computing devices 110 as being within the same block, it will be understood by those of ordinary skill in the art that the processor, computing device, or memory may actually include multiple processors, computing devices, or memories that may or may not be stored within the same physical housing. For example, memory may be a hard drive or other storage media located in a housing different from that of computing devices 110. Accordingly, references to a processor or computing device will be understood to include references to a collection of processors or computing devices or memories that may or may not operate in parallel.
Computing devices 110 may include all of the components normally used in connection with a computing device such as the processor and memory described above as well as a user input 150 (e.g., a mouse, keyboard, touch screen and/or microphone) and various electronic displays (e.g., a monitor having a screen or any other electrical device that is operable to display information). In this example, the vehicle includes an internal electronic display 152 as well as one or more speakers 154 to provide information or audiovisual experiences. In this regard, internal electronic display 152 may be located within a cabin of vehicle 100 and may be used by computing devices 110 to provide information to passengers within the vehicle 100.
Computing devices 110 may also include one or more wireless network connections 156 to facilitate communication with other computing devices, such as the client computing devices and server computing devices described in detail below. The wireless network connections may include short range communication protocols such as Bluetooth, Bluetooth low energy (LE), cellular connections, as well as various configurations and protocols including the Internet, World Wide Web, intranets, virtual private networks, wide area networks, local networks, private networks using communication protocols proprietary to one or more companies, Ethernet, Wi-Fi and HTTP, and various combinations of the foregoing.
In one example, computing devices 110 may be an autonomous driving computing system incorporated into vehicle 100. The autonomous driving computing system may be capable of communicating with various components of the vehicle in order to maneuver vehicle 100 in a fully autonomous driving mode and/or semi-autonomous driving mode. For example, returning to FIG. 1, computing devices 110 may be in communication with various systems of vehicle 100, such as deceleration system 160, acceleration system 162, steering system 164, signaling system 166, navigation system 168, positioning system 170, perception system 172, and power system 174 (for instance, a gasoline or diesel powered engine or an electric motor) in order to control the movement, speed, etc. of vehicle 100 in accordance with the instructions 132 of memory 130. Again, although these systems are shown as external to computing devices 110, in actuality, these systems may also be incorporated into computing devices 110, again as an autonomous driving computing system for controlling vehicle 100.
As an example, computing devices 110 may interact with deceleration system 160 and acceleration system 162 in order to control the speed of the vehicle. Similarly, steering system 164 may be used by computing devices 110 in order to control the direction of vehicle 100. For example, if vehicle 100 is configured for use on a road, such as a car or truck, the steering system may include components to control the angle of wheels to turn the vehicle. Signaling system 166 may be used by computing devices 110 in order to signal the vehicle's intent to other drivers or vehicles, for example, by lighting turn signals or brake lights when needed.
Navigation system 168 may be used by computing devices 110 in order to determine and follow a route to a location. In this regard, the navigation system 168 and/or data 134 may store detailed map information, e.g., highly detailed maps identifying the shape and elevation of roadways, lane lines, intersections, crosswalks, speed limits, traffic signals, buildings, signs, real time traffic information, vegetation, or other such objects and information. In other words, this detailed map information may define the geometry of vehicle's expected environment including roadways as well as speed restrictions (legal speed limits) for those roadways. In addition, this map information may include information regarding traffic controls, such as traffic signal lights, stop signs, yield signs, etc., which, in conjunction with real time information received from the perception system 172, can be used by the computing devices 110 to determine which directions of traffic have the right of way at a given location.
The perception system 172 also includes one or more components for detecting objects external to the vehicle such as other vehicles, obstacles in the roadway, traffic signals, signs, trees, etc. For example, the perception system 172 may include one or more imaging sensors including visible-light cameras, thermal imaging systems, laser and radio-frequency detection systems (e.g., LIDAR, RADAR, etc.), sonar devices, microphones, and/or any other detection devices that record data which may be processed by computing devices 110. The sensors of the perception system may detect objects and their characteristics such as location, orientation, size, shape, type, direction and speed of movement, etc. The raw data from the sensors and/or the aforementioned characteristics can be quantified or arranged into a descriptive function or vector and sent for further processing to the computing devices 110. As an example, computing devices 110 may use the positioning system 170 to determine the vehicle's location and perception system 172 to detect and respond to objects when needed to reach the location safely.
FIG. 2 is an example external view of vehicle 100 including aspects of the perception system 172. For instance, roof-top housing 210 and dome housing 212 may include a LIDAR sensor or system as well as various cameras and radar units. In addition, housing 220 located at the front end of vehicle 100 and housings 230, 232 on the driver's and passenger's sides of the vehicle may each store a LIDAR sensor or system. For example, housing 230 is located in front of driver door 260. Vehicle 100 also includes housings 240, 242 for radar units and/or cameras also located on the roof of vehicle 100. Additional radar units and cameras (not shown) may be located at the front and rear ends of vehicle 100 and/or on other positions along the roof or roof-top housing 210.
FIG. 3 is an example functional view of an imaging sensor 300 which may be any of the imaging sensors of the perception system 172 or any other imaging sensor. The imaging sensor 300 may include a lens 310 configured to focus received radiation, such as electromagnetic radiation, towards an image sensor 320. The image sensor is attached to a controller 330 which can process information received from the image sensor 320. The controller 330 may include one or more processors, configured similarly to processors 120, which control the operation of the image sensor 320, for instance by setting the gain and exposure time. As the received electromagnetic radiation hits pixels on the image sensor 320, the image sensor 320 may generate sensor data (i.e., image data) representative of the received electromagnetic radiation received by the image sensor 320 over a period of time, typically based on the imaging sensor's exposure time. The controller may be configured to send this sensor data, or rather, the image, to the computing devices, such as computing device 110 for further processing.
The controller 330, or other such processors, may also control an active illumination source 340 for transmitting electromagnetic radiation into the imaging sensor's external environment. The transmitted electromagnetic radiation may reflect off of objects in the imaging sensor's external environment and be received by the image sensor 320 as received electromagnetic radiation. For instance, the imaging sensor 300 may be a camera and the active illumination source 340 may be a flash. In another example, the imaging sensor 300 may be a LIDAR sensor and the active illumination source 340 may be one or more lasers configured to generate a pulse or short burst of light. In yet another example, the imaging sensor 300 may be RADAR and the active illumination source 340 may be one or more transducers configured to generate a pulse or short burst of radio waves.
In some instances, the imaging sensor 300 may receive and/or transmit sound waves in lieu of, or in addition to, electromagnetic radiation. In this regard, the active illumination source 340 and image sensor 320 may be replaced or supplemented with one or more transducers. For instance, the imaging sensor 300 may be a sonar sensor configured to transmit and receive sound waves with one or more transducers.
Example Methods
In addition to the operations described above and illustrated in the figures, various operations will now be described. It should be understood that the following operations do not have to be performed in the precise order described below. Rather, various steps can be handled in a different order or simultaneously, and steps may also be added or omitted.
In operation, as received electromagnetic radiation (or sound waves) hit the image sensor 320 (or transducer), an image is captured and sensor data, representative of the image captured, is generated, as further shown in FIG. 3. The sensor data may be encoded by the image sensor 320 and/or controller 330, into an unprocessed and uncompressed image file. These uncompressed and unprocessed image files are typically referred to as raw image files. As raw image files are representative of all the sensor data generated by the image sensor 320 over the total exposure time of the imaging sensor, such as a camera, as it captures an image, the file size of raw image files may be large. For instance, the camera may generate raw image files having a size of around 1 megabyte per megapixel, or more or less.
To reduce the size of raw image files for storage and/or processing, a processor, such as a processor in the imaging sensor or processor 120 of computing device 110, may convert the format of the raw image files into compressed image files. In this regard, the processor may compress the raw image files into a lossy format or lossless format using a lossy compression or a lossless compression algorithm, respectively. A lossy compression algorithm may reduce the size of the resulting image file, but at the expense of irreversibly removing data from the raw image file. For instance, the raw image file may be converted into the lossy JPEG format using the JPEG compression algorithm, which irreversibly removes data to convert the raw image file into a compressed JPEG file. In contrast, a lossless compression algorithm may not reduce the size of the raw image file as much as a lossy compression algorithm, but all of the raw image data may be recovered by reversing the compression. For instance, the raw image file may be converted into the lossless TIFF format using lossless TIFF compression, which reversibly encodes the data to convert the raw image file into a compressed TIFF file. In some instances, more general file compression algorithms, such as DEFLATE, can be used on images generated from a camera or other imaging sensors, such as LIDAR, radar, sonar, etc.
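By way of a hedged illustration only (the following code is not part of this disclosure), this compression step could be sketched in Python using the DEFLATE implementation in the standard zlib module; the file paths and function name are hypothetical placeholders:

import zlib

def compress_raw_image(raw_path, out_path, level=6):
    # Read the raw (uncompressed) image file as bytes.
    with open(raw_path, "rb") as f:
        raw = f.read()
    # DEFLATE-compress the payload; higher levels trade speed for compactness.
    compressed = zlib.compress(raw, level)
    with open(out_path, "wb") as f:
        f.write(compressed)
    # The compressed file size is the quantity later used for occlusion detection.
    return len(compressed)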
The file size of a compressed image file may be representative of the amount of repeated and/or similar data within the original, raw image file. In this regard, the compression algorithms, either lossy or lossless, may be able to generate a smaller file size in instances where the data contained in the raw image file corresponds to an image having repetitive and/or similar features, such as areas having the same or similar colors and/or repeated spatial patterns, than in instances where the data of the raw image file corresponds to an image having irregular and/or dissimilar features. This is because compression algorithms leverage repetition within data to achieve data reduction. As such, data which includes more repetition, such as raw image files corresponding to images having repetitive and/or similar features, may be more compactly compressed than raw image files corresponding to images having irregular and dissimilar features.
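This relationship between repetition and compressed file size can be reproduced with a small, self-contained experiment (illustrative only, Python standard library): a megabyte of constant bytes, standing in for a dark or blank frame, compresses to roughly a kilobyte, while a megabyte of random bytes, standing in for a scene with irregular features, barely compresses at all.

import os
import zlib

repetitive = bytes(1_000_000)      # constant data, akin to a dark/blank frame
irregular = os.urandom(1_000_000)  # random data, akin to an irregular, detailed scene

print(len(zlib.compress(repetitive)))  # on the order of a kilobyte
print(len(zlib.compress(irregular)))   # close to (or slightly above) one megabyte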
For instance, and as shown in FIG. 4A, data within a raw image file may correspond to an image 401, captured by the imaging sensor 300 implemented in this example as a camera, having irregular and dissimilar features such as trees 403-406, a roadway 407, and hills 408, 409. In contrast, and as shown in FIG. 4B, data within a raw image file may correspond to an image 411 captured by the camera having similar and repetitive features, such as a roadway 417 and a single, large hill 418. Compressing the raw image file corresponding to the image 401 shown in FIG. 4A may result in the generation of a compressed JPEG file having a file size around, for example, a factor of two to five smaller than the raw image file, or more or less. Compressing the raw image file corresponding to image 411 shown in FIG. 4B may result in the generation of a compressed JPEG file having a smaller file size, such as around, for example, a factor of four to fifteen smaller than the original raw image file, or more or less.
An image which is captured by an occluded imaging sensor 300 may contain one or more areas having a continual dark/blank (i.e., repetitive and similar) feature. For instance, the camera may capture an image 501 having continual dark/blank feature 511, such as shown in FIG. 5. As a result, a compressed image file generated from the raw image file corresponding to the image 501 may be more compact than a compressed image file generated from raw image files corresponding to a similar image not having an occlusion, such as image 401. For instance, and as shown in the graph of FIG. 6, compressed image files corresponding to images captured from a camera without occlusion may be around 27 MB, compressed image files corresponding to images captured when the camera was occluded may be around 9 MB, and compressed image files corresponding to images captured during the time period when the camera was becoming occluded may be some value between 27 MB and 9 MB. As described herein, the file sizes of the compressed image files graphed in FIG. 6 are based on images captured by a single camera. The file sizes of compressed image files captured by other cameras and/or other imaging sensors may be more or less than those shown in FIG. 6.
Although FIG. 5 illustrates an occlusion 511 which completely, or nearly completely, blocks light from reaching the image sensor of the imaging sensor (i.e., the camera), occlusions may also include other obstructions which block, alter, and/or otherwise obscure light as it reaches and/or passes through an imaging sensor's lens and/or housing. For example, and as illustrated in FIG. 7, occlusions caused by water droplets 703-706 on the lens of an imaging sensor, such as lens 310 of imaging sensor 300, may allow nearly all light to pass through, but may introduce blur over part, or all, of the image 701 captured by the imaging sensor. As such, the image captured by the imaging sensor 300, implemented as a camera in the example shown in FIG. 7, may have a loss of spatial information at the locations where the water droplets 703-706 occluded the camera. As a result, the occluded portions of the image may look similar (i.e., have repetitive and similar features).
In order to determine whether an imaging sensor is possibly occluded, the file size of a compressed image file may be compared to a threshold value by one or more computing devices, such as computing device 110. The threshold value may represent a threshold file size. In this regard, compressed image files which have a file size that meets the threshold value (i.e., is smaller than the threshold value) may be identified as possibly being captured by an occluded imaging sensor.
The threshold value may be determined based on an average file size of compressed image files generated by one or more imaging sensors known to not have an occlusion. In this regard, the average file size of compressed image files may be determined in advance of detecting occlusions, such as at vehicle startup or during a calibration period. The average file size of the compressed image files may be stored for later use, such as in memory 130 of computing device 110.
The average file size of compressed image files captured by the one or more imaging sensors known to not have an occlusion, such as one or more of the imaging sensors on the vehicle or one or more imaging sensors on a plurality of vehicles, may be determined by averaging the file sizes of the compressed image files. The threshold value may be set as the average file size. In some instances, the threshold value may be some value below the average file size, for instance within some percentage of the average file size, such as 50%, or more or less. Compressed image files generated by the imaging sensor which fall below the threshold value may be identified as possibly being captured by an occluded imaging sensor. In instances where the average file size is based on image files captured by more than one imaging sensor, the imaging sensors may preferably be the same, or rather, the same make and model. Alternatively, when different imaging sensors are used, the file size of raw image files generated from images captured by the different imaging sensors may be the same or nearly the same to allow for generally accurate threshold values to be determined.
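A minimal sketch of this calibration approach, assuming the compressed file sizes from known-unoccluded sensors have been gathered into a list and using the 50% margin mentioned above (the function names and margin are illustrative, not prescribed by this disclosure):

def threshold_from_calibration(unoccluded_sizes, margin=0.5):
    # Average file size over frames known to be occlusion-free.
    average = sum(unoccluded_sizes) / len(unoccluded_sizes)
    # Set the threshold some percentage below the average, e.g., 50%.
    return average * (1.0 - margin)

def possibly_occluded(size, threshold):
    # Frames whose compressed size falls below the threshold are flagged.
    return size < threshold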
In some instances, the threshold value may be determined based on the size of compressed image files previously generated from images captured by the imaging sensor. In this regard, the file sizes of compressed image files generated from images captured by the imaging sensor may be tracked over a period of time or for a number of frames (e.g., a training period), such as for one minute or 100 frames, or more or less. The smallest file size may be determined from the tracked compressed image files and set as the threshold value. Each newly generated compressed image file generated from an image captured by the imaging sensor may be identified as possibly containing an occlusion if the file size of the newly generated compressed image falls below the smallest file size, or rather, the threshold value.
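A sketch of this training-period variant, assuming the compressed file sizes observed during the training period (e.g., one minute or 100 frames) are available as a list; the function name is illustrative:

def threshold_from_training(training_sizes):
    # The smallest compressed size seen during the training period
    # becomes the threshold; later frames smaller than this minimum
    # are identified as possibly containing an occlusion.
    return min(training_sizes)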
Alternatively, a running average of the file sizes of compressed image files generated from images captured by an imaging sensor, such as imaging sensor 300, may be used to determine the threshold value. In this regard, the average file size of a set of compressed image files previously generated from images captured by the imaging sensor may be determined. The set of compressed image files may include 100 files, 1,000 files, 10,000 files, etc., or more or less. The threshold value may be based on the average file size of the set, such as within some percentage of the average file size (e.g., within 50%, or more or less).
For each image captured by the imaging sensor, such as imaging sensor 300, which satisfies the threshold value, the image may be added to the set and the oldest image may be removed from the set. Each image captured by the imaging sensor which fails to satisfy the threshold value may be identified as possibly containing an occlusion.
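The running-average variant could be sketched as follows, maintaining a fixed-size set of recent file sizes in which frames that satisfy the threshold join the set (evicting the oldest) while frames that fail it are flagged; the set size and margin below are illustrative only:

from collections import deque

class RunningAverageDetector:
    def __init__(self, set_size=1000, margin=0.5):
        self.sizes = deque(maxlen=set_size)  # appending beyond maxlen drops the oldest entry
        self.margin = margin

    def check(self, size):
        if not self.sizes:
            self.sizes.append(size)  # seed the set with the first frame
            return False
        average = sum(self.sizes) / len(self.sizes)
        threshold = average * (1.0 - self.margin)
        if size < threshold:
            return True              # possible occlusion; frame does not join the set
        self.sizes.append(size)      # frame satisfied the threshold and joins the set
        return False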
The threshold value may also be determined based on historical data including previous images captured by imaging sensors at a similar time of day and/or location as a new image being captured by the imaging sensor. In this regard, a database may store file sizes, or an average file size, for one or more compressed image files in association with the locations and/or times of day at which they were captured. The threshold value may be based on the stored average file size or on a determination of the average file size of the compressed image files stored in the database. For instance, the file size of a compressed image file generated from an image captured by an imaging sensor, such as imaging sensor 300, at a first location at a particular time of night may be compared to a threshold value based on the stored average file size or on a determination of the average file size of the compressed image files stored in the database. In such instances, the imaging sensors that contributed the historical data may preferably be the same, or rather, the same make and model, as the imaging sensor for which possible occlusion is being determined. Alternatively, when different imaging sensors are used, the file size of raw image files generated from images captured by the different imaging sensors may be the same or nearly the same to allow for generally accurate threshold values to be determined for the imaging sensor for which possible occlusion is being determined.
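A sketch of such a historical lookup, with the database abstracted as a table keyed by map region and hour of day; the keys and file sizes below are hypothetical, loosely patterned on the FIG. 6 example values:

# Hypothetical historical table: average compressed size by (region, hour of day).
historical_averages = {
    ("downtown_grid_A7", 22): 9_500_000,     # a dark night-time scene compresses well
    ("highway_segment_12", 14): 27_000_000,  # a detailed daytime scene compresses less
}

def threshold_for_context(region, hour, margin=0.5):
    average = historical_averages.get((region, hour))
    if average is None:
        return None  # no history for this context; fall back to another method
    return average * (1.0 - margin)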
In some instances, rather than being a constant, fixed value, the threshold value may be adjusted to avoid generating false positive identifications of possible occlusions. In this regard, if after investigating a possible occlusion it is determined that the imaging sensor or imaging sensors which captured the images having possible occlusions were not occluded at the time the images were captured, the threshold value may be further adjusted to reduce the likelihood of false positives.
In addition to comparing the file size of the compressed image to a suitably chosen threshold value as discussed above, other techniques to determine if the image contains an occlusion based on the compressed file size are also possible. For example, a statistical time-series analysis of the file size could be used to detect anomalous changes in the file size of the compressed images that could correspond to the presence of an occlusion. Such an analysis could rely on examining the slope (derivative, or rather, rate of change) of the file size, whereby a rapid change in the slope could be indicative of occlusion regardless of the actual value of the signal.
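A sketch of such a slope-based check, assuming compressed file sizes are sampled at a fixed frame interval; the frame interval and slope limit are illustrative placeholders that would need tuning per sensor:

def slope_flags(sizes, frame_dt=0.1, slope_limit=-5_000_000):
    # Discrete derivative of compressed file size with respect to time
    # (bytes per second). A large negative slope, i.e., a rapid drop, may
    # indicate a developing occlusion regardless of the absolute file size.
    flags = []
    for prev, curr in zip(sizes, sizes[1:]):
        slope = (curr - prev) / frame_dt
        flags.append(slope < slope_limit)
    return flags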
Other classes of analyses for determining whether an imaging sensor is occluded may also be possible in conjunction with, or in place of, the threshold value determinations discussed herein. As one example, a first-principles or phenomenological model for the file size of the compressed image as a function of time could be used to statistically fit the observed file-size data, where an occlusion is declared whenever a goodness-of-fit statistic passes some threshold value. As another example, machine-learning methods including, but not limited to, long short-term memory networks, random decision forests, gradient boosting regressor techniques, and time delay neural networks, may be used directly on the time-series data to detect the presence of an occlusion. Any of the above analyses could also be combined with data from one or more other sensors to aid in the detection of an occlusion.
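The sketch below is a deliberately simple statistical stand-in for these model-based and machine-learning detectors, not an implementation of any of the named methods: a rolling z-score that flags frames whose compressed size deviates anomalously from the recent window (the window length and z limit are illustrative):

import statistics

def zscore_anomalies(sizes, window=100, z_limit=3.0):
    flags = []
    for i, size in enumerate(sizes):
        history = sizes[max(0, i - window):i]
        if len(history) < 2:
            flags.append(False)  # not enough history to judge
            continue
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        # Flag frames far outside the recent distribution of file sizes.
        flags.append(stdev > 0 and abs(size - mean) > z_limit * stdev)
    return flags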
Once an imaging sensor is determined to possibly be occluded, remedial actions may be taken. For instance, the computing devices 110 may cause the imaging sensor to be cleaned, taken offline, flagged for maintenance, etc. For example, a signal may be sent to activate a cleaning system in order to clean the imaging sensor. As another example, a message may be sent to a human operator indicating that the imaging sensor is occluded. As another example, if the imaging sensor is attached to a vehicle having an autonomous driving mode, such as vehicle 100, the determination may be used by a control system of the vehicle to control the vehicle in the autonomous driving mode, for instance, by driving slower and/or discarding information captured by the imaging sensor until the imaging sensor is cleaned or no longer occluded. In another example, the images captured by the potentially occluded imaging sensor may be compared with images captured by another imaging sensor to determine whether the images captured by the two imaging sensors are the same and/or nearly the same. In the event they are the same or nearly the same, the threshold value for detecting possible occlusion may be reduced and/or no further remedial actions may be taken.
FIG. 8 is an example flow diagram 800 for determining whether an imaging sensor is occluded in accordance with some of the aspects described herein. The example flow diagram refers to a system including an imaging sensor, such as imaging sensor 300 and one or more computing devices having one or more processors, such as one or more processors 120 of one or more computing devices 110. For instance, at block 810, first image data is captured using the image sensor of the imaging sensor. At block 820, the first image data is encoded into an uncompressed image file. At block 830, a compressed image file is generated based on the uncompressed image file. At block 840, a file size of the compressed image file is determined. At block 850, based on the file size of the compressed image file a determination is made that the imaging sensor is possibly occluded.
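Putting the blocks of FIG. 8 together, a hedged end-to-end sketch might look as follows, with the capture and encoding steps stubbed out behind a placeholder callable since they depend on the particular sensor hardware, and DEFLATE again standing in for whichever compression algorithm the system uses:

import zlib

def detect_possible_occlusion(capture_raw_frame, threshold_bytes):
    # Blocks 810-820: capture image data and encode it as an uncompressed
    # byte payload (capture_raw_frame is a hypothetical sensor interface).
    raw = capture_raw_frame()
    # Block 830: generate a compressed image file from the uncompressed data.
    compressed = zlib.compress(raw)
    # Block 840: determine the file size of the compressed image file.
    size = len(compressed)
    # Block 850: based on the file size, determine whether the imaging
    # sensor is possibly occluded.
    return size < threshold_bytes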
Unless otherwise stated, the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.

Claims (20)

The invention claimed is:
1. A method of detecting possible imaging sensor occlusion, the method comprising:
capturing first image data using an imaging sensor;
encoding, by one or more processors, the first image data into an uncompressed image file;
generating, by the one or more processors, a compressed image file based on the uncompressed image file;
determining, by the one or more processors, a file size of the compressed image file; and
determining, by the one or more processors, based on the file size of the compressed image file, that the imaging sensor is possibly occluded.
2. The method of claim 1, further comprising, comparing the file size of the compressed image file to a threshold file size, and
wherein determining that the imaging sensor is occluded further includes determining the file size of the compressed image file meets the threshold file size.
3. The method of claim 2, further comprising:
determining the threshold file size based on an average file size of compressed image files generated by one or more imaging sensors known to not have an occlusion.
4. The method of claim 2, further comprising:
determining the threshold file size based on the smallest file size of a compressed image captured during a training period by the imaging sensor, wherein the training period corresponds to a predefined number of frames or a predefined time period.
5. The method of claim 2, further comprising:
determining the threshold file size based on a running average file size of a set of compressed image files generated by the imaging sensor.
6. The method of claim 5, wherein the threshold file size is within a predefined range of the running average file size.
7. The method of claim 2, further comprising:
determining the threshold file size based on compressed image files corresponding to image data captured at a similar time and/or location of the first image data.
8. The method of claim 2, wherein subsequent to determining that the imaging sensor is occluded, adjusting the threshold file size in response to receiving input that the imaging sensor is not occluded.
9. The method of claim 1, further comprising:
generating one or more additional compressed image files based on additional image data captured subsequent to the first image data,
determining a file size of the one or more additional image files, and
wherein determining the imaging sensor is occluded is further based on the file size of the one or more additional images.
10. The method of claim 9, further comprising determining a rate of change between the file size of the compressed image and the file size of the one or more additional image files,
wherein determining the imaging sensor is occluded further includes determining the rate of change is below a threshold value.
11. The method of claim 1, further comprising, sending a signal to activate a cleaning system in order to clean the imaging sensor based on the determination that the imaging sensor is occluded.
12. The method of claim 1, wherein the imaging sensor is attached to a vehicle having an autonomous driving mode, and the method further comprises using the determination to control the vehicle in the autonomous driving mode.
13. The method of claim 1, wherein the imaging sensor is a LIDAR sensor, radar unit, or camera.
14. A system for detecting possible imaging sensor occlusion comprising: an imaging sensor; and
one or more processors, wherein the one or more processors are configured to:
capture first image data using the imaging sensor,
encode the first image data into an uncompressed image file,
generate a compressed image file based on the uncompressed image file,
determine a file size of the compressed image file, and
determine based on the file size of the compressed image file, that the imaging sensor is possibly occluded.
15. The system of claim 14, wherein the one or more processors are further configured to compare the file size of the compressed image file to a threshold file size, and
wherein determining that the imaging sensor is occluded further includes determining the file size of the compressed image file meets the threshold file size.
16. The system of claim 15, wherein the one or more processors are further configured to determine the threshold file size based on an average file size of compressed image files generated by one or more imaging sensors known to not have an occlusion.
17. The system of claim 15, wherein the one or more processors are further configured to determine the threshold file size based on the smallest file size of a compressed image captured during a training period by the imaging sensor, wherein the training period corresponds to a predefined number of frames or a predefined time period.
18. The system of claim 15, wherein the one or more processors are further configured to determine the threshold file size based on a running average file size of a set of compressed image files generated by the imaging sensor.
19. The system of claim 15, wherein the one or more processors are further configured to determine the threshold file size based on compressed image files corresponding to image data captured at a similar time and/or location of the first image data.
20. The system of claim 14, wherein the imaging sensor is a LIDAR sensor, RADAR unit, or camera.
US16/248,096 2019-01-15 2019-01-15 Detecting sensor occlusion with compressed image data Active 2039-04-13 US10867201B2 (en)

Priority Applications (9)

Application Number Priority Date Filing Date Title
US16/248,096 US10867201B2 (en) 2019-01-15 2019-01-15 Detecting sensor occlusion with compressed image data
EP20742015.9A EP3881282A4 (en) 2019-01-15 2020-01-13 Detecting sensor occlusion with compressed image data
JP2021532044A JP7198358B2 (en) 2019-01-15 2020-01-13 Sensor occlusion detection using compressed image data
CN202080009199.8A CN113302651B (en) 2019-01-15 2020-01-13 Detecting sensor occlusion using compressed image data
CA3126389A CA3126389A1 (en) 2019-01-15 2020-01-13 Detecting sensor occlusion with compressed image data
IL284592A IL284592B2 (en) 2019-01-15 2020-01-13 Detecting sensor occlusion with compressed image data
PCT/US2020/013293 WO2020150127A1 (en) 2019-01-15 2020-01-13 Detecting sensor occlusion with compressed image data
KR1020217021819A KR102688017B1 (en) 2019-01-15 2020-01-13 Detection of sensor occlusion by compressed image data
US17/098,479 US11216682B2 (en) 2019-01-15 2020-11-16 Detecting sensor occlusion with compressed image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/248,096 US10867201B2 (en) 2019-01-15 2019-01-15 Detecting sensor occlusion with compressed image data

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/098,479 Continuation US11216682B2 (en) 2019-01-15 2020-11-16 Detecting sensor occlusion with compressed image data

Publications (2)

Publication Number Publication Date
US20200226403A1 US20200226403A1 (en) 2020-07-16
US10867201B2 true US10867201B2 (en) 2020-12-15

Family

ID=71516676

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/248,096 Active 2039-04-13 US10867201B2 (en) 2019-01-15 2019-01-15 Detecting sensor occlusion with compressed image data
US17/098,479 Active US11216682B2 (en) 2019-01-15 2020-11-16 Detecting sensor occlusion with compressed image data

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/098,479 Active US11216682B2 (en) 2019-01-15 2020-11-16 Detecting sensor occlusion with compressed image data

Country Status (8)

Country Link
US (2) US10867201B2 (en)
EP (1) EP3881282A4 (en)
JP (1) JP7198358B2 (en)
KR (1) KR102688017B1 (en)
CN (1) CN113302651B (en)
CA (1) CA3126389A1 (en)
IL (1) IL284592B2 (en)
WO (1) WO2020150127A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11328428B2 (en) * 2019-12-18 2022-05-10 Clarion Co., Ltd. Technologies for detection of occlusions on a camera

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7414367B2 (en) * 2018-05-21 2024-01-16 モビディウス リミテッド Methods, systems, articles of manufacture, and apparatus for reconstructing scenes using convolutional neural networks
DE102020119116B3 (en) 2020-07-21 2021-12-16 Daimler Ag Method for the detection of contamination of an optical sensor arrangement
EP4125063A1 (en) * 2021-07-29 2023-02-01 Aptiv Technologies Limited Methods and systems for occupancy class prediction and methods and systems for occlusion value determination
CN114964265B (en) * 2022-07-19 2022-10-25 山东亿华天产业发展集团有限公司 Indoor autonomous navigation system and method for micro unmanned aerial vehicle
KR20240054594A (en) 2022-10-19 2024-04-26 경북대학교 산학협력단 Apparatus and method for occlusion classification of lidar sensor and autonomus vehicle having the same
US20240239337A1 (en) * 2023-01-12 2024-07-18 Bendix Commercial Vehicle Systems Llc System and Method for Insight-Triggered Opportunistic Imaging in a Vehicle

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6002794A (en) 1996-04-08 1999-12-14 The Trustees Of Columbia University The City Of New York Encoding and decoding of color digital image using wavelet and fractal encoding
US20140293079A1 (en) * 2013-04-02 2014-10-02 Google Inc Camera Obstruction Detection
US20150146026A1 (en) 2003-12-24 2015-05-28 Walker Digital, Llc Method and apparatus for automatically capturing and managing images
US20150163400A1 (en) * 2013-12-06 2015-06-11 Google Inc. Camera Selection Based on Occlusion of Field of View
JP2015126441A (en) 2013-12-26 2015-07-06 ブラザー工業株式会社 Image output apparatus and program
US20170193641A1 (en) 2016-01-04 2017-07-06 Texas Instruments Incorporated Scene obstruction detection using high pass filters
US20170345129A1 (en) * 2016-05-26 2017-11-30 Gopro, Inc. In loop stitching for multi-camera arrays
US20180160071A1 (en) 2016-12-07 2018-06-07 Alcatel -Lucent Usa Inc. Feature Detection In Compressive Imaging
US20180174306A1 (en) 2016-12-21 2018-06-21 Axis Ab Method for and apparatus for detecting events
US20180260654A1 (en) * 2014-06-11 2018-09-13 Canon Kabushiki Kaisha Image processing method and image processing apparatus

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003153245A (en) * 2001-11-15 2003-05-23 Chuo Electronics Co Ltd Method for detecting abnormity of supervisory camera in supervisory system adopting still picture transmission system
US7280149B2 (en) * 2001-12-21 2007-10-09 Flextronics Sales & Marketing (A-P) Ltd. Method and apparatus for detecting optimum lens focus position
JP2008244940A (en) * 2007-03-28 2008-10-09 Fujifilm Corp Digital camera and control method thereof
CN102111532B (en) * 2010-05-27 2013-03-27 周渝斌 Camera lens occlusion detecting system and method
CN103139547B (en) * 2013-02-25 2016-02-10 昆山南邮智能科技有限公司 The method of pick-up lens occlusion state is judged based on video signal
US9731688B2 (en) 2014-10-31 2017-08-15 Waymo Llc Passive wiper system
US9245333B1 (en) * 2014-12-10 2016-01-26 Semiconductor Components Industries, Llc Systems and methods for detecting obstructions within the field-of-view of an image sensor
CN104504707B (en) * 2014-12-26 2017-08-25 深圳市群晖智能科技股份有限公司 A kind of foreign matter occlusion detection method of monitoring camera video pictures
US9412034B1 (en) * 2015-01-29 2016-08-09 Qualcomm Incorporated Occlusion handling for computer vision
US10067509B1 (en) * 2017-03-10 2018-09-04 TuSimple System and method for occluding contour detection
CN109033951B (en) * 2017-06-12 2022-05-27 法拉第未来公司 System and method for detecting occluding objects based on graphics processing
JP6854890B2 (en) * 2017-06-27 2021-04-07 本田技研工業株式会社 Notification system and its control method, vehicle, and program

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6002794A (en) 1996-04-08 1999-12-14 The Trustees Of Columbia University The City Of New York Encoding and decoding of color digital image using wavelet and fractal encoding
US20150146026A1 (en) 2003-12-24 2015-05-28 Walker Digital, Llc Method and apparatus for automatically capturing and managing images
US20140293079A1 (en) * 2013-04-02 2014-10-02 Google Inc Camera Obstruction Detection
WO2014165472A1 (en) 2013-04-02 2014-10-09 Google Inc. Camera obstruction detection
US20150163400A1 (en) * 2013-12-06 2015-06-11 Google Inc. Camera Selection Based on Occlusion of Field of View
JP2015126441A (en) 2013-12-26 2015-07-06 ブラザー工業株式会社 Image output apparatus and program
US20180260654A1 (en) * 2014-06-11 2018-09-13 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US20170193641A1 (en) 2016-01-04 2017-07-06 Texas Instruments Incorporated Scene obstruction detection using high pass filters
US20170345129A1 (en) * 2016-05-26 2017-11-30 Gopro, Inc. In loop stitching for multi-camera arrays
US20180160071A1 (en) 2016-12-07 2018-06-07 Alcatel -Lucent Usa Inc. Feature Detection In Compressive Imaging
US20180174306A1 (en) 2016-12-21 2018-06-21 Axis Ab Method for and apparatus for detecting events

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Search Report and Written Opinion for PCT/US2020/013293 dated May 11, 2020.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11328428B2 (en) * 2019-12-18 2022-05-10 Clarion Co., Ltd. Technologies for detection of occlusions on a camera

Also Published As

Publication number Publication date
JP2022517059A (en) 2022-03-04
KR20210092319A (en) 2021-07-23
EP3881282A4 (en) 2022-08-17
CN113302651B (en) 2024-05-24
KR102688017B1 (en) 2024-07-25
IL284592B1 (en) 2024-02-01
CA3126389A1 (en) 2020-07-23
US11216682B2 (en) 2022-01-04
WO2020150127A1 (en) 2020-07-23
EP3881282A1 (en) 2021-09-22
JP7198358B2 (en) 2022-12-28
CN113302651A (en) 2021-08-24
US20210133472A1 (en) 2021-05-06
IL284592A (en) 2021-08-31
US20200226403A1 (en) 2020-07-16
IL284592B2 (en) 2024-06-01

Similar Documents

Publication Publication Date Title
US11216682B2 (en) Detecting sensor occlusion with compressed image data
KR102515735B1 (en) Data Pipeline and Deep Learning System for Autonomous Driving
KR102448358B1 (en) Camera evaluation technologies for autonomous vehicles
JP7424140B2 (en) Sensor device, signal processing method
US10558868B2 (en) Method and apparatus for evaluating a vehicle travel surface
US8379924B2 (en) Real time environment model generation system
CN113167906B (en) Automatic vehicle false object detection
US12056898B1 (en) Camera assessment techniques for autonomous vehicles
US12014524B2 (en) Low-light camera occlusion detection
US11302125B2 (en) Information-enhanced off-vehicle event identification
CN116279527A (en) Vehicle running control method, control system and vehicle
CN116872840A (en) Vehicle anti-collision early warning method and device, vehicle and storage medium
US11869351B2 (en) Automated incident detection for vehicles
US12027050B2 (en) Hazard notification method and system for implementing
KR102594384B1 (en) Image recognition learning apparatus of autonomous vehicle using error data insertion and image recognition learning method using the same
KR102561976B1 (en) Switchable wheel view mirrors
US20230370701A1 (en) Optical sensor activation and fusion
KR20220028939A (en) Method and apparatus for recoding an image using sound detection
KR20230136830A (en) Driver assistance system and driver assistance method
KR20230170614A (en) Method, system and computer program product for detecting movements of the vehicle body in the case of a motor vehicle
CN118833059A (en) Image display method and device of electronic exterior rearview mirror, electronic exterior rearview mirror system and electronic equipment
JP2005044196A (en) Vehicle circumference monitoring device, automobile, vehicle circumference monitoring method, control program, and readable storage medium
KR20230087347A (en) Blind spot detection device, system and method
CN116443000A (en) Automatic driving vehicle drive test data acquisition system and method
CN118096562A (en) Image enhancement method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: WAYMO LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EVANS, RUFFIN;REEL/FRAME:048267/0847

Effective date: 20190131

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4