US20200172014A1 - Multicamera system for autonomous driving vehicles - Google Patents

Multicamera system for autonomous driving vehicles

Info

Publication number
US20200172014A1
US20200172014A1 (application US16/208,483)
Authority
US
United States
Prior art keywords
cameras
vehicle
camera
image data
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/208,483
Other versions
US10682955B1
Inventor
Zafar TAKHIROV
Sen Hu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Didi Research America LLC
Original Assignee
Didi Research America LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Didi Research America LLC filed Critical Didi Research America LLC
Priority to US16/208,483 (granted as US10682955B1)
Priority to CN201880098263.7A
Priority to PCT/US2018/067564 (WO2020117285A1)
Priority to US16/869,465 (US11173841B2)
Publication of US20200172014A1
Application granted
Publication of US10682955B1
Legal status: Active (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/958: Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N 23/959: Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/002: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, specially adapted for covering the peripheral part of the vehicle, e.g. for viewing tyres, bumpers or the like
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N 5/247
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/10: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used
    • B60R 2300/105: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used, using multiple cameras
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/10: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used
    • B60R 2300/107: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used, using stereoscopic cameras
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/30: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
    • B60R 2300/303: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing, using joined images, e.g. multiple camera images
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/80: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement
    • B60R 2300/8093: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the intended use of the viewing arrangement, for obstacle warning

Definitions

  • the present disclosure relates to a camera system for vehicles, and more particularly to, a multicamera system for autonomous driving vehicles.
  • an autonomous driving vehicle may be equipped with multiple integrated sensors, such as one or more cameras, a Light Detection And Ranging (LiDAR) sensor, a Radio Detection And Ranging (RADAR) sensor, and sonic and ultrasonic sensors, to capture data such as images/videos, point clouds, vehicle pose information, etc.
  • the autonomous driving vehicle then processes the sensed data to learn information that may aid the control of various vehicle functions.
  • cameras may be used to capture surrounding scenes as the vehicle moves. By processing the captured scene images, the vehicle may learn what objects surround it and how far away they are. For instance, if the vehicle detects that a pedestrian is about 10 feet in front of it, it will control the braking system to apply emergency braking to stop the vehicle.
  • CMOS: complementary metal-oxide-semiconductor
  • a single monocular camera can only capture two-dimensional (2D) images but cannot provide depth information of an object.
  • depth information is usually critical to autonomous driving vehicles. Although more sophisticated cameras, such as a binocular camera, can provide depth information, they are typically more expensive and therefore increase the cost of the vehicle. Therefore, an improved system for sensing data is needed.
  • Embodiments of the disclosure address the above problems by a multicamera system.
  • Embodiments of the disclosure provide a camera system for a vehicle.
  • the camera system includes a plurality of cameras each configured with a different camera setting. The cameras collectively keep a predetermined image space in focus.
  • the predetermined image space includes at least one object.
  • the camera system further includes a controller. The controller is configured to receive image data captured by the plurality of cameras of the predetermined image space, and determine depth information of the at least one object based on the image data.
  • Embodiments of the disclosure also provide a vehicle.
  • the vehicle includes a body and at least one wheel.
  • the vehicle also includes a plurality of cameras equipped on the body, each configured with a different camera setting. The cameras collectively keep a predetermined image space in focus.
  • the predetermined image space includes at least one object.
  • the vehicle further includes a controller.
  • the controller is configured to receive image data captured by the plurality of cameras of the predetermined image space, determine depth information of the at least one object based on the image data, and control at least one function of the vehicle based on the depth information.
  • Embodiments of the disclosure further provide a sensing method.
  • the sensing method includes capturing image data of a predetermined image space including at least one object using a plurality of cameras. Each camera is configured with a different camera setting, and the cameras collectively keep the predetermined image space in focus.
  • the sensing method further includes determining depth information of the at least one object based on the image data.
  • FIG. 1 illustrates a schematic diagram of an exemplary vehicle equipped with a camera system, according to embodiments of the disclosure.
  • FIG. 2 illustrates a schematic diagram of cameras in an exemplary camera system, according to embodiments of the disclosure.
  • FIG. 3 illustrates a block diagram of an exemplary camera system, according to embodiments of the disclosure.
  • FIG. 4 illustrates an exemplary “depth-from-focus” method, according to embodiments of the disclosure.
  • FIG. 5 illustrates a flowchart of an exemplary method performed by a camera system, according to embodiments of the disclosure.
  • FIG. 1 illustrates a schematic diagram of an exemplary vehicle 100 equipped with a camera system, according to embodiments of the disclosure.
  • vehicle 100 may be configured to be operated by an operator occupying the vehicle, remotely controlled, and/or autonomous. It is contemplated that vehicle 100 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, or a conventional internal combustion engine vehicle. Vehicle 100 may have a body 110 and at least one wheel 120 . Body 110 may be any body style, such as a sports vehicle, a coupe, a sedan, a pick-up truck, a station wagon, a sports utility vehicle (SUV), a minivan, or a conversion van.
  • SUV: sports utility vehicle
  • vehicle 100 may include a pair of front wheels and a pair of rear wheels, as illustrated in FIG. 1. However, it is contemplated that vehicle 100 may have more or fewer wheels or equivalent structures that enable vehicle 100 to move around. Vehicle 100 may be configured to be all wheel drive (AWD), front wheel drive (FWD), or rear wheel drive (RWD).
  • AWD: all wheel drive
  • FWD: front wheel drive
  • RWD: rear wheel drive
  • vehicle 100 may be equipped with a camera system, including, among other things, multiple cameras 130 and a controller 150 .
  • Cameras 130 may be mounted or otherwise installed on or inside body 110 .
  • cameras 130 may be configured to capture data as vehicle 100 travels along a trajectory.
  • cameras 130 may be configured to take pictures or videos of the surrounding.
  • the cameras may be monocular or binocular cameras.
  • cameras 130 may continuously capture data. Each set of scene data captured at a certain time point is known as a data frame.
  • cameras 130 may record a video consisting of multiple image frames captured at multiple time points.
  • cameras 130 may include cameras configured with different camera settings.
  • each camera may have a different focal length, or angle of view.
  • the multiple cameras may collectively keep the relevant image space in focus and mitigate the artifacts introduced by lens imperfections.
  • cameras 130 may include cameras focused at distances of 1 m, 5 m, 10 m, 20 m, and 30 m, etc. Each camera may therefore cover a preset depth range, and objects within the respective depth range may be in focus for that camera. As a result, the entire image space within 30 m of cameras 130 may be in focus and covered by cameras 130 collectively.
  • FIG. 2 illustrates a schematic diagram of cameras 200 in an exemplary camera system, according to embodiments of the disclosure.
  • cameras 200 may include a total of 6 front-facing cameras installed at the front of vehicle 100 .
  • the 6 front-facing cameras may be divided into two groups, including 3 left cameras 210 (L0, L1, and L2) and 3 right cameras 220 (R0, R1, and R2). It is contemplated that more or fewer groups of cameras and/or more or fewer cameras within each group may be used than those shown in FIG. 2.
  • cameras within each group may be configured with different focal lengths, and accordingly, angles of view.
  • a “focal length” refers to the distance between the camera lens and the image sensor when a subject is in focus.
  • the focal length is usually determined by the type of lens used (normal, long focus, wide angle, telephoto, macro, fisheye, or zoom).
  • the focal length is usually stated in millimeters (e.g., 28 mm, 50 mm, or 100 mm).
  • an “angle of view” of a camera is the visible extent of the scene captured by the image sensor, stated as an angle. A wide angle of view captures a greater area, and a small angle captures a smaller area.
  • a camera's angle of view decreases as its focal length increases. Changing the focal length changes the angle of view: the shorter the focal length, the wider the angle of view and the greater the area captured. For example, at a very short focal length (as with a fisheye lens), a camera can capture image data with an angle of view close to 180 degrees. The longer the focal length, the smaller the angle of view and the larger the subject appears. Lenses with a wide picture angle are referred to as wide-angle lenses, and lenses with a small picture angle as telephoto lenses.
  • the [focal length, angle of view] pairs of an exemplary camera are listed in Table 1 below:
  • each of left cameras 210 (i.e., L0, L1, or L2) is configured with a different focal length.
  • cameras L0, L1, and L2 may be set with focal lengths of 28 mm, 70 mm, and 100 mm, respectively. Accordingly, the angles of view of cameras L0, L1, and L2 will be 75°, 34°, and 24°, respectively. It is contemplated that the cameras can be configured with other focal lengths. By using such settings, cameras 200 may keep the entire image space in front of vehicle 100 in focus. It is contemplated that different optical settings or lenses other than focal lengths could be used.
  • left cameras 210 and right cameras 220 may have orthogonal polarization.
  • left cameras 210 may have a polarization of −45 degrees
  • right cameras 220 may have a polarization of +45 degrees.
  • the polarizations are 90 degrees apart and thus orthogonal to each other. Using orthogonal polarization between the two sets of cameras enables cameras 200 to collectively cover a wider field of view.
  • although 45-degree polarizations are illustrated in FIG. 2, it is contemplated that the two sets of polarizations could be offset by any other angle, chosen based on environmental conditions.
  • when N (N>2) groups of cameras are used, the polarizations of the different groups of cameras may be set to be 180°/N apart. For example, if there are three groups of cameras, a polarization scheme of 0°, 60°, and 120°, respectively, may be used.
  • cameras 130 may communicate with a controller 150 .
  • controller 150 may be a controller onboard vehicle 100, e.g., the electronic control unit.
  • controller 150 may be part of a local physical server, a cloud server (as illustrated in FIG. 1 ), a virtual server, a distributed server, or any other suitable computing device.
  • Controller 150 may communicate with cameras 130 , and/or other components of vehicle 100 via a network, such as a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), wireless networks such as radio waves, a cellular network, a satellite communication network, and/or a local or short-range wireless network (e.g., BluetoothTM).
  • WLAN: Wireless Local Area Network
  • WAN: Wide Area Network
  • controller 150 may be responsible for processing image data captured by cameras 130 and performing vehicle functions based on the image data. Due to the redundancy offered by cameras 130, controller 150 can estimate depth information of an object within the image space based on the image data captured by cameras 130, using a combination of algorithms that would otherwise not be possible. In some embodiments, controller 150 can estimate the distance to a point on a 2D projected image by identifying which of cameras 130 are in focus at that point, and use that information to infer the distance.
  • FIG. 3 illustrates a block diagram of an exemplary controller 150 , according to embodiments of the disclosure.
  • controller 150 may receive image data 303 from cameras 130 .
  • Controller 150 may estimate depth of an object using image data 303 , correct artifacts in image data 303 , and/or make vehicle control decisions based on image data 303 .
  • controller 150 includes a communication interface 302 , a processor 304 , a memory 306 , and a storage 308 .
  • controller 150 includes different modules in a single device, such as an integrated circuit (IC) chip (implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA)), or separate devices with dedicated functions.
  • IC: integrated circuit
  • ASIC: application-specific integrated circuit
  • FPGA: field-programmable gate array
  • one or more components of controller 150 may be located in a cloud, or may be alternatively in a single location (such as inside vehicle 100 or a mobile device) or distributed locations. Components of controller 150 may be in an integrated device, or distributed at different locations but communicate with each other through a network (not shown).
  • Communication interface 302 may send data to and receive data from components such as cameras 130 via communication cables, a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), wireless networks such as radio waves, a cellular network, and/or a local or short-range wireless network (e.g., BluetoothTM), or other communication methods.
  • communication interface 302 can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection.
  • ISDN: integrated services digital network
  • communication interface 302 can be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links can also be implemented by communication interface 302 .
  • communication interface 302 can send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information via a network.
  • communication interface 302 may receive image data 303 captured by cameras 130 .
  • Communication interface 302 may further provide the received data to storage 308 for storage or to processor 304 for processing.
  • Processor 304 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. Processor 304 may be configured as a separate processor module dedicated to performing vehicle functions based on image data captured by cameras 130 . Alternatively, processor 304 may be configured as a shared processor module for performing other functions.
  • processor 304 includes multiple modules, such as a depth estimation unit 310 , an artifacts correction unit 312 , and a decision unit 314 , and the like. These modules (and any corresponding sub-modules or sub-units) can be hardware units (e.g., portions of an integrated circuit) of processor 304 designed for use with other components or software units implemented by processor 304 through executing at least part of a program.
  • the program may be stored on a computer-readable medium, and when executed by processor 304 , it may perform one or more functions.
  • FIG. 3 shows units 310 - 314 all within one processor 304 , it is contemplated that these units may be distributed among multiple processors located near or remotely with each other.
  • Depth estimation unit 310 is configured to estimate the distances between cameras 130 and the objects.
  • depth estimation unit 310 uses multiple focused images from real aperture cameras to estimate the depth of the scene from cameras 130 (referred to as a “depth-from-focus” method).
  • depth estimation unit 310 can also use multiple defocused images from the real aperture cameras to estimate the depth (referred to as a “depth-from-defocus” method).
  • Real aperture cameras have a relatively short depth of field, resulting in images that appear focused only on a small 3D slice of the scene.
  • FIG. 4 illustrates an exemplary “depth-from-focus” method, according to embodiments of the disclosure. The method will be explained using an optical geometry 400 .
  • the lens of each camera 130 is modeled via the thin lens law, $\frac{1}{f} = \frac{1}{v} + \frac{1}{u}$ (Equation (1)), where f is the focal length, v is the lens-to-image-plane distance, and u is the distance to the object in focus.
  • based on Equation (1), the distance to the focused object can be determined as $u = 1/\left(\frac{1}{f} - \frac{1}{v}\right)$ (Equation (2)).
  • applying Equation (2) to n cameras with different focal lengths yields a set of distances $u_1, \ldots, u_n$ between the focused scenes and the n different cameras (Equation (3)).
  • depth estimation unit 310 may determine, for each object in the image space, the camera (e.g., camera i) in which the object is in focus. The determination may be performed, e.g., through image processing methods. Depth estimation unit 310 may then determine the distance to the object as the distance u_i of the camera in focus. In some embodiments, the object may be in focus in more than one camera (e.g., cameras i and j). Accordingly, the distance to the object may be determined to lie within the range u_i to u_j.
  • cameras 130 may include 3 real aperture cameras pointing at 3 objects.
  • Depth estimation unit 310 may first determine that the focus distances of the 3 cameras are 20, 40, and 60 meters. Depth estimation unit 310 may then determine in which camera(s) each object is in focus. Table 2 summarizes the information obtained.
  • depth estimation unit 310 may estimate the distances to objects. For example, object 1 is about 20 meters, object 2 is about 40-60 meters, and object 3 is about 60 meters away from the cameras.
  • Depth estimation unit 310 can use the stereoscopic images to further improve the distance estimation by using simple geometry.
  • left cameras 210 and right cameras 220 capture images of the same object/scene from two vantage points.
  • Depth estimation unit 310 can extract three-dimensional (3D) information by examining the relative positions of the object in the two images.
  • a left camera and a right camera may collectively act as a binocular camera.
  • Depth estimation unit 310 may compare the two images, and determine the relative depth information in the form of a disparity map.
  • a disparity map encodes the difference in horizontal coordinates of corresponding image points. The values in this disparity map are inversely proportional to the scene depth at the corresponding pixel location. Therefore, depth estimation unit 310 may determine additional depth information using the disparity map.
  • Artifacts correction unit 312 may be configured to correct artifacts in image data 303 .
  • the image data captured by the cameras may contain artifacts caused by the lens properties, such as lens flares caused by bright light sources, green rays or “ghosts” caused by self-reflection in a lens.
  • the image data may additionally or alternatively contain other artifacts such as discolorations or over-bright/under-bright images caused by the CMOS settings.
  • Artifacts correction unit 312 may correct the artifacts using methods taking advantage of the redundancy provided by the multiple cameras. For example, the images taken by the different cameras may be averaged or otherwise aggregated to remove or reduce an artifact.
  • Decision unit 314 may make vehicle control decisions based on the processed image data. For example, decision unit 314 may make autonomous driving decisions, e.g., to avoid objects, based on the estimated distances of the objects. Examples of autonomous driving decisions include: accelerating, braking, changing lanes, changing driving directions, etc. For example, if a pedestrian is detected at 20 meters from vehicle 100 , decision unit 314 may automatically apply braking immediately. If a pedestrian is detected only 10 meters away and in the direction that vehicle 100 is moving towards, decision unit 314 may steer vehicle 100 away from pedestrian.
  • Memory 306 and storage 308 may include any appropriate type of mass storage provided to store any type of information that processor 304 may need to operate.
  • Memory 306 and storage 308 may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium including, but not limited to, a ROM, a flash memory, a dynamic RAM, and a static RAM.
  • Memory 306 and/or storage 308 may be configured to store one or more computer programs that may be executed by processor 304 to perform image data processing and vehicle control functions disclosed herein.
  • memory 306 and/or storage 308 may be configured to store program(s) that may be executed by processor 304 to estimate depth information or otherwise make vehicle control functions based on the captured image data.
  • Memory 306 and/or storage 308 may be further configured to store information and data used by processor 304 .
  • memory 306 and/or storage 308 may be configured to store the various types of data (e.g., image data) captured by cameras 130 and data related to camera setting.
  • Memory 306 and/or storage 308 may also store intermediate data such as the estimated depths by depth estimation unit 310 .
  • the various types of data may be stored permanently, removed periodically, or disregarded immediately after each frame of data is processed.
  • FIG. 5 illustrates a flowchart of an exemplary method 500 performed by a camera system, according to embodiments of the disclosure.
  • method 500 may be implemented by controller 150 that includes, among other things, processor 304 .
  • method 500 is not limited to that exemplary embodiment.
  • Method 500 may include steps S 502 -S 512 as described below. It is to be appreciated that some of the steps may be optional to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown in FIG. 5 .
  • cameras 130 capture image data of at least one object within a predetermined image space.
  • the predetermined image space may be a 3D scene within a certain distance of cameras 130 in the direction cameras 130 are pointing to.
  • the predetermined image space may be the 3D space within, e.g., 60 meters, in front of vehicle 100 .
  • the object may be another vehicle, a motorcycle, a bicycle, a pedestrian, a building, a tree, a traffic sign, a traffic light, etc.
  • One or more objects may be in the predetermined image space.
  • cameras 130 may be configured with different focal lengths. Cameras 130 may point to and take images of the same 3D scene, simultaneously or sequentially.
  • the image data captured by cameras 130 may be transmitted to controller 150 , e.g., via a network.
  • the one or more objects may be in focus in images taken by one or more of cameras 130. For example, if an object is about 10 meters away, it will be in focus in the images taken by a camera with a focused distance of 10 meters.
  • the image data captured in step S 502 may include stereoscopic images.
  • the cameras may be divided into two or more groups (e.g., N groups) and placed at different locations of vehicle 100 .
  • the i-th group of cameras may be configured to have an i*180°/N polarization.
  • controller 150 determines the focused distance of each camera 130 .
  • Parameters and settings of cameras 130 may be pre-stored in controller 150 or provided by cameras 130 along with the image data.
  • Camera parameters and settings may include, among other things, focal length, angle of view, aperture, shutter speed, white balance, metering, filters, etc.
  • the focal length is usually determined by the type of lens used (normal, long focus, wide angle, telephoto, macro, fisheye, or zoom).
  • Camera parameters may also include, e.g., a distance v between the camera's lens plane (e.g., 410 in FIG. 4 ) and the image plane (e.g., 430 in FIG. 4 ).
  • Controller 150 determines the focused distance u for each camera based on its focal length f and the distance v, e.g., according to Equation (2).
  • controller 150 identifies one or more cameras in which the object is in focus. In some embodiments, the determination may be performed, e.g., through image processing methods.
  • controller 150 determines depth information of the object. In some embodiments, controller 150 determines the distance between cameras 130 and the object. The distance may be estimated using the distance u_i of camera i identified in step S506. In some embodiments, the object may be in focus in more than one camera (e.g., cameras i and j). Accordingly, the distance to the object may be determined to lie within the range u_i to u_j.
  • controller 150 may derive additional depth information from the stereoscopic images captured by the cameras. For example, controller 150 can determine the relative depth information of the object in the form of a disparity map that encodes the difference in horizontal coordinates of corresponding image points. Controller 150 can calculate the scene distance based on the inverse relationship between the values in this disparity map and the depths at corresponding pixel location.
  • controller 150 controls vehicle operations based on the image data. For example, controller 150 may make autonomous driving decisions, such as accelerating, braking, changing lanes, changing driving directions, etc. For example, controller 150 may make control decisions to avoid objects, based on the estimated distances of the objects. For instance, when an object (e.g., a pedestrian) is detected at a distance that still allows vehicle 100 to fully stop before colliding with it, controller 150 may control vehicle 100 to brake. Controller 150 may determine the braking force applied in order for vehicle 100 to stop within the estimated distance to the object. If the detected distance of the object no longer allows vehicle 100 to fully stop, controller 150 may steer vehicle 100 away from the direction it is moving towards, in addition or alternative to braking.
  • controller 150 corrects artifacts in the image data captured by cameras 130 using the redundancy provided by the disclosed camera system.
  • the artifacts may be caused by lens and/or CMOS settings.
  • Controller 150 may correct the artifacts by, e.g., averaging images taken by the different cameras to improve the signal-to-noise ratio (SNR).
  • Controller 150 may also use machine learning based methods to correct the artifacts.
  • the computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable medium or computer-readable storage devices.
  • the computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed.
  • the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the disclosure provide a camera system for a vehicle and a sensing method using such a camera system. The camera system includes a plurality of cameras each configured with a different camera setting. The cameras collectively keep a predetermined image space in focus. The predetermined image space includes at least one object. The camera system further includes a controller. The controller is configured to receive image data captured by the plurality of cameras of the predetermined image space, and determine depth information of the at least one object based on the image data.

Description

    TECHNICAL FIELD
  • The present disclosure relates to a camera system for vehicles, and more particularly to, a multicamera system for autonomous driving vehicles.
  • BACKGROUND
  • Autonomous driving technology relies on accurate sensing systems. For example, an autonomous driving vehicle may be equipped with multiple integrated sensors, such as one or more cameras, a Light Detection And Ranging (LiDAR) sensor, a Radio Detection And Ranging (RADAR) sensor, and sonic and ultrasonic sensors, to capture data such as images/videos, point clouds, vehicle pose information, etc. The autonomous driving vehicle then processes the sensed data to learn information that may aid the control of various vehicle functions. For example, cameras may be used to capture surrounding scenes as the vehicle moves. By processing the captured scene images, the vehicle may learn what objects surround it and how far away they are. For instance, if the vehicle detects that a pedestrian is about 10 feet in front of it, it will control the braking system to apply emergency braking to stop the vehicle.
  • However, camera sensing in the context of autonomous driving is challenging. Known problems include, e.g., photographic artifacts and unsuitable field-of-view, aperture, and other camera settings. For example, some of the photographic problems may be lens flares caused by bright light sources. Others may be green rays or “ghosts” caused by self-reflection in a lens. Other problems may include discolorations or over-/under-bright images caused by the CMOS settings. In addition, a single monocular camera can only capture two-dimensional (2D) images and cannot provide depth information of an object. However, depth information is usually critical to autonomous driving vehicles. Although more sophisticated cameras, such as a binocular camera, can provide depth information, they are typically more expensive and therefore increase the cost of the vehicle. Therefore, an improved system for sensing data is needed.
  • Embodiments of the disclosure address the above problems by a multicamera system.
  • SUMMARY
  • Embodiments of the disclosure provide a camera system for a vehicle. The camera system includes a plurality of cameras each configured with a different camera setting. The cameras collectively keep a predetermined image space in focus. The predetermined image space includes at least one object. The camera system further includes a controller. The controller is configured to receive image data captured by the plurality of cameras of the predetermined image space, and determine depth information of the at least one object based on the image data.
  • Embodiments of the disclosure also provide a vehicle. The vehicle includes a body and at least one wheel. The vehicle also includes a plurality of cameras equipped on the body, each configured with a different camera setting. The cameras collectively keep a predetermined image space in focus. The predetermined image space includes at least one object. The vehicle further includes a controller. The controller is configured to receive image data captured by the plurality of cameras of the predetermined image space, determine depth information of the at least one object based on the image data, and control at least one function of the vehicle based on the depth information.
  • Embodiments of the disclosure further provide a sensing method. The sensing method includes capturing image data of a predetermined image space including at least one object using a plurality of cameras. Each camera is configured with a different camera setting, and the cameras collectively keep the predetermined image space in focus. The sensing method further includes determining depth information of the at least one object based on the image data.
  • It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a schematic diagram of an exemplary vehicle equipped with a camera system, according to embodiments of the disclosure.
  • FIG. 2 illustrates a schematic diagram of cameras in an exemplary camera system, according to embodiments of the disclosure.
  • FIG. 3 illustrates a block diagram of an exemplary camera system, according to embodiments of the disclosure.
  • FIG. 4 illustrates an exemplary “depth-from-focus” method, according to embodiments of the disclosure.
  • FIG. 5 illustrates a flowchart of an exemplary method performed by a camera system, according to embodiments of the disclosure.
  • DETAILED DESCRIPTION
  • Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
  • FIG. 1 illustrates a schematic diagram of an exemplary vehicle 100 equipped with a camera system, according to embodiments of the disclosure. Consistent with some embodiments, vehicle 100 may be configured to be operated by an operator occupying the vehicle, remotely controlled, and/or autonomous. It is contemplated that vehicle 100 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, or a conventional internal combustion engine vehicle. Vehicle 100 may have a body 110 and at least one wheel 120. Body 110 may be any body style, such as a sports vehicle, a coupe, a sedan, a pick-up truck, a station wagon, a sports utility vehicle (SUV), a minivan, or a conversion van. In some embodiments, vehicle 100 may include a pair of front wheels and a pair of rear wheels, as illustrated in FIG. 1. However, it is contemplated that vehicle 100 may have more or fewer wheels or equivalent structures that enable vehicle 100 to move around. Vehicle 100 may be configured to be all wheel drive (AWD), front wheel drive (FWD), or rear wheel drive (RWD).
  • As illustrated in FIG. 1, vehicle 100 may be equipped with a camera system, including, among other things, multiple cameras 130 and a controller 150. Cameras 130 may be mounted or otherwise installed on or inside body 110. In some embodiments, cameras 130 may be configured to capture data as vehicle 100 travels along a trajectory. Consistent with the present disclosure, cameras 130 may be configured to take pictures or videos of the surroundings. For example, the cameras may be monocular or binocular cameras. As vehicle 100 travels along the trajectory, cameras 130 may continuously capture data. Each set of scene data captured at a certain time point is known as a data frame. For example, cameras 130 may record a video consisting of multiple image frames captured at multiple time points.
  • Consistent with the present disclosure, cameras 130 may include cameras configured with different camera settings. In some embodiments, each camera may have a different focal length, or angle of view. Collectively, the multiple cameras may keep the relevant image space in focus and mitigate the artifacts introduced by lens imperfections. For example, cameras 130 may include cameras focused at distances of 1 m, 5 m, 10 m, 20 m, and 30 m, etc. Each camera may therefore cover a preset depth range, and objects within the respective depth range may be in focus for that camera. As a result, the entire image space within 30 m of cameras 130 may be in focus and covered by cameras 130 collectively.
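  • As an illustration of the depth-band idea above, the rig can be described by a small configuration structure. The following Python sketch is not part of the disclosure; the camera names and the near/far band edges are assumptions chosen only so the bands tile the space out to roughly 30 m.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CameraConfig:
    """One camera in the multicamera rig, focused at a fixed distance."""
    name: str
    focus_distance_m: float   # distance at which objects are sharpest
    near_m: float             # near edge of the acceptably sharp depth band
    far_m: float              # far edge of the acceptably sharp depth band

# Hypothetical rig: five cameras whose depth bands tile the space out to ~30 m.
RIG: List[CameraConfig] = [
    CameraConfig("cam_1m",  1.0,  0.5,  2.5),
    CameraConfig("cam_5m",  5.0,  2.5,  7.5),
    CameraConfig("cam_10m", 10.0, 7.5, 15.0),
    CameraConfig("cam_20m", 20.0, 15.0, 25.0),
    CameraConfig("cam_30m", 30.0, 25.0, 35.0),
]

def camera_covering(depth_m: float) -> Optional[CameraConfig]:
    """Return the camera whose depth band contains the given distance, if any."""
    for cam in RIG:
        if cam.near_m <= depth_m <= cam.far_m:
            return cam
    return None
```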
  • In some embodiments, multiple cameras 130 may all be installed at the same location on body 110 or be divided into groups and installed at different locations on body 110. For example, FIG. 2 illustrates a schematic diagram of cameras 200 in an exemplary camera system, according to embodiments of the disclosure. As shown in FIG. 2, cameras 200 may include a total of 6 front-facing cameras installed at the front of vehicle 100. The 6 front-facing cameras may be divided into two groups, including 3 left cameras 210 (L0, L1, and L2) and 3 right cameras 220 (R0, R1, and R2). It is contemplated that more or fewer groups of cameras and/or more or fewer cameras within each group may be used than those shown in FIG. 2.
  • In some embodiments, cameras within each group may be configured with different focal lengths, and accordingly, angles of view. Consistent with this disclosure, a “focal length” refers to the distance between the camera lens and the image sensor when a subject is in focus. The focal length is usually determined by the type of lens used (normal, long focus, wide angle, telephoto, macro, fisheye, or zoom). The focal length is usually stated in millimeters (e.g., 28 mm, 50 mm, or 100 mm). Consistent with this disclosure, an “angle of view” of a camera is the visible extent of the scene captured by the image sensor, stated as an angle. A wide angle of view captures a greater area, and a small angle captures a smaller area.
  • It is well-known in the art that a camera's angle of view decreases as its focal length increases. Changing the focal length changes the angle of view: the shorter the focal length, the wider the angle of view and the greater the area captured. For example, at a very short focal length (as with a fisheye lens), a camera can capture image data with an angle of view close to 180 degrees. The longer the focal length, the smaller the angle of view and the larger the subject appears. Lenses with a wide picture angle are referred to as wide-angle lenses, and lenses with a small picture angle as telephoto lenses.
  • In some embodiments, the [focal length, angle of view] pairs of an exemplary camera are listed in Table 1 below:
  • TABLE 1
    FL (mm)  14    20   24   28   35   50   70   80   85   100  135  200  300  400  500
    AoV      114°  94°  84°  75°  63°  46°  34°  30°  28°  24°  18°  12°
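  • The pairs in Table 1 follow the standard relation between focal length and diagonal angle of view, AoV = 2·arctan(d/(2f)). The short sketch below reproduces the listed values under the assumption of a sensor with a ~43.3 mm diagonal (full-frame); the sensor size is an illustrative assumption and is not specified in the disclosure.

```python
import math

def angle_of_view_deg(focal_length_mm: float, sensor_dim_mm: float = 43.3) -> float:
    """Diagonal angle of view under a thin-lens/pinhole model: 2*atan(d / (2*f))."""
    return math.degrees(2.0 * math.atan(sensor_dim_mm / (2.0 * focal_length_mm)))

for f in (14, 20, 24, 28, 35, 50, 70, 100, 200):
    print(f"{f:>4} mm -> {angle_of_view_deg(f):5.1f} deg")
# e.g. 28 mm -> ~75 deg and 100 mm -> ~24 deg, matching Table 1.
```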
  • In some embodiments, each of left cameras 210 (i.e., L0, L1, or L2) is configured with a different focal length. For example, cameras L0, L1, and L2 may be set with focal lengths of 28 mm, 70 mm, and 100 mm, respectively. Accordingly, the angles of view of cameras L0, L1, and L2 will be 75°, 34°, and 24°, respectively. It is contemplated that the cameras can be configured with other focal lengths. By using such settings, cameras 200 may keep the entire image space in front of vehicle 100 in focus. It is contemplated that different optical settings or lenses other than focal lengths could be used.
  • In some embodiments, left cameras 210 and right cameras 220 may have orthogonal polarization. For example, left cameras 210 may have a polarization of −45 degrees, while right cameras 220 may have a polarization of +45 degrees. The polarizations are 90 degrees apart and thus orthogonal to each other. Using orthogonal polarization between the two sets of cameras enables cameras 200 to collectively cover a wider field of view. Although 45-degree polarizations are illustrated in FIG. 2, it is contemplated that the two sets of polarizations could be offset by any other angle, chosen based on environmental conditions. In the event that N (N>2) groups of cameras are used, in some embodiments, the polarizations of the different groups of cameras may be set to be 180°/N apart. For example, if there are three groups of cameras, a polarization scheme of 0°, 60°, and 120°, respectively, may be used.
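  • A minimal sketch of the 180°/N spacing rule described above; the function name and the choice of starting offset are illustrative assumptions, not part of the disclosure.

```python
def group_polarizations(n_groups: int, offset_deg: float = 0.0) -> list:
    """Polarization angles for N camera groups, spaced 180/N degrees apart."""
    step = 180.0 / n_groups
    return [offset_deg + i * step for i in range(n_groups)]

print(group_polarizations(2, offset_deg=-45.0))  # [-45.0, 45.0]: the +/-45 degree scheme above
print(group_polarizations(3))                    # [0.0, 60.0, 120.0]
```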
  • Returning to FIG. 1, in some embodiments, cameras 130 may communicate with a controller 150. In some embodiments, controller 150 may be a controller onboard vehicle 100, e.g., the electronic control unit. In some embodiments, controller 150 may be part of a local physical server, a cloud server (as illustrated in FIG. 1), a virtual server, a distributed server, or any other suitable computing device. Controller 150 may communicate with cameras 130, and/or other components of vehicle 100 via a network, such as a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), wireless networks such as radio waves, a cellular network, a satellite communication network, and/or a local or short-range wireless network (e.g., Bluetooth™).
  • Consistent with the present disclosure, controller 150 may be responsible for processing image data captured by cameras 130 and performing vehicle functions based on the image data. Due to the redundancy offered by cameras 130, controller 150 can estimate depth information of an object within the image space based on the image data captured by cameras 130, using a combination of algorithms that would otherwise not be possible. In some embodiments, controller 150 can estimate the distance to a point on a 2D projected image by identifying which of cameras 130 are in focus at that point, and use that information to infer the distance.
  • For example, FIG. 3 illustrates a block diagram of an exemplary controller 150, according to embodiments of the disclosure. Consistent with the present disclosure, controller 150 may receive image data 303 from cameras 130. Controller 150 may estimate depth of an object using image data 303, correct artifacts in image data 303, and/or make vehicle control decisions based on image data 303.
  • In some embodiments, as shown in FIG. 3, controller 150 includes a communication interface 302, a processor 304, a memory 306, and a storage 308. In some embodiments, controller 150 includes different modules in a single device, such as an integrated circuit (IC) chip (implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA)), or separate devices with dedicated functions. In some embodiments, one or more components of controller 150 may be located in a cloud, or may be alternatively in a single location (such as inside vehicle 100 or a mobile device) or distributed locations. Components of controller 150 may be in an integrated device, or distributed at different locations but communicate with each other through a network (not shown).
  • Communication interface 302 may send data to and receive data from components such as cameras 130 via communication cables, a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), wireless networks such as radio waves, a cellular network, and/or a local or short-range wireless network (e.g., Bluetooth™), or other communication methods. In some embodiments, communication interface 302 can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection. As another example, communication interface 302 can be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links can also be implemented by communication interface 302. In such an implementation, communication interface 302 can send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information via a network.
  • Consistent with some embodiments, communication interface 302 may receive image data 303 captured by cameras 130. Communication interface 302 may further provide the received data to storage 308 for storage or to processor 304 for processing.
  • Processor 304 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. Processor 304 may be configured as a separate processor module dedicated to performing vehicle functions based on image data captured by cameras 130. Alternatively, processor 304 may be configured as a shared processor module for performing other functions.
  • As shown in FIG. 3, processor 304 includes multiple modules, such as a depth estimation unit 310, an artifacts correction unit 312, and a decision unit 314, and the like. These modules (and any corresponding sub-modules or sub-units) can be hardware units (e.g., portions of an integrated circuit) of processor 304 designed for use with other components or software units implemented by processor 304 through executing at least part of a program. The program may be stored on a computer-readable medium, and when executed by processor 304, it may perform one or more functions. Although FIG. 3 shows units 310-314 all within one processor 304, it is contemplated that these units may be distributed among multiple processors located near or remotely with each other.
  • Depth estimation unit 310 is configured to estimate the distances between cameras 130 and the objects. In some embodiments, depth estimation unit 310 uses multiple focused images from real aperture cameras to estimate the depth of the scene from cameras 130 (referred to as a “depth-from-focus” method). Alternatively, depth estimation unit 310 can also use multiple defocused images from the real aperture cameras to estimate the depth (referred to as a “depth-from-defocus” method). Real aperture cameras have a relatively short depth of field, resulting in images that appear focused only on a small 3D slice of the scene.
  • FIG. 4 illustrates an exemplary “depth-from-focus” method, according to embodiments of the disclosure. The method will be explained using an optical geometry 400. The lens of each camera 130 is modeled via the thin lens law:
  • $\frac{1}{f} = \frac{1}{v} + \frac{1}{u}$  (1)
  • where f is the focal length of camera 130, u is the distance between lens plane 410 and the object in focus 420, and v is the distance from lens plane 410 to the image plane 430. Based on Equation (1), the distance u to the focused object can be determined using Equation (2):
  • $u = 1 / \left( \frac{1}{f} - \frac{1}{v} \right)$  (2)
  • By using multiple cameras 130 with different focal lengths $f_1, f_2, \ldots, f_n$ pointing at the same scene, a set of distances $u_1, \ldots, u_n$ between the focused scenes and the n different cameras can be obtained using Equation (3):
  • $u_i = 1 / \left( \frac{1}{f_i} - \frac{1}{v_i} \right), \quad i = 1, \ldots, n$  (3)
  • where $\{f_1, \ldots, f_n, v_1, \ldots, v_n\}$ are the parameters of the respective cameras. By determining which object is in focus in which camera (e.g., camera i), the distance of the object to the camera can be estimated as the respective distance $u_i$.
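  • A small numerical sketch of Equations (1)–(3): given a camera's focal length f and lens-to-image-plane distance v, its focused distance u follows from Equation (2). The f and v values below are hypothetical and chosen only so the three cameras come out focused near 20 m, 40 m, and 60 m, matching the example that follows.

```python
def focused_distance(f_m: float, v_m: float) -> float:
    """Solve the thin-lens law 1/f = 1/v + 1/u for u (Equation (2))."""
    return 1.0 / (1.0 / f_m - 1.0 / v_m)

# Hypothetical cameras: (focal length, lens-to-image-plane distance), in meters.
cameras = {
    "camera_1": (0.050, 0.050125),   # focused near 20 m
    "camera_2": (0.050, 0.0500625),  # focused near 40 m
    "camera_3": (0.050, 0.0500417),  # focused near 60 m
}

for name, (f, v) in cameras.items():
    print(f"{name}: u = {focused_distance(f, v):.1f} m")
```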
  • Referring back to FIG. 3, depth estimation unit 310 may determine, for each object in the image space, the camera (e.g., camera i) in which the object is in focus. The determination may be performed, e.g., through image processing methods. Depth estimation unit 310 may then determine the distance to the object as the distance u_i of the camera in focus. In some embodiments, the object may be in focus in more than one camera (e.g., cameras i and j). Accordingly, the distance to the object may be determined to lie within the range u_i to u_j.
  • For example, cameras 130 may include 3 real aperture cameras pointing at 3 objects. Depth estimation unit 310 may first determine that the focus distances of the 3 cameras are 20, 40, and 60 meters. Depth estimation unit 310 may then determine in which camera(s) each object is in focus. Table 2 summarizes the information obtained.
  • TABLE 2
    Is the object in focus?
    Focus distance u (m)   Camera     Object 1   Object 2   Object 3
    20                     Camera 1   Yes        No         No
    40                     Camera 2   No         Yes        No
    60                     Camera 3   No         Yes        Yes
  • Based on Table 2, depth estimation unit 310 may estimate the distances to objects. For example, object 1 is about 20 meters, object 2 is about 40-60 meters, and object 3 is about 60 meters away from the cameras.
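  • One plausible way to obtain the in-focus flags of Table 2 is to score the sharpness of each object's image patch in each camera and keep the cameras that exceed a threshold. The sketch below uses the variance of the Laplacian (via OpenCV) as the focus measure; the measure, the threshold, and the data layout are implementation assumptions, not requirements of the disclosure.

```python
import cv2
import numpy as np

def sharpness(gray_patch: np.ndarray) -> float:
    """Variance of the Laplacian: a common (assumed) focus measure."""
    return float(cv2.Laplacian(gray_patch, cv2.CV_64F).var())

def estimate_range(patches_by_camera: dict, focus_distance_m: dict,
                   threshold: float = 100.0):
    """Return (min, max) focus distance over the cameras where the object looks sharp."""
    in_focus = [cam for cam, patch in patches_by_camera.items()
                if sharpness(patch) >= threshold]
    if not in_focus:
        return None
    dists = sorted(focus_distance_m[cam] for cam in in_focus)
    return dists[0], dists[-1]

# Usage sketch: object 2 from Table 2 is sharp in cameras 2 and 3 -> (40, 60) meters.
# patches_by_camera would hold the grayscale crop of the object from each camera's image.
```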
  • In some embodiments, when cameras 130 are installed at different locations, such as in FIG. 2, the captured image data become stereoscopic images that provide stereo vision. Depth estimation unit 310 can use the stereoscopic images to further improve the distance estimation using simple geometry. In some embodiments, e.g., left cameras 210 and right cameras 220 capture images of the same object/scene from two vantage points. Depth estimation unit 310 can extract three-dimensional (3D) information by examining the relative positions of the object in the two images. In other words, a left camera and a right camera may collectively act as a binocular camera. Depth estimation unit 310 may compare the two images and determine the relative depth information in the form of a disparity map. A disparity map encodes the difference in horizontal coordinates of corresponding image points. The values in this disparity map are inversely proportional to the scene depth at the corresponding pixel locations. Therefore, depth estimation unit 310 may determine additional depth information using the disparity map.
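  • Since disparity is inversely proportional to depth, a disparity map can be converted to metric depth as Z = f·B/d, where f is the focal length in pixels and B is the baseline between the left and right cameras. The sketch below assumes the disparity map has already been computed; the focal length and baseline values are placeholders, not values from the disclosure.

```python
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray,
                       focal_length_px: float,
                       baseline_m: float) -> np.ndarray:
    """Convert a disparity map (pixels) to a depth map (meters): Z = f * B / d."""
    depth = np.full_like(disparity_px, np.inf, dtype=np.float64)
    valid = disparity_px > 0
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Placeholder values: a 720-pixel focal length and a 0.3 m baseline between
# a left camera and a right camera acting as a binocular pair.
disparity = np.array([[12.0, 6.0], [0.0, 24.0]])
print(disparity_to_depth(disparity, focal_length_px=720.0, baseline_m=0.3))
# -> 18 m and 36 m in the first row; no disparity (inf) and 9 m in the second row
```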
  • Artifacts correction unit 312 may be configured to correct artifacts in image data 303. The image data captured by the cameras may contain artifacts caused by the lens properties, such as lens flares caused by bright light sources, green rays or “ghosts” caused by self-reflection in a lens. The image data may additionally or alternatively contain other artifacts such as discolorations or over-bright/under-bright images caused by the CMOS settings. Artifacts correction unit 312 may correct the artifacts using methods taking advantage of the redundancy provided by the multiple cameras. For example, the images taken by the different cameras may be averaged or otherwise aggregated to remove or reduce an artifact.
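  • A minimal sketch of the averaging idea, assuming the per-camera images have already been registered to a common viewpoint (registration is not shown): a pixel-wise mean attenuates artifacts visible in only some cameras, and a median rejects a single-camera artifact outright.

```python
import numpy as np

def average_registered_images(images: list) -> np.ndarray:
    """Pixel-wise mean of images already registered to a common viewpoint.

    An artifact present in only one camera (e.g., a lens flare) is attenuated by
    roughly a factor of N, and random sensor noise drops by roughly sqrt(N).
    """
    stack = np.stack([img.astype(np.float64) for img in images], axis=0)
    return stack.mean(axis=0)

def median_registered_images(images: list) -> np.ndarray:
    """Pixel-wise median: a simple alternative that ignores a single-camera outlier."""
    stack = np.stack([img.astype(np.float64) for img in images], axis=0)
    return np.median(stack, axis=0)
```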
  • Decision unit 314 may make vehicle control decisions based on the processed image data. For example, decision unit 314 may make autonomous driving decisions, e.g., to avoid objects, based on the estimated distances of the objects. Examples of autonomous driving decisions include accelerating, braking, changing lanes, changing driving directions, etc. For example, if a pedestrian is detected at 20 meters from vehicle 100, decision unit 314 may automatically apply braking immediately. If a pedestrian is detected only 10 meters away and in the direction that vehicle 100 is moving towards, decision unit 314 may steer vehicle 100 away from the pedestrian.
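  • The brake-or-steer choice can be phrased as a stopping-distance check, d ≈ v·t_react + v²/(2·a_max). The deceleration, reaction time, and speed in the sketch below are illustrative assumptions, not values taken from the disclosure.

```python
def stopping_distance_m(speed_mps: float,
                        max_decel_mps2: float = 6.0,
                        reaction_time_s: float = 0.2) -> float:
    """Distance needed to stop: reaction travel plus v^2 / (2*a)."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * max_decel_mps2)

def avoidance_decision(object_distance_m: float, speed_mps: float) -> str:
    """Pick an action based on whether the vehicle can stop before the object."""
    if object_distance_m > stopping_distance_m(speed_mps):
        return "brake"          # enough room to stop in the lane
    return "brake_and_steer"    # not enough room: also steer away from the object

# Example: at ~14 m/s (~50 km/h), stopping needs ~19 m, so a pedestrian detected
# at 20 m yields "brake" and one at 10 m yields "brake_and_steer".
print(avoidance_decision(20.0, 14.0), avoidance_decision(10.0, 14.0))
```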
  • Memory 306 and storage 308 may include any appropriate type of mass storage provided to store any type of information that processor 304 may need to operate. Memory 306 and storage 308 may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium including, but not limited to, a ROM, a flash memory, a dynamic RAM, and a static RAM. Memory 306 and/or storage 308 may be configured to store one or more computer programs that may be executed by processor 304 to perform image data processing and vehicle control functions disclosed herein. For example, memory 306 and/or storage 308 may be configured to store program(s) that may be executed by processor 304 to estimate depth information or otherwise make vehicle control functions based on the captured image data.
  • Memory 306 and/or storage 308 may be further configured to store information and data used by processor 304. For instance, memory 306 and/or storage 308 may be configured to store the various types of data (e.g., image data) captured by cameras 130 and data related to camera setting. Memory 306 and/or storage 308 may also store intermediate data such as the estimated depths by depth estimation unit 310. The various types of data may be stored permanently, removed periodically, or disregarded immediately after each frame of data is processed.
  • FIG. 5 illustrates a flowchart of an exemplary method 500 performed by a camera system, according to embodiments of the disclosure. In some embodiments, method 500 may be implemented by controller 150 that includes, among other things, processor 304. However, method 500 is not limited to that exemplary embodiment. Method 500 may include steps S502-S512 as described below. It is to be appreciated that some of the steps may be optional to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown in FIG. 5.
  • In step S502, cameras 130 capture image data of at least one object within a predetermined image space. In some embodiments, the predetermined image space may be a 3D scene within a certain distance of cameras 130 in the direction cameras 130 are pointing. For example, when cameras 130 are front-facing cameras installed at the front of vehicle 100, the predetermined image space may be the 3D space within, e.g., 60 meters in front of vehicle 100. In some embodiments, the object may be another vehicle, a motorcycle, a bicycle, a pedestrian, a building, a tree, a traffic sign, a traffic light, etc. One or more objects may be in the predetermined image space.
  • In some embodiments, cameras 130 may be configured with different focal lengths. Cameras 130 may point to and take images of the same 3D scene, simultaneously or sequentially. The image data captured by cameras 130 may be transmitted to controller 150, e.g., via a network. The one or more objects, depending on their distances from cameras 130, may be in focus in images taken by one or more of cameras 130. For example, if an object is about 10 meters away, it will be in focus in the images taken by a camera with a focused distance of 10 meters.
  • In some embodiments, when cameras 130 are in a stereo setting, e.g., as shown in FIG. 2, the image data captured in step S502 may include stereoscopic images. The cameras may be divided into two or more groups (e.g., N groups) and placed at different locations of vehicle 100. In some embodiments, the ith group of cameras may be configured to have an i*180°/N polarization.
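For illustration only, the i*180°/N group polarization rule can be tabulated as below. Treating i as 1-based and folding 180° back to 0° are editorial assumptions, since the disclosure does not spell out the index convention.

```python
def group_polarizations(num_groups: int) -> list[float]:
    """Polarization angle (degrees, modulo 180) of the i-th of N camera groups,
    following the i * 180 / N rule; i is assumed to run from 1 to N."""
    return [(i * 180.0 / num_groups) % 180.0 for i in range(1, num_groups + 1)]

# Two groups as in FIG. 2: [90.0, 0.0], i.e., orthogonally polarized camera sets.
print(group_polarizations(2))
```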
  • In step S504, controller 150 determines the focused distance of each camera 130. Parameters and settings of cameras 130 may be pre-stored in controller 150 or provided by cameras 130 along with the image data. Camera parameters and settings may include, among other things, focal length, angle of view, aperture, shutter speed, white balance, metering, filters, etc. The focal length is usually determined by the type of lens used (normal, long focus, wide angle, telephoto, macro, fisheye, or zoom). Camera parameters may also include, e.g., a distance v between the camera's lens plane (e.g., 410 in FIG. 4) and the image plane (e.g., 430 in FIG. 4). Controller 150 determines the focused distance u for each camera based on its focal length f and the distance v, e.g., according to Equation (2).
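Assuming Equation (2) is the standard thin-lens relation 1/f = 1/u + 1/v (an editorial assumption, since the equation itself appears earlier in the specification), the per-camera computation in step S504 could be sketched as follows; names and values are illustrative only.

```python
def focused_distance(focal_length_m: float, image_distance_m: float) -> float:
    """Object-side focused distance u from the thin-lens relation
    1/f = 1/u + 1/v, i.e. u = f * v / (v - f).

    focal_length_m: focal length f of the camera lens (meters).
    image_distance_m: lens-plane-to-image-plane distance v (meters).
    """
    f, v = focal_length_m, image_distance_m
    if v <= f:
        # v approaching f corresponds to focusing at infinity.
        return float("inf")
    return f * v / (v - f)

# Example: f = 50 mm and v = 50.25 mm give u of roughly 10.05 m in front of the lens.
print(focused_distance(0.050, 0.05025))
```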
  • In step S506, controller 150 identifies one or more cameras in which the object is in focus. In some embodiments, the determination may be performed, e.g., through image processing methods. In step S508, controller 150 determines depth information of the object. In some embodiments, controller 150 determines the distance between cameras 130 and the object. The distance may be estimated using the focused distance ui of the camera i identified in step S506. In some embodiments, the object may be in focus in more than one camera (e.g., cameras i and j). Accordingly, the distance to the object may be determined as falling within the range between ui and uj.
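One common image-processing heuristic for deciding, per camera, whether the object is in focus is the variance of the Laplacian over the object's image region. The sketch below uses that heuristic and then bounds the object distance by the focused distances of the cameras that report the object as sharp; it is one possible method and not the one mandated by the disclosure, and the threshold, function names, and data structures are assumptions.

```python
import cv2
import numpy as np

SHARPNESS_THRESHOLD = 120.0  # assumed value; would be tuned per camera/CMOS

def is_in_focus(gray_patch: np.ndarray) -> bool:
    """Sharpness test: variance of the Laplacian over the object's image patch."""
    return cv2.Laplacian(gray_patch, cv2.CV_64F).var() > SHARPNESS_THRESHOLD

def estimate_distance_range(patches_by_camera: dict[int, np.ndarray],
                            focused_distance_by_camera: dict[int, float]):
    """Return (min, max) of the focused distances u_i of the cameras in which the
    object appears sharp, or None if it is not sharp in any camera."""
    u_values = [focused_distance_by_camera[cam]
                for cam, patch in patches_by_camera.items()
                if is_in_focus(patch)]
    if not u_values:
        return None
    return min(u_values), max(u_values)
```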
  • In some embodiments, when cameras 130 are in a stereo setting, e.g., as shown in FIG. 2, controller 150 may derive additional depth information from the stereoscopic images captured by the cameras. For example, controller 150 can determine the relative depth information of the object in the form of a disparity map that encodes the difference in horizontal coordinates of corresponding image points. Controller 150 can then calculate the scene distance based on the inverse relationship between the values in the disparity map and the depths at the corresponding pixel locations.
  • In step S510, controller 150 controls vehicle operations based on the image data. For example, controller 150 may make autonomous driving decisions, such as accelerating, braking, changing lanes, changing driving directions, etc. In particular, controller 150 may make control decisions to avoid objects based on their estimated distances. For instance, when an object (e.g., a pedestrian) is detected at a distance that still allows vehicle 100 to fully stop before colliding with it, controller 150 may control vehicle 100 to brake, and may determine the braking force needed for vehicle 100 to stop within the estimated distance to the object. If the detected distance of the object no longer allows vehicle 100 to fully stop, controller 150 may steer vehicle 100 away from the direction it is moving towards, in addition or as an alternative to braking.
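The brake-versus-steer logic of step S510 can be summarized with the usual stopping-distance relation d = v^2 / (2a) for speed v and available deceleration a. The sketch below is an editorial illustration under assumed values for deceleration and safety margin; it is not a figure or threshold taken from the disclosure.

```python
def plan_avoidance(speed_mps: float,
                   object_distance_m: float,
                   max_decel_mps2: float = 6.0,   # assumed braking capability
                   margin_m: float = 2.0) -> str:  # assumed safety margin
    """Choose between braking and steering based on stopping distance.

    Stopping distance from speed v with deceleration a: d = v^2 / (2 * a).
    """
    stopping_distance = speed_mps ** 2 / (2.0 * max_decel_mps2)
    if stopping_distance + margin_m <= object_distance_m:
        return "brake"            # the vehicle can stop before the object
    return "brake_and_steer"      # too close to stop in time: also steer away

# Example: at 15 m/s (about 54 km/h) the vehicle needs roughly 18.8 m to stop,
# so an object detected 20 m ahead leaves too little margin and triggers steering.
print(plan_avoidance(speed_mps=15.0, object_distance_m=20.0))
```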
  • In step S512, controller 150 corrects artifacts in the image data captured by cameras 130 using the redundancy provided by the disclosed camera system. In some embodiments, the artifacts may be caused by the lens and/or CMOS settings. Controller 150 may correct the artifacts by, e.g., averaging images taken by the different cameras to improve the signal-to-noise ratio (SNR). Controller 150 may also use machine learning based methods to correct the artifacts.
  • Another aspect of the disclosure is directed to a non-transitory computer-readable medium storing instructions which, when executed, cause one or more processors to perform the methods, as discussed above. The computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable medium or computer-readable storage devices. For example, the computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed. In some embodiments, the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.
  • It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed system and related methods. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed system and related methods.
  • It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.

Claims (20)

1. A camera system for a vehicle, comprising:
a plurality of cameras each configured with a different camera setting, collectively keeping a predetermined image space in focus, wherein the predetermined image space includes at least one object, wherein the predetermined image space is defined relative to the vehicle and wherein the plurality of cameras are located at a plurality of different positions on the vehicle; and
a controller, configured to:
receive image data captured by the plurality of cameras of the predetermined image space; and
determine depth information of the at least one object based on the image data, wherein the depth information comprises a position of the at least one object relative to the vehicle.
2. The system of claim 1, wherein the vehicle is an autonomous driving vehicle, and wherein the controller is further configured to make an autonomous driving decision based on the image data.
3. The system of claim 1, wherein each camera is configured with a different focal length.
4. The system of claim 3, wherein the controller is configured to:
determine focused distances of the plurality of cameras, based on focal lengths of the cameras;
identify at least one camera, among the plurality of cameras, in which an object of the at least one object is in focus; and
determine the depth information of the object based on the focused distance of the identified camera.
5. The system of claim 1, wherein each camera is configured with a different angle of view.
6. The system of claim 1, wherein the plurality of cameras include two sets of cameras, each set of cameras at a different location of the vehicle.
7. The system of claim 6, wherein the two sets of cameras are configured with orthogonal polarization.
8. The system of claim 1, wherein the plurality of cameras include a first camera and a second camera, wherein the controller is further configured to correct artifacts in image data captured by the first camera using image data captured by the second camera.
9. The system of claim 1, wherein the plurality of cameras are configured to point to a same direction.
10. A vehicle, comprising:
a body;
at least one wheel;
a plurality of cameras equipped on the body and configured with a different camera setting, collectively keeping a predetermined image space in focus, wherein the predetermined image space includes at least one object, wherein the predetermined image space is defined relative to the vehicle, and wherein the plurality of cameras are located at a plurality of different positions on the vehicle; and
a controller, configured to:
receive image data captured by the plurality of cameras of the predetermined image space;
determine depth information of the at least one object based on the image data, wherein the depth information comprises a position of the at least one object relative to the vehicle; and
control at least one function of the vehicle based on the depth information.
11. The vehicle of claim 10, wherein the vehicle is an autonomous driving vehicle, and wherein the controller is further configured to make an autonomous driving decision based on the image data.
12. The vehicle of claim 10, wherein each camera is configured with a different focal length.
13. The vehicle of claim 12, wherein the controller is configured to:
determine the focused distances of the plurality of cameras, based on focal lengths of the cameras;
identify at least one camera, among the plurality of cameras, in which an object of the at least one object is in focus; and
determine the depth information of the object based on the focused distance of the identified camera.
14. The vehicle of claim 10, wherein each camera is configured with a different angle of view.
15. The vehicle of claim 10, wherein the plurality of cameras include two sets of cameras, each set of cameras at a different location of the vehicle, wherein the two sets of cameras are configured with orthogonal polarization.
16. A sensing method, comprising:
capturing image data of a predetermined image space including at least one object using a plurality of cameras each configured with a different camera setting, collectively keeping a predetermined image space in focus, wherein the predetermined image space is defined relative to a vehicle and wherein the plurality of cameras are located at a plurality of different positions on the vehicle; and
determining, by at least one processor, depth information of the at least one object based on the image data, wherein the depth information comprises a position of the at least one object relative to the vehicle.
17. The sensing method of claim 16, wherein the vehicle comprises an autonomous driving vehicle on which the plurality of cameras are equipped, and the method further includes:
making an autonomous driving decision based on the image data.
18. The sensing method of claim 16, wherein each camera is configured with a different focal length, and wherein determining depth information further includes:
determining the focused distances of the plurality of cameras, based on focal lengths of the cameras;
identifying at least one camera, among the plurality of cameras, in which an object of the at least one object is in focus; and
determining the depth information of the object based on the focused distance of the identified camera.
19. The sensing method of claim 16, wherein the plurality of cameras include a first camera and a second camera, wherein the method further includes:
correcting artifacts in image data captured by the first camera using image data captured by the second camera.
20. The sensing method of claim 16, wherein each camera is configured with a different angle of view.
US16/208,483 2018-12-03 2018-12-03 Multicamera system for autonomous driving vehicles Active US10682955B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US16/208,483 US10682955B1 (en) 2018-12-03 2018-12-03 Multicamera system for autonomous driving vehicles
CN201880098263.7A CN113196007B (en) 2018-12-03 2018-12-26 Camera system applied to vehicle
PCT/US2018/067564 WO2020117285A1 (en) 2018-12-03 2018-12-26 A multicamera system for autonamous driving vehicles
US16/869,465 US11173841B2 (en) 2018-12-03 2020-05-07 Multicamera system for autonamous driving vehicles

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/208,483 US10682955B1 (en) 2018-12-03 2018-12-03 Multicamera system for autonomous driving vehicles

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/869,465 Continuation US11173841B2 (en) 2018-12-03 2020-05-07 Multicamera system for autonamous driving vehicles

Publications (2)

Publication Number Publication Date
US20200172014A1 true US20200172014A1 (en) 2020-06-04
US10682955B1 US10682955B1 (en) 2020-06-16

Family

ID=70851138

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/208,483 Active US10682955B1 (en) 2018-12-03 2018-12-03 Multicamera system for autonomous driving vehicles
US16/869,465 Active US11173841B2 (en) 2018-12-03 2020-05-07 Multicamera system for autonamous driving vehicles

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/869,465 Active US11173841B2 (en) 2018-12-03 2020-05-07 Multicamera system for autonamous driving vehicles

Country Status (3)

Country Link
US (2) US10682955B1 (en)
CN (1) CN113196007B (en)
WO (1) WO2020117285A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220207769A1 (en) * 2020-12-28 2022-06-30 Shenzhen GOODIX Technology Co., Ltd. Dual distanced sensing method for passive range finding
US12014508B2 (en) 2021-10-18 2024-06-18 Ford Global Technologies, Llc Distance determination from image data
US20240040269A1 (en) * 2022-07-26 2024-02-01 Tusimple, Inc. Sensor configuration for autonomous vehicles

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334837B (en) * 2008-07-31 2012-02-29 重庆大学 Multi-method integrated license plate image positioning method
CN101900551A (en) * 2009-05-27 2010-12-01 上海欣纳电子技术有限公司 Vehicle-mounted panoramic safety monitoring system
US8509982B2 (en) * 2010-10-05 2013-08-13 Google Inc. Zone driving
CN103582846B (en) * 2012-05-28 2017-03-22 松下知识产权经营株式会社 Depth estimation imaging device
DE102012105436B4 (en) * 2012-06-22 2021-12-16 Conti Temic Microelectronic Gmbh Vehicle camera for distance measurement
JP2014074632A (en) * 2012-10-03 2014-04-24 Isuzu Motors Ltd Calibration apparatus of in-vehicle stereo camera and calibration method
JP2015232442A (en) * 2012-10-04 2015-12-24 アルプス電気株式会社 Image processor and vehicle front monitoring device
US9438794B2 (en) * 2013-06-25 2016-09-06 Omnivision Technologies, Inc. Method and apparatus for distributed image processing in cameras for minimizing artifacts in stitched images
US10572744B2 (en) * 2014-06-03 2020-02-25 Mobileye Vision Technologies Ltd. Systems and methods for detecting an object
JP6585006B2 (en) * 2016-06-07 2019-10-02 株式会社東芝 Imaging device and vehicle
US10706569B2 (en) * 2016-06-08 2020-07-07 Amazon Technologies, Inc. Selectively paired imaging elements for stereo images
CN106952303B (en) * 2017-03-09 2020-04-24 北京旷视科技有限公司 Vehicle distance detection method, device and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070090311A1 (en) * 2005-10-21 2007-04-26 C.R.F. Societa Consortile Per Azioni Orbassano (Torino), Italy Optical sensor device to be installed on board a motor-vehicle for aid in driving and/or for automatic activation of systems provided on the motor-vehicle
US20110085789A1 (en) * 2009-10-13 2011-04-14 Patrick Campbell Frame Linked 2D/3D Camera System
US20140016016A1 (en) * 2012-07-16 2014-01-16 Alexander Berestov System And Method For Effectively Implementing A Lens Array In An Electronic Device
US20140111650A1 (en) * 2012-10-19 2014-04-24 Qualcomm Incorporated Multi-camera system using folded optics
US20170032197A1 (en) * 2015-07-29 2017-02-02 Mando Corporation Camera device for vehicle

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180330526A1 (en) * 2017-05-10 2018-11-15 Fotonation Limited Multi-camera vehicle vision system and method
US11615566B2 (en) * 2017-05-10 2023-03-28 Fotonation Limited Multi-camera vehicle vision system and method
US11227398B2 (en) * 2019-01-30 2022-01-18 Baidu Usa Llc RGB point clouds based map generation system for autonomous vehicles
US20210084235A1 (en) * 2019-09-16 2021-03-18 Tusimple, Inc. Sensor layout for autonomous vehicles
US11076109B2 (en) * 2019-09-16 2021-07-27 Tusimple, Inc. Sensor layout for autonomous vehicles
US20210344849A1 (en) * 2019-09-16 2021-11-04 Tusimple, Inc. Sensor layout for autonomous vehicles
US11729520B2 (en) * 2019-09-16 2023-08-15 Tusimple, Inc. Sensor layout for autonomous vehicles
US20220381899A1 (en) * 2021-05-28 2022-12-01 Beijing Tusen Zhitu Technology Co., Ltd. Sensor layout of vehicles
DE102021212292A1 (en) 2021-11-02 2023-05-04 Siemens Mobility GmbH Device and method for environmental monitoring

Also Published As

Publication number Publication date
WO2020117285A1 (en) 2020-06-11
CN113196007B (en) 2022-07-22
US10682955B1 (en) 2020-06-16
US11173841B2 (en) 2021-11-16
CN113196007A (en) 2021-07-30
WO2020117285A9 (en) 2021-06-24
US20200262350A1 (en) 2020-08-20


Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4