US11184531B2 - Dynamic image blending for multiple-camera vehicle systems - Google Patents


Info

Publication number
US11184531B2
US11184531B2 US16/325,171 US201616325171A
Authority
US
United States
Prior art keywords
image
video
electronic processor
vehicle
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/325,171
Other versions
US20190230282A1 (en)
Inventor
Greg Sypitkowski
Patrick Graf
James Stephen Miller
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH
Priority to US16/325,171
Publication of US20190230282A1
Assigned to ROBERT BOSCH GMBH. Assignors: SYPITKOWSKI, GREG; GRAF, PATRICK; MILLER, JAMES STEPHEN
Application granted
Publication of US11184531B2
Legal status: Active
Adjusted expiration

Classifications

    • H04N5/23238
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/77 - Retouching; Inpainting; Scratch removal
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/001 - Texturing; Colouring; Generation of texture or colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/005
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 - Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/105 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/303 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using joined images, e.g. multiple camera images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 - Obstacle

Definitions

  • vehicle imaging systems include one or more video cameras positioned on an exterior of a vehicle.
  • the video cameras monitor an area surrounding the vehicle for objects and hazards.
  • Some vehicle imaging systems provide a display on or near the dashboard for viewing by a driver. As a consequence, perception by a driver of the area surrounding the vehicle is enhanced.
  • the vehicle imaging systems may combine multiple camera video streams into a composite video for viewing by the driver.
  • the multiple camera video streams are processed by video processing equipment to provide multiple different views including wide-angle views, top-down views, and the like.
  • camera feeds from four wide-angle (e.g., omnidirectional) cameras spaced on different sides of a vehicle 100 are combined into a composite video that provides a virtual top-down view of the vehicle 100.
  • the top-down video displays the vehicle 100 and surrounding objects as illustrated in FIG. 1.
  • the composite video contains distorted objects 105 with stretched dimensions, cropped edges, flattened surfaces, and the like.
  • blind spots 115 in the top-down video may occur due to blocking objects 120 that block a field of view 125 of a video camera 130.
  • Embodiments provide systems and methods that enhance images displayed to a driver of the vehicle for improved clarity and field of view.
  • the systems and methods augment distorted objects to provide improved views of the vehicle and surrounding area.
  • the systems and methods also dynamically blend multiple video streams to reduce blind spots caused from objects in the image.
  • One embodiment provides a method of generating a composite video for display in a vehicle.
  • the method includes generating a plurality of video streams from a plurality of video cameras configured to be positioned on the vehicle.
  • one or more of the plurality of video streams are transformed by an electronic processor to create a virtual camera viewpoint.
  • the plurality of transformed video streams are combined to generate a composite video including a portion of a first image that is generated from a first one of the plurality of video cameras.
  • the electronic processor detects an object external to the vehicle and determines whether the object at least partially obscures the portion of the first image. When the object at least partially obscures the portion of the first image, the electronic processor supplements the portion of the first image with a portion of a second image that is generated by a second one of the plurality of video cameras.
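Taken together, the claimed steps amount to a per-frame compositing loop. The sketch below illustrates the idea on single-channel frames; the function names, the mask-based partition of the composite, and the use of NumPy are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def generate_composite(frames, masks):
    """Combine per-camera frames into one composite image; masks[i] marks
    which composite pixels camera i supplies (a hypothetical partition)."""
    out = np.zeros_like(frames[0])
    for frame, mask in zip(frames, masks):
        out[mask] = frame[mask]
    return out

def supplement(composite, blocked_mask, secondary_frame):
    """Where an object obscures the first camera's contribution, substitute
    the corresponding pixels from a second camera's (transformed) frame."""
    out = composite.copy()
    out[blocked_mask] = secondary_frame[blocked_mask]
    return out
```

In this sketch the masks would come from the stitching geometry, and `blocked_mask` from the object-detection step described in the claims.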
  • the system includes a plurality of video cameras that generate a plurality of video streams and that are configured to be positioned on the vehicle.
  • the system also includes a display and an electronic processor communicatively coupled to the plurality of video cameras and the display.
  • the electronic processor is configured to transform the plurality of video streams to create a virtual camera viewpoint.
  • the electronic processor combines the plurality of transformed video streams to generate a composite video including a portion of a first image that is generated from a first one of the plurality of video cameras.
  • the electronic processor detects an object external to the vehicle and determines whether the object at least partially obscures the portion of the first image. When the object at least partially obscures the portion of the first image, the electronic processor supplements the portion of the first image with a portion of a second image that is generated by a second one of the plurality of video cameras.
  • FIG. 1 is a top-down perspective view of a vehicle and surrounding area as displayed on a vehicle display.
  • FIG. 2 is a block diagram of a vehicle equipped with a dynamic image blending and augmentation system according to one embodiment.
  • FIG. 3 is a block diagram of an electronic control unit and associated connections of the vehicle of FIG. 2 in accordance with one embodiment.
  • FIG. 4 is a flowchart of a method of augmenting a video with the dynamic image blending and augmentation system of FIG. 2.
  • FIG. 5 is a flowchart of a method of dynamically blending a video with the dynamic image blending and augmentation system of FIG. 2.
  • FIGS. 6A and 6B are top-down perspective views of the vehicle and surrounding area as displayed on the vehicle display after enhancing the video with the dynamic image blending and augmentation system of FIG. 2.
  • a plurality of hardware and software based devices, as well as a plurality of different structural components may be used to implement embodiments of the invention.
  • embodiments may include hardware, software, and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware.
  • aspects of the invention may be implemented in software (e.g., stored on non-transitory computer-readable medium) executable by one or more processors.
  • “control units” and “controllers” described in the specification can include one or more electronic processors, one or more memory modules including non-transitory computer-readable medium, one or more input/output interfaces, and various connections (e.g., a system bus) connecting the components.
  • FIG. 2 illustrates a vehicle 200 equipped with a dynamic image blending and augmentation system 205 according to one embodiment.
  • the vehicle 200 includes a driver-side camera 210 (e.g., attached to or located near a driver-side mirror), a passenger-side camera 215 (e.g., attached to or located near a passenger-side mirror), a front camera 220, and a rear camera 225.
  • the vehicle 200 also includes an electronic control unit (ECU) 230 and a vehicle display 240 .
  • Each of the components of the dynamic image blending and augmentation system 205 may be communicatively coupled.
  • the electronic control unit 230 is coupled to the vehicle display 240 , the driver-side camera 210 , the passenger-side camera 215 , the front camera 220 , and the rear camera 225 via a wired or wireless connection.
  • the electronic control unit 230 includes a plurality of electrical and electronic components that provide power, operation control, and protection to the components and modules within the electronic control unit 230 .
  • the electronic control unit 230 includes, among other things, an electronic processor 305 (such as a programmable electronic microprocessor, microcontroller, or similar device), a memory 310 (e.g., non-transitory, machine readable memory), a video input interface 315 , and a video output interface 325 .
  • the electronic processor 305 is communicatively coupled to the memory 310 and executes instructions stored on the memory 310.
  • the electronic processor 305 is configured to retrieve from memory 310 and execute, among other things, instructions related to processes and methods described herein.
  • the electronic control unit 230 includes additional, fewer, or different components.
  • the electronic control unit 230 may be implemented in several independent electronic control units each configured to perform specific functions or sub-functions.
  • the electronic control unit 230 may contain sub-modules that input and process video data (e.g., video streams) and perform related processes.
  • a video analysis module located within or communicatively coupled to the electronic control unit 230 may input one or more video streams, detect objects and features in the image, track objects and features within the image, classify objects and features in the image, and send data outputs from these processes to other electronic control units or modules of the vehicle 200 .
  • the driver-side camera 210 , the passenger-side camera 215 , the front camera 220 , and the rear camera 225 are collectively illustrated and described as video cameras 320 .
  • the video cameras 320 are communicatively coupled to the video input interface 315 .
  • the video input interface 315 receives and processes multiple video streams from the video cameras 320 .
  • the video input interface 315, in coordination with the electronic processor 305 and memory 310, transforms and combines the multiple video streams into a composite video.
  • the video output interface 325, in conjunction with the electronic processor 305 and memory 310, generates and sends the composite video to the vehicle display 240 for viewing by a driver.
  • the vehicle display 240 may be positioned on a dashboard, on a center console, or other locations visible to the driver.
  • the vehicle display 240 may include various types of displays including liquid crystal displays (LCDs), light emitting diodes (LEDs), touchscreens, and the like.
  • the electronic control unit 230 , the video input interface 315 , the video output interface 325 , and the vehicle display 240 may be communicatively linked through a direct wired or wireless connection. In other embodiments, these components may be communicatively linked by a vehicle communication bus 115 and communication modules.
  • each of the video cameras 320 has approximately a 180 degree field of view. In some embodiments, the combination of video cameras 320 provides a field of view that reaches 360 degrees around the vehicle 200.
  • the video cameras 320 may be omnidirectional cameras (e.g., fisheye lens cameras) with a wide-angle field of view.
  • the electronic processor 305 combines each of the plurality of video streams such that edges of each of the plurality of video streams overlap, for example, in a process where images from the video streams are stitched together. In this way, the edges of images of each of the plurality of video streams overlap with the edges of images from adjacent cameras.
  • the electronic processor 305 transforms one or more of the plurality of video streams to create a virtual camera viewpoint.
  • the transformation may include transforming images from each of the plurality of video streams into rectilinear images before or after stitching the images from the video cameras 320.
  • the electronic processor 305 may create a composite video with rectilinear images from a perspective above, and looking down at, the vehicle 200.
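One common way to realize the overlapping-edge combination described above is a linear cross-fade ("feathering") across the overlap band. The patent only requires that the edges overlap; the feathering weights below are an assumed illustration, not the claimed method:

```python
import numpy as np

def feather_stitch(left, right, overlap):
    """Stitch two horizontally adjacent frames whose last/first `overlap`
    columns image the same area, cross-fading within the overlap band."""
    w = np.linspace(1.0, 0.0, overlap)  # weight for the left frame's pixels
    blend = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], blend, right[:, overlap:]])
```

The same cross-fade generalizes to the four-camera case by feathering each pair of adjacent views along their shared seam.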
  • the driver-side camera 210 may have a field of view that overlaps with a field of view of the rear camera 225 .
  • either the driver-side camera 210 or the rear camera 225 may supply a portion of the video output of the overlapping region.
  • an object may be visible by both the front camera 220 and the passenger-side camera 215 .
  • the electronic processor 305 designates the front camera 220 as providing the portion of the composite video that contains the object.
  • the electronic processor 305 supplements the composite video with a portion of an image from, in this example, the passenger-side camera 215 to fill in a region blocked by the detected object.
  • omnidirectional cameras provide video streams with higher resolution and less distortion in the center of the image and lower resolutions and higher distortion at the edges of the image.
  • the electronic processor 305 generates the composite video based on, at least in part, the location of the object within the plurality of video streams.
  • each of the video cameras 320 provide images of portions of an area surrounding the vehicle 200 . By combining these portions, the electronic processor 305 forms the composite video.
  • the electronic processor 305 may align the video streams, adjust viewing angles, crop the images, and orient the video streams to display the composite video as a continuous and integrated top-down field of view.
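The orientation step can be illustrated by rotating each camera's frame into the shared top-down frame of reference. The camera-to-rotation mapping below is hypothetical; the patent does not specify concrete angles:

```python
import numpy as np

# Hypothetical mapping of camera position to quarter-turns needed so that
# each frame's "up" direction agrees with the top-down composite.
QUARTER_TURNS = {"front": 0, "driver": 1, "rear": 2, "passenger": 3}

def orient_for_topdown(frame, camera):
    """Rotate a camera's frame into the common top-down orientation."""
    return np.rot90(frame, k=QUARTER_TURNS[camera])
```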
  • the images of portions of the area surrounding the vehicle 200 as well as objects surrounding the vehicle 200 may be distorted or obscured in that portion of the composite video.
  • when an object is generated in the composite video, it may be distorted due to the wide-angle view of the primary camera.
  • the electronic processor 305 augments the composite video to clarify portions of the area surrounding the vehicle 200 based on detected objects as described in detail below.
  • when an object is located near the vehicle 200, it may obstruct a region of the primary field of view of the primary video camera on the side opposite the object.
  • the electronic processor 305 dynamically blends the blocked portion of the field of view with the video stream from another of the video cameras 320 as also described in more detail below.
  • the primary field of view is supplemented with the secondary field of view from a secondary camera.
  • the electronic processor 305 supplements the composite video by overlaying images from the secondary camera on images from the primary video camera.
  • the electronic processor 305 may replace pixels representing the detected object in the video stream from the primary video camera with pixels representing an area around the detected object from the secondary video camera that best matches the virtual camera viewpoint.
  • the portion of the image with the overlapping region is provided solely by the primary video camera.
  • FIG. 4 illustrates an augmentation method 400 according to one embodiment.
  • the plurality of video cameras 320 generates the plurality of video streams and sends the plurality of video streams to the electronic processor 305 (block 405 ).
  • the electronic processor 305 detects an object external to the vehicle in at least one of the plurality of video streams (block 410 ).
  • the electronic processor 305 determines which of the plurality of video streams includes the detected object and may analyze the object based on a single video stream or on multiple video streams.
  • the electronic processor 305 may detect multiple objects in the plurality of video streams and process each object independently and simultaneously.
  • the electronic processor 305 obtains features and parameters that define the object from multiple sources. For example, image data from the plurality of video streams may be used to generate features of the object, and other sensors on the vehicle 200 may help define the parameters of the detected object such as size, dimensions, position, color, shape, texture, and the like.
  • the other sensors may include radar sensors, ultrasonic sensors, light detection and ranging sensors (LIDAR), and the like.
  • the electronic processor 305 classifies the detected object using predetermined classifications based on at least one of the plurality of video streams, and in some embodiments, on the features, parameters, or both of the detected object (block 415 ). In some embodiments, the electronic processor 305 determines whether the detected object is distorted in the image, and the electronic processor 305 only augments the video stream when the detected object is distorted.
  • Classifying the detected object may include comparing the detected object in the at least one video stream to a database of predetermined images of objects in the memory 310 and determining whether the database of predetermined images has an image that matches the detected object (block 420 ).
  • the electronic processor 305 may classify the object or refine the classification of the object by comparisons of the parameters of the detected object to a plurality of known objects in a look-up table. In this way, the electronic processor 305 selects one of the plurality of known objects to associate with the detected object when the detected object matches the one of the plurality of known objects. This may include comparing the parameters of the detected object to ranges of parameters that define known objects.
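The parameter-range look-up described above might be sketched as follows. The table contents, parameter names, and numeric ranges are invented for illustration; the patent only specifies that parameters are compared against ranges that define known objects:

```python
# Hypothetical look-up table: each known object class is defined by
# (min, max) ranges for a few measured parameters.
LOOKUP = {
    "vehicle": {"length_m": (3.0, 6.0), "height_m": (1.2, 2.2)},
    "curb":    {"length_m": (0.5, 30.0), "height_m": (0.05, 0.3)},
}

def classify(params):
    """Return the first known object whose ranges all contain the measured
    values, or None when nothing matches (then no augmentation occurs)."""
    for name, ranges in LOOKUP.items():
        if all(lo <= params[k] <= hi for k, (lo, hi) in ranges.items()):
            return name
    return None
```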
  • the electronic processor 305 determines that the object is subject to augmentation based on the classification.
  • when the object does not match an image in the database, it is not subject to augmentation (block 425) and is displayed within the composite video on the vehicle display 240 without being augmented.
  • the electronic processor 305 augments the composite video with the image from the database (block 430 ). Then, the electronic processor 305 sends the composite video to the vehicle display 240 for viewing (block 435 ).
  • in some embodiments, the electronic processor 305 generates a matching image.
  • the matching image may be based on features of the detected object within the video stream.
  • the electronic processor 305 determines a best match of the detected object with the database of predetermined images. For example, when the detected object is classified as a vehicle, the electronic processor 305 searches the database of images including images of vehicles and finds a matching image (e.g., a best match) based on the features obtained from the video stream. Also, in some embodiments, the electronic processor 305 searches the database of images based on the features obtained from the video stream and parameters obtained from the other vehicle sensors.
  • the electronic processor 305 adjusts the matching image based on the features and/or parameters of the detected object. For example, the electronic processor 305 may adjust the matching image based on the size, dimensions, color, etc. of the detected object. The electronic processor 305 then overlays the matching image over the detected object, and thus forms an augmented object in the composite video. As such, the matching image covers the detected object and effectively replaces it in the composite video. Then, the electronic processor 305 sends the composite video with the augmented object to the vehicle display 240.
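A minimal sketch of this adjust-and-overlay step, assuming a nearest-neighbour resize and an axis-aligned paste (the patent does not prescribe a resampling method, and the function names are hypothetical):

```python
import numpy as np

def nn_resize(img, h, w):
    """Nearest-neighbour resize, used here to scale a stored reference image
    to the detected object's on-screen dimensions."""
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows][:, cols]

def overlay_match(frame, match, top, left):
    """Paste the (resized, recoloured) matching image over the detected
    object so it covers the distorted pixels in the composite."""
    out = frame.copy()
    h, w = match.shape[:2]
    out[top:top + h, left:left + w] = match
    return out
```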
  • FIG. 5 illustrates a dynamic blending method 500 according to one embodiment.
  • the plurality of video streams are generated from the plurality of video cameras 320 (block 505).
  • the electronic processor 305 transforms one or more of the plurality of video streams to create a virtual camera viewpoint (block 510).
  • the electronic processor 305 combines the plurality of video streams to generate a composite video including a portion of a first image that is generated from a first one of the plurality of video cameras 320 (block 515).
  • the electronic processor 305 detects an object external to the vehicle 200 in at least one of the plurality of video streams (block 520 ). In some situations, the detected object may be visible by multiple cameras.
  • the electronic processor 305 may determine which one of the plurality of video streams has a primary field of view of the detected object.
  • the primary field of view describes a field of view of a camera with a better view (for example, a view with less of a viewing angle, a closer view, a higher resolution view, a lower distortion view, etc.) of the detected object than the other cameras.
  • a secondary field of view describes the field of view of a camera with a worse view (for example, a more distant view, a greater viewing angle, a lower resolution view, or a higher distortion view) of the detected object.
  • the particular camera with the primary field of view of the region typically generates an image of better quality than the camera with the secondary field of view. However, some of the region may not be visible in the primary field of view of the primary camera.
  • the electronic processor 305 determines whether the object at least partially obscures the portion of the first image (block 525 ). In some embodiments, determining whether the object partially obscures the portion of the first image includes determining that a region of the primary field of view is blocked by the detected object. The electronic processor 305 may also determine whether the secondary field of view captures a blocked region of the primary field of view. When the secondary field of view captures the blocked region of the primary field of view, the electronic processor 305 determines that the composite video should be dynamically blended.
  • the electronic processor 305 supplements the portion of the first image (e.g., from the primary field of view) with a portion of a second image that is generated by a second one of the plurality of video cameras 320 (e.g., the secondary field of view of the secondary video camera) (block 535).
  • the electronic processor 305 may blend the primary field of view of the blocked region with the secondary field of view of the blocked region by overlaying the secondary field of view of the blocked region onto the primary field of view of the blocked region to generate the composite video.
  • the electronic processor 305 may not dynamically blend the image. In either case, the electronic processor 305 then sends the composite video to the vehicle display 240 (block 540).
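The blend/no-blend decision and the overlay described in blocks 525-535 can be sketched with boolean masks. The mask representation is an assumption made for illustration; the patent does not specify how the blocked and visible regions are encoded:

```python
import numpy as np

def dynamic_blend(primary, secondary, blocked, secondary_visible):
    """Copy secondary-camera pixels into the primary frame only where the
    region is blocked in the primary view AND captured by the secondary
    camera; otherwise return the primary frame unchanged (no blending)."""
    patch = blocked & secondary_visible
    if not patch.any():
        return primary
    out = primary.copy()
    out[patch] = secondary[patch]
    return out
```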
  • FIGS. 6A and 6B illustrate examples of the composite video displayed on the vehicle display 240 after performing the methods of FIGS. 4 and 5 .
  • FIG. 6A illustrates a virtual top-down view of the vehicle 200 including augmented objects generated by the electronic processor 305 .
  • FIG. 6B illustrates the virtual top-down view of the vehicle 200 including an image that is dynamically blended by the electronic processor 305 .
  • this virtual top-down view is formed based on the video streams from the video cameras 320 and the video processing that occurs within the electronic processor 305.
  • the electronic processor 305 detects an object 605 (for example, an adjacent vehicle) and a plurality of objects 610 (for example, curbs).
  • the object 605 is detected by the driver-side camera 210 , which also provides the primary field of view.
  • the object 605 may also be detected by the front camera 220 and the rear camera 225 , which provide secondary fields of view of the object 605 .
  • the electronic processor 305 detects the object 605 , classifies the object 605 as a vehicle, determines that the object 605 is within the first predetermined classification, and augments the composite video based on the object 605 .
  • the electronic processor 305 matches the object 605 to a top-down image of a particular vehicle including, in some embodiments, a particular make and model of the vehicle.
  • the electronic processor 305 may also determine the color of the vehicle based on the primary field of view or on parameters gathered from other sensors in the vehicle 200 .
  • the electronic processor 305 then overlays the image of the particular vehicle with the appropriate color onto the object 605 and displays the composite video on the vehicle display 240 .
  • the electronic processor 305 detects the objects 610 with at least the front camera 220 , classifies the objects 610 as curbs, determines that the objects 610 are able to be augmented based on the classification (i.e., that there are matching images in the database), and augments the composite video based on the objects 610 .
  • the electronic processor 305 may determine the features and parameters of the objects 610 such as the heights, lengths, thicknesses, etc. in order to classify the objects 610 .
  • the electronic processor 305 may also augment the composite video with images based on the video stream from the front camera 220 and from other sensors. For example, the electronic processor 305 may adjust images of curbs within the database to correspond to the features and parameters and then overlay the adjusted images onto the objects 610 in the composite video.
  • FIG. 6B illustrates an example of the composite video after performing the dynamic blending method 500 .
  • the composite video includes an object 630 (e.g., a light pole) that blocks a portion of the field of view (i.e., a blocked region 640 ) of the rear camera 225 .
  • the blocked region 640 of the field of view is visible by the driver-side camera 210 .
  • the blocked region 640 as captured in the video stream of the driver-side camera 210 is blended into the video stream from the rear camera 225 .
  • the portion of the video stream from the secondary camera is overlaid onto the portion of the video stream containing the blocked region from the primary camera.
  • the blocked region 640 of the video stream from the rear camera 225 is overlaid with the portion of the video stream from the driver-side camera 210 .
  • the electronic processor 305 sends the composite video with the blocked region 640 visible to the vehicle display 240 .
  • the blocked region 640 is visible in the vehicle display 240 .
  • embodiments provide, among other things, a system and a method for augmenting and dynamically blending one or more video streams from one or more video cameras positioned on a vehicle to generate a composite video for a vehicle display.

Abstract

A method and system for generating a composite video for display in a vehicle. A plurality of video streams are generated from a plurality of video cameras configured to be positioned on the vehicle. The video streams are transformed by an electronic processor to create a virtual camera viewpoint. The transformed video streams are combined to generate a composite video including a portion of a first image that is generated from a first one of the video cameras. The electronic processor detects an object external to the vehicle and determines whether the object at least partially obscures the portion of the first image. When the object at least partially obscures the portion of the first image, the electronic processor supplements the portion of the first image with a portion of a second image that is generated by a second one of the video cameras.

Description

RELATED APPLICATIONS
The present application claims priority to U.S. Provisional Application No. 62/270,445 filed on Dec. 21, 2015, the entire contents of which are incorporated herein by reference.
BACKGROUND
Traditionally, vehicle imaging systems include one or more video cameras positioned on an exterior of a vehicle. The video cameras monitor an area surrounding the vehicle for objects and hazards. Some vehicle imaging systems provide a display on or near the dashboard for viewing by a driver. As a consequence, perception by a driver of the area surrounding the vehicle is enhanced. In some constructions, the vehicle imaging systems may combine multiple camera video streams into a composite video for viewing by the driver. The multiple camera video streams are processed by video processing equipment to provide multiple different views including wide-angle views, top-down views, and the like.
In one known system illustrated in FIG. 1, camera feeds from four wide-angle (e.g., omnidirectional) cameras spaced on different sides of a vehicle 100 are combined into a composite video that provides a virtual top-down view of the vehicle 100. In such a system, the top-down video displays the vehicle 100 and surrounding objects as illustrated in FIG. 1. However, since the virtual top-down view is generated from multiple omnidirectional cameras, the composite video contains distorted objects 105 with stretched dimensions, cropped edges, flattened surfaces, and the like. In addition, blind spots 115 in the top-down video may occur due to blocking objects 120 that block a field of view 125 of a video camera 130.
SUMMARY
Embodiments provide systems and methods that enhance images displayed to a driver of the vehicle for improved clarity and field of view. The systems and methods augment distorted objects to provide improved views of the vehicle and surrounding area. The systems and methods also dynamically blend multiple video streams to reduce blind spots caused from objects in the image.
One embodiment provides a method of generating a composite video for display in a vehicle. The method includes generating a plurality of video streams from a plurality of video cameras configured to be positioned on the vehicle. One or more of the plurality of video streams are transformed by an electronic processor to create a virtual camera viewpoint. The plurality of transformed video streams are combined to generate a composite video including a portion of a first image that is generated from a first one of the plurality of video cameras. The electronic processor detects an object external to the vehicle and determines whether the object at least partially obscures the portion of the first image. When the object at least partially obscures the portion of the first image, the electronic processor supplements the portion of the first image with a portion of a second image that is generated by a second one of the plurality of video cameras.
Another embodiment provides a system for generating a composite video to display in a vehicle. The system includes a plurality of video cameras that generate a plurality of video streams and that are configured to be positioned on the vehicle. The system also includes a display and an electronic processor communicatively coupled to the plurality of video cameras and the display. The electronic processor is configured to transform the plurality of video streams to create a virtual camera viewpoint. The electronic processor combines the plurality of transformed video streams to generate a composite video including a portion of a first image that is generated from a first one of the plurality of video cameras. The electronic processor detects an object external to the vehicle and determines whether the object at least partially obscures the portion of the first image. When the object at least partially obscures the portion of the first image, the electronic processor supplements the portion of the first image with a portion of a second image that is generated by a second one of the plurality of video cameras.
Other aspects of the invention will become apparent by consideration of the detailed description and accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a top-down perspective view of a vehicle and surrounding area as displayed on a vehicle display.
FIG. 2 is a block diagram of a vehicle equipped with a dynamic image blending and augmentation system according to one embodiment.
FIG. 3 is a block diagram of an electronic control unit and associated connections of the vehicle of FIG. 2 in accordance with one embodiment.
FIG. 4 is a flowchart of a method of augmenting a video with the dynamic image blending and augmentation system of FIG. 2.
FIG. 5 is a flowchart of a method of dynamically blending a video with the dynamic image blending and augmentation system of FIG. 2.
FIGS. 6A and 6B are top-down perspective views of the vehicle and surrounding area as displayed on the vehicle display after enhancing the video with the dynamic image blending and augmentation system of FIG. 2.
DETAILED DESCRIPTION
Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways.
A plurality of hardware and software based devices, as well as a plurality of different structural components may be used to implement embodiments of the invention. In addition, embodiments may include hardware, software, and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware. However, one of ordinary skill in the art, based on a reading of this detailed description, would recognize that, in at least one embodiment, aspects of the invention may be implemented in software (e.g., stored on non-transitory computer-readable medium) executable by one or more processors. Accordingly, it should be noted that a plurality of hardware and software based devices, as well as a plurality of different structural components may be utilized to implement various embodiments. For example, “control units” and “controllers” described in the specification can include one or more electronic processors, one or more memory modules including non-transitory computer-readable medium, one or more input/output interfaces, and various connections (e.g., a system bus) connecting the components.
FIG. 2 illustrates a vehicle 200 equipped with a dynamic image blending and augmentation system 205 according to one embodiment. The vehicle 200 includes a driver-side camera 210 (e.g., attached to or located near to a driver-side mirror), a passenger-side camera 215 (e.g., attached to or located near to a passenger-side mirror), a front camera 220, and a rear camera 225. The vehicle 200 also includes an electronic control unit (ECU) 230 and a vehicle display 240. Each of the components of the dynamic image blending and augmentation system 205 may be communicatively coupled. For example, the electronic control unit 230 is coupled to the vehicle display 240, the driver-side camera 210, the passenger-side camera 215, the front camera 220, and the rear camera 225 via a wired or wireless connection.
An example configuration of the dynamic image blending and augmentation system 205 is illustrated in FIG. 3. In this example, the electronic control unit 230 includes a plurality of electrical and electronic components that provide power, operation control, and protection to the components and modules within the electronic control unit 230. The electronic control unit 230 includes, among other things, an electronic processor 305 (such as a programmable electronic microprocessor, microcontroller, or similar device), a memory 310 (e.g., non-transitory, machine readable memory), a video input interface 315, and a video output interface 325. The electronic processor 305 is communicatively coupled to the memory 310 and executes instructions which are capable of being stored on the memory 310. The electronic processor 305 is configured to retrieve from memory 310 and execute, among other things, instructions related to processes and methods described herein. In other embodiments, the electronic control unit 230 includes additional, fewer, or different components. For example, the electronic control unit 230 may be implemented in several independent electronic control units each configured to perform specific functions or sub-functions. Additionally, the electronic control unit 230 may contain sub-modules that input and process video data (e.g., video streams) and perform related processes. For example, a video analysis module located within or communicatively coupled to the electronic control unit 230 may input one or more video streams, detect objects and features in the image, track objects and features within the image, classify objects and features in the image, and send data outputs from these processes to other electronic control units or modules of the vehicle 200.
As illustrated, the driver-side camera 210, the passenger-side camera 215, the front camera 220, and the rear camera 225 are collectively illustrated and described as video cameras 320. The video cameras 320 are communicatively coupled to the video input interface 315. The video input interface 315 receives and processes multiple video streams from the video cameras 320. The video input interface 315, in coordination with the electronic processor 305 and memory 310, transforms and combines the multiple video streams into a composite video. The video output interface 325, in conjunction with the electronic processor 305 and memory 310, generates and sends the composite video to the vehicle display 240 for viewing by a driver. For example, the vehicle display 240 may be positioned on a dashboard, on a center console, or other locations visible to the driver. The vehicle display 240 may include various types of displays including liquid crystal displays (LCDs), light emitting diodes (LEDs), touchscreens, and the like.
The electronic control unit 230, the video input interface 315, the video output interface 325, and the vehicle display 240 may be communicatively linked through a direct wired or wireless connection. In other embodiments, these components may be communicatively linked by a vehicle communication bus and communication modules.
In some embodiments, each of the video cameras 320 has approximately a 180 degree field of view. In some embodiments, the combination of video cameras 320 provides a field of view that reaches 360 degrees around the vehicle 200. The video cameras 320 may be omnidirectional cameras (e.g., fisheye lens cameras) with a wide-angle field of view. The electronic processor 305 combines each of the plurality of video streams such that edges of each of the plurality of video streams overlap, for example, in a process where images from the video streams are stitched together. In this way, the edges of images of each of the plurality of video streams overlap with the edges of images from adjacent cameras. Once the electronic processor 305 receives the plurality of video streams, the electronic processor 305 transforms one or more of the plurality of video streams to create a virtual camera viewpoint. The transformation may include transforming images from each of the plurality of video streams into rectilinear images before or after stitching the images from the video cameras 320. For example, the electronic processor 305 may create a composite video with rectilinear images from a perspective above, and looking down at, the vehicle 200.
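The fisheye-to-rectilinear transformation described above can be sketched as follows. This is an illustrative model only: the equidistant fisheye projection (r = f·θ) and the single shared focal length f are assumptions for the sketch, not details taken from the disclosure.

```python
import math

def rectilinear_to_fisheye(x, y, cx, cy, f):
    """Map a pixel (x, y) of the desired rectilinear image to the source
    coordinate in an equidistant fisheye image (r_fish = f * theta).
    (cx, cy) is the optical center assumed shared by both images."""
    dx, dy = x - cx, y - cy
    r_rect = math.hypot(dx, dy)
    if r_rect == 0.0:
        return (cx, cy)  # the optical center maps to itself
    theta = math.atan2(r_rect, f)  # angle off the optical axis
    r_fish = f * theta             # equidistant projection model
    scale = r_fish / r_rect
    return (cx + dx * scale, cy + dy * scale)

def undistort(fisheye_img, w, h, cx, cy, f):
    """Build a rectilinear image by per-pixel lookup into the fisheye image.
    fisheye_img is a sparse dict {(ix, iy): value}; nearest-neighbor sampling."""
    out = {}
    for y in range(h):
        for x in range(w):
            sx, sy = rectilinear_to_fisheye(x, y, cx, cy, f)
            out[(x, y)] = fisheye_img.get((round(sx), round(sy)), 0)
    return out
```

Because tan θ > θ, the sampled fisheye radius is always smaller than the rectilinear radius, which is why straight lines bowed inward by the fisheye lens are pulled straight in the output.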
For example, the driver-side camera 210 may have a field of view that overlaps with a field of view of the rear camera 225. In the overlapping region, either the driver-side camera 210 or the rear camera 225 may supply a portion of the video output of the overlapping region. For example, in the overlapping region, an object may be visible by both the front camera 220 and the passenger-side camera 215. In one particular example, if the object is at less of an angle from the front camera 220 than from the passenger-side camera 215, the electronic processor 305 designates the front camera 220 as providing the portion of the composite video that contains the object. However, as described below, the electronic processor 305 supplements the composite video with a portion of an image from, in this example, the passenger-side camera 215 to fill in a blocked region behind the detected object.
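The camera-designation rule described above — choose the camera whose optical axis deviates least from the direction to the object — might be realized as in the following sketch. The camera positions, names, and boresight directions are hypothetical values, not taken from the disclosure.

```python
import math

# Hypothetical camera layout: position (x, y) in vehicle coordinates and
# boresight (unit direction the camera points). Illustrative values only.
CAMERAS = {
    "front":          {"pos": (0.0,  2.0), "dir": (0.0,  1.0)},
    "rear":           {"pos": (0.0, -2.0), "dir": (0.0, -1.0)},
    "driver_side":    {"pos": (-1.0, 0.0), "dir": (-1.0, 0.0)},
    "passenger_side": {"pos": (1.0,  0.0), "dir": (1.0,  0.0)},
}

def viewing_angle(cam, obj):
    """Angle (radians) between the camera boresight and the ray to the object."""
    px, py = cam["pos"]
    vx, vy = obj[0] - px, obj[1] - py
    dx, dy = cam["dir"]
    cos_a = (vx * dx + vy * dy) / math.hypot(vx, vy)
    return math.acos(max(-1.0, min(1.0, cos_a)))

def primary_camera(obj, cameras=CAMERAS):
    """Designate the camera with the smallest viewing angle as primary."""
    return min(cameras, key=lambda name: viewing_angle(cameras[name], obj))
```

An object ahead and slightly to the right would be assigned to the front camera, while an object directly beside the passenger door would be assigned to the passenger-side camera.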
In general, omnidirectional cameras provide video streams with higher resolution and less distortion in the center of the image and lower resolution and higher distortion at the edges of the image. As a consequence, the electronic processor 305 generates the composite video based, at least in part, on the location of the object within the plurality of video streams. For example, each of the video cameras 320 provides images of portions of an area surrounding the vehicle 200. By combining these portions, the electronic processor 305 forms the composite video. When combining the plurality of video streams, the electronic processor 305 may align the video streams, adjust viewing angles, crop the images, and orient the video streams to display the composite video as a continuous and integrated top-down field of view.
However, in some instances, the images of portions of the area surrounding the vehicle 200, as well as objects surrounding the vehicle 200, may be distorted or obscured in that portion of the composite video. In one example, when an object is generated in the composite video, the object may be distorted due to the wide-angle view of the primary camera. In this case, the electronic processor 305 augments the composite video to clarify portions of the area surrounding the vehicle 200 based on detected objects, as described in detail below. In another example, when an object is located near the vehicle 200, the object may obstruct a region of the primary field of view of the primary video camera on the far side of the object. In such a case, the electronic processor 305 dynamically blends the blocked portion of the field of view with the video stream from another of the video cameras 320, as also described in more detail below. In this way, the primary field of view is supplemented with the secondary field of view from a secondary camera. In some embodiments, the electronic processor 305 supplements the composite video by overlaying images from the secondary camera on images from the primary video camera. For example, the electronic processor 305 may replace pixels representing the detected object in the video stream from the primary video camera with pixels representing an area around the detected object from the secondary video camera that best matches the virtual camera viewpoint. In some embodiments, when not dynamically blending the image, the portion of the image with the overlapping region is provided solely by the primary video camera.
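The pixel-replacement step described above can be sketched as follows. The frames are represented as simple 2-D lists and are assumed to have already been warped into the common virtual viewpoint — a simplifying assumption for illustration, not a statement about the disclosed implementation.

```python
def blend_blocked_region(primary, secondary, mask):
    """Return a composite frame: where mask is True (pixel belongs to the
    blocked region), take the secondary camera's pixel; elsewhere keep the
    primary camera's pixel. All three arguments are equally sized 2-D lists."""
    return [
        [s if m else p for p, s, m in zip(prow, srow, mrow)]
        for prow, srow, mrow in zip(primary, secondary, mask)
    ]
```

In a real system the mask would be derived from the detected object's silhouette and the secondary pixels re-projected first; this sketch shows only the final per-pixel selection.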
FIG. 4 illustrates an augmentation method 400 according to one embodiment. In the augmentation method 400, the plurality of video cameras 320 generates the plurality of video streams and sends the plurality of video streams to the electronic processor 305 (block 405). The electronic processor 305 detects an object external to the vehicle in at least one of the plurality of video streams (block 410). In some embodiments, the electronic processor 305 determines which of the plurality of video streams includes the detected object and may analyze the object based on a single video stream or on multiple video streams. In addition, the electronic processor 305 may detect multiple objects in the plurality of video streams and process each object independently and simultaneously.
In some embodiments, once the object is detected, the electronic processor 305 obtains features and parameters that define the object from multiple sources. For example, image data from the plurality of video streams may be used to generate features of the object, and other sensors on the vehicle 200 may help define the parameters of the detected object such as size, dimensions, position, color, shape, texture, and the like. The other sensors may include radar sensors, ultrasonic sensors, light detection and ranging sensors (LIDAR), and the like. The electronic processor 305 classifies the detected object using predetermined classifications based on at least one of the plurality of video streams, and in some embodiments, on the features, parameters, or both of the detected object (block 415). In some embodiments, the electronic processor 305 determines whether the detected object is distorted in the image, and the electronic processor 305 only augments the video stream when the detected object is distorted.
Classifying the detected object may include comparing the detected object in the at least one video stream to a database of predetermined images of objects in the memory 310 and determining whether the database of predetermined images has an image that matches the detected object (block 420). For example, in some embodiments, the electronic processor 305 may classify the object or refine the classification of the object by comparisons of the parameters of the detected object to a plurality of known objects in a look-up table. In this way, the electronic processor 305 selects one of the plurality of known objects to associate with the detected object when the detected object matches the one of the plurality of known objects. This may include comparing the parameters of the detected object to ranges of parameters that define known objects. When the object matches an image in the database (block 425), the electronic processor 305 determines that the object is subject to augmentation based on the classification. When the object does not match an image in the database, and therefore is not subject to augmentation (block 425), the object is displayed within the composite video on the vehicle display 240 without being augmented. Conversely, when the object is subject to augmentation, the electronic processor 305 augments the composite video with the image from the database (block 430). Then, the electronic processor 305 sends the composite video to the vehicle display 240 for viewing (block 435).
Once the detected object is classified as subject to augmentation, the electronic processor 305 generates a matching image. The matching image may be based on features of the detected object within the video stream. In some embodiments, the electronic processor 305 determines a best match of the detected object with the database of predetermined images. For example, when the detected object is classified as a vehicle, the electronic processor 305 searches the database of images including images of vehicles and finds a matching image (e.g., a best match) based on the features obtained from the video stream. Also, in some embodiments, the electronic processor 305 searches the database of images based on the features obtained from the video stream and parameters obtained from the other vehicle sensors. Once the matching image is found, the electronic processor 305 adjusts the matching image based on the features and/or parameters of the detected object. For example, the electronic processor 305 may adjust the matching image based on the size, dimensions, color, etc. of the detected object. The electronic processor 305 then overlays the matching image over the detected object, and thus forms an augmented object in the composite video. As such, the matching image covers the detected object and effectively replaces the detected object with the matching image in the composite video. Then, the electronic processor 305 sends the composite video with the augmented object to the vehicle display 240.
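The look-up-table comparison used for classification and matching might look like the following sketch; the class labels, parameter names, and parameter ranges are hypothetical values, not taken from the disclosure.

```python
# Hypothetical look-up table: each known object class is defined by
# inclusive (min, max) ranges for parameters measured by the cameras
# and other vehicle sensors (radar, ultrasonic, LIDAR, etc.).
KNOWN_OBJECTS = {
    "vehicle": {"height_m": (1.2, 2.2), "length_m": (3.0, 6.0)},
    "curb":    {"height_m": (0.05, 0.3), "length_m": (0.5, 50.0)},
}

def classify(params, table=KNOWN_OBJECTS):
    """Return the first class whose every parameter range contains the
    corresponding measured value, or None when nothing matches (in which
    case the object would be displayed without augmentation)."""
    for label, ranges in table.items():
        if all(lo <= params.get(name, float("nan")) <= hi
               for name, (lo, hi) in ranges.items()):
            return label
    return None
```

A missing measurement fails every range comparison (NaN compares false), so partially sensed objects fall through to the unaugmented path rather than being misclassified.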
FIG. 5 illustrates a dynamic blending method 500 according to one embodiment. In the dynamic blending method 500, the plurality of video streams are generated from the plurality of video cameras 320 (block 505). The electronic processor 305 transforms one or more of the plurality of video streams to create a virtual camera viewpoint (block 510). The electronic processor 305 combines the plurality of video streams to generate a composite video including a portion of a first image that is generated from a first one of the plurality of video cameras 320 (block 515). The electronic processor 305 detects an object external to the vehicle 200 in at least one of the plurality of video streams (block 520). In some situations, the detected object may be visible by multiple cameras. In these situations, the electronic processor 305 may determine which one of the plurality of video streams has a primary field of view of the detected object. In particular, the primary field of view describes a field of view of a camera with a better view (for example, a view with a smaller viewing angle, a closer view, a higher resolution view, a lower distortion view, etc.) of the detected object than the other cameras. Similarly, a secondary field of view describes a field of view of a camera with a view that is not as good (for example, a more distant view, a view with a greater viewing angle, a lower resolution view, a higher distortion view, etc.) of the detected object. As a consequence, the camera with the primary field of view of the region typically generates an image of better quality than the camera with the secondary field of view. However, some of the region may not be visible in the primary field of view of the primary camera.
In the dynamic blending method 500, the electronic processor 305 determines whether the object at least partially obscures the portion of the first image (block 525). In some embodiments, determining whether the object partially obscures the portion of the first image includes determining that a region of the primary field of view is blocked by the detected object. The electronic processor 305 may also determine whether the secondary field of view captures a blocked region of the primary field of view. When the secondary field of view captures the blocked region of the primary field of view, the electronic processor 305 determines that the composite video should be dynamically blended. When the object at least partially obscures the portion of the first image (block 525), the electronic processor 305 supplements the portion of the first image (e.g., from the primary field of view) with a portion of a second image that is generated by a second one of the plurality of video cameras 320 (e.g., the secondary field of view of the secondary video camera) (block 535). In particular, the electronic processor 305 may blend the primary field of view of the blocked region with the secondary field of view of the blocked region by overlaying the secondary field of view of the blocked region onto the primary field of view of the blocked region to generate the composite video. When the secondary field of view does not reach the blocked region, the electronic processor 305 may not dynamically blend the image. In either case, in the next step, the electronic processor 305 sends the composite video to the vehicle display 240 (block 540).
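The occlusion test that drives the blending decision can be modeled geometrically. In this sketch the obstructing object is approximated as a disc in the ground plane, and the camera positions and radii are illustrative assumptions rather than details from the disclosure.

```python
import math

def is_blocked(camera, point, obstacle, radius):
    """True if the line of sight from `camera` to ground `point` passes
    within `radius` of `obstacle` (a disc model of, e.g., a light pole),
    i.e. the point lies in that camera's shadow region."""
    cx, cy = camera
    px, py = point
    ox, oy = obstacle
    sx, sy = px - cx, py - cy
    seg_len2 = sx * sx + sy * sy
    if seg_len2 == 0.0:
        return math.hypot(ox - cx, oy - cy) <= radius
    # Project the obstacle center onto the sight line, clamped to the segment.
    t = max(0.0, min(1.0, ((ox - cx) * sx + (oy - cy) * sy) / seg_len2))
    qx, qy = cx + t * sx, cy + t * sy
    return math.hypot(ox - qx, oy - qy) <= radius

def choose_source(primary_cam, secondary_cam, point, obstacle, radius):
    """Dynamic-blending decision: fall back to the secondary camera only
    when the primary view of `point` is blocked and the secondary is not."""
    if is_blocked(primary_cam, point, obstacle, radius):
        if not is_blocked(secondary_cam, point, obstacle, radius):
            return "secondary"
    return "primary"
```

Running this test per pixel (or per image tile) yields exactly the mask needed for the overlay step: secondary pixels where the primary camera is shadowed, primary pixels everywhere else.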
FIGS. 6A and 6B illustrate examples of the composite video displayed on the vehicle display 240 after performing the methods of FIGS. 4 and 5. In particular, FIG. 6A illustrates a virtual top-down view of the vehicle 200 including augmented objects generated by the electronic processor 305. FIG. 6B illustrates the virtual top-down view of the vehicle 200 including an image that is dynamically blended by the electronic processor 305. As described above, this virtual top-down view is formed based on the video streams from the video cameras 320 and the video processing that occurs within the electronic processor 305.
In the illustrated example of FIG. 6A, the electronic processor 305 detects an object 605 (for example, an adjacent vehicle) and a plurality of objects 610 (for example, curbs). The object 605 is detected by the driver-side camera 210, which also provides the primary field of view. The object 605 may also be detected by the front camera 220 and the rear camera 225, which provide secondary fields of view of the object 605. According to the method of FIG. 4, the electronic processor 305 detects the object 605, classifies the object 605 as a vehicle, determines that the object 605 is within the first predetermined classification, and augments the composite video based on the object 605. In this case, the electronic processor 305 matches the object 605 to a top-down image of a particular vehicle including, in some embodiments, a particular make and model of the vehicle. The electronic processor 305 may also determine the color of the vehicle based on the primary field of view or on parameters gathered from other sensors in the vehicle 200. The electronic processor 305 then overlays the image of the particular vehicle with the appropriate color onto the object 605 and displays the composite video on the vehicle display 240.
Similarly, the electronic processor 305 detects the objects 610 with at least the front camera 220, classifies the objects 610 as curbs, determines that the objects 610 are able to be augmented based on the classification (i.e., that there are matching images in the database), and augments the composite video based on the objects 610. In this case, the electronic processor 305 may determine the features and parameters of the objects 610 such as the heights, lengths, thicknesses, etc. in order to classify the objects 610. The electronic processor 305 may also augment the composite video with images based on the video stream from the front camera 220 and from other sensors. For example, the electronic processor 305 may adjust images of curbs within the database to correspond to the features and parameters and then overlay the adjusted images onto the objects 610 in the composite video.
FIG. 6B illustrates an example of the composite video after performing the dynamic blending method 500. The composite video includes an object 630 (e.g., a light pole) that blocks a portion of the field of view (i.e., a blocked region 640) of the rear camera 225. The blocked region 640 of the field of view is visible by the driver-side camera 210. According to the dynamic blending method 500, the blocked region 640 as captured in the video stream of the driver-side camera 210 is blended into the video stream from the rear camera 225. As discussed above, the portion of the video stream from the secondary camera is overlaid onto the portion of the video stream containing the blocked region from the primary camera. In this example, the blocked region 640 of the video stream from the rear camera 225 is overlaid with the portion of the video stream from the driver-side camera 210. Once the portion is overlaid, the electronic processor 305 sends the composite video with the blocked region 640 visible to the vehicle display 240. Thus, the blocked region 640 is visible in the vehicle display 240.
Thus, embodiments provide, among other things, a system and a method for augmenting and dynamically blending one or more video streams from one or more video cameras positioned on a vehicle to generate a composite video for a vehicle display. Various features and advantages of the invention are set forth in the following claims.

Claims (16)

What is claimed is:
1. A method of generating a composite video for display in a vehicle, the method comprising:
generating a plurality of video streams from a plurality of video cameras configured to be positioned on the vehicle;
transforming one or more of the plurality of video streams to create a virtual camera viewpoint;
combining the plurality of transformed video streams to generate a composite video including a portion of a first image that is generated from a first one of the plurality of video cameras;
detecting, with an electronic processor, an object external to the vehicle;
determining whether the object at least partially obscures the portion of the first image;
supplementing the portion of the first image with a portion of a second image that is generated by a second one of the plurality of video cameras when the object at least partially obscures the portion of the first image to create a supplemented composite video, wherein the supplemented composite video includes a top-down field of view of the portion of the first image supplemented with the portion of the second image;
classifying the object with the electronic processor;
generating a matching image based on the classification of the object;
overlaying the matching image on the detected object in the composite video as an augmented object, wherein the augmented object is a visual representation of the object;
receiving a feature of the object from at least one sensor;
determining, with the electronic processor, that the object is distorted in the supplemented composite video; and
in response to determining that the object is distorted, adjusting the matching image based on the feature before overlaying the matching image on the object in the composite video as an augmented object.
2. The method of claim 1, wherein generating the plurality of video streams from the plurality of video cameras includes generating a plurality of wide-angle images with the plurality of video cameras.
3. The method of claim 2, wherein transforming the one or more of the plurality of video streams includes transforming the plurality of wide-angle images to a plurality of rectilinear images.
4. The method of claim 1, wherein combining the plurality of transformed video streams to generate the composite video includes stitching together the plurality of transformed video streams such that a composite image is formed from the virtual camera viewpoint.
5. The method of claim 1, wherein determining whether the object at least partially obscures the portion of the first image includes determining that a region of a first field of view of the first one of the plurality of video cameras is blocked by the object.
6. The method of claim 5, further comprising determining whether the second one of the plurality of video cameras has a second field of view that captures at least part of the region blocked by the object.
7. The method of claim 6, wherein supplementing the portion of the first image with the portion of the second image occurs when the second field of view captures the at least part of the region blocked by the object, wherein the portion of the second image includes the at least part of the region blocked by the object captured by the second field of view.
8. The method of claim 7, wherein supplementing the portion of the first image with the portion of the second image includes overlaying the portion of the second image onto the portion of the first image within the composite video.
9. A system for generating a composite video to display in a vehicle, the system comprising:
a plurality of video cameras that generate a plurality of video streams, the plurality of video cameras configured to be positioned on the vehicle;
a display; and
an electronic processor communicatively coupled to the plurality of video cameras and the display, the electronic processor configured to
transform the plurality of video streams to create a virtual camera viewpoint;
combine the plurality of transformed video streams to generate a composite video including a portion of a first image that is generated from a first one of the plurality of video cameras;
detect, with an electronic processor, an object external to the vehicle;
determine whether the object at least partially obscures the portion of the first image;
supplement the portion of the first image with a portion of a second image that is generated by a second one of the plurality of video cameras when the object at least partially obscures the portion of the first image, wherein the supplemented composite video includes a top-down field of view of the portion of the first image supplemented with the portion of the second image;
classify the object with the electronic processor;
generate a matching image based on the classification of the object;
overlay the matching image on the detected object in the composite video as an augmented object, wherein the augmented object is a visual representation of the object;
receive a feature of the object from at least one sensor;
determine whether the object is distorted in the supplemented composite video; and
in response to determining that the object is distorted, adjust the matching image based on the feature before overlaying the matching image on the object in the composite video as an augmented object.
10. The system of claim 9, wherein the electronic processor is configured to transform the plurality of video streams to create the virtual camera viewpoint by transforming a plurality of wide-angle images to a plurality of rectilinear images and stitching together the plurality of transformed video streams such that a composite image is formed from the virtual camera viewpoint.
11. The system of claim 9, wherein the electronic processor is configured to determine whether the object at least partially obscures the portion of the first image by determining that a region of a first field of view of the first one of the plurality of video cameras is blocked by the object.
12. The system of claim 11, wherein the electronic processor is further configured to determine whether the second one of the plurality of video cameras has a second field of view that captures at least part of the region blocked by the object.
13. The system of claim 12, wherein the electronic processor is configured to supplement the portion of the first image with the portion of the second image when the second field of view captures the at least part of the region blocked by the object, wherein the portion of the second image includes the at least part of the region blocked by the object captured by the second field of view.
14. The system of claim 9, wherein the electronic processor is configured to overlay the matching image on the object in the composite video by replacing pixels representing the object with a model of the object from a viewpoint that best matches the virtual camera viewpoint.
15. The method of claim 1, wherein receiving the feature of the object includes receiving a color of the object.
16. The system of claim 9, wherein the matching image is a top-down image of a virtual vehicle, the virtual vehicle matching the make and model of the object.
US16/325,171 2015-12-21 2016-10-12 Dynamic image blending for multiple-camera vehicle systems Active 2036-10-28 US11184531B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/325,171 US11184531B2 (en) 2015-12-21 2016-10-12 Dynamic image blending for multiple-camera vehicle systems

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201562270445P 2015-12-21 2015-12-21
US16/325,171 US11184531B2 (en) 2015-12-21 2016-10-12 Dynamic image blending for multiple-camera vehicle systems
PCT/EP2016/074401 WO2017108221A1 (en) 2015-12-21 2016-10-12 Dynamic image blending for multiple-camera vehicle systems

Publications (2)

Publication Number Publication Date
US20190230282A1 (en) 2019-07-25
US11184531B2 (en) 2021-11-23

Family

ID=57184412

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/325,171 Active 2036-10-28 US11184531B2 (en) 2015-12-21 2016-10-12 Dynamic image blending for multiple-camera vehicle systems

Country Status (5)

Country Link
US (1) US11184531B2 (en)
EP (1) EP3394833B1 (en)
KR (1) KR20180084952A (en)
CN (1) CN108369746A (en)
WO (1) WO2017108221A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12125225B1 (en) * 2023-04-04 2024-10-22 GM Global Technology Operations LLC Monocular camera system performing depth estimation of objects surrounding a vehicle

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016224905A1 (en) * 2016-12-14 2018-06-14 Conti Temic Microelectronic Gmbh Apparatus and method for fusing image data from a multi-camera system for a motor vehicle
WO2018176000A1 (en) 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
JP7121470B2 (en) * 2017-05-12 2022-08-18 キヤノン株式会社 Image processing system, control method, and program
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US10671349B2 (en) 2017-07-24 2020-06-02 Tesla, Inc. Accelerated mathematical engine
DE112018004150B4 (en) * 2017-09-13 2023-12-28 Robert Bosch Gesellschaft mit beschränkter Haftung TRAILER REVERSE ASSISTANCE SYSTEM
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
JP6704554B2 (en) * 2018-03-29 2020-06-03 三菱電機株式会社 Image processing apparatus, image processing method, and monitoring system
US11215999B2 (en) 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11361457B2 (en) 2018-07-20 2022-06-14 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
DE102018214874B3 (en) * 2018-08-31 2019-12-19 Audi Ag Method and arrangement for generating an environment map of a vehicle textured with image information and vehicle comprising such an arrangement
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11205093B2 (en) 2018-10-11 2021-12-21 Tesla, Inc. Systems and methods for training machine models with augmented data
US11196678B2 (en) 2018-10-25 2021-12-07 Tesla, Inc. QOS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11150664B2 (en) 2019-02-01 2021-10-19 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US10997461B2 (en) 2019-02-01 2021-05-04 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US10956755B2 (en) 2019-02-19 2021-03-23 Tesla, Inc. Estimating object properties using visual image data
CN111746796B (en) * 2019-03-29 2024-04-26 B/E航空公司 Apparatus and method for providing a gesture reference for a vehicle occupant
CA3064413A1 (en) * 2019-03-29 2020-09-29 B/E Aerospace, Inc. Apparatus and method for providing attitude reference for vehicle passengers
US11153010B2 (en) * 2019-07-02 2021-10-19 Waymo Llc Lidar based communication
US11076102B2 (en) 2019-09-24 2021-07-27 Seek Thermal, Inc. Thermal imaging system with multiple selectable viewing angles and fields of view for vehicle applications
US11891057B2 (en) 2019-09-24 2024-02-06 Seek Thermal, Inc. Thermal imaging system with multiple selectable viewing angles and fields of view for vehicle applications
US11481884B2 (en) 2020-06-04 2022-10-25 Nuro, Inc. Image quality enhancement for autonomous vehicle remote operations
DE102022204313A1 (en) 2022-05-02 2023-11-02 Volkswagen Aktiengesellschaft Method and device for generating an image of the environment for a parking assistant of a vehicle

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6429789B1 (en) * 1999-08-09 2002-08-06 Ford Global Technologies, Inc. Vehicle information acquisition and display assembly
US20080266142A1 (en) * 2007-04-30 2008-10-30 Navteq North America, Llc System and method for stitching of video for routes
US20090086015A1 (en) 2007-07-31 2009-04-02 Kongsberg Defence & Aerospace As Situational awareness observation apparatus
US20090110239A1 (en) * 2007-10-30 2009-04-30 Navteq North America, Llc System and method for revealing occluded objects in an image dataset
US20090195652A1 (en) 2008-02-05 2009-08-06 Wave Group Ltd. Interactive Virtual Window Vision System For Mobile Platforms
US20090273674A1 (en) 2006-11-09 2009-11-05 Bayerische Motoren Werke Aktiengesellschaft Method of Producing a Total Image of the Environment Surrounding a Motor Vehicle
US20100141736A1 (en) * 2007-04-22 2010-06-10 Jeffrey Hack Method of obtaining geographically related images using a vehicle
US20100259372A1 (en) 2009-04-14 2010-10-14 Hyundai Motor Japan R&D Center, Inc. System for displaying views of vehicle and its surroundings
US7859565B2 (en) 1993-02-26 2010-12-28 Donnelly Corporation Vision system for a vehicle including image processor
CN102045546A (en) 2010-12-15 2011-05-04 广州致远电子有限公司 Panoramic parking assist system
US20110228078A1 (en) 2010-03-22 2011-09-22 Institute For Information Industry Real-time augmented reality device, real-time augmented reality method and computer storage medium thereof
US8130270B2 (en) 2007-10-23 2012-03-06 Alpine Electronics, Inc. Vehicle-mounted image capturing apparatus
US8160391B1 (en) 2008-06-04 2012-04-17 Google Inc. Panoramic image fill
CN102577372A (en) 2009-09-24 2012-07-11 松下电器产业株式会社 Driving support display device
US20130250046A1 (en) 1996-05-22 2013-09-26 Donnelly Corporation Multi-camera vision system for a vehicle
US8576285B2 (en) 2009-03-25 2013-11-05 Fujitsu Limited In-vehicle image processing method and image processing apparatus
US20130293717A1 (en) 2012-05-02 2013-11-07 GM Global Technology Operations LLC Full speed lane sensing with a surrounding view system
US8633970B1 (en) 2012-08-30 2014-01-21 Google Inc. Augmented reality with earth data
US20140139676A1 (en) 2012-11-19 2014-05-22 Magna Electronics Inc. Vehicle vision system with enhanced display functions
US20140247352A1 (en) 2013-02-27 2014-09-04 Magna Electronics Inc. Multi-camera dynamic top view vision system
US20140333729A1 (en) 2011-12-09 2014-11-13 Magna Electronics Inc. Vehicle vision system with customized display
US20140354686A1 (en) 2013-06-03 2014-12-04 Daqri, Llc Data manipulation based on real world object manipulation
US20150042678A1 (en) 2013-08-09 2015-02-12 Metaio Gmbh Method for visually augmenting a real object with a computer-generated image
US20150331236A1 (en) 2012-12-21 2015-11-19 Harman Becker Automotive Systems Gmbh A system for a vehicle
US20160065944A1 (en) * 2013-03-19 2016-03-03 Hitachi Kokusai Electric Inc. Image display apparatus and image display method
US20170120817A1 (en) * 2015-10-30 2017-05-04 Bendix Commercial Vehicle Systems Llc Filling in surround view areas blocked by mirrors or other vehicle parts

Patent Citations (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7859565B2 (en) 1993-02-26 2010-12-28 Donnelly Corporation Vision system for a vehicle including image processor
US20130250046A1 (en) 1996-05-22 2013-09-26 Donnelly Corporation Multi-camera vision system for a vehicle
US6429789B1 (en) * 1999-08-09 2002-08-06 Ford Global Technologies, Inc. Vehicle information acquisition and display assembly
US8908035B2 (en) 2006-11-09 2014-12-09 Bayerische Motoren Werke Aktiengesellschaft Method of producing a total image of the environment surrounding a motor vehicle
US20090273674A1 (en) 2006-11-09 2009-11-05 Bayerische Motoren Werke Aktiengesellschaft Method of Producing a Total Image of the Environment Surrounding a Motor Vehicle
US20100141736A1 (en) * 2007-04-22 2010-06-10 Jeffrey Hack Method of obtaining geographically related images using a vehicle
US20080266142A1 (en) * 2007-04-30 2008-10-30 Navteq North America, Llc System and method for stitching of video for routes
US7688229B2 (en) 2007-04-30 2010-03-30 Navteq North America, Llc System and method for stitching of video for routes
US20090086015A1 (en) 2007-07-31 2009-04-02 Kongsberg Defence & Aerospace As Situational awareness observation apparatus
US8130270B2 (en) 2007-10-23 2012-03-06 Alpine Electronics, Inc. Vehicle-mounted image capturing apparatus
US20090110239A1 (en) * 2007-10-30 2009-04-30 Navteq North America, Llc System and method for revealing occluded objects in an image dataset
JP2009134719A (en) 2007-10-30 2009-06-18 Navteq North America Llc System and method of revealing occluded object in image dataset
US8086071B2 (en) 2007-10-30 2011-12-27 Navteq North America, Llc System and method for revealing occluded objects in an image dataset
US20090195652A1 (en) 2008-02-05 2009-08-06 Wave Group Ltd. Interactive Virtual Window Vision System For Mobile Platforms
US8160391B1 (en) 2008-06-04 2012-04-17 Google Inc. Panoramic image fill
US8576285B2 (en) 2009-03-25 2013-11-05 Fujitsu Limited In-vehicle image processing method and image processing apparatus
US20100259372A1 (en) 2009-04-14 2010-10-14 Hyundai Motor Japan R&D Center, Inc. System for displaying views of vehicle and its surroundings
CN102577372A (en) 2009-09-24 2012-07-11 松下电器产业株式会社 Driving support display device
US20110228078A1 (en) 2010-03-22 2011-09-22 Institute For Information Industry Real-time augmented reality device, real-time augmented reality method and computer storage medium thereof
CN102045546A (en) 2010-12-15 2011-05-04 广州致远电子有限公司 Panoramic parking assist system
US20140333729A1 (en) 2011-12-09 2014-11-13 Magna Electronics Inc. Vehicle vision system with customized display
US20130293717A1 (en) 2012-05-02 2013-11-07 GM Global Technology Operations LLC Full speed lane sensing with a surrounding view system
US8633970B1 (en) 2012-08-30 2014-01-21 Google Inc. Augmented reality with earth data
US20140139676A1 (en) 2012-11-19 2014-05-22 Magna Electronics Inc. Vehicle vision system with enhanced display functions
US20150331236A1 (en) 2012-12-21 2015-11-19 Harman Becker Automotive Systems Gmbh A system for a vehicle
US20140247352A1 (en) 2013-02-27 2014-09-04 Magna Electronics Inc. Multi-camera dynamic top view vision system
US20160065944A1 (en) * 2013-03-19 2016-03-03 Hitachi Kokusai Electric Inc. Image display apparatus and image display method
US20140354686A1 (en) 2013-06-03 2014-12-04 Daqri, Llc Data manipulation based on real world object manipulation
US20150042678A1 (en) 2013-08-09 2015-02-12 Metaio Gmbh Method for visually augmenting a real object with a computer-generated image
US20170120817A1 (en) * 2015-10-30 2017-05-04 Bendix Commercial Vehicle Systems Llc Filling in surround view areas blocked by mirrors or other vehicle parts

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
English translation of Chinese Patent Office Action for Application No. 201680074727.1 dated Jan. 15, 2021 (13 pages).
Hughes, C., et al., "Wide-angle camera technology for automotive applications: a review," IET Intelligent Transport Systems, vol. 3, no. 1 (2009).
International Search Report for Application No. PCT/EP2016/074401 dated Dec. 16, 2016 (3 pages).
Notice of Preliminary Rejection from the Korean Intellectual Property Office for Application No. 10-2018-7017408 dated Jul. 18, 2019 (9 pages).


Also Published As

Publication number Publication date
WO2017108221A1 (en) 2017-06-29
CN108369746A (en) 2018-08-03
EP3394833A1 (en) 2018-10-31
EP3394833B1 (en) 2020-01-29
US20190230282A1 (en) 2019-07-25
KR20180084952A (en) 2018-07-25

Similar Documents

Publication Publication Date Title
US11184531B2 (en) Dynamic image blending for multiple-camera vehicle systems
US10899277B2 (en) Vehicular vision system with reduced distortion display
US11505123B2 (en) Vehicular camera monitoring system with stereographic display
US10137836B2 (en) Vehicle vision system
US8144033B2 (en) Vehicle periphery monitoring apparatus and image displaying method
US11472338B2 (en) Method for displaying reduced distortion video images via a vehicular vision system
US11910123B2 (en) System for processing image data for display using backward projection
US10504241B2 (en) Vehicle camera calibration system
CN106537904B (en) Vehicle with an environmental monitoring device and method for operating such a monitoring device
US8902313B2 (en) Automatic image equalization for surround-view video camera systems
US9098928B2 (en) Image-processing system and image-processing method
US20120069153A1 (en) Device for monitoring area around vehicle
EP3326145B1 (en) Panel transform
JP2019110492A (en) Image display device
US20230103678A1 (en) Display control device, vehicle, and display control method
US11823467B2 (en) Display control apparatus, vehicle, and display control method
US20200294215A1 (en) Image processing apparatus and method
JP2016213759A (en) Rear monitor
KR101709009B1 (en) System and method for compensating distortion of around view
US20230199137A1 (en) Display control apparatus, vehicle, and display control method
KR101994721B1 (en) System for providing around image and method for providing around image

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: ROBERT BOSCH GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SYPITKOWSKI, GREG;GRAF, PATRICK;MILLER, JAMES STEPHEN;SIGNING DATES FROM 20190621 TO 20191018;REEL/FRAME:050761/0164

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE