US20220121889A1 - Methods and systems for bowl view stitching of images - Google Patents
- Publication number
- US20220121889A1 (application US 17/072,095)
- Authority
- US
- United States
- Prior art keywords
- bowl
- depth
- width
- vehicle
- processor
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06K9/6289
- G06K9/00805
- G06K9/629
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/147—Details of sensors, e.g. sensor lenses
- G06V10/16—Image acquisition using multiple overlapping images; Image stitching
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
- G06F18/253—Fusion techniques of extracted features
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
Definitions
- system or module may refer to any combination or collection of mechanical and electrical hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including without limitation: application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group), memory that contains one or more executable software or firmware programs and associated data, a combinational logic circuit, and/or other suitable components that provide the described functionality.
- Embodiments may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be realized by any number, combination or collection of mechanical and electrical hardware, software, and/or firmware components configured to perform the specified functions.
- An embodiment of the invention may employ various combinations of mechanical components, e.g., towing apparatus, indicators or telltales; and electrical components, e.g., integrated circuit components, memory elements, digital signal processing elements, logic elements, look-up tables, imaging systems and devices or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices.
- FIG. 1 is an illustration of a top view of a vehicle shown generally at 10 equipped with an image display system shown generally at 12 in accordance with various embodiments.
- The image display system 12 generally uses data from a sensor system 14 of the vehicle 10 along with customizable software to allow a user to experience a bowl view image of the environment of the vehicle 10 on a display screen 16 ( FIG. 2 ).
- A bowl view image, for example, includes surround camera images (front, rear, and sides) that are mapped to different parts of a defined bowl.
- The bowl is defined to start flat close to where the vehicle 10 is positioned and then adjusts its curvature to become vertical at some distance from the host vehicle.
- The distance at which the bowl becomes vertical can be provided by a proximity sensor, such as a radar or ultrasound sensor, that measures how far away surrounding objects are.
- Double objects appear in the image when objects are located farther away than the point where the bowl becomes vertical.
- The image display system 12 recognizes and mitigates the double or missing objects based on the methods and systems disclosed herein.
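The bowl geometry described above (flat near the vehicle, with curvature increasing until the surface becomes effectively vertical at a distance that may be supplied by a proximity sensor) can be sketched as a radial height profile. The quadratic transition and the names `flat_radius` and `wall_radius` below are illustrative assumptions, not the patent's specification:

```python
def bowl_height(r, flat_radius, wall_radius):
    """Illustrative height of a bowl surface at ground distance r.

    The surface is flat (height 0) out to flat_radius, then rises on a
    quadratic curve whose slope grows as r approaches wall_radius,
    approximating the transition toward a vertical wall. Both radii and
    the quadratic profile are assumptions for illustration only.
    """
    if r <= flat_radius:
        return 0.0
    # Normalized position within the curved transition band (0 to 1).
    t = (r - flat_radius) / (wall_radius - flat_radius)
    return t * t * (wall_radius - flat_radius)
```

In a real system the curved band would be sampled into a 3D mesh and each camera image projected onto it; only the cross-sectional profile is shown here.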
- Although the vehicle 10 is depicted as a passenger car, the teachings herein are compatible with all types of vehicles, such as aircraft, watercraft, sport utility vehicles, and automobiles including, but not limited to, sedans, coupes, sport utility vehicles, pickup trucks, minivans, full-size vans, trucks, and buses, as well as any type of towed vehicle such as a trailer.
- The vehicle 10 generally includes a body 13 , front wheels 18 , rear wheels 20 , a suspension system 21 , a steering system 22 , and a propulsion system 24 .
- The wheels 18 - 20 are each rotationally coupled to the vehicle 10 near a respective corner of the body 13 .
- The wheels 18 - 20 are coupled to the body 13 via the suspension system 21 .
- The wheels 18 and/or 20 are driven by the propulsion system 24 .
- The wheels 18 are steerable by the steering system 22 .
- The body 13 is arranged on or integrated with a chassis (not shown) and substantially encloses the components of the vehicle 10 .
- The body 13 is configured to separate a powertrain compartment 28 (that includes at least the propulsion system 24 ) from a passenger compartment 30 that includes, among other features, seating (not shown) for one or more occupants of the vehicle 10 .
- The vehicle 10 further includes a sensor system 14 and an operator selection device 15 .
- The sensor system 14 includes one or more sensing devices that sense observable conditions of components of the vehicle 10 and/or that sense observable conditions of the exterior environment of the vehicle 10 .
- The sensing devices can include, but are not limited to, radars, lidars, global positioning systems, optical cameras, thermal cameras, ultrasonic sensors, height sensors, pressure sensors, steering angle sensors, depth or proximity sensors, and/or other sensors.
- The operator selection device 15 includes one or more user manipulable devices that can be manipulated by a user in order to provide input. The input can relate to, for example, activation of the display of virtual reality content and a desired viewing angle of the content to be displayed.
- The operator selection device 15 can include a knob, a switch, a touch screen, a voice recognition module, etc.
- The image display system 12 includes a display screen 32 communicatively coupled to a control module 34 .
- The control module 34 is communicatively coupled to the sensor system 14 and the operator selection device 15 .
- The display screen 32 may be disposed within the passenger compartment 30 at a location that enables viewing by an operator of the vehicle 10 .
- The display screen 32 may be integrated with an infotainment system (not shown) or instrument panel (not shown) of the vehicle 10 .
- The display screen 32 displays content such that the bowl view is experienced by the viewer.
- The control module 34 may be dedicated to the display screen 32 , may control the display screen 32 and other features of the vehicle 10 (e.g., a body control module, an instrument control module, or other feature control module), and/or may be implemented as a combination of control modules that control the display screen 32 and other features of the vehicle 10 .
- The control module 34 will be discussed and illustrated as a single control module that is dedicated to the display screen 32 .
- The control module 34 controls the display screen 32 directly and/or communicates data to the display screen 32 such that bowl view content can be displayed.
- The control module 34 includes at least memory 36 and a processor 38 . As will be discussed in more detail below, the control module 34 includes instructions that when processed by the processor 38 control the content to be displayed on the display screen 32 based on sensor data received from the sensor system 14 and user input received from the operator selection device 15 . The control module further includes instructions that when processed by the processor 38 control the content to be displayed based on the methods and systems disclosed herein.
- With reference to FIG. 3 , a dataflow diagram illustrates various embodiments of the control module 34 in greater detail.
- Various embodiments of the control module 34 may include any number of sub-modules.
- The sub-modules shown in FIG. 3 may be combined and/or further partitioned to similarly generate bowl view content to be viewed by an operator.
- Inputs to the control module 34 may be received from the sensor system 14 , received from the operator selection device 15 , received from other control modules (not shown) of the vehicle 10 , and/or determined by other sub-modules (not shown) of the control module 34 .
- The control module 34 includes a bowl image determination module 50 , a depth sweeping optimization module 52 , a feature point optimization module 54 , and a display determination module 56 .
- The depth sweeping optimization module 52 and the feature point optimization module 54 can be implemented in the control module 34 , in various embodiments, in order to optimize the stitching of the images in the bowl view.
- The bowl image determination module 50 receives as input image data from the sensor system 14 of the vehicle 10 .
- The image data includes images taken by the various sensors of the vehicle 10 .
- The image data 58 can include a front image, a left side image, a right side image, and a rear side image. As can be appreciated, any number of images can be included in various embodiments.
- The bowl image determination module 50 maps pixels of the images to pixels of a defined bowl. For example, the front image is mapped to front pixels of the bowl; the left side image is mapped to left side pixels of the bowl; the right side image is mapped to right side pixels of the bowl; and the rear image is mapped to rear side pixels of the bowl.
- The bowl image determination module 50 then stitches the pixels of the images in sections of the bowl where the images overlap using one or more stitching and alpha blending techniques known in the art to produce bowl image data 60 .
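One common alpha blending scheme for the overlap sections is a linear cross-fade between the two contributing images. The minimal sketch below assumes grayscale overlap strips stored as lists of rows; it illustrates the general technique, not the patent's specific blending method:

```python
def alpha_blend_overlap(left_strip, right_strip):
    """Cross-fade two equally sized overlap strips (lists of rows of
    grayscale pixel values, each row at least 2 pixels wide).

    The blend weight ramps linearly from the left image (alpha = 1 at
    the left edge) to the right image (alpha = 0 at the right edge),
    one common alpha-blending scheme for stitching seams.
    """
    width = len(left_strip[0])
    blended = []
    for l_row, r_row in zip(left_strip, right_strip):
        row = []
        for x, (l, r) in enumerate(zip(l_row, r_row)):
            alpha = 1.0 - x / (width - 1)  # 1 at left edge, 0 at right
            row.append(alpha * l + (1.0 - alpha) * r)
        blended.append(row)
    return blended
```

Production systems typically prefer seam finding plus multi-band blending over a plain cross-fade, at the cost of more computation.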
- The depth sweeping optimization module 52 receives as input the bowl image data 60 .
- The depth sweeping optimization module 52 evaluates the bowl image data 60 for double and/or missing objects. When double or missing objects occur, the depth sweeping optimization module 52 mitigates the double or missing object by optimizing the stitching of the images that creates the double or missing objects.
- The depth sweeping optimization module 52 enhances the bowl view stitching by adjusting a width of the bowl until the best sharpness of the missing or double object is achieved.
- The depth sweeping optimization module 52 synthesizes images from two or more of the sensors at various depths (di) and computes a sharpness value of an area around the object at each depth di.
- The depth sweeping optimization module 52 then identifies an actual depth (da) of the object as the depth that yields the highest sharpness among the depths (di).
- The actual depth (da) is then used by the depth sweeping optimization module 52 to determine the bowl width to re-stitch the images to produce improved bowl image data 62 .
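The depth sweep can be sketched as a search over candidate depths for the one whose re-stitched overlap looks sharpest. Here `synthesize_at_depth` is a hypothetical stand-in for the re-stitching step, and gradient energy is an assumed sharpness metric (the patent does not specify one):

```python
def sweep_depths(synthesize_at_depth, depths):
    """Illustrative depth sweep.

    synthesize_at_depth is a caller-supplied function (a stand-in for
    the re-stitching step) that returns the overlap region around the
    object, as a flat list of grayscale values, for a candidate depth.
    Sharpness is scored as gradient energy (sum of squared differences
    between neighboring pixels), an assumed metric; the depth with the
    highest score is returned as the estimated actual depth (da).
    """
    def sharpness(pixels):
        return sum((b - a) ** 2 for a, b in zip(pixels, pixels[1:]))
    return max(depths, key=lambda d: sharpness(synthesize_at_depth(d)))
```

Variance of the Laplacian is another common focus measure that could replace the gradient-energy score.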
- The feature point optimization module 54 receives as input the bowl image data 60 .
- The feature point optimization module 54 evaluates the bowl image data 60 for double objects. When double objects occur, the feature point optimization module 54 mitigates the double object by optimizing the stitching of the images that create the double objects.
- The feature point optimization module 54 enhances the bowl view image by adjusting the bowl width until redundant feature points within a region of interest are eliminated.
- The feature point optimization module 54 identifies points of features of the double objects within the stitching region of the bowl image data.
- The feature point optimization module 54 selects a first bowl width that shows the double objects and then adjusts (e.g., by expanding) the bowl width to a width (dm) at which the points of the features merge.
- The bowl width (dm) is then used by the feature point optimization module 54 to re-stitch the images to produce improved bowl image data 64 .
- The feature point optimization module 54 identifies a depth of a feature point using triangulation or some other method using known dimensions of an object and a location from a sensor such as a radar or lidar. The feature point optimization module 54 uses the depth of the feature point to determine the bowl width.
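Triangulating the depth of a feature point seen by two cameras can be sketched with the textbook rectified-stereo relation depth = focal length * baseline / disparity. The parameter names are illustrative assumptions, and real surround-view fisheye cameras would first require undistortion and rectification:

```python
def triangulate_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a feature point seen by two horizontally displaced,
    rectified cameras.

    Uses the classic stereo relation depth = f * B / d, where d is the
    disparity between the feature's image x-coordinates in the two
    views. This is a textbook simplification of triangulation, not the
    patent's specific procedure.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    return focal_px * baseline_m / disparity
```

The resulting depth can then serve directly as the bowl width at which the feature would be rendered without duplication.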
- The display determination module 56 receives as input the improved bowl image data 62 , and the improved bowl image data 64 . Based on the received data 62 , 64 , the display determination module 56 generates display data 66 that includes a bowl view image that mitigates double and/or missing objects. In various embodiments, when both of the depth sweeping optimization module 52 and the feature point optimization module 54 are implemented, the display determination module 56 generates the display data 66 based on one of the improved bowl image data 62 and the improved bowl image data 64 (e.g., the one providing the best results), or based on a combination of the improved bowl image data 62 and the improved bowl image data 64 . As can be appreciated, the improved bowl image data 64 can apply to dynamic events and one or more filtering methods can be performed to smooth transitions between images with different stitching before displaying the images.
- Referring now to FIGS. 4 and 5 , and with continued reference to FIGS. 1-3 , flowcharts illustrate methods 100 , 200 that can be performed by the image display system 12 in accordance with various embodiments.
- The order of operation within the methods 100 , 200 is not limited to the sequential execution as illustrated in FIGS. 4 and 5 but may be performed in one or more varying orders as applicable and in accordance with the present disclosure.
- The methods 100 , 200 of FIGS. 4 and 5 may be scheduled to run at predetermined time intervals during operation of the vehicle 10 and/or may be scheduled to run based on predetermined events.
- The method 100 determines the improved bowl image data using depth sweeping.
- The method 100 may be performed by the control module 34 of FIGS. 1 and 2 and, more particularly, by the depth sweeping optimization module 52 of FIG. 3 .
- The method may begin at 105 .
- The image data is received at 110 .
- The two images creating the double or missing object are synthesized at 130 .
- The highest sharpness value is determined at 150 ; and the depth (di) corresponding to the highest sharpness value is selected at 160 .
- Stitching is then performed using the selected depth for determining the width of the bowl at 170 to produce the improved bowl image data. Thereafter, the method may end at 180 .
- The method 200 determines the improved bowl image data using feature points.
- The method 200 may be performed by the control module 34 of FIGS. 1 and 2 and, more particularly, by the feature point optimization module 54 of FIG. 3 .
- The method may begin at 205 .
- The image data is received at 210 .
- Points of features of the double objects are identified within the stitching region of the bowl image data at 220 .
- A first bowl width that shows the double object is selected at 230 .
- The first bowl width is adjusted (e.g., by expanding) and the images are re-stitched at 240 . It is determined whether the feature points of the double object merge at 250 . When the feature points do not merge at 250 , a new width is selected at 240 and the method continues.
- When the feature points merge at 250 , the width is selected at 260 .
- Stitching is then performed using the selected width of the bowl at 270 to produce the improved bowl image data. Thereafter, the method may end at 280 .
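The width-search loop of method 200, adjusting the bowl width and re-stitching until the duplicated feature points merge, can be sketched as below. The `feature_distance_at_width` callback, the step size, and the merge tolerance are hypothetical stand-ins for the re-stitch-and-measure step at 240 and the merge check at 250:

```python
def find_merge_width(feature_distance_at_width, initial_width,
                     step=0.1, tol=1.0, max_width=50.0):
    """Sketch of the width search in method 200.

    Starting from a bowl width at which the double object is visible,
    widen the bowl in fixed steps. feature_distance_at_width is a
    caller-supplied function (modeling re-stitching plus feature
    matching) that returns the pixel distance between the duplicated
    feature points at a given width. The search stops when the points
    have merged to within tol pixels; returns None if no width up to
    max_width merges them.
    """
    width = initial_width
    while width <= max_width:
        if feature_distance_at_width(width) <= tol:
            return width
        width += step
    return None
```

A bisection over the width range would converge faster than this fixed-step scan; the linear sweep is kept for clarity.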
Abstract
A system and method are provided for aiding an operator in operating a vehicle. In one embodiment, a system includes a sensor system configured to generate sensor data sensed from an environment of the vehicle. The system further includes a control module configured to, by a processor, generate a bowl view image of the environment based on the sensor data, identify at least one of a double object and a missing object in the bowl view image, generate a second bowl view image of the environment by re-stitching the sensor data based on the at least one of double object and missing object, and generate display data based on the second bowl view image.
Description
- This technical field generally relates to operator aid systems for vehicles, and more particularly, relates to methods and systems for providing of a view image of a vehicle environment in a predefined geometry shape such as a bowl view.
- Vehicles may incorporate and utilize numerous aids to assist the operator. For example, various sensors may be disposed at various locations of the vehicle. The various sensors sense observable conditions of the environment of the vehicle. For example, a plurality of cameras or other sensors may sense a condition of the road or environment that the vehicle is traveling or about to travel. In some instances, it is desirable to present to an operator a view of the environment of the vehicle. In such instances, the images provided by the various cameras must be stitched together to create the full view.
- Accordingly, it is desirable to provide methods and systems for improved stitching of a plurality of images into a single image. Other desirable features and characteristics of the herein described embodiments will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
- In one exemplary embodiment, a system and method are provided for aiding an operator in operating a vehicle. In one embodiment, a system includes a sensor system configured to generate sensor data sensed from an environment of the vehicle. The system further includes a control module configured to, by a processor, generate a bowl view image of the environment based on the sensor data, identify at least one of a double object and a missing object in the bowl view image, generate a second bowl view image of the environment by re-stitching the sensor data based on the at least one of double object and missing object, and generate display data based on the second bowl view image.
- In various embodiments, the sensor system includes a plurality of cameras. In various embodiments, the control module is configured to perform the re-stitching of the sensor data based on a bowl width determined from the at least one of double object and missing object.
- In various embodiments, the control module is further configured to, by the processor, compute a sharpness value of an area associated with the double object, and determine the bowl width based on the sharpness value. In various embodiments, the control module is further configured to, by the processor, vary a bowl depth for a defined range of depths, re-stitch the sensor data from two sensors based on the varied bowl depths, compute the sharpness value of the area associated with the double object for each varied bowl depth, and select a bowl depth corresponding to a defined sharpness value to determine the bowl width.
- In various embodiments, the control module is further configured to, by the processor, determine feature points of the double object, and determine the bowl width based on the feature points. In various embodiments, the control module is further configured to, by the processor, adjust an initial bowl width, perform a re-stitching of the sensor data based on the adjusted initial bowl width, and evaluate the feature points of the double object to determine the bowl width. In various embodiments, the control module is further configured to, by the processor, select the adjusted initial bowl width that corresponds to the feature points of the double object merging as the bowl width.
- In various embodiments, the control module is further configured to, by the processor, determine a depth of the feature point based on triangulation, and determine the bowl width based on the depth of the feature point.
- In various embodiments, the system further includes a display system within vehicle, and wherein the display system displays the bowl view image to the operator of the vehicle.
- In another embodiment, a method includes: receiving sensor data from a sensor system that senses an environment of the vehicle; generating, by a processor, a bowl view image of the environment based on the sensor data; identifying, by the processor, at least one of a double object and a missing object in the bowl view image; generating, by the processor, a second bowl view image of the environment by re-stitching the sensor data based on the at least one of double object and missing object; and generating, by the processor, display data based on the second bowl view image.
- In various embodiments, the sensor system includes a plurality of cameras. In various embodiments, the method includes re-stitching of the sensor data based on a bowl width determined from the at least one of double object and missing object.
- In various embodiments, the method includes computing a sharpness value of an area associated with the double object and determining the bowl width based on the sharpness value. In various embodiments, the method includes varying a bowl depth for a defined range of depths, re-stitching the sensor data from two sensors based on the varied bowl depths, computing the sharpness value of the area associated with the double object for each varied bowl depth, and selecting a bowl depth corresponding to a defined sharpness value to determine the bowl width.
- In various embodiments, the method includes determining feature points of the double object, and determining the bowl width based on the feature points. In various embodiments, the method includes adjusting an initial bowl width, performing a re-stitching of the sensor data based on the adjusted initial bowl width, and evaluating the feature points of the double object to determine the bowl width. In various embodiments, the method includes selecting the adjusted initial bowl width that corresponds to the feature points of the double object merging as the bowl width.
- In various embodiments, the method includes determining a depth of the feature point based on triangulation and determining the bowl width based on the depth of the feature point.
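As one hypothetical form of the triangulation mentioned here, the depth of a feature point seen from two cameras with a known baseline can be recovered from the two bearing angles via the law of sines. The planar two-ray setup below is an illustrative assumption; the disclosure allows other methods as well.

```python
import math

def triangulate_depth(baseline: float, angle_left: float, angle_right: float) -> float:
    """Perpendicular distance from the camera baseline to a feature point,
    given the bearing angle (radians, measured from the baseline) to the
    point from each of two cameras separated by `baseline` meters. This
    planar two-ray form is an assumption for illustration only."""
    third = math.pi - angle_left - angle_right       # angle at the feature point
    if third <= 0.0:
        raise ValueError("bearing rays do not converge")
    # Law of sines gives the range from the left camera to the point.
    range_left = baseline * math.sin(angle_right) / math.sin(third)
    # Project onto the direction perpendicular to the baseline.
    return range_left * math.sin(angle_left)
```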
- In various embodiments, the method includes displaying, by a display system within the vehicle, the bowl view image to the operator of the vehicle.
- The exemplary embodiments will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:
-
FIG. 1 is an illustration of a top perspective schematic view of a vehicle having an image display system in accordance with various embodiments; -
FIG. 2 is a functional block diagram illustrating an image display system in accordance with various embodiments; -
FIG. 3 is a dataflow diagram illustrating the control module of the image display system in accordance with various embodiments; and -
FIGS. 4 and 5 are flowcharts illustrating methods of controlling content to be displayed on a display screen of the image display system in accordance with various embodiments. - The following detailed description is merely exemplary in nature and is not intended to limit the application and uses. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. As used herein, the term system or module may refer to any combination or collection of mechanical and electrical hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including without limitation: an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group), memory that contains one or more executable software or firmware programs and associated data, a combinational logic circuit, and/or other suitable components that provide the described functionality.
- Embodiments may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be realized by any number, combination or collection of mechanical and electrical hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of the invention may employ various combinations of mechanical components, e.g., towing apparatus, indicators or telltales; and electrical components, e.g., integrated circuit components, memory elements, digital signal processing elements, logic elements, look-up tables, imaging systems and devices or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. In addition, those skilled in the art will appreciate that the herein described embodiments may be practiced in conjunction with any number of mechanical and/or electronic systems, and that the vehicle systems described herein are merely exemplary.
- For the sake of brevity, conventional components and techniques and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in an embodiment of the invention.
-
FIG. 1 is an illustration of a top view of a vehicle shown generally at 10 equipped with an image display system shown generally at 12 in accordance with various embodiments. As will be discussed in more detail below, the image display system 12 generally uses data from a sensor system 14 of the vehicle 10 along with customizable software to allow a user to experience a bowl view image of the environment of the vehicle 10. A display screen 16 (FIG. 2 ) can be placed in any location of the vehicle 10 and can display images and/or videos that create the image of the environment of the vehicle 10. A bowl view image, for example, includes surround camera images (front, rear, and sides) that are mapped to different parts of a defined bowl. The bowl is defined to start flat close to where the vehicle 10 is positioned and then curves so as to extend vertically at some distance from the host vehicle. The distance at which the bowl becomes vertical can be provided by a proximity sensor, such as a radar or ultrasonic sensor, that measures how far surrounding objects are. In some instances, double objects appear in the image when objects are located farther than the point where the bowl becomes vertical. The image display system 12 recognizes and mitigates the double or missing objects based on the methods and systems disclosed herein. - Although the context of the discussion herein is with respect to the
vehicle 10 being a passenger car, it should be understood that the teachings herein are compatible with all types of vehicles, such as aircraft, watercraft, and automobiles including, but not limited to, sedans, coupes, sport utility vehicles, pickup trucks, minivans, full-size vans, trucks, and buses, as well as any type of towed vehicle such as a trailer. - As shown in the example of
FIG. 1 , the vehicle 10 generally includes a body 13, front wheels 18, rear wheels 20, a suspension system 21, a steering system 22, and a propulsion system 24. The wheels 18-20 are each rotationally coupled to the vehicle 10 near a respective corner of the body 13. The wheels 18-20 are coupled to the body 13 via the suspension system 21. The wheels 18 and/or 20 are driven by the propulsion system 24. The wheels 18 are steerable by the steering system 22. - The
body 13 is arranged on or integrated with a chassis (not shown) and substantially encloses the components of the vehicle 10. The body 13 is configured to separate a powertrain compartment 28 (that includes at least the propulsion system 24) from a passenger compartment 30 that includes, among other features, seating (not shown) for one or more occupants of the vehicle 10. - The
vehicle 10 further includes a sensor system 14 and an operator selection device 15. The sensor system 14 includes one or more sensing devices that sense observable conditions of components of the vehicle 10 and/or that sense observable conditions of the exterior environment of the vehicle 10. The sensing devices can include, but are not limited to, radars, lidars, global positioning systems, optical cameras, thermal cameras, ultrasonic sensors, height sensors, pressure sensors, steering angle sensors, depth or proximity sensors, and/or other sensors. The operator selection device 15 includes one or more user manipulable devices that can be manipulated by a user in order to provide input. The input can relate to, for example, activation of the display of virtual reality content and a desired viewing angle of the content to be displayed. The operator selection device 15 can include a knob, a switch, a touch screen, a voice recognition module, etc. - As shown in more detail in
FIG. 2 and with continued reference to FIG. 1 , the image display system 12 includes a display screen 32 communicatively coupled to a control module 34. The control module 34 is communicatively coupled to the sensor system 14 and the operator selection device 15. - The
display screen 32 may be disposed within the passenger compartment 30 at a location that enables viewing by an operator of the vehicle 10. For example, the display screen 32 may be integrated with an infotainment system (not shown) or instrument panel (not shown) of the vehicle 10. The display screen 32 displays content such that the bowl view is experienced by the viewer. - With reference to
FIG. 2 , the control module 34 may be dedicated to the display screen 32, may control the display screen 32 and other features of the vehicle 10 (e.g., a body control module, an instrument control module, or other feature control module), and/or may be implemented as a combination of control modules that control the display screen 32 and other features of the vehicle 10. For exemplary purposes, the control module 34 will be discussed and illustrated as a single control module that is dedicated to the display screen 32. The control module 34 controls the display screen 32 directly and/or communicates data to the display screen 32 such that bowl view content can be displayed. - The
control module 34 includes at least a memory 36 and a processor 38. As will be discussed in more detail below, the control module 34 includes instructions that, when processed by the processor 38, control the content to be displayed on the display screen 32 based on sensor data received from the sensor system 14 and user input received from the operator selection device 15. The control module 34 further includes instructions that, when processed by the processor 38, control the content to be displayed based on the methods and systems disclosed herein. - Referring now to
FIG. 3 and with continued reference to FIGS. 1-2 , a dataflow diagram illustrates various embodiments of the control module 34 in greater detail. Various embodiments of the control module 34 according to the present disclosure may include any number of sub-modules. As can be appreciated, the sub-modules shown in FIG. 3 may be combined and/or further partitioned to similarly generate bowl view content to be viewed by an operator. Inputs to the control module 34 may be received from the sensor system 14, received from the operator selection device 15, received from other control modules (not shown) of the vehicle 10, and/or determined by other sub-modules (not shown) of the control module 34. - In various embodiments, the
control module 34 includes a bowl image determination module 50, a depth sweeping optimization module 52, a feature point optimization module 54, and a display determination module 56. As can be appreciated, one or both of the depth sweeping optimization module 52 and the feature point optimization module 54 can be implemented in the control module 34, in various embodiments, in order to optimize the stitching of the images in the bowl view. - In various embodiments, the bowl
image determination module 50 receives as input image data from the sensor system 14 of the vehicle 10. For example, the image data includes images taken by the various sensors of the vehicle 10. In various embodiments, the image data 58 can include a front image, a left side image, a right side image, and a rear side image. As can be appreciated, any number of images can be included in various embodiments. - The bowl
image determination module 50 maps pixels of the images to pixels of a defined bowl. For example, the front image is mapped to front pixels of the bowl; the left side image is mapped to left side pixels of the bowl; the right side image is mapped to right side pixels of the bowl; and the rear image is mapped to rear side pixels of the bowl. The bowl image determination module 50 then stitches the pixels of the images in sections of the bowl where the images overlap using one or more stitching and alpha blending techniques known in the art to produce bowl image data 60. - The depth
sweeping optimization module 52 receives as input the bowl image data 60. The depth sweeping optimization module 52 evaluates the bowl image data 60 for double and/or missing objects. When double or missing objects occur, the depth sweeping optimization module 52 mitigates the double or missing object by optimizing the stitching of the images that create the double or missing objects. - For example, the depth
sweeping optimization module 52 enhances the bowl view stitching by adjusting a width of the bowl until the best sharpness of the missing or double object is achieved. In various embodiments, for example, the depth sweeping optimization module 52 synthesizes images from two or more of the sensors at various depths (di) and computes a sharpness value of an area around the object at each depth (di). The depth sweeping optimization module 52 then identifies an actual depth (da) of the object as the depth that yields the highest sharpness among the depths (di). The actual depth (da) is then used by the depth sweeping optimization module 52 to determine the bowl width used to re-stitch the images to produce improved bowl image data 62. - The feature
point optimization module 54 receives as input the bowl image data 60. The feature point optimization module 54 evaluates the bowl image data 60 for double objects. When double objects occur, the feature point optimization module 54 mitigates the double object by optimizing the stitching of the images that create the double objects. - For example, the feature
point optimization module 54 enhances the bowl view image by adjusting the bowl width until redundant feature points within a region of interest are eliminated. In various embodiments, for example, the feature point optimization module 54 identifies points of features of the double objects within the stitching region of the bowl image data. The feature point optimization module 54 selects a first bowl width that shows the double objects and then adjusts (e.g., by expanding) the bowl width to a width (dm) at which the points of the features merge. The bowl width (dm) is then used by the feature point optimization module 54 to re-stitch the images to produce improved bowl image data 64. - In another example, the feature
point optimization module 54 identifies a depth of a feature point using triangulation or another method that uses known dimensions of an object and its location from a sensor such as a radar or lidar. The feature point optimization module 54 uses the depth of the feature point to determine the bowl width. - The
display determination module 56 receives as input the improved bowl image data 62 and the improved bowl image data 64. Based on the received data, the display determination module 56 generates display data 66 that includes a bowl view image that mitigates double and/or missing objects. In various embodiments, when both of the depth sweeping optimization module 52 and the feature point optimization module 54 are implemented, the display determination module 56 generates the display data 66 based on one of the improved bowl image data 62 and the improved bowl image data 64 (e.g., the one providing the best results), or based on a combination of the improved bowl image data 62 and the improved bowl image data 64. As can be appreciated, the improved bowl image data 64 can apply to dynamic events, and one or more filtering methods can be performed to smooth transitions between images with different stitching before displaying the images. - Referring now to
FIGS. 4 and 5 , and with continued reference to FIGS. 1-3 , flowcharts illustrate methods 100 and 200 that may be performed by the image display system 12 in accordance with various embodiments. As can be appreciated in light of the disclosure, the order of operation within the methods 100 and 200 is not limited to the sequential execution illustrated in FIGS. 4 and 5 but may be performed in one or more varying orders as applicable and in accordance with the present disclosure. - As can further be appreciated, the
methods 100 and 200 of FIGS. 4 and 5 may be scheduled to run at predetermined time intervals during operation of the vehicle 10 and/or may be scheduled to run based on predetermined events. - As shown in
FIG. 4 , the method 100 determines the improved bowl image data using depth sweeping. In various embodiments, the method 100 may be performed by the control module 34 of FIGS. 1 and 2 and, more particularly, by the depth sweeping optimization module 52 of FIG. 3 . - In one example, the method may begin at 105. The image data is received at 110. For each depth (di) in a range of depths at 120, the two images creating the double or missing object are synthesized at 130. A sharpness value of an area around the object is computed at each depth (di). Once each depth has been processed at 120, the highest sharpness value is determined at 150; and the depth (di) corresponding to the highest sharpness value is selected at 160. Stitching is then performed using the selected depth to determine the width of the bowl at 170 to produce the improved bowl image data. Thereafter, the method may end at 180.
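The loop of the method 100 (sweep candidate depths, score each re-synthesis, keep the sharpest) can be sketched as follows. The callables `synthesize` and `sharpness` are assumptions standing in for the synthesis step and the sharpness computation described above; they are not APIs defined by the disclosure.

```python
def depth_sweep(depths, synthesize, sharpness):
    """Return the depth whose synthesized overlap patch scores highest.

    depths      -- iterable of candidate depths (di)
    synthesize  -- assumed callable: depth -> image patch around the object
    sharpness   -- assumed callable: image patch -> sharpness score
    """
    best_depth, best_score = None, float("-inf")
    for d in depths:
        score = sharpness(synthesize(d))   # re-synthesize and score at depth d
        if score > best_score:
            best_depth, best_score = d, score
    return best_depth                      # the actual depth (da)
```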
- As shown in
FIG. 5 , the method 200 determines the improved bowl image data using feature points. In various embodiments, the method 200 may be performed by the control module 34 of FIGS. 1 and 2 and, more particularly, by the feature point optimization module 54 of FIG. 3 . - In one example, the method may begin at 205. The image data is received at 210. Points of features of the double objects are identified within the stitching region of the bowl image data at 220. A first bowl width that shows the double object is selected at 230. The first bowl width is adjusted (e.g., by expanding) and the images are re-stitched at 240. It is determined whether the feature points of the double object merge at 250. When the feature points do not merge at 250, a new width is selected at 240 and the method continues.
- When the feature points merge at 250, the width is selected at 260. Stitching is then performed using the selected width of the bowl at 270 to produce the improved bowl image data. Thereafter, the method may end at 280.
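The loop of the method 200 (expand the bowl width, re-stitch, stop when the duplicated feature points merge) can be sketched as follows. The callables `restitch` and `ghost_separation` are assumptions standing in for the re-stitching and merge-check steps, and the step size and merge tolerance are illustrative values only.

```python
def find_merge_width(initial_width, restitch, ghost_separation,
                     step=0.25, max_width=50.0, tol=1.0):
    """Expand the bowl width from initial_width until the duplicated
    feature points merge, i.e. their separation falls below tol.

    restitch         -- assumed callable: width -> re-stitched image
    ghost_separation -- assumed callable: image -> distance between the
                        duplicated feature points of the double object
    """
    width = initial_width
    while width <= max_width:
        image = restitch(width)
        if ghost_separation(image) < tol:
            return width                   # feature points have merged
        width += step                      # expand the bowl and try again
    return None                            # no merge found within the range
```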
- While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the disclosure as set forth in the appended claims and the legal equivalents thereof.
Claims (20)
1. A system to aid an operator in operating a vehicle, comprising:
a sensor system disposed on the vehicle and configured to generate sensor data sensed from an environment of the vehicle; and
a control module configured to, by a processor, generate a bowl view image of the environment based on the sensor data, identify at least one of a double object and a missing object in the bowl view image, adjust a depth associated with the bowl view until a defined sharpness value of an area outside of the at least one of double object and missing object is achieved, generate a second bowl view image of the environment by re-stitching the sensor data based on the adjusted depth, and generate display data based on the second bowl view image.
2. The system of claim 1 , wherein the sensor system includes a plurality of cameras.
3. (canceled)
4. The system of claim 1 , wherein the control module is further configured to, by the processor, compute the sharpness value of the area at the adjusted bowl depth, and determine a bowl width based on the sharpness value and the bowl depth, and wherein the second bowl view image is generated based on the bowl width.
5. The system of claim 4 , wherein the control module is further configured to, by the processor, vary the bowl depth for a defined range of depths, re-stitch the sensor data from two sensors based on the varied bowl depths, compute the sharpness value of the area for each varied bowl depth, and select a bowl depth corresponding to the defined sharpness value to determine the bowl width.
6. The system of claim 4 , wherein the control module is further configured to, by the processor, determine feature points of the double object, and determine the bowl width based on the feature points.
7. The system of claim 6 , wherein the control module is further configured to, by the processor, adjust an initial bowl width, perform a re-stitching of the sensor data based on the adjusted initial bowl width, and evaluate the feature points of the double object to determine the bowl width.
8. The system of claim 7 , wherein the control module is further configured to, by the processor, select the adjusted initial bowl width that corresponds to the feature points of the double object merging as the bowl width.
9. The system of claim 6 , wherein the control module is further configured to, by the processor, determine a depth of the feature point based on triangulation, and determine the bowl width based on the depth of the feature point.
10. The system of claim 1 , further comprising a display system within the vehicle, and wherein the display system displays the bowl view image to the operator of the vehicle.
11. A method for aiding an operator in operating a vehicle, comprising:
receiving sensor data from a sensor system of the vehicle that senses an environment of the vehicle;
generating, by a processor, a bowl view image of the environment based on the sensor data;
identifying, by the processor, at least one of a double object and a missing object in the bowl view image;
adjusting, by the processor, a depth associated with the bowl view until a defined sharpness value of an area outside of the at least one of double object and missing object is achieved;
generating, by the processor, a second bowl view image of the environment by re-stitching the sensor data based on the adjusted depth; and
generating, by the processor, display data based on the second bowl view image.
12. The method of claim 11 , wherein the sensor system includes a plurality of cameras.
13. (canceled)
14. The method of claim 11 , further comprising computing the sharpness value of the area at the adjusted bowl depth, and determining a bowl width based on the sharpness value and the bowl depth, and wherein the second bowl view image is generated based on the bowl width.
15. The method of claim 14 , further comprising varying the bowl depth for a defined range of depths, re-stitching the sensor data from two sensors based on the varied bowl depths, computing the sharpness value of the area for each varied bowl depth, and selecting a bowl depth corresponding to the defined sharpness value to determine the bowl width.
16. The method of claim 14 , further comprising determining feature points of the double object, and determining the bowl width based on the feature points.
17. The method of claim 16 , further comprising adjusting an initial bowl width, performing a re-stitching of the sensor data based on the adjusted initial bowl width, and evaluating the feature points of the double object to determine the bowl width.
18. The method of claim 17 , further comprising selecting the adjusted initial bowl width that corresponds to the feature points of the double object merging as the bowl width.
19. The method of claim 16 , further comprising: determining a depth of the feature point based on triangulation and determining the bowl width based on the depth of the feature point.
20. The method of claim 11 , further comprising displaying, by a display system within the vehicle, the bowl view image to the operator of the vehicle.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/072,095 US11288553B1 (en) | 2020-10-16 | 2020-10-16 | Methods and systems for bowl view stitching of images |
DE102021110869.1A DE102021110869B4 (en) | 2020-10-16 | 2021-04-28 | System and method for assisting an operator in operating a vehicle |
CN202110503067.7A CN114374818A (en) | 2020-10-16 | 2021-05-08 | Method and system for bowl-view stitching of images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/072,095 US11288553B1 (en) | 2020-10-16 | 2020-10-16 | Methods and systems for bowl view stitching of images |
Publications (2)
Publication Number | Publication Date |
---|---|
US11288553B1 US11288553B1 (en) | 2022-03-29 |
US20220121889A1 true US20220121889A1 (en) | 2022-04-21 |
Family
ID=80855405
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/072,095 Active US11288553B1 (en) | 2020-10-16 | 2020-10-16 | Methods and systems for bowl view stitching of images |
Country Status (3)
Country | Link |
---|---|
US (1) | US11288553B1 (en) |
CN (1) | CN114374818A (en) |
DE (1) | DE102021110869B4 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130250041A1 (en) * | 2012-03-26 | 2013-09-26 | Altek Corporation | Image capture device and image synthesis method thereof |
US20150254825A1 (en) * | 2014-03-07 | 2015-09-10 | Texas Instruments Incorporated | Method, apparatus and system for processing a display from a surround view camera solution |
US20180095533A1 (en) * | 2016-09-30 | 2018-04-05 | Samsung Electronics Co., Ltd. | Method for displaying an image and an electronic device thereof |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105765966B (en) | 2013-12-19 | 2020-07-10 | 英特尔公司 | Bowl-shaped imaging system |
US20190349571A1 (en) | 2018-05-11 | 2019-11-14 | Ford Global Technologies, Llc | Distortion correction for vehicle surround view camera projections |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130250041A1 (en) * | 2012-03-26 | 2013-09-26 | Altek Corporation | Image capture device and image synthesis method thereof |
US20150254825A1 (en) * | 2014-03-07 | 2015-09-10 | Texas Instruments Incorporated | Method, apparatus and system for processing a display from a surround view camera solution |
US20180095533A1 (en) * | 2016-09-30 | 2018-04-05 | Samsung Electronics Co., Ltd. | Method for displaying an image and an electronic device thereof |
Also Published As
Publication number | Publication date |
---|---|
DE102021110869B4 (en) | 2023-01-05 |
US11288553B1 (en) | 2022-03-29 |
CN114374818A (en) | 2022-04-19 |
DE102021110869A1 (en) | 2022-04-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10710504B2 (en) | Surroundings-monitoring device and computer program product | |
US8648881B2 (en) | Method and apparatus for image processing for in-vehicle cameras | |
US8830319B2 (en) | Device and method for detecting and displaying the rear and/or side view of a motor vehicle | |
JP7222254B2 (en) | Peripheral display controller | |
US20030197660A1 (en) | Image display apparatus, method, and program for automotive vehicle | |
US11648932B2 (en) | Periphery monitoring device | |
JP2018144526A (en) | Periphery monitoring device | |
US20190244324A1 (en) | Display control apparatus | |
JP5067169B2 (en) | Vehicle parking assistance apparatus and image display method | |
JP2019028920A (en) | Display control device | |
JP2018133712A (en) | Periphery monitoring device | |
CN112977465A (en) | Method and apparatus for determining trailer hitch articulation angle in motor vehicle | |
WO2019053922A1 (en) | Image processing device | |
CN113508574A (en) | Imaging system and method | |
CN112492262A (en) | Image processing apparatus | |
JP2022095303A (en) | Peripheral image display device, display control method | |
US20170297487A1 (en) | Vehicle door opening assessments | |
US11288553B1 (en) | Methods and systems for bowl view stitching of images | |
US20220126853A1 (en) | Methods and systems for stiching of images into a virtual image | |
JP7314518B2 (en) | Perimeter monitoring device | |
US10086871B2 (en) | Vehicle data recording | |
US11873023B2 (en) | Boundary memorization systems and methods for vehicle positioning | |
US20200133293A1 (en) | Method and apparatus for viewing underneath a vehicle and a trailer | |
JP2022101979A (en) | Image generation device and image generation method | |
US20220250652A1 (en) | Virtual lane methods and systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: GM GLOBAL TECHNOLOGY OPERATIONS LLC, MICHIGAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MURAD, MOHANNAD;ZHANG, WENDE;HOANG, KEVIN K.;SIGNING DATES FROM 20201014 TO 20201015;REEL/FRAME:054073/0211 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |