GB2571923A - Apparatus and method for correcting for changes in vehicle orientation

Publication number: GB2571923A
Application number: GB1803588.1A
Authority: GB (United Kingdom)
Prior art keywords: vehicle, image, orientation, time, images
Legal status: Granted; Active
Other versions: GB2571923B (en), GB201803588D0 (en)
Inventors: Joshua Close, Robert Hardy
Current Assignee: Jaguar Land Rover Ltd
Original Assignee: Jaguar Land Rover Ltd
Events: application filed by Jaguar Land Rover Ltd; publication of GB201803588D0; publication of GB2571923A; application granted; publication of GB2571923B

Classifications

    • B60R 1/00 Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 2300/302 Viewing arrangements using cameras and displays characterised by image processing combining image information with GPS information or vehicle data, e.g. vehicle speed, gyro, steering angle data
    • B60R 2300/303 Viewing arrangements using cameras and displays characterised by image processing using joined images, e.g. multiple camera images
    • B60R 2300/304 Viewing arrangements using cameras and displays characterised by image processing using merged images, e.g. merging camera image with stored images
    • B60R 2300/60 Viewing arrangements characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
    • B60R 2300/802 Viewing arrangements for monitoring and displaying vehicle exterior blind spot views
    • B60R 2300/806 Viewing arrangements for aiding parking
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/80 Geometric correction
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 2207/10016 Video; image sequence
    • G06T 2207/30248 Vehicle exterior or interior


Abstract

A first image including a first region external to a vehicle, and information indicating the orientation of the vehicle at a first time when the image was captured, are obtained (1); information indicating the orientation of the vehicle at a second time is also obtained (2); a change in vehicle orientation is determined (3); a modified first image, representative of the first region from the perspective of the vehicle at the second time, is generated using the first image and the determined change in vehicle orientation (4); and at least a part of the modified image is rendered. The orientation information may comprise a rotational position of the vehicle, and generating the modified image may comprise rotating the first image about one or more axes. A second image including a second region external to the vehicle may be obtained at the second time (2), and a composite image may be generated by matching portions of the modified image and the second image (5). The composite image may comprise a 3D or 2D representation of the environment surrounding the vehicle, extending at least partially underneath the vehicle.

Description

Apparatus and Method for Correcting for Changes in Vehicle Orientation
TECHNICAL FIELD
The present invention relates to an apparatus and method for correcting for changes in vehicle orientation. Particularly, but not exclusively, the present invention relates to a method and an apparatus, for use in a vehicle, arranged to correct images captured of the environment surrounding the vehicle for changes in vehicle orientation. Such corrected images may be rendered or displayed. Aspects of the invention relate to a method, a computer program product, an apparatus, a system and a vehicle.
It is important for a driver of a vehicle to be able to view the environment surrounding the vehicle in order to drive the vehicle safely and accurately. In some vehicles, such as sports utility vehicles (SUVs), the view ahead of the vehicle is partially obscured by a bonnet or hood of the vehicle, particularly a region a short distance ahead of the vehicle. This can be exacerbated by the vehicle being on an incline or at the top of a descent, such as when driving off-road. Furthermore, especially when driving off-road, while obstacles and objects such as rocks may be viewed ahead of a vehicle before they are reached, once the vehicle is positioned over an object it can be difficult to ascertain where the vehicle is in relation to that object.
Vehicle mounted camera systems are known for capturing images of regions external to the vehicle and displaying those images to the user. It is known to delay the display of captured images so as to allow the user to view regions of the environment external to the vehicle that were within the field of view of a vehicle mounted camera at the time that the image was captured but are no longer within the field of view of any camera at the time the image is displayed, due to movement of the vehicle. However, it will be appreciated that the captured image of a given region will have a different perspective from how that region would appear, at the time the image is displayed, from the then-current location of the camera (or, more generally, of the vehicle). Such a time-delayed image may appear unnatural to the user or give a false impression of the environment surrounding the vehicle due to the change in perspective.
It is an object of embodiments of the invention to at least mitigate one or more of the problems of the prior art. It is an object of certain embodiments of the invention to aid a driver of a vehicle. It is an object of embodiments of the invention to improve a driver’s understanding of the environment surrounding a vehicle.
SUMMARY OF THE INVENTION
Aspects and embodiments of the invention provide a display method, a computer program product, a display apparatus and a vehicle as claimed in the appended claims.
According to an aspect of the invention, there is provided a method for correcting for changes in orientation of a vehicle, the method comprising: obtaining (1) a first image including a first region external to the vehicle and first orientation information indicative of the orientation of the vehicle at a first time corresponding to when the first image is captured; obtaining (2) second orientation information indicative of the orientation of the vehicle at a second time later than the first time; determining (3) a change in vehicle orientation between the first and second times using the first orientation information and the second orientation information; generating (4) a modified first image using the first image and the determined change in vehicle orientation, the modified first image being representative of the first region from the perspective of the vehicle at the second time; and rendering at least part of the modified first image.
The orientation information may comprise a rotational position of the vehicle relative to a predetermined reference or a predetermined previous orientation of the vehicle. The orientation information may define a rotational position of the vehicle about three orthogonal axes. The determined change in vehicle orientation between the first and second times may comprise a rotation about one or more of the three orthogonal axes.
Generating a modified first image may comprise rotating the first image about one or more axes in dependence on the determined change in vehicle orientation.
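By way of illustration only, the following sketch shows one possible way of generating such a modified first image, under the simplifying assumption that the change between the first and second times is a pure rotation of a calibrated pinhole camera. The use of Python with NumPy and OpenCV, the function names, the roll/pitch/yaw convention and the intrinsic matrix K are assumptions made for the sketch and are not features of the claimed method.

```python
import numpy as np
import cv2

def rotation_from_rpy(roll, pitch, yaw):
    """Rotation matrix built from roll, pitch and yaw (radians), composed as Rz @ Ry @ Rx."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def modify_first_image(first_image, delta_rpy, K):
    """Warp an image captured at the first time towards the perspective of the
    vehicle at the second time, assuming the camera has undergone a pure
    rotation delta_rpy (roll, pitch, yaw in radians) between the two times.
    K is the 3x3 camera intrinsic matrix obtained from calibration."""
    R = rotation_from_rpy(*delta_rpy)
    # For a pure camera rotation the image-to-image mapping is a homography;
    # the scene appears to rotate in the opposite sense to the camera.
    H = K @ R.T @ np.linalg.inv(K)
    h, w = first_image.shape[:2]
    return cv2.warpPerspective(first_image, H, (w, h))
```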
The first image and the first orientation information may be stored until the second time.
The display method may comprise: obtaining (2, 600) a second image at the second time of a second region external to the vehicle; generating (5, 614) a composite image from the second image and the modified first image by matching (612) portions of the modified first image and the second image; and rendering (616) at least part of the composite image. The composite image comprises a first portion generated from the second image and a second portion generated from the modified first image, the second portion not being visible within the second image.
The first and second images may be captured by a common image sensor mounted upon the vehicle.
The composite image may comprise a 3-Dimensional, 3D, representation or a 2-Dimensional, 2D, representation of the environment surrounding the vehicle and extending at least partially underneath the vehicle.
Obtaining a first image may comprise capturing a series of image frames; and wherein the method comprises: determining a position of the vehicle; and storing an indication of the position of the vehicle with at least a portion of an image frame.
Matching portions of the modified first image and the second image may comprise performing pattern matching to identify features present in overlapping portions of the modified first image and the second image, such that those features are correlated in the composite image.
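As a non-limiting illustration of such pattern matching, the sketch below uses ORB feature detection and RANSAC-filtered matching (via OpenCV) to correlate features present in overlapping portions of the modified first image and the second image; the feature detector, the thresholds and the function name are choices made for the example rather than requirements of the method.

```python
import numpy as np
import cv2

def match_overlapping_features(modified_first, second, min_matches=10):
    """Estimate the homography that registers the modified (historic) first
    image to the live second image from features in their overlapping regions."""
    to_gray = lambda im: cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) if im.ndim == 3 else im
    orb = cv2.ORB_create(nfeatures=1500)
    k1, d1 = orb.detectAndCompute(to_gray(modified_first), None)
    k2, d2 = orb.detectAndCompute(to_gray(second), None)
    if d1 is None or d2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # insufficient overlap to correlate the two images
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards outliers so that only consistently matched features
    # determine how the two images are correlated in the composite.
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```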
According to a further aspect of the invention, there is provided a computer program product storing computer program code which is arranged when executed to implement the above method.
According to a further aspect of the invention, there is provided a vehicle orientation correction apparatus, the apparatus comprising: a processing means (704) arranged to: receive first orientation information indicative of the orientation of the vehicle at a first time and second orientation information indicative of the orientation of the vehicle at a second time later than the first time; determine (3) a change in vehicle orientation between the first and second times using the first orientation information and the second orientation information; and generate (4) a modified first image using a first image and the determined change in vehicle orientation; wherein the first image is captured at the first time and includes a first region external to the vehicle at the first time; and wherein the modified first image is representative of the first region from the perspective of the vehicle at the second time.
According to a further aspect of the invention, there is provided a system comprising: the above apparatus; an image obtaining means arranged to obtain the first image; and a sensor arranged to obtain the first and second orientation information.
The image obtaining means may comprise a camera or other form of image capture device arranged to generate and output still images or moving images. The display means may comprise a display screen, for instance an LCD display screen suitable for installation in a vehicle. Alternatively, the display means may comprise a projector for forming a projected image, or any other form of display means such as a head-up display. The processing means may comprise a controller or processor and may be integrated within a vehicle ECU.
According to a further aspect of the invention, there is provided a vehicle comprising the above apparatus or system.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described by way of example only, with reference to the accompanying figures, in which:
Figure 1 shows an illustration of a typical view from a conventional vehicle;
Figure 2 shows an illustration of an improved view from a vehicle;
Figure 3 illustrates a portion of a vehicle operating to provide the improved view of Figure 2;
Figure 4 illustrates a composite 3D image derived from vehicle mounted cameras;
Figure 5 illustrates a composite 2D image, specifically a bird’s eye view, derived from vehicle mounted cameras;
Figure 6 illustrates a method for providing a composite image;
Figure 7 illustrates an apparatus for implementing the method of Figure 6;
Figure 8 illustrates a first image of a region in front of a vehicle captured by a vehicle mounted and forward facing camera at a first time;
Figure 9 illustrates a second image of a region in front of a vehicle captured by a vehicle mounted and forward facing camera at a second time later than the first time, the vehicle position and orientation having changed between the first time and the second time;
Figure 10 illustrates a composite image formed by stitching the second image and a portion of the first image;
Figure 11 illustrates the first and second images showing a rotation of the first image to reflect the change in orientation of the vehicle between the first time and the second time as part of a process of perspective correction in accordance with embodiments of the present invention;
Figure 12 illustrates a composite image formed by stitching the second image and a rotated and enlarged portion of the first image as part of a process of perspective correction in accordance with embodiments of the present invention; and
Figure 13 illustrates in the form of a flow chart a method of correcting the perspective of a time-delayed image during generation of a composite image according to the method of Figure 6 in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
Figure 1 illustrates a typical view 100 from a conventional vehicle. The view is from an interior of the vehicle through a windscreen or windshield 110 of the vehicle viewing forwards. A portion of a bonnet or hood 120 of the vehicle is visible extending forward from beneath the windscreen 110. The vehicle is travelling along a roadway 130 which is visible in front of the vehicle. As can be appreciated, the bonnet 120 obscures the view of the roadway 130 close to the vehicle from a driver’s point of view, seated behind the windscreen 110. This problem is exacerbated when the vehicle is inclined with respect to the roadway 130 ahead of the vehicle, i.e. when an angle between the vehicle and the roadway ahead is increased, such as when the vehicle is at the top of a descent (and not yet descending a slope) or is inclined upward on a small undulation or towards the top of an ascent. In these situations the roadway 130 may have reduced visibility from the vehicle driver’s perspective, particularly due to being obscured by the bonnet 120.
More generally, it may be considered that, from the viewing position of the driver, the view of the roadway 130 ahead is partially occluded both by external portions of the vehicle (especially the bonnet) and internal portions of the vehicle, for instance the bodywork surrounding the windscreen 110 and the dashboard. In the following description of the invention, where reference is made to the view of the driver or from the driver’s position, this should be considered to encompass the view of a passenger, though clearly for manually driven vehicles it is the driver’s view that is of paramount importance. Of particular importance are those portions of the roadway 130 close to the front of the vehicle that are occluded. It will be appreciated that the driver’s view of the environment surrounding the vehicle on all sides is similarly restricted by the field of view available through each vehicle window from the driver’s viewing position. In general, a driver is unable to see portions of the environment close to the vehicle due to restrictions imposed by vehicle bodywork.
It is becoming commonplace for vehicles to be provided with one or more cameras to provide live video images (or still images) of portions or regions of the environment surrounding a vehicle. Such images may then be displayed for the benefit of the driver, for instance on a dashboard mounted display screen. In particular, it is well known to provide at least one camera towards the rear of the vehicle directed generally behind the vehicle and downwards to provide live video images to assist a driver who is reversing (it being the case that the driver’s natural view of the environment immediately behind the vehicle is particularly limited). It is known to provide multiple such cameras to provide live imagery of the environment surrounding the vehicle on multiple sides, for instance displayed on a dashboard or instrument panel mounted display screen. Such cameras may be mounted externally upon the vehicle, or internally and directed outwards through the vehicle glazing in order to capture images. Such cameras may be provided at varying heights, for instance generally at roof level, driver’s eye level or some suitable lower location, to avoid vehicle bodywork obscuring their view of the environment immediately adjacent to the vehicle.
One solution to the above described problem of poor visualisation of the roadway immediately ahead of a vehicle will now be described in relation to Figures 2 and 3. The vehicle may be a land-going vehicle, such as a wheeled vehicle. Figure 2 illustrates an improved forward view 200. The view 200 is from an interior of the vehicle forwards through a windshield or windscreen 210 of the vehicle, as in Figure 1. A portion of a bonnet or hood 220 of the vehicle is visible extending forward from beneath the windscreen 210.
The vehicle shown in Figure 2 comprises a display means which is arranged to display an image 240 thereon. The image 240 is an image, captured by a forward facing camera, of the roadway 230 in front of the vehicle. The image 240 is captured and stored for later display, as is described in greater detail below. The image is displayed so as to overlie a portion of the vehicle. This is represented in Figure 2 by the image 240 revealing the edges of the roadway 230 extending underneath the bonnet 220. The overlaid image 240 provides the impression of a portion of the vehicle being at least partly transparent from the perspective of the driver. The displayed information provides a representation of what would be visible along a line of sight through the vehicle, were it not for the fact that a portion of the vehicle is occluding that line of sight.
As shown in Figure 2, the image 240 is displayed to overlie a portion of the vehicle’s body, in this case the bonnet 220 of the vehicle. It will be realised that by extension image 240 may be displayed to overlie other internal or external portions of the vehicle. The image 240 is arranged to overlie the bonnet 220 of the vehicle from a point of view of the driver of the vehicle. The display means may be arranged to translucently display image 240 thereon such that the portion of the vehicle body may still be perceived, at least faintly, underneath the displayed image 240.
The display means may comprise a head-up display means for displaying information in a head-up manner to at least the driver of the vehicle. The head-up display may form part of, consist of or be arranged proximal to the windscreen 210 such that the image 240 is displayed to overlie the bonnet 220 of the vehicle. By overlie it is meant that the displayed image 240 appears upon (or in front of) the bonnet 220. Where images of other portions of the environment surrounding the vehicle are to be displayed, the head-up display may be similarly arranged relative to another window of the vehicle. An alternative is for the display means to comprise a projection means. The projection means may be arranged to project an image onto an interior portion of the vehicle, such as onto a dashboard, door interior, or other interior components of the vehicle. The projection means may comprise a laser device for projecting the image onto the vehicle interior.
A method of providing the improved view of Figure 2 begins with obtaining image data. The image data may be obtained by a processing means, such as a processing device. The image data may be for a region ahead of the vehicle. The image data may be obtained by the processing device from one or more image sensing means, such as cameras, associated with the vehicle. As will be explained in connection with Figure 3, a camera may be mounted upon the front of the vehicle to view forwards there-from in a driving direction of the vehicle. Where the images concerned are to another side of the vehicle then clearly the camera position will be appropriately shifted. The camera may be arranged so as to obtain image data of the environment in front of the vehicle that is not obscured by the bonnet 220. As will be described in greater detail below, appropriate time shifting of the images as the vehicle moves forward allows for images corresponding to a view of the driver or a passenger without the bonnet 220 being present to be provided to the display means. That is, the display means may output image data that would be perceived by the driver if the bonnet 220 was not present i.e. not obstructing the driver’s view. In this way, the displayed image 240 comprises an appropriately time-shifted view of the roadway 130 immediately ahead of the vehicle.
As shown in Figure 3, which illustrates a front portion of a vehicle in side-view, a camera 310 may be mounted at a front of the vehicle lower than a plane of the bonnet 220, such as behind a grill of the vehicle 300. Alternatively, or in addition, a camera 320 may be positioned above the plane of the bonnet 220, for instance at roof level or at an upper region of the vehicle’s windscreen 210. The field of view of each camera may be generally downward to output image data for a portion of ground ahead of the vehicle’s current location. Each camera outputs image data corresponding to a location ahead of the vehicle 300. It will be realised that the camera may be mounted in other locations. When appropriately delayed, as will be described, the display means may display image data corresponding to a region forward of the vehicle which is obscured from the driver’s view by the bonnet 220 or other parts of the vehicle. The region may, for instance, be a region which is up to 10 or 20 m ahead of the vehicle.
The image processing operation comprises a delay being introduced into the image data. The delay time may be based upon a speed of travel of the vehicle. The delay may allow the displayed image data to correspond to a current location of the vehicle. For example, if the image data is for a location around 20m ahead of the vehicle the delay may allow the location of the vehicle to approach the location of the image data such that, when the representation is displayed, the location corresponding to the image data is that which is obscured from the passenger’s view by the bonnet 220. In this way the displayed representation matches a current view of the driver. It will be appreciated that the delay may also be variable according to the driver’s viewing position given that the driver’s viewing position affects the portion of the roadway occluded by the bonnet 220. The image processing operation may be performed by the processing device.
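Purely to illustrate the relationship between vehicle speed, look-ahead distance and delay, a minimal frame-selection sketch is given below; the 10 m look-ahead, the buffer structure and the class name are assumptions for the example, and a production system would also account for camera geometry and the driver's eye position as discussed herein.

```python
from collections import deque

class DelayedFrameSelector:
    """Buffer camera frames and select the one whose imaged ground patch is
    the patch currently hidden by the bonnet, assuming the camera images the
    ground a fixed distance ahead of the vehicle."""

    def __init__(self, look_ahead_m=10.0):
        self.look_ahead_m = look_ahead_m
        self.buffer = deque()  # (timestamp_s, frame) pairs, oldest first

    def push(self, timestamp_s, frame):
        self.buffer.append((timestamp_s, frame))

    def select(self, now_s, speed_mps):
        if speed_mps < 0.1:
            return None  # effectively stationary: keep showing the previous frame
        # Time taken for the vehicle to cover the look-ahead distance.
        delay_s = self.look_ahead_m / speed_mps
        target_s = now_s - delay_s
        # Discard frames older than the one that is about to be displayed.
        while len(self.buffer) > 1 and self.buffer[1][0] <= target_s:
            self.buffer.popleft()
        if not self.buffer or self.buffer[0][0] > target_s:
            return None  # the vehicle has not yet reached the imaged ground
        return self.buffer[0][1]
```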
The image data - that is, a time delayed captured image - is displayed so as to overlie a portion of the vehicle’s body from the viewer’s point of view, such as the driver’s point of view. Where it is determined that the driver’s eyes, and thus their point of view, are positioned lower in the vehicle than the average driver, then the time delay applied to the captured image may need to be increased, as parts of the vehicle body such as the bonnet 220 will obscure a larger area immediately in front of the vehicle from the driver’s point of view. Where the driver is relatively tall and it is determined that the driver’s eyes, and thus their point of view, are positioned higher in the vehicle than the average driver, then the time delay applied to the captured image may need to be reduced, as parts of the vehicle body such as the bonnet 220 will obscure a smaller area immediately in front of the vehicle from the driver’s point of view as compared to a driver of average stature. The method may be performed continually in a loop until a predetermined event occurs, such as a user interrupting the method, for example by activating a control within the vehicle. It will be realised that the predetermined event may be provided from other sources. It will be appreciated that the position of the driver’s eyes relative to the vehicle may be determined automatically using known methods employing a driver facing camera, or may be adjusted manually by the driver themselves.
The captured image may be displayed upon a display apparatus provided within the vehicle such that the displayed information overlies a portion of the vehicle under the control of a processing device. The processing device may perform one or more image processing operations on the representation, such as altering a perspective of the image data as described in greater detail below and/or introducing a delay to the image data as described above. The perspective of the image data may be altered to match a viewing angle of the driver. The processing device may be arranged to receive image data corresponding to a view of the driver and to determine their viewing direction based thereon, such as based on the driver’s eye position, or may receive data indicative of the driver’s viewing direction from another sub-system of the vehicle. The image data may also be processed to adjust the representation to match a shape of the vehicle’s body, for example to adjust for contours in the vehicle’s bonnet shape.
The display device may comprise a projector for projecting light which is operably controlled by the processing device to project the representation by emitting light toward an optical combiner. The projection device and combiner together form a head-up display (HUD). When no light is being emitted by the projection device the combiner may be generally imperceptible to the driver of the vehicle, but when light is projected from the projection device and hits the combiner an image is viewed thereon by the passenger. The combiner is positioned such that an image viewed thereon by the driver appears to overlie a portion of the vehicle’s body, such as the bonnet. That is, the image appears above the bonnet.
For the improved forwards driver view described above in connection with Figures 2 and 3, an image is displayed to overlie an external portion of the vehicle. As noted previously, this concept is extensible to displaying an image derived from a camera capturing images of the environment surrounding a vehicle so as to also overlie at least a portion of an interior of the vehicle. As previously discussed, from the perspective of a driver a view external to the vehicle can be obscured both by the vehicle’s body external to a passenger compartment of the vehicle, for instance the bonnet, and also by interior portions of the vehicle, such as the inside of a door. To address this broader problem, one or more further HUDs may be collocated with or incorporated into one or more vehicle windows other than the windscreen.
Alternatively or in addition, an interior display means may provide an image interior to the vehicle for displaying one or both of image data and/or a representation of one or more components of the vehicle. The interior display means may comprise at least one projection device for projecting an image onto interior surfaces of the vehicle. The interior surfaces may comprise a dashboard, door interior or other interior surfaces of the vehicle. The projection device may be arranged in an elevated position within the vehicle to project the images downward onto the interior surfaces of the vehicle. The head-up display means and interior display means may both be communicatively coupled to a control device, such as the processing device illustrated in Figure 7, which is arranged to divide image data for display there-between. By so doing, an image produced jointly between the head-up display means and interior display means provides a greater view of objects external to the vehicle. The view may be appreciated not only generally ahead of the driver, but also to a side of the driver or passenger when images are projected onto interior surfaces of the vehicle indicative of image data external to the vehicle and/or one or more components of the vehicle.
As will now be described, the improved forwards view illustrated in Figure 2 is extensible to a system which allows a driver to view the terrain underneath a vehicle through the use of historic (that is, time delayed) images obtained from a vehicle camera system, for instance the vehicle camera system illustrated in Figure 3. A suitable vehicle camera system comprises one or more video cameras positioned upon a vehicle to capture video images of the environment surrounding the vehicle which may be displayed to aid the driver.
Referring to Figure 4, this schematically illustrates a composite image surrounding a vehicle 400. The composite image is formed by taking video images (or still images) derived from multiple vehicle mounted cameras and combining the images to form a composite image illustrating the environment surrounding the vehicle. Specifically, the multiple images may be combined to form a 3-Dimensional (3D) composite image that may, for instance, be generally hemispherical as illustrated by outline 402. This combination of images may be referred to as stitching. The composite image is formed by mapping the images obtained from each camera onto an appropriate portion of the hemisphere. Given a sufficient number of cameras, and their appropriate placement upon the vehicle to ensure appropriate fields of view, it will be appreciated that the composite image may thus extend all around the vehicle and from the bottom edge of the vehicle on all sides up to a predetermined horizon level illustrated by the top edge 404 of hemisphere 402. It will be appreciated that it is not essential that the composite image extends all of the way around the vehicle. For instance, in some circumstances it may be desirable to stitch only camera images projecting generally in the direction of motion of the vehicle and to either side, in directions in which the vehicle may be driven. This hemispherical composite image may be referred to as a bowl. Of course, the composite image may not be mapped to an exact hemisphere as the images making up the composite may extend higher or lower, or indeed over the top of the vehicle to form substantially a composite image sphere. It will be appreciated that the images may alternatively be mapped to any 3D shape surrounding the vehicle, for instance a cube, cylinder or more complex 3D shape, which may be determined by the number and position of the cameras. It will be appreciated that the extent of the composite image is determined by the number of cameras and their camera angles. The composite image may be formed by appropriately scaling and/or stretching the images derived from each camera to fit to one another without leaving gaps (though in some cases gaps may be left where the captured images do not encompass a 360 degree view around the vehicle).
The composite image may be displayed to the user according to any suitable display means, for instance the Head-Up Display (HUD), projection systems or dashboard mounted display systems described above. While it may be desirable to display at least a portion of the 3D composite image viewed, for instance, from an internal position in a selected viewing direction, optionally a 2-Dimensional (2D) representation of a portion of the 3D composite image may be displayed. Alternatively it may be that a composite 3D image is never formed - the video images derived from the cameras being mapped only to a 2D view of the environment surrounding the vehicle. This may be a side view extending from the vehicle, or a plan view such as is shown in Figure 5.
Figure 5 shows a composite image 501 giving a bird’s eye view of the environment surrounding a vehicle indicated generally at 500, also referred to as a plan view. Such a plan view may be readily displayed upon a conventional display screen mounted inside the vehicle 500 and provides useful information to the driver concerning the environment surrounding the vehicle 500 extending up close to the sides of the vehicle 500. From the driving position it is difficult or impossible to see the ground immediately adjacent to the vehicle 500 and so the plan view of Figure 5 is a significant aid to the driver. The ground underneath the vehicle 500 remains obscured from a camera live view and so may typically be represented in the composite image 501 by a blank region 502 at the position of the vehicle 500, or a representation of a vehicle to fill the blank. Without providing cameras underneath the vehicle 500, which is undesirable as discussed above, for a composite image formed solely from stitched live camera images, the ground underneath the vehicle 500 cannot be seen.
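One common way of producing such a plan view from an outward-facing camera is inverse perspective mapping onto an assumed flat ground plane. The sketch below is illustrative only: the four pixel/ground correspondences would come from an offline calibration, and the scale, axis convention and function name are assumptions of the example.

```python
import numpy as np
import cv2

def plan_view(frame, image_points, ground_points_m, out_size_px=(800, 800), px_per_m=40.0):
    """Warp one camera frame onto a top-down (bird's eye) ground-plane view.

    image_points:    four pixel coordinates of ground-plane landmarks in the frame.
    ground_points_m: the same four points, in metres, in a vehicle-centred ground
                     frame (x to the right, y forwards).
    """
    w, h = out_size_px
    # Metres to output pixels, vehicle origin at the image centre, forward = up.
    dst = np.float32([[w / 2 + x * px_per_m, h / 2 - y * px_per_m]
                      for x, y in ground_points_m])
    H = cv2.getPerspectiveTransform(np.float32(image_points), dst)
    return cv2.warpPerspective(frame, H, (w, h))
```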
As will now be described, in addition to the cameras being used to provide a composite live image of the environment surrounding the vehicle, historic images may be incorporated into the composite image to provide imagery representing the terrain under the vehicle - that is, the terrain within the boundary of the vehicle. By historic images, it is meant images that were captured previously by the vehicle camera system, for instance images of the ground in front of or behind the vehicle; the vehicle subsequently having driven over that portion of the ground. The historic images may be still images or video images or frames from video images. Such historic images may be used to fill the blank region 502 in Figure 5.
The composite image may be formed by combining the live and historic video images, and in particular by performing pattern matching to fit the historic images to the live images thereby filling the blind spot in the composite image comprising the area under the vehicle. The use of pattern matching provides particular improvements in the combining of live and historic images. The camera system comprises at least one camera and a buffer arranged to buffer images as the vehicle progresses along a path. The vehicle path may be determined by any suitable means, including but not limited to a satellite positioning system such as GPS (Global Positioning System), IMU (Inertial Measurement Unit), wheel ticks (tracking rotation of the wheels, combined with knowledge of the wheel circumference) and image processing to determine movement according to shifting of images between frames. At locations where the blind spot from the live images overlaps with buffered images, the area of the blind spot copied from delayed video images is pattern matched through image processing to be combined with the live camera images forming the remainder of the composite image.
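As a simple illustration of dead reckoning from wheel ticks of the kind mentioned above, the sketch below integrates the distance implied by an encoder count along the vehicle heading; the tick counts, wheel circumference and axis convention are invented for the example, and the heading change could equally come from an IMU or from steering geometry.

```python
import math

def update_pose(x_m, y_m, heading_rad, wheel_ticks, ticks_per_rev,
                wheel_circumference_m, yaw_change_rad=0.0):
    """Dead-reckon the vehicle pose from a wheel-tick count and a heading change.
    Distances are in metres and angles in radians; x is to the right (east) and
    y is forwards (north) in an arbitrary local frame."""
    distance_m = (wheel_ticks / ticks_per_rev) * wheel_circumference_m
    # Integrate along the average heading over the interval (midpoint rule).
    mid_heading = heading_rad + 0.5 * yaw_change_rad
    x_m += distance_m * math.sin(mid_heading)
    y_m += distance_m * math.cos(mid_heading)
    return x_m, y_m, heading_rad + yaw_change_rad

# 42 ticks of a 96-tick-per-revolution encoder on a 2.1 m circumference wheel,
# driving straight ahead, moves the vehicle roughly 0.92 m forwards.
print(update_pose(0.0, 0.0, 0.0, 42, 96, 2.1))
```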
Referring now to Figure 6, this illustrates a method of forming a composite image from live video images and historic video images. At step 600 live video frames are obtained from the vehicle camera system. The image data may be provided from one or more cameras facing outwards from the vehicle, as previously explained. In particular, one or more cameras may be arranged to view in a generally downward direction in front of or behind the vehicle at a viewing point a predetermined distance ahead of the vehicle. It will be appreciated that such cameras may be suitably positioned to capture images of portions of the ground which may later be obscured by the vehicle.
At step 602 the live frames are stitched together to form the composite 3D image (for instance, the image “bowl” described above in connection with Figure 4) or a composite 2D image. Suitable techniques for so combining video images will be known to the skilled person. It will be understood that the composite image may be formed continuously according to the 3D images or processed on a frame-by-frame basis. Each frame, or perhaps only a portion of the frames such as every nth frame for at least some of the cameras, is stored for use in fitting historical images into a blank region of the composite image currently displayed (this may be referred to as a live blind spot area). For instance, where the bird’s eye view composite image 500 of Figure 5 is displayed on a screen, the live blind spot area is portion 502.
According to certain embodiments, to constrain the image storage requirements, only video frames from cameras facing generally forwards (or forwards and backwards) may be stored, as it may only be necessary to save images of the ground in front of the vehicle (or in front and behind) that the vehicle may subsequently drive over in order to supply historic images for inserting into the live blind spot area. To further reduce the storage requirements it may be that not the whole of every image frame is stored. For a sufficiently fast stored frame rate (or slow driving speed) there may be considerable overlap between consecutive frames (or intermittent frames determined for storage if only every nth frame is to be stored) and so only an image portion differing from one frame for storage to the next may be stored, together with sufficient information to combine that portion with the preceding frame. Such an image portion may be referred to as a sliver or image sliver. It will be appreciated that, other than an initially stored frame, every stored frame may require only a sliver to be stored. It may be desirable to periodically store a whole frame image to mitigate the risk of processing errors preventing image frames from being recreated from stored image slivers. This identification of areas of overlap between images may be performed by suitable known image processing techniques that may include pattern matching, that is, matching portions of images common to a pair of frames to be stored. For instance, pattern matching may use known image processing algorithms for detecting edge features in images, which may therefore suitably identify the outline of objects in images, those outlines being identified in a pair of images to determine the degree of image shift between the pair due to vehicle movement.
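A minimal sketch of extracting such an image sliver is given below; it estimates the inter-frame shift by phase correlation rather than by the edge-based pattern matching described above, and the assumption that new content scrolls in along the vertical image direction (as well as the sign convention of the returned shift) would need to be checked against the actual camera orientation.

```python
import numpy as np
import cv2

def extract_sliver(prev_frame, new_frame):
    """Return the strip of new_frame not visible in prev_frame, together with
    the estimated inter-frame shift needed to reassemble the frames later."""
    prev_g = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    new_g = cv2.cvtColor(new_frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    (dx, dy), _response = cv2.phaseCorrelate(prev_g, new_g)
    # For a forward-facing, downward-looking camera the dominant image motion
    # is vertical; only the rows that scrolled into view need to be stored.
    rows_new = int(round(abs(dy)))
    if rows_new == 0:
        return None, (dx, dy)  # negligible movement: nothing new to store
    sliver = new_frame[:rows_new] if dy < 0 else new_frame[-rows_new:]
    return sliver, (dx, dy)
```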
Each stored frame, or stored partial frame (or image sliver) is stored in combination with vehicle position information. Therefore, in parallel to the capturing of live images at step 600 and the live image stitching at step 602, at step 604 vehicle position information is received. The vehicle position information is used to determine the vehicle location at step 606. The vehicle position may be expressed as a coordinate, for instance a Cartesian coordinate giving X, Y and Z positions. The vehicle position may be absolute or may be relative to a predetermined point. The vehicle position information may be obtained from any suitable known positioning sensor, for instance GPS, IMU, knowledge of the vehicle steering position and wheel speed, wheel ticks (that is, information about wheel revolutions), vision processing or any other suitable technique. Vision processing may comprise processing images derived from the vehicle camera systems to determine the degree of overlap between captured frames, suitably processed to determine a distance moved through knowledge of the time between the capturing of each frame. This may be combined with the image processing for storing captured frames as described above, for instance pattern matching including edge detection. In some instances it may be desirable to calculate a vector indicating movement of the vehicle as well as the vehicle position, to aid in determining the historic images to be inserted into the live blind spot area, as described below.
Each frame that is to be stored (or sliver), from step 600, is stored in a frame store at step 608 along with the vehicle position obtained from step 606 at the time of image capture. That is, each frame is stored indexed by a vehicle position. The position may be an absolute position or relative to a reference datum. Furthermore, the position of an image may be given relative only to a preceding stored frame, allowing the position of the vehicle in respect of each historic frame to be determined relative to a current position of the vehicle by stepping backwards through the frame store and noting the shift in vehicle position until the desired historic frame is reached. Each record in the frame store may comprise image data for that frame (or image sliver) and the vehicle position at the time the frame was captured. That is, along with the image data, metadata may be stored including the vehicle position. The viewing angle of the frame relative to the vehicle position is known from the camera position and angle upon the vehicle (which as discussed above may be fixed or moveable). Such information concerning the viewing angle, camera position etc. may also be stored in frame store 608, which is shown representing the image and coordinate information as (frame <-> coord). It will be appreciated that there may be significant variation in the format in which such information is stored and the present invention is not limited to any particular image data or metadata storage technique, nor to the particulars of the position information that is stored.
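The record layout below is one possible, purely illustrative, realisation of such a position-indexed frame store; the field names, the simple linear search and the 5 m search radius are assumptions of the sketch and, as stated above, the invention is not limited to any particular storage technique.

```python
import math
from dataclasses import dataclass, field
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class StoredFrame:
    image: np.ndarray                         # full frame or image sliver
    position_xyz: Tuple[float, float, float]  # vehicle position at capture time
    heading_rad: float                        # vehicle heading at capture time
    camera_id: str                            # identifies the capturing camera,
                                              # whose pose on the vehicle is known

@dataclass
class FrameStore:
    frames: List[StoredFrame] = field(default_factory=list)

    def add(self, record: StoredFrame) -> None:
        self.frames.append(record)

    def nearest(self, position_xyz, max_distance_m=5.0) -> Optional[StoredFrame]:
        """Return the stored frame captured closest to a given position, e.g.
        the position of the ground now hidden underneath the vehicle."""
        best, best_d = None, max_distance_m
        for record in self.frames:
            d = math.dist(record.position_xyz, position_xyz)
            if d < best_d:
                best, best_d = record, d
        return best
```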
At step 610 a pattern recognition area is determined. The pattern recognition area comprises the area under the vehicle that cannot be seen in the composite image formed solely from stitched live images. Referring back to Figure 5, the pattern recognition area comprises the blind spot 502. Coordinates for the pattern recognition area can be determined from the vehicle positioning information obtained at step 604 and as processed to determine the current vehicle position at step 606. Assuming highly accurate vehicle position information obtained at step 604, it will be appreciated that the current position of the vehicle may be exactly determined. Historic image data from frame store 608, that is, previously captured image frames, may be used to fill in blind spot 502 based on knowledge of the vehicle position at the time the historic images were captured. Specifically, the current blind spot may be mapped to an area of ground which is visible in historic images captured before the vehicle obscured that portion of the ground. The historic image data may be used through knowledge of the area of ground in the environment surrounding the vehicle in each camera image, as a result of the position of each camera upon the vehicle and the camera angle being known. As such, if the current vehicle position is known, image data showing the ground in the blind spot may be obtained from images captured at an earlier point in time, before the vehicle obscured that portion of ground. Such image data may be suitably processed to fit the current blind spot and inserted into the stitched live frames. Such processing may include scaling and stretching the stored image data to account for a change in perspective from the outward looking camera angle to how the ground would appear if viewed directly from above. Additionally, such processing may include recombining multiple stored image slivers and/or images from multiple cameras.
However, the above described fitting of previous stored image data into a live stitched composite image is predicated on exact knowledge of the vehicle position both currently and when the image data is stored. It may be the case that it is not possible to determine the vehicle position to a sufficiently high degree of accuracy. As an example, with reference to Figure 5, the true current vehicle position is represented by box 502, whereas due to inaccurate position information the current vehicle position determined at step 606 may be represented by box 504. In the example of Figure 5 the inaccuracy comprises the determined vehicle position being rotated relative to the true vehicle position. Equally, translational errors may occur. Errors in calculating the vehicle position may arise due to the vehicle wheels sliding, where wheel ticks, wheel speed and/or steering input are used to determine relative changes in vehicle position. Where satellite positioning is used it may be the case that the required level of accuracy is not available.
It will be appreciated that where the degree of error in the vehicle position differs between the time at which an image is stored and the time at which it is fitted into a live composite image this may cause undesirable misalignment of the live and historic images.
Due to the risk of misalignment, at step 612 pattern matching is performed within the pattern recognition area to match regions of live and stored images. As noted above in connection with storing image frames, such pattern matching may include suitable edge detection algorithms. The determined pattern recognition region at step 610 is used to access stored images from the frame store 608. Specifically, historic images containing image data for ground within the pattern recognition area are retrieved. The pattern recognition area may comprise the expected vehicle blind spot and a suitable amount of overlap on at least one side to account for misalignment. Step 612 further takes as an input the live stitched composite image from step 602. The pattern recognition area may encompass portions of the live composite view adjacent to the blind spot 502. Pattern matching is performed to find overlapping portions of the live and historic images, such that close alignment between the two can be determined and used to select appropriate portions of the historic images to fill the blind spot. It will be appreciated that the amount of overlap between the live and historic images may be selected to allow for a predetermined degree of error between the determined vehicle position and its actual position. Additionally, to take account of possible changes in vehicle pitch and roll between a current position and a historic position as a vehicle traverses undulating terrain, the determination of the pattern recognition region may take account of information from sensor data indicating the vehicle pitch and roll. This may affect the degree of overlap of the pattern recognition area with the live images for one or more sides of the vehicle. It will be appreciated that according to some embodiments it may not be necessary to determine a pattern recognition area; rather, the pattern matching may comprise a more exhaustive search through historic images (or historic images with an approximate time delay relative to the current images) relative to the whole composite live image. However, by constraining the region within the live composite image within which pattern matching to historic images is to be performed, and constraining the volume of historic images to be matched, the computational complexity of the task and the time taken may be reduced.
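As one concrete, non-limiting way of carrying out this constrained matching, the sketch below searches for a historic image patch only within a rectangle covering the blind spot plus a margin of overlap, using normalised cross-correlation template matching; the area rectangle, the correlation measure and the function name are choices made for the example.

```python
import cv2

def match_in_recognition_area(live_composite, historic_patch, area_rect):
    """Locate the historic patch within the pattern recognition area of the
    live composite image (blind spot plus overlap margin) and return its
    best-match position in composite coordinates with a confidence score."""
    x, y, w, h = area_rect
    search_region = live_composite[y:y + h, x:x + w]
    # The patch must be no larger than the search region for matchTemplate.
    scores = cv2.matchTemplate(search_region, historic_patch, cv2.TM_CCOEFF_NORMED)
    _min_v, max_v, _min_loc, max_loc = cv2.minMaxLoc(scores)
    top_left = (x + max_loc[0], y + max_loc[1])
    return top_left, max_v  # a score close to 1.0 indicates a confident alignment
```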
At step 614 selected portions of one or more historic images or slivers are inserted into the blind spot in the composite live images to form a composite image encompassing both live and historic images. Furthermore, in addition to displaying a representation of the ground under the vehicle, according to certain embodiments of the invention a representation of the vehicle may be added to the output composite image. For instance, a translucent vehicle image, an outline of the vehicle, or a shaded or coloured region may be added. This may assist a driver in recognising the position of the vehicle and the portion of the image representing the ground under the vehicle.
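A minimal sketch of this insertion step, including an optional translucent vehicle overlay, is given below; the blend factor of 0.35 and the representation of the outline as a same-sized overlay image are illustrative assumptions only.

```python
import numpy as np

def fill_blind_spot(composite, historic_patch, top_left, vehicle_overlay=None, alpha=0.35):
    """Copy the aligned historic patch into the blind spot of the composite
    image and optionally blend a translucent vehicle outline over it so the
    driver can see where the vehicle sits relative to the revealed ground."""
    out = composite.copy()
    x, y = top_left
    h, w = historic_patch.shape[:2]
    out[y:y + h, x:x + w] = historic_patch
    if vehicle_overlay is not None:
        # vehicle_overlay is an image of the same size as the composite, drawn
        # only where the vehicle outline or shading should appear.
        mask = vehicle_overlay.any(axis=2)
        out[mask] = (alpha * vehicle_overlay[mask]
                     + (1.0 - alpha) * out[mask]).astype(out.dtype)
    return out
```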
In some embodiments, where the composite image is to be displayed overlying portions of the vehicle to give the impression of the vehicle being transparent or translucent (for instance using a HUD or a projection means as described above), the generation of a composite image may also require that a viewing direction of a driver of the vehicle is determined. For instance, a camera is arranged to provide image data of the driver from which the viewing direction of the driver is determined. The viewing direction may be determined from an eye position of the driver, performed in parallel to the other steps of the method. It will be appreciated that where the composite image or a portion of the composite image is to be presented on a display in the vehicle which is not intended to show the vehicle being see-through, there may be no need to determine the driver’s viewing direction.
The combined composite image is output at step 616. As discussed above, the composite image output may be upon any suitable image display device, such as head-up display, dashboard mounted display screen or a separate display device carried by the driver. Alternatively, portions of the composite image may be projected onto portions of the interior of the vehicle to give the impression of the vehicle being transparent or translucent. The present invention is not limited to any particular type of display technology.
Figure 7 illustrates an apparatus suitable for implementing the method of Figure 6. The apparatus may be entirely contained within a vehicle. One or more vehicle mounted cameras 700 (for instance, that of Figure 3) capture image frames used to form a live portion of a composite image or a historic portion of a composite image, or both. It will be appreciated that according to some embodiments of the invention separate cameras may be used to supply live and historic images or their roles may be combined. One or more position or movement sensors 702 may be used to sense the position of the vehicle or movement of the vehicle, including a rotational position of the vehicle. Camera 700 and sensor 702 supply data to a processor 704 and are under the control of the processor 704. The processor 704 buffers images from camera 700 in a buffer 706. The processor 704 further acts to generate a composite image including live images received from the one or more vehicle mounted cameras 700 and historic images from the buffer 706. The processor 704 controls a display 708 to display the composite image. It will be appreciated that the apparatus of Figure 7 may be incorporated within the vehicle of Figure 3, in which case camera 700 may be provided by one or more of cameras 310, 320. The display 708 is typically located in the vehicle cabin and may take the form of a dashboard mounted display, or any other suitable type, as described above. In some embodiments, at least a portion of the image processing may be performed by systems external to the vehicle.
The description above in connection with Figures 1 to 7 concerns the display of time delayed images or time delayed portions of images captured by one or more vehicle mounted cameras arranged to capture images of the environment surrounding the vehicle. The time delayed image may be displayed on its own, for instance in the example of Figure 2 (a see-through bonnet). Alternatively, one or more time delayed image (or portion thereof) may be stitched together with one or more live image to form a composite image, for example as described above in connection with Figure 4 to 6 (a see-through car).
It will be appreciated that when a time delayed image is displayed the vehicle orientation may have changed between the time at which the image is captured and the time at which it is displayed. The change in orientation may comprise one or more rotations about three orthogonal axes. Rotation about an axis generally extending from the front of a vehicle to the back of the vehicle may be referred to as roll. Rotation about an axis generally extending from the left hand side of a vehicle to the right hand side of the vehicle (left and right being defined relative to the front of the vehicle in the direction of travel) may be referred to as pitch. Rotation about an axis extending upwards from the vehicle may be referred to as yaw. It will be appreciated that a change in orientation is typically in combination with a change in position of the vehicle. The change in position of the vehicle is selected through appropriate control of the time delay in order to reveal portions of the environment surrounding the vehicle that are obscured at the display time, in particular the ground underneath the vehicle, as described above. A change in vehicle orientation may be particularly pronounced for a vehicle travelling off a paved road. For instance, a vehicle may be generally level at a first time when an image is captured, but rolled to the right at the time the image is displayed (rolled clockwise about the front-back axis when facing the direction of travel) as a result of driving forwards onto an area of ground which slopes downwards to the right. A further example for the case of a forward facing camera is where the image is captured at a time at which the vehicle is going up an incline and the image is displayed at the time when the incline levels off.
It will be appreciated that while, as described above, the captured image may be presented to the user to reveal the ground underneath the vehicle, the change in orientation between capture and display may cause the displayed time-delayed image to appear incorrect to the user. For the example of the see-through bonnet, the effect of a significant change of vehicle orientation may be that the time-delayed image displayed overlying the bonnet appears unnatural and does not match up with the environment surrounding the vehicle at the display time, which is directly visible to the user looking past the bonnet. Furthermore, for the example of a see-through car where live and historic images are stitched together, the effect of a significant change of vehicle orientation may be to cause a mismatch at the join between live and historic images. Worse still, if the change in orientation is particularly pronounced, the pattern matching of live and historic images to enable the stitching may fail altogether.
In accordance with embodiments of the present invention the problem of a change in vehicle orientation is addressed by correcting the perspective of a time-delayed or historic image prior to its display (or prior to stitching the historic image with a live image). The perspective correction serves to cause the time-delayed image to appear as though it had been captured at the time of display, showing the same portion of the environment surrounding the vehicle in what would be its current perspective. This perspective correction is achieved by capturing vehicle orientation information at the time that an image is captured by a vehicle mounted camera. The vehicle orientation information may be defined relative to a fixed reference, for instance a vertical axis and a North bearing, or relative to a predetermined feature of the environment surrounding the vehicle. Alternatively, the vehicle orientation may be defined relative to a previous orientation, such as the vehicle orientation at the time the image processing system is enabled or the vehicle is turned on. The vehicle orientation may be stored together with the captured image data or separately.
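As a sketch only (field names are hypothetical), each stored frame might simply carry the orientation sample taken at capture time, whether that sample is absolute or expressed relative to a reference orientation such as the orientation at start-up:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CapturedFrame:
    image: np.ndarray         # pixels from the vehicle-mounted camera
    rotation_xyz: np.ndarray  # vehicle rotation about x, y, z at capture time
    position: np.ndarray      # vehicle position at capture time
    timestamp: float

def orientation_relative_to(sample_xyz, reference_xyz):
    """Express an orientation sample relative to a reference orientation,
    e.g. the orientation recorded when the image processing system was enabled."""
    return np.asarray(sample_xyz, dtype=float) - np.asarray(reference_xyz, dtype=float)
```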
At the time the stored image is to be displayed the vehicle orientation may be captured again and used to determine a change in orientation. This change in vehicle orientation may be referred to as an orientation delta. For the example of capturing orientation information with respect to three orthogonal axes the delta may comprise computed rotations about each respective axis. It will be noted that once the orientation delta is computed it is no longer relevant whether the vehicle orientation is defined absolutely with respect to fixed reference points or relative to a previous vehicle orientation.
The orientation delta may then be used to adjust the perspective of the captured image prior to its display or its further use to generate a composite image. It will be appreciated that this change of perspective may comprise an appropriate rotation applied to the captured image. The rotation may be applied in addition to skewing the captured image and any required scaling to account for movement of the vehicle. It will be appreciated that this perspective correction is distinct from the change of perspective described above, for instance to account for the viewing position of the driver or to generate a plan view from a composite 3D model, by virtue of the fact that it is based on an orientation delta calculated between the orientation of the vehicle at the time of image capture and the orientation of the vehicle at the time of image display. This contrasts with known, static changes in perspective. The effect of embodiments of the present invention is to ensure that time-delayed images provide a closer match to the changed perspective of the environment surrounding the vehicle either directly observed by the driver (for the see-through bonnet) or in the form of live images stitched together with the time-delayed images (for the see-through car). Accordingly, time-delayed images are presented to the user so as to appear as if the image had been captured from within the vehicle in its position at the time of display, rather than at the time it was in fact captured.
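For illustration, one conventional way to realise a rotation-only correction of this kind (not necessarily the exact transformation used in the disclosed method) is to warp the historic image with the homography H = K·R·K⁻¹ induced by the orientation delta, which is a good approximation when the imaged scene is distant compared with the vehicle's translation; the intrinsic matrix K and the sign convention of R are assumptions here.

```python
import numpy as np
import cv2

def correct_for_rotation(historic_img, K, R_delta):
    """Warp a historic frame so that it appears as seen from the display-time orientation.

    K       : 3x3 camera intrinsic matrix (assumed known from calibration).
    R_delta : 3x3 rotation taking camera coordinates at capture time to camera
              coordinates at display time (sign convention assumed).
    """
    H = K @ R_delta @ np.linalg.inv(K)   # pure-rotation homography
    h, w = historic_img.shape[:2]
    return cv2.warpPerspective(historic_img, H, (w, h))
```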
Referring back to the flowchart of Figure 6, it will be understood that this may be modified in accordance with the present invention by vehicle orientation information being obtained in addition to vehicle position information at step 604 and stored along with the captured images in the frame store at step 608. Perspective correction may be applied to image data from frame store 608 using further vehicle orientation information for the display time, derived from step 604, prior to performing pattern recognition and stitching at steps 612 and 614. A process of performing perspective correction is described in greater detail below in connection with the flow chart of Figure 13.
Referring back to Figure 7 it will be appreciated that the processor 704 is suitable for performing the perspective correction for image data obtained by camera 700 and further taking account of vehicle orientation information provided by position/movement sensor 702.
Perspective correction to provide an improved composite image will now be described with reference to Figures 8 to 12. Figure 8 illustrates a first image 800 of a region in front of a vehicle captured by a vehicle mounted and forward facing camera at a first time. It can be seen that the vehicle is proceeding along a track 802 and currently the vehicle is generally level: in particular there is no roll evident in the image of Figure 8. That there is no roll is illustrated by line 804 representing a horizontal plane.
Figure 9 illustrates a second image 900 of a region in front of a vehicle captured by a vehicle mounted and forward facing camera at a second time later than the first time, the vehicle position and orientation having changed between the first time and the second time. It is assumed that the direction of view of the camera capturing the first and second images is fixed relative to the vehicle. Specifically, in Figure 9 the vehicle has moved forwards along track 802 such that portions of the track 802 closest to the vehicle are no longer visible. The right hand side wheels of the vehicle have driven into a depression visible generally in region 806 of the image of Figure 8, which has caused the vehicle to roll to the right. The roll is evident from comparison of an edge 808 of the field, through which the track 802 passes in each of Figures 8 and 9, and also from fence lines 810 and 812 surrounding the field. The roll is also schematically represented by lines 902, 904 and angle 906, where line 902 is a horizontal plane as for line 804 in Figure 8 and angle 906 indicates the extent of the roll. In Figure 9 the appearance is that the image data is rotated anti-clockwise (when facing in the direction of travel) relative to Figure 8 as a result of the clockwise roll of the vehicle between the first time (Figure 8) and the second time (Figure 9).
Figure 10 illustrates a composite image formed by stitching the second image 900 and a portion of the first image 800. The stitched join is indicated by dashed line 1000. The first image 800 has been cropped and enlarged to match how that portion of the track would appear if directly visible at the time of capturing the second image 900 (owing to the fact that the vehicle is now closer to that portion of the track, for instance depression 806). It can be seen that there appears to be a mismatch between the first and second images 800, 900 at the join 1000 in that the track appears level in the portion of the first image 800 and sloping in the second image 900. Additionally, edges of track 802 do not closely line up at areas 1002 and 1004.
Turning now to Figure 11 this illustrates the first and second images 800, 900 showing a rotation of the first image 800 to counteract the effect of the change in orientation of the vehicle between the first time and the second time as part of a process of perspective correction in accordance with embodiments of the present invention. Specifically, to correct the clockwise roll of the vehicle between the time of capturing the first image 800 and the time of capturing the second image 900 the first image 800 has been rotated anti-clockwise by the same amount that the vehicle has appeared to roll clockwise, as illustrated schematically by lines 1100, 1102 and angle 1104, where line 1100 is a horizontal plane as for line 804 in Figure 8. The perspective correction is evident from comparison of the field edge 808 in each of the first and second images 800, 900 and also from fence lines 810 and 812. The rotation of the first image causes close alignment of field edge 808, fence lines 810 and 812 and also track 802.
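As a concrete sketch of the roll-only case of Figures 8 to 11, the historic image can be counter-rotated in the image plane by the measured roll delta; the function below uses OpenCV, assumes the roll delta is supplied in degrees, and leaves the sign convention as stated in the text (rotate by -x for a roll of x degrees).

```python
import cv2

def counter_roll(historic_img, roll_delta_deg):
    """Rotate the historic image about its centre by -roll_delta_deg so that it
    lines up with imagery captured after the vehicle has rolled by roll_delta_deg.
    The exact sign depends on the roll and image-axis conventions in use."""
    h, w = historic_img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -roll_delta_deg, 1.0)
    return cv2.warpAffine(historic_img, M, (w, h))
```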
Figure 12 illustrates a composite image formed by stitching the second image and a rotated and enlarged portion of the first image as part of a process of perspective correction in accordance with embodiments of the present invention. The first image is rotated as shown in Figure 11. The stitched join is indicated by dashed line 1200. The first image 800 has also been cropped and enlarged to match how that portion of the track would appear if directly visible at the time of capturing the second image 900 (owing to the fact that the vehicle is now closer to that portion of the track, for instance depression 806). It can be seen that there is much greater conformance between the first and second images 800, 900 at the join 1200 owing to the fact that the track has been rotated and so is sloping in both images 800, 900. Additionally, the edges of track 802 line up more closely in areas 1202 and 1204 than is apparent in Figure 10.
The focus of the present disclosure is upon perspective correction for time delayed images to account for changes in vehicle orientation between the time of image capture and the time of image display. It will be appreciated that this may comprise a constant value transformation for a given change in orientation. Further image manipulations such as cropping and enlargement may also comprise constant value transformations which may be readily determined through knowledge of the change in position and the change in orientation of the vehicle between the time of image capture and the time of image display.
Advantageously, by performing perspective correction of the first image prior to stitching live and time-delayed images the conformance between the images is improved and a more realistic appearance for the composite image can be achieved.
Figure 13 illustrates in the form of a flow chart a method of correcting the perspective of a time-delayed image during generation of a composite image according to the method of Figure 6 in accordance with an embodiment of the present invention.
At step 1 an image M is captured. This is labelled a “Previous Frame” as the method of Figure 13 concerns stitching of a live image (“Current Frame”) and a time-delayed image (Previous Frame). At the same time the orientation of the vehicle is captured, denoted by the angles of vehicle rotation about three orthogonal axes x, y, z. As described above this may be the rotation of the vehicle about the respective axes relative to an independent frame of reference or relative to an earlier orientation of the vehicle. The image M and the orientation information Mx,y,z are stored.
At step 2, at a later time, a Current Frame, image N, is captured along with the orientation information for the vehicle at that time, Nx,y,z.
At step 3 the orientation delta (Δ) is calculated according to equation 1. The orientation delta indicates the rotation of the vehicle about each respective axis from the time of image M to the time of image N.
Δ = [Nx - Mx, Ny - My, Nz - Mz] (1)
At step 4 the perspective of image M is corrected by applying an image transformation (T) to image M based on orientation delta Δ according to equation 2.
M' = T(M, Δ) (2)
Transformation T comprises rotating image M according to the calculated orientation delta. Specifically, transformation T comprises the delta of the 3D angles between the current frame and the previous frame converted to a rotation matrix which is applied to the previous frame. This may be represented in terms of an Euler angle transformation according to equation 3.
Rotated frame:

        | a11  a12  a13 |
    R = | a21  a22  a23 |     (3)
        | a31  a32  a33 |

Where
φ, θ, ψ = phi, theta, psi respectively for frame M
φ', θ', ψ' = phi, theta, psi respectively for frame N
a11 = cos ψ cos φ - cos θ sin φ sin ψ
a12 = cos ψ sin φ + cos θ cos φ sin ψ
a13 = sin ψ sin θ
a21 = -sin ψ cos φ - cos θ sin φ cos ψ
a22 = -sin ψ sin φ + cos θ cos φ cos ψ
a23 = cos ψ sin θ
a31 = sin θ sin φ
a32 = -sin θ cos φ
a33 = cos θ
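A direct transcription of equation (3) into code might look like the sketch below; it assumes the three angles supplied are the per-axis deltas between the previous and current frames, and that the Euler convention implied by the element expressions above applies.

```python
import numpy as np

def rotation_from_euler(phi, theta, psi):
    """Rotation matrix with elements a11..a33 as set out in equation (3)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [ cps * cph - cth * sph * sps,  cps * sph + cth * cph * sps,  sps * sth],
        [-sps * cph - cth * sph * cps, -sps * sph + cth * cph * cps,  cps * sth],
        [ sth * sph,                   -sth * cph,                    cth],
    ])
```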
As an example, transformation T may be understood as follows. As shown in Figures 8, 9 and 11, for a roll about a front-back (longitudinal) axis of the vehicle (which may be an X axis) of x degrees the transformation comprises rotating the image in the plane of the image by -x degrees. Pitching and yawing of the vehicle results in a perspective correction in which the image is rotated in the opposite direction about orthogonal axes lying in the plane of the image. The three orthogonal axes may be defined for each camera relative to the central axis of the field of view. More generally, the axes of rotation may be defined separately for each camera.
At step 5 the transformed Previous Frame image M' and the Current Frame image N are stitched together as described above in connection with Figure 6, which therefore is not repeated here. It will be appreciated that the stitching may include further processing of image M' such as cropping and enlarging.
At step 6 when a new frame is captured the Current Frame becomes the Previous Frame and the process is repeated. It will be appreciated that images may be captured and stored at predetermined intervals along with the vehicle orientation information at the time of image capture. To form a composite image the historic image data required to fill a blind spot may be derived from earlier captured and stored frames or frame slivers other than the immediately preceding frame (as described above in connection with Figure 6). The terms Current Frame and Previous Frame as used in connection with Figure 13 should not be considered to mean that only for consecutive frames can the perspective be corrected.
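Pulling steps 1 to 6 together, a simplified loop might look like the following sketch, which reuses the rotation_from_euler and correct_for_rotation sketches above; the camera, sensor and display objects are hypothetical, and step 5 is reduced to a naive paste at a fixed join row purely for illustration, whereas the disclosed method stitches by pattern matching.

```python
import numpy as np

def orientation_delta(n_xyz, m_xyz):
    """Equation (1): per-axis rotation from the Previous Frame to the Current Frame."""
    return np.asarray(n_xyz, dtype=float) - np.asarray(m_xyz, dtype=float)

def run_correction_loop(camera, sensor, display, K, join_row=400):
    prev_img = camera.grab_frame()              # step 1: Previous Frame M
    prev_xyz = sensor.orientation_xyz()         #         with orientation Mx,y,z
    while True:
        cur_img = camera.grab_frame()           # step 2: Current Frame N
        cur_xyz = sensor.orientation_xyz()      #         with orientation Nx,y,z
        delta = orientation_delta(cur_xyz, prev_xyz)        # step 3: equation (1)
        R = rotation_from_euler(*delta)                     # equation (3)
        corrected = correct_for_rotation(prev_img, K, R)    # step 4: M' = T(M, delta)
        # Step 5 (illustrative only): paste corrected historic rows below a fixed
        # join row; the disclosed method instead stitches by pattern matching.
        composite = cur_img.copy()
        composite[join_row:] = corrected[join_row:]
        display.show(composite)                             # output composite image
        prev_img, prev_xyz = cur_img, cur_xyz               # step 6: Current -> Previous
```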
Figure 13 provides an example of the use of perspective correction where images are to be stitched, including live images and historic images. For the example of perspective correction for the improved view of Figure 2, where there is no stitching, the process is generally the same except that no captured Current Frame is required, only knowledge of the vehicle orientation at the time of capturing of a Previous Frame and the vehicle orientation at the time of displaying the Previous Frame. Steps 5 and 6 may also be omitted.
It will be appreciated that embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. In particular, the method of Figures 6 and 13 may be implemented in hardware and/or software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, device or integrated circuits or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape. It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs that, when executed, implement embodiments of the present invention. Accordingly, embodiments provide a program comprising code for implementing a system or method as claimed in any preceding claim and a machine readable storage storing such a program. Still further, embodiments of the present invention may be conveyed electronically via any medium such as a communication signal carried over a wired or wireless connection and embodiments suitably encompass the same.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. The claims should not be construed to cover merely the foregoing embodiments, but also any embodiments which fall within the scope of the claims.

Claims (16)

1. A method for correcting for changes in orientation of a vehicle, the method comprising:
obtaining a first image including a first region external to the vehicle and first orientation information indicative of the orientation of the vehicle at a first time corresponding to when the first image is captured;
obtaining second orientation information indicative of the orientation of the vehicle at a second time later than the first time;
determining a change in vehicle orientation between the first and second times using the first orientation information and the second orientation information;
generating a modified first image using the first image and the determined change in vehicle orientation, the modified first image being representative of the first region from the perspective of the vehicle at the second time; and rendering at least part of the modified first image.
2. A method according to claim 1, wherein the orientation information comprises a rotational position of the vehicle relative to a predetermined reference or a predetermined previous orientation of the vehicle.
3. A method according to claim 2, wherein the orientation information defines a rotational position of the vehicle about three orthogonal axes.
4. A method according to claim 3, wherein the determined change in vehicle orientation between the first and second times comprises a rotation about one or more of the three orthogonal axes.
5. A method according to claim 3 or claim 4, wherein generating a modified first image comprises rotating the first image about one or more axes in dependence on the determined change in vehicle orientation.
6. A method according to any one of the preceding claims wherein the first image and the first orientation information are stored until the second time.
7. A method according to any one of the preceding claims, comprising:
obtaining a second image at the second time of a second region external to the vehicle; and generating a composite image from the second image and the modified first image by matching portions of the modified first image and the second image; and rendering at least part of the composite image.
8. A method according to claim 7, wherein the first and second images are captured by a common image sensor mounted upon the vehicle.
9. A method according to claim 7 or claim 8, wherein the composite image comprises a 3-Dimensional, 3D, representation or a 2-Dimensional, 2D, representation of the environment surrounding the vehicle and extending at least partially underneath the vehicle.
10. A method according to any one of claims 7 to 9, wherein obtaining a first image comprises capturing a series of image frames; and wherein the method comprises:
determining a position of the vehicle; and storing an indication of the position of the vehicle with at least a portion of an image frame.
11. A method according to any one of claims 7 to 10, wherein matching portions of the modified first image and the second image comprises performing pattern matching to identify features present in overlapping portions of the modified first image and the second image such that those features are correlated in the composite image.
12. A computer program product storing computer program code which is arranged when executed on a processor to implement the method of any one of claims 1 to 11.
13. A vehicle orientation correction apparatus, the apparatus comprising:
a processing means arranged to:
receive first orientation information indicative of the orientation of the vehicle at a first time and second orientation information indicative of the orientation of the vehicle at a second time later than the first time;
determine a change in vehicle orientation between the first and second times using the first orientation information and the second orientation information; and generate a modified first image using a first image and the determined change in vehicle orientation;
wherein the first image is captured at the first time and includes a first region external to the vehicle at the first time; and wherein the modified first image is representative of the first region from the perspective of the vehicle at the second time.
14. An apparatus according to claim 13, wherein the processing means is further arranged to implement the method of any one of claims 2 to 12.
15. A system comprising:
the apparatus of claim 13 or claim 14;
an image obtaining means arranged to obtain the first image; and a sensor arranged to obtain the first and second orientation information.
16. A vehicle comprising the apparatus of claim 13 or claim 14, or the system of claim 15.
GB1803588.1A 2018-03-06 2018-03-06 Apparatus and method for correcting for changes in vehicle orientation Active GB2571923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1803588.1A GB2571923B (en) 2018-03-06 2018-03-06 Apparatus and method for correcting for changes in vehicle orientation

Publications (3)

Publication Number Publication Date
GB201803588D0 GB201803588D0 (en) 2018-04-18
GB2571923A true GB2571923A (en) 2019-09-18
GB2571923B GB2571923B (en) 2021-08-18

Family

ID=61903519

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4296955A1 (en) * 2022-06-20 2023-12-27 Samsung Electronics Co., Ltd. Device and method with vehicle blind spot visualization

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332789A (en) * 2020-09-30 2022-04-12 比亚迪股份有限公司 Image processing method, apparatus, device, vehicle, and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160291592A1 (en) * 2015-03-31 2016-10-06 Alcatel-Lucent Usa Inc. System And Method For Video Processing And Presentation
US20180170264A1 (en) * 2016-12-20 2018-06-21 Toyota Jidosha Kabushiki Kaisha Image display device
