WO2018037032A1 - A vehicle camera system

A vehicle camera system

Info

Publication number
WO2018037032A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image
camera
processing unit
delayed
Prior art date
Application number
PCT/EP2017/071205
Other languages
French (fr)
Inventor
Adam ADWAN
Giovanni Strano
Original Assignee
Jaguar Land Rover Limited
Priority date
Filing date
Publication date
Application filed by Jaguar Land Rover Limited filed Critical Jaguar Land Rover Limited
Publication of WO2018037032A1 publication Critical patent/WO2018037032A1/en

Classifications

    • B: Performing operations; transporting
    • B60: Vehicles in general
    • B60R: Vehicles, vehicle fittings, or vehicle parts, not otherwise provided for
    • B60R 1/00: Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/26: Real-time viewing arrangements for viewing an area outside the vehicle, with a predetermined field of view to the rear of the vehicle
    • B60R 1/003: Optical viewing arrangements specially adapted for covering the peripheral part of the vehicle, e.g. for viewing tyres, bumpers or the like, for viewing trailer hitches
    • B60R 1/04: Rear-view mirror arrangements mounted inside vehicle
    • B60R 2001/1253: Mirror assemblies combined with other articles, e.g. clocks, with cameras, video cameras or video screens
    • B60R 2300/303: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing, using joined images, e.g. multiple camera images
    • B60R 2300/304: Details of viewing arrangements characterised by the type of image processing, using merged images, e.g. merging camera image with stored images
    • B60R 2300/808: Details of viewing arrangements characterised by the intended use of the viewing arrangement, for facilitating docking to a trailer

Definitions

  • the camera system of the invention provides a means to select a delayed image part from the stored image data and/or position the delayed image part in relation to the real-time image part in the composite image to show where an object or objects within the delayed image part are positioned in relation to the real-time image part.
  • This is particularly advantageous where an obstructed region within the real-time image part can be replaced with the delayed image part in the composite image to effectively "see through" the obstruction, or where the delayed image part can be placed at or adjacent to a boundary of the real-time image part within the composite image to effectively increase the field of view of the camera.
  • At least one of the one or more sensors may comprise a motion sensor.
  • the system may comprise one or more motion sensors operable to determine the speed at which a vehicle in which the system is implemented is travelling.
  • the motion sensor may be operable to measure the speed of rotation of one or more wheels of the vehicle.
  • the motion sensor comprises a speedometer within the vehicle.
  • At least one of the one or more sensors may comprise a position sensor.
  • The, or each, position sensor may be operable to determine the position of one or more components of a vehicle within which the system is implemented or the position of the vehicle itself.
  • the system may comprise at least one position sensor operable to determine the angular position of the steering wheel of the vehicle.
  • the system may comprise at least one position sensor operable to determine the angular position of one or both of the front wheels of the vehicle.
  • the system may comprise at least one position sensor operable to determine the angular position of the vehicle relative to one or more objects identified in the captured image data.
  • the angular position of the vehicle may be the yaw angle of the vehicle, for example.
  • At least one of the one or more sensors may comprise a distance sensor.
  • The, or each, distance sensor may be operable to determine the distance between the sensor and one or more objects identified within the image data captured by the camera.
  • the, or each, distance sensor may be located on the rear of a vehicle and be operable to determine the distance between the rear of the vehicle and the one or more objects identified in the captured image data.
  • The, or each, distance sensor may comprise an ultrasonic sensor, an infra-red sensor or a radar sensor, for example.
  • the distance information obtained by the one or more distance sensors may be used by the processing unit to determine the distance between any identified object and an obstructed area or a relevant boundary of the real-time image part.
  • the processing unit may be operable to calculate the time it may take any identified object to move to an obstructed area or to a relevant boundary of the real-time image part from its position when identified.
  • the processing unit may be operable to generate the composite image with the delayed image part containing the identified object positioned at the obstructed area or relevant boundary of the real-time image part after the calculated time has elapsed. In this way, the system provides a means to show to a user the position of an identified object even where that object has moved out of the field of view of the camera or behind an obstruction within the field of view of the camera.
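  • As an illustration of this timing logic, the following minimal Python sketch (all names and values are assumptions for the example, not taken from the patent) estimates how long an identified object will take to reach the obstructed area or boundary from a distance reading and the closing speed; the stored image of the object would then be shown in the delayed image part once that time has elapsed.

```python
import math

def time_to_obstructed_area(object_distance_m: float,
                            boundary_distance_m: float,
                            closing_speed_mps: float) -> float:
    """Estimate how long an identified object will take to reach the
    obstructed area or relevant boundary of the real-time image part.

    object_distance_m: distance from the rear of the vehicle to the object,
        as reported by a distance sensor.
    boundary_distance_m: distance at which objects leave the camera's view
        (a calibration value, assumed known).
    closing_speed_mps: speed at which the vehicle approaches the object.
    """
    if closing_speed_mps <= 0.0:
        return math.inf  # the vehicle is not closing on the object
    travel_m = max(object_distance_m - boundary_distance_m, 0.0)
    return travel_m / closing_speed_mps

# Example: object seen 2.5 m behind the vehicle, objects disappear from view
# 0.5 m behind the bumper, reversing at 0.4 m/s -> show the stored image of
# the object in the delayed image part after 5.0 s.
print(time_to_obstructed_area(2.5, 0.5, 0.4))
```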
  • the processing unit may be operable to overlay on the composite image a graphical representation of a suggested trajectory for a vehicle within which the system is implemented.
  • the processing unit may be operable to display a graphical representation on the composite image which comprises a line emanating from the tow point of the vehicle (or predicted position of the tow point of the vehicle within the composite image) which extends across the composite image to illustrate the trajectory of the tow point of the vehicle which needs to be taken to align the tow point with an identified object.
  • the system may comprise two or more cameras.
  • the processing unit may be operable to receive and store image data from each camera.
  • the processing unit may be operable to form a single composite image from the image data from each of the two or more cameras.
  • the delayed image part of the composite image may be formed from image data from at least one of the cameras.
  • the delayed image part of the composite image may be formed from image data from each of the two or more cameras. Providing two or more cameras in the system may increase the field of view of the composite image.
  • the display may comprise an LCD or LED monitor, for example.
  • the display may be configured to be positioned within the interior of a vehicle.
  • the display is configured to be positioned within the dashboard of a vehicle.
  • the display may be configured to be positioned within the centre console of a vehicle.
  • the display may be configured to form at least part of a mirror of a vehicle, which may be a rear-view mirror such as an interior mirror of a vehicle, for example.
  • a vehicle camera system configured to be implemented within a vehicle comprising: at least one rear-facing camera arranged to capture image data to the rear of the vehicle; a display configured to show images formed from the image data captured by the at least one camera; and a processing unit operable to store image data captured by the at least one camera and to generate a composite image to be shown on the display; wherein the composite image comprises a real-time image part formed from image data captured at that point in time and a delayed image part formed from previously captured stored image data; wherein the delayed image part is positioned within the composite image with respect to the real-time image part at a predicted position of a tow point of the vehicle.
  • the vehicle camera system of this aspect of the invention may incorporate any or all of the features of the preceding aspect of the invention as desired or appropriate.
  • a processing unit for a vehicle configured to receive and store image data captured by at least one camera within or on the vehicle and to generate images to be shown on a display; wherein the processing unit is operable to generate a composite image on the display comprising a real-time image part formed from image data captured at that point in time and a delayed image part formed from previously captured stored image data.
  • the processing unit of the invention is able to generate and display a composite image including a delayed image part which may lie outside of the field of view of the camera or is positioned within an area of the real-time image part which is obstructed in some way at a particular point in time. This is particularly advantageous where the camera is obstructed or misaligned resulting in an area which would otherwise be visible not being present in the real-time image part.
  • the processing unit of the invention provides a means to effectively eliminate obstructions within the real-time image part or to effectively increase the field of view of the camera.
  • the processing unit may be operable to generate the composite image with the delayed image part positioned at a set location within the composite image relative to the real-time image part.
  • the delayed image part may in some embodiments be positioned within the composite image at the predicted position of a tow point of the vehicle.
  • the delayed image part may comprise image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated.
  • the area or object present in the previously captured image may have moved due to the movement of the vehicle, the movement of the object within the area or both the movement of the vehicle and the object/area.
  • the processing unit may be used to assist in the alignment of a vehicle tow point with a corresponding trailer coupling where the tow point of the vehicle is not within the field of view of the camera.
  • the processing unit may be operable to continuously update the delayed image part of the composite image. For example, each time the composite image is generated, the processing unit may be operable to form a delayed image part from any previously captured image data which is determined to be proximal to the tow point of the vehicle.
  • the processing unit may therefore be operable to determine which area of which previously captured image data is proximal to the tow point of the vehicle at any given time.
  • the processing unit may be operable to determine this on the basis of the movement of the vehicle.
  • the processing unit may be configured such that the rate at which the delayed image part of the composite image is updated may be equal to or substantially equal to the frame rate of the camera and hence the frame rate of the real-time image part of the composite image. In this way, the processing unit of the invention is able to effectively generate a "live" image of the area surrounding the tow point of the vehicle, even where the tow point is not within the field of view of the camera.
  • the processing unit may be communicable with one or more sensors.
  • the one or more sensors may be operable to measure one or more parameters relating to the position and/or motion of the vehicle.
  • the processing unit may be configured to receive data from the one or more sensors relating to the one or more parameters of the vehicle.
  • the processing unit may be operable to generate a composite image in dependence on the values of the one or more measured parameters of the vehicle.
  • the processing unit may be operable to select a delayed image part presented in the composite image from the stored image data in dependence on the value(s) of the one or more measured parameters.
  • the processing unit may be operable to control the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters.
  • the processing unit of the invention is capable of selecting a delayed image part from the stored image data and/or positioning the delayed image part in relation to the real-time image part in the composite image to show where an object or objects within the delayed image part are positioned in relation to the real-time image part.
  • the processing unit may be communicable with at least one motion sensor which may be operable to determine the speed at which the vehicle is travelling.
  • the speed information received by the processing unit from the or each motion sensor may be used by the processing unit to determine the closing speed between any identified object and the vehicle.
  • the processing unit may be communicable with at least one position sensor which may be operable to determine the position of one or more components of the vehicle or the position of the vehicle itself.
  • the one or more components of the vehicle may comprise the steering wheel and the, or each, position sensor may be operable to determine the angular position of the steering wheel.
  • position information received by the processing unit from the, or each, position sensor may be used by the processing unit to predict the trajectory of the vehicle.
  • the processing unit may be communicable with at least one distance sensor which may be operable to determine the distance between the sensor and one or more objects identified within the image data captured by the camera.
  • the distance information received by the processing unit from the or each distance sensor may be used by the processing unit to determine the distance between any identified object and an obstructed area or a relevant boundary of the real-time image part.
  • the processing unit may be operable to calculate the time it may take any identified object to move to an obstructed area or to a relevant boundary of the real-time image part from its position when identified.
  • the processing unit may be operable to generate the composite image with the delayed image part containing the identified object positioned at the obstructed area or relevant boundary of the real-time image part after the calculated time has elapsed.
  • the processing unit is operable to generate a composite image to show to a user the position of an identified object even where that object has moved out of the field of view of the camera or behind an obstruction within the field of view of the camera.
  • This is particularly beneficial in instances wherein the processing unit is implemented on a vehicle wherein the tow point of the vehicle is out of the field of view of the camera or behind an obstruction within the field of view of the camera and the identified object is a relevant coupling or trailer to be coupled to the tow point of the vehicle.
  • the processing unit may be operable to overlay on the composite image a graphical representation of a suggested trajectory for a vehicle within which the system is implemented.
  • the processing unit may be operable to display a graphical representation on the composite image which comprises a line emanating from the tow point of the vehicle (or predicted position of the tow point of the vehicle within the composite image) which extends across the composite image to illustrate a suggested trajectory of the tow point of the vehicle which needs to be taken in order to align the tow point with an identified object.
  • a vehicle comprising a vehicle camera system or a processing unit in accordance with any of the preceding aspects of the present invention.
  • the vehicle may comprise a motor vehicle.
  • the vehicle may comprise a road vehicle.
  • the vehicle may be a car.
  • a method of forming a composite image from at least one camera mounted within or on a vehicle comprising: using the at least one camera to capture image data;
  • generating a composite image from captured image data comprising a real-time image part formed from image data captured at that point in time and a delayed image part formed from stored captured image data.
  • the method of the invention provides a means to generate and display a composite image including a delayed image part which may lie outside of the field of view of the camera or is positioned within an area of the real-time image part which is obstructed in some way at a particular point in time. This is particularly advantageous where the camera is obstructed or misaligned resulting in an area which would otherwise be visible not being present in the real-time image part.
  • the method of the invention provides a means to effectively eliminate obstructions within the real-time image part or to effectively increase the field of view of the camera.
  • the method comprises using at least one rear facing camera positioned at the rear of a vehicle.
  • the method may comprise capturing image data to the rear of the vehicle. In this way, the method may be used to assist in the alignment of a vehicle tow point with a corresponding trailer coupling.
  • the method may comprise positioning the delayed image part at a set location within the composite image relative to the real-time image part. In some embodiments the method may comprise positioning the delayed image part at the predicted position of a tow point of the vehicle.
  • the method may comprise forming the delayed image part from image data relating to an area of a previously captured image, or object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated.
  • the area or object present in the previously captured image may have moved due to the movement of the vehicle, the movement of the object within the area or both the movement of the vehicle and the object/area.
  • the method may be used to assist in the alignment of a vehicle tow point with a corresponding trailer coupling where the tow point of the vehicle is not within the field of view of the camera.
  • the method may comprise continuously updating the delayed image part of the composite image. For example, each time the composite image is generated, the method may comprise forming a delayed image part from any previously captured image data which is determined to be proximal to the tow point of the vehicle in which the system is implemented.
  • Previously captured image data determined to be proximal to the tow point of the vehicle will vary as the vehicle moves (or objects within the environment of the vehicle move).
  • the method may therefore comprise determining which area of which previously captured image data is proximal to the tow point of the vehicle at any given time.
  • the method may comprise determining which area of which previously captured image data is proximal to the tow point of the vehicle on the basis of the movement of the vehicle.
  • the method may comprise updating the delayed image part of the composite image at a rate which is equal to or substantially equal to the frame rate of the camera and hence the frame rate of the real-time image part of the composite image. In this way, the method may generate a "live" image of the area surrounding the tow point of the vehicle, even where the tow point is not within the field of view of the camera.
  • the method may comprise using one or more sensors to measure one or more parameters relating to the position and/or motion of the vehicle.
  • the method may comprise generating a composite image in dependence on the values of the one or more measured parameters of the vehicle.
  • the method may comprise selecting a delayed image part presented in the composite image from the stored image data in dependence on the value(s) of the one or more measured parameters.
  • the method may comprise controlling the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters.
  • the method may comprise selecting as the delayed image part stored image data relating to an area of interest in a previously captured image which, at the time the composite image is generated, is determined would occupy a position relative to the real-time image corresponding to the location of the delayed image part in the composite image, taking into account the sensed movement of the vehicle since the previously captured image data was captured.
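  • As a sketch of one way such a selection could work, the example below assumes an odometry-style distance measurement and a fixed, calibration-derived gap between the tow point and the nearest ground visible in the live image; it picks the stored frame whose imaged area would now lie at the tow point. All names are illustrative, not taken from the patent.

```python
from collections import deque
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class StoredFrame:
    odometer_m: float  # distance travelled when the frame was captured
    image: Any         # the captured frame (e.g. a numpy array)

class DelayedPartSelector:
    """Pick the stored frame whose imaged ground is now under the tow point.

    visible_gap_m is an assumed, calibration-dependent distance between the
    tow point and the nearest ground visible in the live image.
    """

    def __init__(self, visible_gap_m: float, capacity: int = 256):
        self.visible_gap_m = visible_gap_m
        self.frames = deque(maxlen=capacity)

    def store(self, odometer_m: float, image: Any) -> None:
        self.frames.append(StoredFrame(odometer_m, image))

    def select(self, odometer_now_m: float) -> Optional[StoredFrame]:
        # The ground now at the tow point was last visible when the vehicle
        # had travelled visible_gap_m less, i.e. at this odometer reading:
        target = odometer_now_m - self.visible_gap_m
        best = None
        for f in self.frames:
            if f.odometer_m <= target and (best is None or f.odometer_m > best.odometer_m):
                best = f
        return best
```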
  • the method may be used to select a delayed image part from the stored image data and/or position the delayed image part in relation to the real-time image part in the composite image to show where an object or objects within the delayed image part are positioned in relation to the real-time image part.
  • This is particularly advantageous where an obstructed region within the real-time image part can be replaced with the delayed image part in the composite image to effectively "see through" the obstruction, or where the delayed image part can be placed at or adjacent to a boundary of the real-time image part within the composite image to effectively increase the field of view of the camera.
  • the method may comprise using at least one motion sensor.
  • the method may comprise using the, or each, motion sensor to determine the speed at which the vehicle is travelling.
  • the method may comprise using at least one position sensor.
  • the method may comprise using the, or each, position sensor to determine the position of one or more components of the vehicle or the position of the vehicle itself.
  • the method may comprise using at least one position sensor to determine the angular position of the steering wheel of the vehicle.
  • the method may comprise using at least one position sensor to determine the angular position of one or both of the front wheels of the vehicle.
  • the method may comprise using at least one position sensor to determine the angular position of the vehicle relative to one or more objects identified in the captured image data.
  • the angular position of the vehicle may be the yaw angle of the vehicle, for example.
  • the method may comprise using at least one distance sensor.
  • the method may comprise using the, or each, distance sensor to determine the distance between the sensor and one or more objects identified within the image data captured by the camera.
  • the or each distance sensor may be located on the rear of a vehicle and the method may comprise using the or each distance sensor to determine the distance between the rear of the vehicle and the one or more objects identified in the captured image data.
  • The, or each, distance sensor may comprise an ultrasonic sensor, an infra-red sensor or a radar sensor, for example.
  • the distance information obtained by the one or more distance sensors may be used by the processing unit to determine the distance between any identified object and an obstructed area or a relevant boundary of the real-time image part.
  • the method may additionally comprise calculating the time it may take any identified object to move to an obstructed area or to a relevant boundary of the real-time image part from its position when identified.
  • the method may comprise generating the composite image with the delayed image part containing the identified object positioned at the obstructed area or relevant boundary of the real-time image part after the calculated time has elapsed.
  • the method may be used to show to a user the position of an identified object even where that object has moved out of the field of view of the camera or behind an obstruction within the field of view of the camera. This is particularly beneficial in instances wherein the tow point of the vehicle is out of the field of view of the camera or behind an obstruction within the field of view of the camera and the identified object is a relevant coupling or trailer to be coupled to the tow point of the vehicle.
  • the method may comprise overlaying a graphical representation of the predicted trajectory of the vehicle on the composite image.
  • the method may comprise forming a line emanating from the tow point of the vehicle (or predicted position of the tow point of the vehicle within the composite image) which extends across the generated composite image to illustrate the predicted trajectory of the tow point of the vehicle.
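  • A minimal sketch of such an overlay, assuming the trajectory has already been predicted and projected into pixel coordinates (the drawing call uses OpenCV; all names and values are illustrative, not taken from the patent):

```python
import numpy as np
import cv2  # OpenCV, used here purely for drawing

def draw_guide_line(composite: np.ndarray,
                    tow_point_px: tuple,
                    path_px: list) -> np.ndarray:
    """Overlay a guide line emanating from the (predicted) pixel position of
    the tow point and following the predicted trajectory of the tow point."""
    out = composite.copy()
    pts = np.array([tow_point_px, *path_px], dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(out, [pts], isClosed=False, color=(0, 255, 0), thickness=3)
    return out

# Toy example: a straight-back trajectory from a tow point position that has
# been projected to a pixel near the bottom edge of a 640x480 composite image.
img = np.zeros((480, 640, 3), dtype=np.uint8)
overlaid = draw_guide_line(img, (320, 470), [(320, 380), (320, 300), (320, 230)])
```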
  • the method may comprise using two or more cameras.
  • the method may comprise receiving and storing image data from each camera.
  • the method may comprise forming a single composite image from the image data from each of the two or more cameras.
  • the delayed image part of the composite image may be formed from image data from at least one of the cameras.
  • the delayed image part of the composite image may be formed from image data from each of the two or more cameras.
  • Using two or more cameras may increase the field of view of the composite image.
  • the method comprises displaying the composite image on a display within the vehicle.
  • the method may comprise displaying the composite image on a display positioned within the dashboard or centre console of a vehicle.
  • the method may comprise displaying the composite image on a display which forms at least part of a mirror of a vehicle, which may be a rear-view mirror such as an interior mirror of a vehicle, for example.
  • a computer program configured to perform the method of the previous aspect of the invention when executed on a computer and/or a data carrier comprising such a computer program.
  • the computer may be the processing unit of the vehicle camera system according to an aspect of the invention.
  • a non-transitory, computer-readable storage medium comprising a computer program according to the above aspect of the invention.
  • a data carrier comprising a computer program as defined above.
  • the delayed image part replaces the real-time image part within at least a portion of the field of view of the camera at which part of the vehicle is visible. Effectively, the user appears to "see through" the bumper or other parts of the vehicle which are currently blocking the view of the camera of an area of interest. This technique both permits the effective coverage of the camera to be extended into regions not currently visible to the camera (but which were previously visible to the camera), and also permits glare (sun reflection) from vehicle parts (such as the rear bumper) to be avoided or at least mitigated by utilising delayed image parts instead of real-time image parts in regions of the composite image which correspond to those vehicle parts.
  • Figure 1 is a schematic diagram of an embodiment of a camera system in accordance with an aspect of the invention.
  • Figure 2 is a schematic diagram of an embodiment of a camera system in accordance with an aspect of the invention.
  • Figure 3 is a schematic diagram of an embodiment of a camera system in accordance with an aspect of the invention.
  • Figures 4A and 4B are a series of schematic representations of a display illustrating the operational use of embodiments of a camera system in accordance with an aspect of the invention.
  • Figure 5 is a schematic diagram of an embodiment of a vehicle in accordance with the invention illustrating the implementation of an embodiment of a camera system within the vehicle.
  • Figure 1 illustrates an embodiment of a vehicle camera system 10 in accordance with an aspect of the invention.
  • the system 10 is implemented as part of a hitch guidance system for assisting in the alignment of a tow point 38 of a vehicle 36 with a corresponding trailer coupling 34.
  • a camera system in accordance with an aspect of the invention in its broadest sense is not necessarily limited to application in a hitch guidance system. Rather, the system can be adapted for a range of different purposes.
  • the term "camera” is intended to encompass any suitable device for capturing image data.
  • Figure 1 is a schematic diagram of the camera system 10 and provides an overview of the components of the system 10.
  • the camera system 10 includes the camera 12, a processing unit 14 and a display 16.
  • the camera 12 is arranged to capture image data of an area to the rear of the vehicle 36, as shown in Figure 5. It will though be appreciated that for use in other applications the camera need not be located at the rear of the vehicle and could, for example, be positioned towards the front, side or the underside of the vehicle, as is desired.
  • the display 16 is configured to show images formed from the image data captured by the camera 12.
  • the display 16 is typically located within the dashboard of the vehicle 36, and may be in the centre console of the vehicle 36. However, the display 16 could be located at any suitable position within the vehicle.
  • the display 16 may be an LCD or LED screen, for example.
  • the processing unit 14 is in communication with both the camera 12 and the display 16.
  • the processing unit is operable to receive and store image data captured by the camera 12 and to generate a composite image to be shown on the display 16.
  • the composite image comprises a real-time image part formed from live image data from the camera 12 and a delayed image part formed from previously captured image data from the camera stored by the processing unit 14.
  • the processing unit 14 will include one or more processors programmed to carry out the processes and methods described herein.
  • the system 10 additionally includes a series of sensors 18a, 18b, 18c operable to obtain and input data to the processing unit 14.
  • the sensors 18a, 18b, 18c are communicable with the processing unit 14 and the data from the sensors 18a, 18b, 18c input into the processing unit 14 is used to determine which image data stored in the processing unit 14 is to be used to form the delayed image part of the composite image.
  • the sensors 18a, 18b, 18c are operable to detect one or more parameters relating to a vehicle's trajectory, speed and/or the relative distance and position of the vehicle with respect to an object of interest.
  • sensor 18a comprises a sensor operable to determine the speed at which a vehicle 36 in which the system 10 is implemented is travelling (see Figure 5).
  • Sensor 18b comprises a position sensor operable to determine the angular position of the steering wheel 40 of the vehicle 36. Data from the speed sensor 18a and position sensor 18b can be used in combination to calculate the trajectory of the vehicle in motion.
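  • As an illustration, a trajectory calculation of this kind is often expressed as a kinematic bicycle model; the sketch below is an assumed example rather than the calculation used by the system 10.

```python
import math

def predict_trajectory(speed_mps: float, road_wheel_angle_rad: float,
                       wheelbase_m: float = 2.9, steps: int = 20,
                       dt_s: float = 0.25) -> list:
    """Predict the vehicle's path with a kinematic bicycle model.

    The steering-wheel angle reported by position sensor 18b would first be
    converted to a road-wheel angle using the steering ratio; that conversion
    and the wheelbase value are assumptions of this example.
    """
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        x += speed_mps * math.cos(heading) * dt_s
        y += speed_mps * math.sin(heading) * dt_s
        heading += (speed_mps / wheelbase_m) * math.tan(road_wheel_angle_rad) * dt_s
        path.append((x, y))
    return path

# Example: reversing (negative speed) at 0.5 m/s with a small steering input.
points = predict_trajectory(-0.5, math.radians(5.0))
```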
  • Sensors 18c comprise distance sensors, data from which can be used to determine the distance of objects identified within the image data captured by the camera 12.
  • the processing unit 14 includes a server 20 for storing image data obtained from the camera 12 and a composite image generator 22 operable to generate the composite image from a real-time image part (formed from image data taken straight from the camera 12) and a delayed image part (formed from stored image data within the server 20).
  • the processing unit 14 also includes a vehicle state estimator 24, communicable with the sensors 18a, 18b, 18c, which determines the state of the vehicle 36 (which may include information relating to the position or speed of the vehicle, for example) from the one or more parameters measured by the sensors, and a vehicle-object state estimator 26 operable to determine how the position of an identified object or area of interest in a previously captured image will vary over time relative to the vehicle 36, on the basis of the estimate made by the vehicle state estimator 24.
  • the estimation made by the vehicle-object state estimator 26 is used by the composite image generator 22 to determine which previously captured image data will be used to form the delayed image part of the composite image.
  • the delayed image part of the composite image is positioned in the composite image at the estimated position of the tow point 38 of the vehicle 36.
  • the prediction made by the vehicle-object state estimator 26 is used by the composite image generator 22 to determine which area of interest in previously captured image data is proximal to the position of the tow point in the real-time image 28 at the point in time when the composite image is generated.
  • FIGS 4A and 4B are schematic representations of images shown on the display 16 to illustrate the operational use of the camera system 10.
  • the images on the display 16 include a real-time image part 28 which is a live feed taken directly from the camera 12.
  • the real- time image part 28 forms part of the generated composite image.
  • the real-time image part 28 does not usefully fill all of the available screen space on the display 16 and there is an obstructed portion 30, represented at the bottom of the image by cross hatching.
  • the obstructed portion 30 may be present due to misalignment of the camera 12 so that the area around the tow point is not visible or may be caused by a physical obstruction present within the field of view of the camera 12.
  • part of the bodywork of the vehicle 36 such as the rear bumper, may obscure the vehicle tow point 38 from the camera's view.
  • the presence of the obstructed portion 30 is problematic as it prevents the user from being able to view the tow point 38 on the vehicle 36 directly in the real-time image 28. This makes alignment of the vehicle tow point 38 with a corresponding trailer coupling when reversing difficult. The problem is further compounded when the tow point 38 is brought close to the trailer coupling, as the trailer coupling will also become obscured from view by the camera 12.
  • this problem is reduced by generating a composite image comprising a real-time image part 28 and a delayed image part 32 which is used to replace the obstructed portion 30, or at least a portion thereof.
  • the composite image is generated by the processing unit 14, in which the delayed image part 32 is made up from previously captured image data relating to an area of interest to the rear of the vehicle which the processing unit 14 has calculated would be located, at the time of display, in the obscured portion of the real-time image being replaced.
  • the delayed image part 32 is used to replace the obstructed portion 30 in the region occupied by the tow point so that the composite image creates a virtual display in which the area about the tow point 38 is shown as if it is part of the real-time image, even though it is not within the field of view of the camera 12 or is otherwise obscured.
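  • A minimal sketch of this replacement step, assuming a fixed rectangular obstructed region established by calibration (names and sizes are illustrative, not taken from the patent):

```python
import numpy as np

def compose(live: np.ndarray, delayed_patch: np.ndarray,
            region: tuple) -> np.ndarray:
    """Paste a previously captured patch (the delayed image part 32) over the
    obstructed portion (30) of the live frame (28) to form the composite image.

    region is (top, left, height, width) in pixels; in a real system it would
    come from calibration of the obstructed area rather than being hard-coded.
    """
    top, left, h, w = region
    out = live.copy()
    out[top:top + h, left:left + w] = delayed_patch[:h, :w]
    return out

# Toy example: a 480x640 live frame with an obstructed strip along the bottom.
live = np.zeros((480, 640, 3), dtype=np.uint8)
stored_patch = np.full((120, 640, 3), 128, dtype=np.uint8)  # from an older frame
composite = compose(live, stored_patch, (360, 0, 120, 640))
```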
  • An associated problem is glare in a real-time image part caused by the reflection of the sun from the bumper (or other part) of the vehicle within the field of view of the camera. This can significantly impair the visibility of the tow bar, or of external objects in the vicinity of the bumper.
  • the composite image presented to the user can be substantially free of glare and other reflection related artefacts, improving visibility of the tow bar, the trailer hitch coupling and any other objects within the composite image and in particular within the vicinity of the bumper.
  • the present technique may be used not only to provide a user with a view of a region which is currently physically obstructed, but also to reduce glare related artefacts impacting on non-obscured areas.
  • image data captured by the camera 12, including an identified object of interest such as the trailer coupling 34, is stored by the server 20.
  • the speed and trajectory of the vehicle are determined by the vehicle state estimator 24 using data from the sensors 18a, 18b, 18c, and this information is input into the vehicle-object state estimator 26, which determines that the object 34 will be at the position of the tow point 38 after 5 seconds.
  • the delayed image part 32 is the area of the stored image around the identified object 34.
  • the delayed image part 32 is positioned in the composite image at the approximate position of the tow point 38 of the vehicle 36 with respect to the real-time image 28 within the obstructed portion 30 of the image.
  • the delayed image part 32 of the composite image will be continually updated with image data stored in the server 20. It is not necessary for an object of interest, such as object 34, to have been identified before the delayed image part 32 is generated.
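  • One simple way to realise such continual updating is a time-indexed ring buffer from which frames are replayed with a delay derived from the vehicle's motion (for example the 5 second figure calculated above); the following sketch is an assumed illustration, not the patent's implementation.

```python
import collections
import time

class DelayBuffer:
    """Ring buffer that replays camera frames with a chosen delay, so the
    delayed image part can be refreshed continually, whether or not an
    object of interest has been identified."""

    def __init__(self, maxlen: int = 300):
        self.buf = collections.deque(maxlen=maxlen)  # (timestamp, frame) pairs

    def push(self, frame) -> None:
        self.buf.append((time.monotonic(), frame))

    def delayed(self, delay_s: float):
        """Return the newest stored frame that is at least delay_s old,
        or None if no frame is old enough yet."""
        cutoff = time.monotonic() - delay_s
        candidate = None
        for ts, frame in self.buf:  # oldest first
            if ts <= cutoff:
                candidate = frame
            else:
                break
        return candidate
```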
  • the system 10 of the invention provides a way to effectively increase the field of view of the camera 12 by continually updating the delayed image part 32 of the composite image to show what is (or is calculated to be) proximal to the tow point 38 of the vehicle 36 at all times.
  • the processing unit 14 may be able to select from a number of different sets of stored image data which could be used to form the delayed image part 32.
  • the processing unit 14 may be configured to select the stored image data which will best fit with the real-time image data to form a realistic composite image.
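  • The patent does not specify the matching criterion, but as an assumed illustration, a simple sum-of-squared-differences comparison over a strip adjoining the obstructed region could be used to pick the stored frame that best fits the real-time image:

```python
import numpy as np

def best_fit_frame(live_strip: np.ndarray, candidates: list) -> int:
    """Return the index of the stored frame whose strip adjoining the
    obstructed region best matches the corresponding strip of the live
    image, using a sum-of-squared-differences score."""
    errors = [float(np.sum((live_strip.astype(np.int32) - c.astype(np.int32)) ** 2))
              for c in candidates]
    return int(np.argmin(errors))

# Toy usage with random 20-pixel-high strips from three candidate frames.
rng = np.random.default_rng(0)
live_strip = rng.integers(0, 255, (20, 640, 3), dtype=np.uint8)
candidates = [rng.integers(0, 255, (20, 640, 3), dtype=np.uint8) for _ in range(3)]
print(best_fit_frame(live_strip, candidates))
```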
  • in this example, the vehicle 36 is travelling in a straight line at a constant speed for simplicity.
  • the system 10 is capable of taking into account changes in the speed and trajectory of the vehicle when determining which part of the stored image data is to be displayed in the delayed image portion 32, using suitable algorithms and based on information provided to the vehicle state estimator 24 and the vehicle-object state estimator 26 by the sensors. Changes in vehicle speed and/or trajectory are estimated by the vehicle state estimator 24 and the vehicle-object state estimator 26 based on data from the sensors and/or other inputs, and the composite image generator 22 is instructed accordingly as to which area of which stored image should be used as the delayed image part 32 in any generated composite image.
  • the composite image may be generated at any feasible rate and is dependent on the frame rate of the camera 12 used and the processing speed of the processing unit 14.
  • in one exemplary embodiment, the rate at which the composite image is generated is 1 frame per second, or at least 1 frame of the delayed image part 32 per second. It is, however, envisaged that the composite image be generated at a much quicker rate than this, such that the delayed image part 32 effectively appears to be a live image of the area around the tow point 38 of the vehicle 36.
  • the above embodiments are described by way of example only. Many variations are possible without departing from the scope of the invention as defined in the appended claims.
  • more than one camera 12 can be used to capture image data which can be used to generate the composite image.
  • the delayed image part 32 could be used to fill the whole of the obscured portion of the real-time image portion 28 in the display 16. Indeed, the delayed image portion could be positioned anywhere within the real-time image depending on the requirements of the particular application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

A vehicle camera system (10) for use in a vehicle comprising: a camera (12) arranged to capture image data; a display (16) configured to show images formed from the image data captured by the camera (12); and a processing unit (14) operable to store image data captured by the camera (12) and to generate a composite image to be shown on the display (16). The composite image formed by the processing unit (14) comprises a real-time image part (28) and a delayed image part (32) formed from previously captured image data.

Description

A VEHICLE CAMERA SYSTEM
TECHNICAL FIELD
The present disclosure relates to a vehicle camera system. Aspects of the invention relate to a vehicle camera system suitable for assisting in the alignment of a vehicle tow point with a corresponding trailer coupling, a processing unit for a vehicle, a vehicle comprising a vehicle camera system, and a method of forming a composite image from a camera positioned within a vehicle which may improve or assist in the alignment of a vehicle tow point with a corresponding coupling.
BACKGROUND
It is known to provide a camera or cameras on the rear of a vehicle in order to capture images behind the vehicle. Such images are typically used to assist a driver when reversing the vehicle. Systems of this type are useful in situations wherein a driver is reversing a vehicle towards a trailer or the like in order to position a tow point on the vehicle proximal to the trailer coupling in order to allow the trailer to be hitched onto the vehicle for towing. To assist further, it is known to incorporate in a displayed image a predicted trajectory of the vehicle as a graphic which is overlaid onto the image obtained by the camera. In some known systems the projected graphic comprises a guide line which emanates from the tow point of the vehicle to display a predicted/recommended trajectory of the tow point to act as a guide in assisting the driver in positioning the vehicle tow point adjacent the trailer coupling ready for hitching the trailer.
In some known systems, the tow point on the rear of the vehicle is within the field of view of the rear facing camera. In such systems, the position of the tow point relative to the trailer hitch coupling can be seen. However, in some instances the tow point may not be within the field of view of the camera and therefore the position of the tow point relative to the trailer coupling cannot easily be determined. For instance, due to the relative positioning of the tow point and the camera on the rear of the vehicle, the tow point may be obscured in the image obtained by the camera by part of the bodywork of the vehicle. Alternatively, since the tow point is typically located lower than the camera, the camera must be pointed downwards to capture it; keeping the tow point within the field of view of the camera then inevitably leads to a reduction in the distance behind the vehicle which can be seen. In some instances this may not be desirable or acceptable.
To overcome this issue, it is known to project the position (or the estimated position) of the tow point onto the image captured by the camera. However, given that it is common for the tow point to be outside of the field of view of the camera, the position of the tow point can fall outside of the boundary of the captured image. This solution is therefore incomplete, as the trailer coupling will also move outside of the boundary of the captured image as it approaches the tow point, at which time its distance from, and movement relative to, the tow point cannot be determined. This can typically lead to a misalignment of the tow point with respect to the trailer coupling.
There is therefore a need to provide a system which maintains an acceptable field of view of the captured image whilst simultaneously allowing the area around the tow point of the vehicle to be visible.
It is an aim of the present invention to address disadvantages associated with the prior art.
SUMMARY OF THE INVENTION
Aspects and embodiments of the invention provide a vehicle camera system, a processing unit, a vehicle, a method, a computer program and a data carrier as claimed in the appended claims.
According to an aspect of the invention, there is provided a vehicle camera system comprising: at least one camera arranged to capture image data; a display configured to show images formed from the captured image data; and a processing unit operable to receive and store image data captured by the at least one camera and to generate a composite image to be shown on the display; wherein the composite image comprises a real-time image part formed from image data captured by the at least one camera at that point in time and a delayed image part formed from previously captured stored image data.
The system provides a means to generate and display a composite image including a delayed image part which may lie outside of the current field of view of the camera or is positioned within an area of the real-time image part which is obstructed in some way at a particular point in time. This is particularly advantageous where the camera is obstructed or misaligned resulting in an area which would otherwise be visible not being present in the real-time image part. The system of the invention provides a means to effectively eliminate obstructions within the real-time image part and/or to effectively increase the field of view of the camera.
In some embodiments, the processing unit is operable in a moving vehicle to select, for use in the delayed image part, previously captured image data relating to an area of interest in a previously captured image which, at the time the composite image is generated, is determined to lie, relative to the real-time image, at the position at which the delayed image part is displayed. In some embodiments the at least one camera comprises at least one rear facing camera configured to be positioned at the rear of a vehicle. The at least one camera may be arranged to capture image data to the rear of the vehicle. The system may comprise more than one camera. In which case, at least two cameras may be rear facing cameras. The processing unit may be operable to generate the composite image with the delayed image part positioned at a set location within the composite image relative to the real-time image part. The delayed image part may in some embodiments be positioned within the composite image at the predicted position of a tow point of a vehicle in which the system is implemented.
The delayed image part may comprise image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated. The area or object present in the previously captured image may have moved due to the movement of the vehicle, the movement of the object within the area with respect to the vehicle or both the movement of the vehicle and the object.
In this way, the vehicle camera system may be suitable for assisting in the alignment of a vehicle tow point with a corresponding trailer coupling. In particular, the vehicle camera system of the invention may assist in the alignment of a vehicle tow point with a corresponding trailer coupling where the tow point of the vehicle is not within the field of view of the camera. The processing unit may be operable to continuously update the delayed image part of the composite image. For example, each time the composite image is generated, the processing unit may be operable to form a delayed image part from any previously captured image data which is determined to be proximal to the tow point of the vehicle in which the system is implemented.
In use, previously captured image data determined to be proximal to the tow point of the vehicle will vary as the vehicle moves (or objects within the environment of the vehicle move). The processing unit may therefore be operable to determine which area of which previously captured image data is proximal to the tow point of the vehicle at any given time. The processing unit may be operable to determine this on the basis of the movement of the vehicle.
The processing unit may be configured such that the rate at which the delayed image part of the composite image is updated may be equal to or substantially equal to the frame rate of the camera and hence the frame rate of the real-time image part of the composite image. In this way, the vehicle camera system of the invention provides a means to effectively show to a user a simulated "live" image of the area surrounding the tow point of the vehicle, even where the tow point is not within the field of view of the camera.
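By way of illustration only, the following Python sketch shows one way such a per-frame update could be organised: each camera frame is stored in a rolling buffer, and the delayed image part is cut from an older stored frame and pasted over the tow-point region at the same rate as the real-time part. The names (FrameBuffer, compose) and the buffer-based selection are assumptions for illustration, not the claimed implementation.

```python
from collections import deque
import numpy as np

class FrameBuffer:
    """Rolling store of timestamped camera frames (the 'stored image data')."""
    def __init__(self, max_frames=300):
        self._frames = deque(maxlen=max_frames)

    def add(self, t, image):
        self._frames.append((t, image))

    def closest(self, t):
        """Return the stored frame whose timestamp is nearest to t."""
        return min(self._frames, key=lambda f: abs(f[0] - t))[1]

def compose(real_time, delayed_patch, region):
    """Overlay the delayed image part on the region (y0, y1, x0, x1) of the
    real-time image where the tow point is predicted to sit."""
    y0, y1, x0, x1 = region
    composite = real_time.copy()
    composite[y0:y1, x0:x1] = delayed_patch
    return composite

# Per camera frame: store the new frame, cut the area of interest from an
# older frame, and paste it at the tow-point position, so the delayed part
# refreshes at the same rate as the real-time part.
buf = FrameBuffer()
buf.add(0.0, np.zeros((480, 640, 3), dtype=np.uint8))       # frame at t=0
frame = np.full((480, 640, 3), 128, dtype=np.uint8)         # live frame
patch = buf.closest(0.0)[400:480, 270:370]                  # area of interest
shown = compose(frame, patch, (400, 480, 270, 370))
```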
The camera system may comprise one or more sensors operable to measure one or more parameters relating to the position and/or motion of a vehicle in which the system is implemented. The one or more sensors may be communicable with the processing unit to be able to input data relating to the one or more parameters into the processing unit. In some embodiments the processing unit may be operable to generate the composite image in dependence on the values of the one or more measured parameters of the vehicle. The processing unit may be operable to select a delayed image part to be presented in the composite image from the stored image data in dependence on the value(s) of the one or more measured parameters. Additionally or alternatively, the processing unit may be operable to control the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters.
In this way, the camera system of the invention provides a means to select a delayed image part from the stored image data and/or position the delayed image part in relation to the real-time image part in the composite image to show where an object or objects within the delayed image part are positioned in relation to the real-time image part. This is particularly advantageous where an obstructed region within the real-time image part can be replaced with the delayed image part in the composite image to effectively "see through" the obstruction, or where the delayed image part can be placed at or adjacent to a boundary of the real-time image part within the composite image to effectively increase the field of view of the camera.

At least one of the one or more sensors may comprise a motion sensor. In some embodiments the system may comprise one or more motion sensors operable to determine the speed at which a vehicle in which the system is implemented is travelling. The motion sensor may be operable to measure the speed of rotation of one or more wheels of the vehicle. In some embodiments the motion sensor comprises a speedometer within the vehicle.
At least one of the one or more sensors may comprise a position sensor. The, or each, position sensor may be operable to determine the position of one or more components of a vehicle within which the system is implemented or the position of the vehicle itself. For example, in some embodiments the system may comprise at least one position sensor operable to determine the angular position of the steering wheel of the vehicle. Additionally or alternatively, the system may comprise at least one position sensor operable to determine the angular position of one or both of the front wheels of the vehicle. In some embodiments the system may comprise at least one position sensor operable to determine the angular position of the vehicle relative to one or more objects identified in the captured image data. The angular position of the vehicle may be the yaw angle of the vehicle, for example.
At least one of the one or more sensors may comprise a distance sensor. The, or each, distance sensor may be operable to determine the distance between the sensor and one or more objects identified within the image data captured by the camera. When implemented in a vehicle the, or each, distance sensor may be located on the rear of a vehicle and be operable to determine the distance between the rear of the vehicle and the one or more objects identified in the captured image data. The, or each, distance sensor may comprise an ultrasonic sensor, an infra-red sensor or a radar sensor, for example.
This is advantageous as the distance information obtained by the one or more distance sensors may be used by the processing unit to determine the distance between any identified object and an obstructed area or a relevant boundary of the real-time image part.
In embodiments wherein the system comprises one or more distance sensors in conjunction with one or more motion and/or position sensors, the processing unit may be operable to calculate the time it may take any identified object to move to an obstructed area or to a relevant boundary of the real-time image part from its position when identified. In such embodiments, the processing unit may be operable to generate the composite image with the delayed image part containing the identified object positioned at the obstructed area or relevant boundary of the real-time image part after the calculated time has elapsed. In this way, the system provides a means to show to a user the position of an identified object even where that object has moved out of the field of view of the camera or behind an obstruction within the field of view of the camera. This is particularly beneficial in instances wherein the camera system is implemented on a vehicle wherein the tow point of the vehicle is out of the field of view of the camera or behind an obstruction within the field of view of the camera and the identified object is a relevant coupling or trailer to be coupled to the tow point of the vehicle.

In some embodiments the processing unit may be operable to overlay on the composite image a graphical representation of a suggested trajectory for a vehicle within which the system is implemented. For example, the processing unit may be operable to display a graphical representation on the composite image which comprises a line emanating from the tow point of the vehicle (or predicted position of the tow point of the vehicle within the composite image) which extends across the composite image to illustrate the trajectory which the tow point of the vehicle needs to take to align the tow point with an identified object.
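On the simplest reading, the time-of-arrival calculation described above is a distance divided by a closing speed. The sketch below makes this explicit; the function name and the straight-line, constant-speed assumption are illustrative only.

```python
def time_to_obstructed_area(distance_m, closing_speed_ms):
    """Time, in seconds, before an identified object (e.g. a trailer
    coupling) reaches the obstructed area or the relevant image boundary,
    assuming the measured closing speed is maintained."""
    if closing_speed_ms <= 0:
        return float("inf")  # not closing on the object; no arrival predicted
    return distance_m / closing_speed_ms

# e.g. a coupling 2.5 m away with the vehicle reversing at 0.5 m/s would be
# placed into the delayed image part after 2.5 / 0.5 = 5 seconds.
```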
In some embodiments the system may comprise two or more cameras. In such embodiments, the processing unit may be operable to receive and store image data from each camera. The processing unit may be operable to form a single composite image from the image data from each of the two or more cameras. The delayed image part of the composite image may be formed from image data from at least one of the cameras. In some embodiments the delayed image part of the composite image may be formed from image data from each of the two or more cameras. Providing two or more cameras in the system may increase the field of view of the composite image.
In some embodiments the display may comprise an LCD or LED monitor, for example. The display may be configured to be positioned within the interior of a vehicle. In some embodiments the display is configured to be positioned within the dashboard of a vehicle. The display may be configured to be positioned within the centre console of a vehicle. In further embodiments, the display may be configured to form at least part of a mirror of a vehicle, which may be a rear-view mirror such as an interior mirror of a vehicle, for example.

According to an aspect of the invention, there is provided a vehicle camera system configured to be implemented within a vehicle comprising: at least one rear-facing camera arranged to capture image data to the rear of the vehicle; a display configured to show images formed from the image data captured by the at least one camera; and a processing unit operable to store image data captured by the at least one camera and to generate a composite image to be shown on the display; wherein the composite image comprises a real-time image part formed from image data captured at that point in time and a delayed image part formed from previously captured stored image data; wherein the delayed image part is positioned within the composite image with respect to the real-time image part at a predicted position of a tow point of the vehicle.
The vehicle camera system of this aspect of the invention may incorporate any or all of the features of the preceding aspect of the invention as desired or appropriate.
According to another aspect of the invention, there is provided a processing unit for a vehicle, the processing unit being configured to receive and store image data captured by at least one camera within or on the vehicle and to generate images to be shown on a display; wherein the processing unit is operable to generate a composite image on the display comprising a real-time image part formed from image data captured at that point in time and a delayed image part formed from previously captured stored image data.
The processing unit of the invention is able to generate and display a composite image including a delayed image part which may lie outside of the field of view of the camera or is positioned within an area of the real-time image part which is obstructed in some way at a particular point in time. This is particularly advantageous where the camera is obstructed or misaligned resulting in an area which would otherwise be visible not being present in the real-time image part. The processing unit of the invention provides a means to effectively eliminate obstructions within the real-time image part or to effectively increase the field of view of the camera.
The processing unit may be operable to generate the composite image with the delayed image part positioned at a set location within the composite image relative to the real-time image part. The delayed image part may in some embodiments be positioned within the composite image at the predicted position of a tow point of the vehicle.
The delayed image part may comprise image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated. The area or object present in the previously captured image may have moved due to the movement of the vehicle, the movement of the object within the area or both the movement of the vehicle and the object/area.
In this way, the processing unit may be used to assist in the alignment of a vehicle tow point with a corresponding trailer coupling where the tow point of the vehicle is not within the field of view of the camera.
The processing unit may be operable to continuously update the delayed image part of the composite image. For example, each time the composite image is generated, the processing unit may be operable to form a delayed image part from any previously captured image data which is determined to be proximal to the tow point of the vehicle.
In use, previously captured image data determined to be proximal to the tow point of the vehicle will vary as the vehicle moves (or objects within the environment of the vehicle move). The processing unit may therefore be operable to determine which area of which previously captured image data is proximal to the tow point of the vehicle at any given time. The processing unit may be operable to determine this on the basis of the movement of the vehicle. The processing unit may be configured such that the rate at which the delayed image part of the composite image is updated may be equal to or substantially equal to the frame rate of the camera and hence the frame rate of the real-time image part of the composite image. In this way, the processing unit of the invention is able to effectively generate a "live" image of the area surrounding the tow point of the vehicle, even where the tow point is not within the field of view of the camera.
In some embodiments the processing unit may be communicable with one or more sensors. The one or more sensors may be operable to measure one or more parameters relating to the position and/or motion of the vehicle. In such embodiments, the processing unit may be configured to receive data from the one or more sensors relating to the one or more parameters of the vehicle. In some embodiments the processing unit may be operable to generate a composite image in dependence on the values of the one or more measured parameters of the vehicle. The processing unit may be operable to select a delayed image part to be presented in the composite image from the stored image data in dependence on the value(s) of the one or more measured parameters. Additionally or alternatively, the processing unit may be operable to control the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters.

In this way, the processing unit of the invention is capable of selecting a delayed image part from the stored image data and/or positioning the delayed image part in relation to the real-time image part in the composite image to show where an object or objects within the delayed image part are positioned in relation to the real-time image part. This is particularly advantageous where an obstructed region within the real-time image part can be replaced with the delayed image part in the composite image to effectively "see through" the obstruction, or where the delayed image part can be placed at or adjacent to a boundary of the real-time image part within the composite image to effectively increase the field of view of the camera. The processing unit may be communicable with at least one motion sensor which may be operable to determine the speed at which the vehicle is travelling.
This is advantageous as the speed information received by the processing unit from the or each motion sensor may be used by the processing unit to determine the closing speed between any identified object and the vehicle.
The processing unit may be communicable with at least one position sensor which may be operable to determine the position of one or more components of the vehicle or the position of the vehicle itself. For example, the one or more components of the vehicle may comprise the steering wheel and the, or each, position sensor may be operable to determine the angular position of the steering wheel.
This is advantageous as the position information received by the processing unit from the, or each, position sensor may be used by the processing unit to predict the trajectory of the vehicle.
The processing unit may be communicable with at least one distance sensor which may be operable to determine the distance between the sensor and one or more objects identified within the image data captured by the camera.
This is advantageous as the distance information received by the processing unit from the or each distance sensor may be used by the processing unit to determine the distance between any identified object and an obstructed area or a relevant boundary of the real-time image part.
In embodiments wherein the processing unit is communicable with one or more distance sensors and one or more motion and/or position sensors, the processing unit may be operable to calculate the time it may take any identified object to move to an obstructed area or to a relevant boundary of the real-time image part from its position when identified. In such embodiments, the processing unit may be operable to generate the composite image with the delayed image part containing the identified object positioned at the obstructed area or relevant boundary of the real-time image part after the calculated time has elapsed.
In this way, the processing unit is operable to generate a composite image to show to a user the position of an identified object even where that object has moved out of the field of view of the camera or behind an obstruction within the field of view of the camera. This is particularly beneficial in instances wherein the processing unit is implemented on a vehicle wherein the tow point of the vehicle is out of the field of view of the camera or behind an obstruction within the field of view of the camera and the identified object is a relevant coupling or trailer to be coupled to the tow point of the vehicle.

In some embodiments the processing unit may be operable to overlay on the composite image a graphical representation of a suggested trajectory for a vehicle within which the system is implemented. For example, the processing unit may be operable to display a graphical representation on the composite image which comprises a line emanating from the tow point of the vehicle (or predicted position of the tow point of the vehicle within the composite image) which extends across the composite image to illustrate a suggested trajectory of the tow point of the vehicle which needs to be taken in order to align the tow point with an identified object.
According to another aspect of the invention, there is provided a vehicle comprising a vehicle camera system or a processing unit in accordance with any of the preceding aspects of the present invention.
The vehicle may comprise a motor vehicle. The vehicle may comprise a road vehicle. The vehicle may be a car.
According to a further aspect of the invention, there is provided a method of forming a composite image from at least one camera mounted within or on a vehicle comprising: using the at least one camera to capture image data;
storing image data captured by the at least one camera; and
generating a composite image from captured image data comprising a real-time image part formed from image data captured at that point in time and a delayed image part formed from stored captured image data.
The method of the invention provides a means to generate and display a composite image including a delayed image part which may lie outside of the field of view of the camera or is positioned within an area of the real-time image part which is obstructed in some way at a particular point in time. This is particularly advantageous where the camera is obstructed or misaligned resulting in an area which would otherwise be visible not being present in the real-time image part. The method of the invention provides a means to effectively eliminate obstructions within the real-time image part or to effectively increase the field of view of the camera.
In some embodiments the method comprises using at least one rear facing camera positioned at the rear of a vehicle. The method may comprise capturing image data to the rear of the vehicle. In this way, the method may be used to assist in the alignment of a vehicle tow point with a corresponding trailer coupling.
The method may comprise positioning the delayed image part at a set location within the composite image relative to the real-time image part. In some embodiments the method may comprise positioning the delayed image part at the predicted position of a tow point of the vehicle.
In some embodiments the method may comprise forming the delayed image part from image data relating to an area of a previously captured image, or object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated. The area or object present in the previously captured image may have moved due to the movement of the vehicle, the movement of the object within the area or both the movement of the vehicle and the object/area.
In this way, the method may be used to assist in the alignment of a vehicle tow point with a corresponding trailer coupling where the tow point of the vehicle is not within the field of view of the camera. The method may comprise continuously updating the delayed image part of the composite image. For example, each time the composite image is generated, the method may comprise forming a delayed image part from any previously captured image data which is determined to be proximal to the tow point of the vehicle in which the system is implemented.
Previously captured image data determined to be proximal to the tow point of the vehicle will vary as the vehicle moves (or objects within the environment of the vehicle move). The method may therefore comprise determining which area of which previously captured image data is proximal to the tow point of the vehicle at any given time. The method may comprise determining which area of which previously captured image data is proximal to the tow point of the vehicle on the basis of the movement of the vehicle.
The method may comprise updating the delayed image part of the composite image at a rate which is equal to or substantially equal to the frame rate of the camera and hence the frame rate of the real-time image part of the composite image. In this way, the method may generate a "live" image of the area surrounding the tow point of the vehicle, even where the tow point is not within the field of view of the camera. The method may comprise using one or more sensors to measure one or more parameters relating to the position and/or motion of the vehicle. In some embodiments the method may comprise generating a composite image in dependence on the values of the one or more measured parameters of the vehicle. The method may comprise selecting a delayed image part presented in the composite image from the stored image data in dependence on the value(s) of the one or more measured parameters. Additionally or alternatively, the method may comprise controlling the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters. The method may comprise selecting as the delayed image part stored image data relating to an area of interest in a previously captured image which, at the time the composite image is generated, is determined would occupy a position relative to the real-time image corresponding to the location of the delayed image part in the composite image, taking into account sensed movement of the vehicle since the previously captured image data was captured.
In this way, the method may be used to select a delayed image part from the stored image data and/or position the delayed image part in relation to the real-time image part in the composite image to show where an object or objects within the delayed image part are positioned in relation to the real-time image part. This is particularly advantageous where an obstructed region within the real-time image part can be replaced with the delayed image part in the composite image to effectively "see through" the obstruction, or where the delayed image part can be placed at or adjacent to a boundary of the real-time image part within the composite image to effectively increase the field of view of the camera.
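One plausible realisation of this selection step, sketched in Python, assumes the sensed movement is integrated into a distance-travelled function; the names and the distance-based matching are assumptions rather than the claimed method.

```python
def select_delayed_patch(frames, distance_since, offset_m):
    """frames: sequence of (timestamp, image) pairs held in the store;
    distance_since(t): metres the vehicle has moved since time t, integrated
    from the motion and position sensors; offset_m: how far behind the
    camera's current view the delayed-image location (e.g. the tow point)
    lies. Returns the stored frame in which that location was still in
    view, i.e. the frame captured roughly offset_m of travel ago."""
    t, image = min(frames, key=lambda f: abs(distance_since(f[0]) - offset_m))
    return image
```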
In some embodiments the method may comprise using at least one motion sensor. In such embodiments, the method may comprise using the, or each, motion sensor to determine the speed at which the vehicle is travelling.
The method may comprise using at least one position sensor. In such embodiments, the method may comprise using the, or each, position sensor to determine the position of one or more components of the vehicle or the position of the vehicle itself. For example, in some embodiments the method may comprise using at least one position sensor to determine the angular position of the steering wheel of the vehicle. Additionally or alternatively, the method may comprise using at least one position sensor to determine the angular position of one or both of the front wheels of the vehicle. In some embodiments the method may comprise using at least one position sensor to determine the angular position of the vehicle relative to one or more objects identified in the captured image data. The angular position of the vehicle may be the yaw angle of the vehicle, for example.
The method may comprise using at least one distance sensor. In such embodiments, the method may comprise using the, or each, distance sensor to determine the distance between the sensor and one or more objects identified within the image data captured by the camera. The or each distance sensor may be located on the rear of a vehicle and the method may comprise using the or each distance sensor to determine the distance between the rear of the vehicle and the one or more objects identified in the captured image data. The, or each, distance sensor may comprise an ultrasonic sensor, an infra-red sensor or a radar sensor, for example.
This is advantageous as the distance information obtained by the one or more distance sensors may be used by the processing unit to determine the distance between any identified object and an obstructed area or a relevant boundary of the real-time image part.
In embodiments wherein the method comprises using one or more distance sensors in conjunction with one or more motion and/or position sensors, the method may additionally comprise calculating the time it may take any identified object to move to an obstructed area or to a relevant boundary of the real-time image part from its position when identified. In such embodiments, the method may comprise generating the composite image with the delayed image part containing the identified object positioned at the obstructed area or relevant boundary of the real-time image part after the calculated time has elapsed.
In this way, the method may be used to show to a user the position of an identified object even where that object has moved out of the field of view of the camera or behind an obstruction within the field of view of the camera. This is particularly beneficial in instances wherein the tow point of the vehicle is out of the field of view of the camera or behind an obstruction within the field of view of the camera and the identified object is a relevant coupling or trailer to be coupled to the tow point of the vehicle.
In some embodiments the method may comprise overlaying a graphical representation of the predicted trajectory of the vehicle on the composite image. For example, the method may comprise forming a line emanating from the tow point of the vehicle (or predicted position of the tow point of the vehicle within the composite image) which extends across the generated composite image to illustrate the predicted trajectory of the tow point of the vehicle.
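As an illustration, the points of such a guide line might be computed with a simple bicycle model; this model is an assumption, since the document leaves the prediction method open.

```python
import math

def tow_point_guide_line(steer_rad, wheelbase_m, n=20, step_m=0.25):
    """Ground-plane points of a predicted tow-point path when reversing,
    under a bicycle model with signed turn radius (an assumed model). A
    renderer would project these (lateral, longitudinal) points into the
    composite image and draw them as the line emanating from the tow point."""
    if abs(steer_rad) < 1e-6:
        return [(0.0, -i * step_m) for i in range(n)]   # straight reverse
    r = wheelbase_m / math.tan(steer_rad)               # signed turn radius
    return [(r * (1 - math.cos(i * step_m / r)),        # lateral offset
             -r * math.sin(i * step_m / r))             # longitudinal (behind)
            for i in range(n)]
```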
In some embodiments the method may comprise using two or more cameras. In such embodiments, the method may comprise receiving and storing image data from each camera. The method may comprise forming a single composite image from the image data from each of the two or more cameras. The delayed image part of the composite image may be formed from image data from at least one of the cameras. In some embodiments the delayed image part of the composite image may be formed from image data from each of the two or more cameras. Using two or more cameras may increase the field of view of the composite image. In some embodiments the method comprises displaying the composite image on a display within the vehicle. The method may comprise displaying the composite image on a display positioned within the dashboard or centre console of a vehicle. In some embodiments the method may comprise displaying the composite image on a display which forms at least part of a mirror of a vehicle, which may be a rear-view mirror such as an interior mirror of a vehicle, for example.

According to a still further aspect of the invention, there is provided a computer program configured to perform the method of the previous aspect of the invention when executed on a computer and/or a data carrier comprising such a computer program. The computer may be the processing unit of the vehicle camera system according to an aspect of the invention.
According to yet another aspect of the invention, there is provided a non-transitory, computer-readable storage medium comprising a computer program according to the above aspect of the invention. According to another aspect of the invention there is provided a data carrier comprising a computer program as defined above.
In some embodiments, the delayed image part replaces the real-time image part within at least a portion of the field of view of the camera at which part of the vehicle is visible. Effectively, the user appears to "see through" the bumper or other parts of the vehicle which are currently blocking the view of the camera of an area of interest. This technique both permits the effective coverage of the camera to be extended into regions not currently visible to the camera (but which were previously visible to the camera), and also permits glare (sun reflection) from vehicle parts (such as the rear bumper) to be avoided or at least mitigated by utilising delayed image parts instead of real-time image parts in regions of the composite image which correspond to those vehicle parts.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a schematic diagram of an embodiment of a camera system in accordance with an aspect of the invention;
Figure 2 is a schematic diagram of an embodiment of a camera system in accordance with an aspect of the invention;
Figure 3 is a schematic diagram of an embodiment of a camera system in accordance with an aspect of the invention;
Figures 4A and 4B are a series of schematic representations of a display illustrating the operational use of embodiments of a camera system in accordance with an aspect of the invention; and
Figure 5 is a schematic diagram of an embodiment of a vehicle in accordance with the invention illustrating the implementation of an embodiment of a camera system within the vehicle.
DETAILED DESCRIPTION
The Figures illustrate an embodiment of a vehicle camera system 10 in accordance with an aspect of the invention. In this embodiment, the system 10 is implemented as part of a hitch guidance system for assisting in the alignment of a tow point 38 of a vehicle 36 with a corresponding trailer coupling 34. However, it should be understood that a camera system in accordance with an aspect of the invention in its broadest sense is not necessarily limited to application in a hitch guidance system. Rather, the system can be adapted for a range of different purposes. Furthermore, it should be understood that the term "camera" is intended to encompass any suitable device for capturing image data.
Figure 1 is a schematic diagram of the camera system 10 and provides an overview of the components of the system 10. Specifically, the camera system 10 includes the camera 12, a processing unit 14 and a display 16. In the illustrated embodiment for use in the hitch guidance system, the camera 12 is arranged to capture image data of an area to the rear of the vehicle 36, as shown in Figure 5. It will though be appreciated that for use in other applications the camera need not be located at the rear of the vehicle and could, for example, be positioned towards the front, side or the underside of the vehicle, as is desired. The display 16 is configured to show images formed from the image data captured by the camera 12. The display 16 is typically located within the dashboard of the vehicle 36, and may be in the centre console of the vehicle 36. However, the display 16 could be located at any suitable position within the vehicle. The display 16 may be an LCD or LED screen, for example.
The processing unit 14 is in communication with both the camera 12 and the display 16. The processing unit is operable to receive and store image data captured by the camera 12 and to generate a composite image to be shown on the display 16. As described in detail below, the composite image comprises a real-time image part formed from live image data from the camera 12 and a delayed image part formed from previously captured image data from the camera stored by the processing unit 14. Typically, the processing unit 14 will include one or more processors programmed to carry out the processes and methods described herein. As is shown in Figures 2, 3 and 5, the system 10 additionally includes a series of sensors 18a, 18b, 18c operable to obtain and input data to the processing unit 14. The sensors 18a, 18b, 18c are communicable with the processing unit 14 and the data from the sensors 18a, 18b, 18c input into the processing unit 14 is used to determine which image data stored in the processing unit 14 is to be used to form the delayed image part of the composite image.
Typically, the sensors 18a, 18b, 18c are operable to detect one or more parameters relating to a vehicle's trajectory, speed and/or the relative distance and position of the vehicle with respect to an object of interest. For example, in the illustrated embodiment, sensor 18a comprises a sensor operable to determine the speed at which a vehicle 36 in which the system 10 is implemented is travelling (see Figure 5). Sensor 18b comprises a position sensor operable to determine the angular position of the steering wheel 40 of the vehicle 36. Data from the speed sensor 18a and position sensor 18b can be used in combination to calculate the trajectory of the vehicle in motion. Sensors 18c comprise distance sensors, data from which can be used to determine the distance of objects identified within the image data captured by the camera 12. Data from a number of distance sensors 18c spaced apart on the vehicle can be used to determine the position of an object of interest relative to the vehicle 36. Distance sensors 18c may be ultrasonic, infra-red or radar sensors, for example. It will be appreciated that alternative sensor arrangements can be used, either in addition to or instead of those described, in order to provide the processing unit 14 with the data required to generate the composite image.

As illustrated in Figure 3, the processing unit 14 includes a server 20 for storing image data obtained from the camera 12 and a composite image generator 22 operable to generate the composite image from a real-time image part (formed from image data taken straight from the camera 12) and delayed image part (formed from stored image data within the server 20). The processing unit 14 also includes a vehicle state estimator 24 communicable with the sensors 18a, 18b, 18c to determine the state of the vehicle 36 (which may include information relating to the position or speed of the vehicle, for example) from the one or more parameters measured by the sensors 18a, 18b, 18c, and a vehicle-object state estimator 26 operable to determine how the position of an identified object or area of interest in a previously captured image will vary over time relative to the vehicle 36 on the basis of the estimation made by the vehicle state estimator 24.
The estimation made by the vehicle-object state estimator 26 is used by the composite image generator 22 to determine which previously captured image data will be used to form the delayed image part of the composite image. In the illustrated embodiment, the delayed image part of the composite image is positioned in the composite image at the estimated position of the tow point 38 of the vehicle 36. Accordingly, the prediction made by the vehicle-object state estimator 26 is used by the composite image generator 22 to determine which area of interest in previously captured image data is proximal the position of the tow point in the real-time image 28 at the point in time when the composite image is generated.
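The decomposition of Figure 3 can be made concrete with a short Python sketch. The class names mirror the reference numerals above (the server 20 corresponds to the frame store); their interfaces and internals are assumptions for illustration, with the estimators reduced to a straight-line time-of-arrival prediction.

```python
class VehicleStateEstimator:            # item 24 in Figure 3
    """Fuses sensor inputs into a vehicle state (speed, steering)."""
    def estimate(self, speed_ms, steer_rad):
        return {"speed": speed_ms, "steer": steer_rad}

class VehicleObjectStateEstimator:      # item 26
    """Predicts how an identified area of interest moves relative to the
    vehicle; here reduced to a straight-line time of arrival."""
    def seconds_until_at_tow_point(self, state, distance_m):
        v = state["speed"]
        return distance_m / v if v > 0 else float("inf")

class CompositeImageGenerator:          # item 22
    """Pastes the patch selected from the stored frames (server 20) over
    the tow-point region of the live frame (images as numpy arrays)."""
    def generate(self, live, patch, region):
        y0, y1, x0, x1 = region
        out = live.copy()
        out[y0:y1, x0:x1] = patch
        return out
```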
Figures 4A and 4B are schematic representations of images shown on the display 16 to illustrate the operational use of the camera system 10. The images on the display 16 include a real-time image part 28 which is a live feed taken directly from the camera 12. The real-time image part 28 forms part of the generated composite image. However, as schematically illustrated, the real-time image part 28 does not usefully fill all of the available screen space on the display 16 and there is an obstructed portion 30, represented at the bottom of the image by cross hatching. The obstructed portion 30 may be present due to misalignment of the camera 12 so that the area around the tow point is not visible or may be caused by a physical obstruction present within the field of view of the camera 12. For example, part of the bodywork of the vehicle 36, such as the rear bumper, may obscure the vehicle tow point 38 from the camera's view.
The presence of the obstructed portion 30 is problematic as it prevents the user from being able to view the tow point 38 on the vehicle 36 directly in the real-time image 28. This makes alignment of the vehicle tow point 38 with a corresponding trailer coupling when reversing difficult. The problem is further compounded when the tow point 38 is brought close to the trailer coupling, as the trailer coupling will also become obscured from view by the camera 12.
In accordance with an aspect of the invention, this problem is reduced by generating a composite image comprising a real-time image part 28 and a delayed image part 32 which is used to replace the obstructed portion 30, or at least a portion thereof. The composite image is generated by the processing unit 14 in which the delayed image part 32 is made up from previously captured image data relating to an area of interest to the rear of vehicle which the processing unit 14 has calculated would be located in the obscured portion of the real-time image being replaced at the time of display. In the present example, the delayed image part 32 is used to replace the obstructed portion 30 in the region occupied by the tow point so that the composite image creates a virtual display in which the area about the tow point 38 is shown as if it is part of the real-time image, even though it is not within the field of view of the camera 12 or is otherwise obscured.
An associated problem is glare in a real-time image part caused by the reflection of the sun from the bumper (or other part) of the vehicle within the field of view of the camera. This can detract from the visibility of the tow bar, or of external objects in the vicinity of the bumper. By removing the bumper (or other part of the vehicle) which experiences glare, and replacing it with a delayed image part, the composite image presented to the user can be substantially free of glare and other reflection related artefacts, improving visibility of the tow bar, the trailer hitch coupling and any other objects within the composite image and in particular within the vicinity of the bumper. In other words, the present technique may be used not only to provide a user with a view of a region which is currently physically obstructed, but also to reduce glare related artefacts impacting on non-obscured areas.
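One way such glare substitution could work is sketched below, under the assumption that glare is detected as saturated pixels within the vehicle-body region; the document does not prescribe a detection method, so the threshold test is an illustrative stand-in.

```python
import numpy as np

def replace_glare(real_time, delayed, region, saturation=250):
    """Within the vehicle-body region (y0, y1, x0, x1), swap pixels that
    are saturated by sun glare for the corresponding delayed-image pixels.
    Images are uint8 numpy arrays of identical shape."""
    y0, y1, x0, x1 = region
    out = real_time.copy()
    sub = out[y0:y1, x0:x1]                     # view into the output image
    mask = sub.mean(axis=-1) >= saturation      # near-white glare pixels
    sub[mask] = delayed[y0:y1, x0:x1][mask]     # writes through the view
    return out
```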
Example 1
By way of an example, Figure 4A shows a real-time image taken from the camera 12 at a time of t=0s. This image data is stored by the server 20. Within the image is an object 34, which in this example is a coupling of a trailer, that is determined by means of distance sensors 18c to be at a position 5m away (in a straight line) from the tow point 38 of the vehicle 36 at that point in time. If the vehicle 36 travels at a constant speed of 1 ms⁻¹ in a straight line towards the object 34, then the object 34 will be located proximal to the tow point of the vehicle in the obstructed portion of the real-time image at a time of t=5s. The speed and trajectory of the vehicle is determined by the state estimator 24 using data from the sensors 18a, 18b, 18c and this information is input into the vehicle-object state estimator 26 which determines that the object 34 will be at the position of the tow point 38 after 5s. This data is input into the composite image generator 22 which, at a time t=5s, generates a composite image as shown in Figure 4B formed from a real-time image part 28 produced from live image data received by the camera 12 at t=5s and a delayed image part 32 formed from the image data captured at a time t=0s and stored in the server 20. In the illustrated example, the delayed image part 32 is the area of the stored image around the identified object 34. As shown in Figure 4B, the delayed image part 32 is positioned in the composite image at the approximate position of the tow point 38 of the vehicle 36 with respect to the real-time image 28 within the obstructed portion 30 of the image.
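The arithmetic of this example is trivial but worth pinning down; the snippet below simply reproduces the numbers given above.

```python
# Example 1: coupling seen 5 m from the tow point at t = 0 s, vehicle
# reversing towards it in a straight line at 1 m/s.
distance_m, speed_ms = 5.0, 1.0
arrival_s = distance_m / speed_ms     # = 5.0 s
assert arrival_s == 5.0
# So at t = 5 s the composite image generator pastes the patch around the
# coupling from the frame stored at t = 0 s at the tow-point position.
```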
It is envisaged that the delayed image part 32 of the composite image will be continually updated with image data stored in the server 20. It is not necessary for an object of interest, such as object 34, to have been identified before the delayed image part 32 is generated. In this way, the system 10 of the invention provides a way to effectively increase the field of view of the camera 12 by continually updating the delayed image part 32 of the composite image to show what is (or is calculated to be) proximal to the tow point 38 of the vehicle 36 at all times. For example, in the example described above, at a time t=1s the composite image generator 22 could generate a composite image formed from a real-time image part 28 and a delayed image part 32 formed from image data stored within the server 20 relating to an area of interest in previously captured image data determined to be 1m away from the tow point of the vehicle at time t=0s, whilst at a time of t=2s the delayed image part 32 could use stored image data relating to an area of interest which was 2m away from the tow point at time t=0s or which was 1m away at a time of t=1s.
At any given point in time whilst the system 10 is in use, the processing unit 14 may be able to select from a number of different sets of stored image data which could be used to form the delayed image part 32. In this case, the processing unit 14 may be configured to select the stored image data which will best fit with the real-time image data to form a realistic composite image. For example, in the first example where the vehicle is moving at a constant speed of 1 ms⁻¹, when creating a composite image at a time of t=5s, rather than using stored image data of the area of interest 32 which was 5m from the tow point captured at a time of t=0s, the delayed image part could use image data of the area of interest when it was 1m from the tow point captured at a time of t=4s. In this case, image data captured at t=4s may be selected if it is a better fit with the real-time image than the image data captured at t=0s. In the above examples, the vehicle 36 is travelling in a straight line at a constant speed for simplicity. However, the system 10 is capable of taking into account changes in speed and trajectory of the vehicle in determining which part of the stored image data is to be displayed in the delayed image portion 32 using suitable algorithms and based on information provided to the vehicle state estimator 24 and the vehicle-object state estimator 26 by the sensors. Changes in vehicle speed and/or trajectory are estimated by the vehicle state estimator 24 and the vehicle-object state estimator 26 based on data from the sensors and/or other inputs, which then provide instructions to the composite image generator 22 regarding which area of which stored image should be used as the delayed image part 32 in any generated composite image.
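A minimal stand-in for this "best fit" selection is sketched below, under the assumption that the criterion is agreement along the seam between the delayed patch and the adjoining real-time pixels; the document does not specify the measure, so the mean-squared-difference score is illustrative.

```python
import numpy as np

def best_fit_patch(candidates, adjoining_border):
    """candidates: delayed-image patches (uint8 numpy arrays) that all show
    the area of interest at the tow-point position (e.g. captured at t=0s
    from 5m away, or at t=4s from 1m away); adjoining_border: the row of
    real-time pixels the patch must butt against. Returns the candidate
    whose top edge best matches the seam."""
    def seam_error(patch):
        return float(np.mean((patch[0].astype(float) -
                              adjoining_border.astype(float)) ** 2))
    return min(candidates, key=seam_error)
```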
The composite image may be generated at any feasible rate, dependent on the frame rate of the camera 12 used and the processing speed of the processing unit 14. In the example discussed above, the rate at which the composite image is generated is 1 frame per second, or at least 1 frame of the delayed image part 32 per second. It is, however, envisaged that the composite image be generated at a much quicker rate than in this exemplary embodiment such that the delayed image part 32 effectively appears to be a live image of the area around the tow point 38 of the vehicle 36.
The above embodiments are described by way of example only. Many variations are possible without departing from the scope of the invention as defined in the appended claims. For example, more than one camera 12 can be used to capture image data which can be used to generate the composite image. Furthermore, the delayed image part 32 could be used to fill the whole of the obscured portion of the real-time image portion 28 in the display 16. Indeed, the delayed image portion could be positioned anywhere within the real-time image depending on the requirements of the particular application.

Claims

1. A vehicle camera system comprising:
at least one camera arranged to capture image data;
a display configured to show images formed from the image data captured by the at least one camera; and
a processing unit operable to receive and store image data captured by the at least one camera and to generate a composite image to be shown on the display; wherein the composite image comprises a real-time image part and a delayed image part formed from previously captured stored image data.
2. A vehicle camera system as claimed in claim 1 wherein the camera comprises at least one rear facing camera configured to be positioned at the rear of a vehicle.
3. A vehicle camera system as claimed in claim 2 wherein the processing unit is operable to generate the composite image with the delayed image part positioned within the composite image at the predicted position of a tow point of a vehicle in which the system is implemented.
4. A vehicle camera system as claimed in claim 3 wherein the delayed image part comprises image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated.
5. A vehicle camera system as claimed in any preceding claim wherein the processing unit is operable to continuously update the delayed image part of the composite image.
6. A vehicle camera system as claimed in claim 5 wherein the processing unit is configured such that the rate at which the delayed image part of the composite image is updated is equal to the frame rate of the camera.
7. A vehicle camera system as claimed in any preceding claim comprising one or more sensors operable to measure one or more parameters relating to the position and/or motion of a vehicle in which the system is implemented.
8. A vehicle camera system as claimed in claim 7 wherein the processing unit is operable to generate the composite image in dependence on the values of the one or more measured parameters of the vehicle.
9. A vehicle camera system as claimed in claim 8 wherein the processing unit is operable to select a delayed image part to be presented in the composite image from the stored image data and/or control the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters.
10. A vehicle camera system as claimed in any of claims 7 to 9 wherein at least one of the one or more sensors comprises a motion sensor, a position sensor and/or a distance sensor.
11. A vehicle camera system according to any preceding claim, wherein the delayed image part replaces the real-time image part within at least a portion of the field of view of the camera at which part of the vehicle is visible.
12. A processing unit for a vehicle, the processing unit being configured to receive and store image data captured by at least one camera within or on the vehicle and to generate images to be shown on a display; wherein the processing unit is operable to generate a composite image on the display comprising a real-time image part and a delayed image part formed from previously captured stored image data.
13. A processing unit as claimed in claim 12 operable to generate the composite image with the delayed image part positioned at the predicted position of a tow point of the vehicle.
14. A processing unit of claim 13 wherein the delayed image part comprises image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated.
15. A processing unit as claimed in any of claims 12 to 14 operable to continuously update the delayed image part of the composite image.
16. A processing unit as claimed in claim 15 configured such that the rate at which the delayed image part of the composite image is updated is equal to the frame rate of the camera.
17. A processing unit as claimed in any of claims 12 to 16 wherein the processing unit is communicable with one or more sensors operable to measure one or more parameters relating to the position and/or motion of the vehicle and is operable to generate a composite image in dependence on the values of the one or more measured parameters of the vehicle.
18. A processing unit as claimed in claim 17 wherein the processing unit is communicable with at least one motion sensor, at least one position sensor and/or at least one distance sensor.
19. A processing unit as claimed in any one of claims 12 to 18 wherein the delayed image part replaces the real-time image part within at least a portion of the field of view of the camera within which part of the vehicle is visible.
20. A vehicle comprising the vehicle camera system as claimed in any one of claims 1 to 1 1 or the processing unit as claimed in any one of claims 12 to 19.
21. A method of forming a composite image from at least one camera mounted within or on a vehicle comprising:
using the at least one camera to capture image data;
storing image data captured by the at least one camera; and
generating a composite image from image data comprising a real-time image part and a delayed image part formed from the stored image data.
22. A method of claim 21 comprising using at least one rear facing camera positioned at the rear of a vehicle to capture the image data.
23. A method of claim 22 comprising positioning the delayed image part at the predicted position of a tow point of the vehicle.
24. A method as claimed in claim 23 comprising forming the delayed image part from image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated.
25. A method of any one of claims 21 to 24 comprising continuously updating the delayed image part of the composite image.
26. A method of claim 25 comprising updating the delayed image part of the composite image at a rate which is equal to the frame rate of the camera.
27. A method of any of claims 21 to 26 comprising using one or more sensors to measure one or more parameters relating to the position and/or motion of the vehicle and generating a composite image in dependence on the values of the one or more measured parameters of the vehicle.
28. A method of claim 27 comprising selecting a delayed image part to be presented in the composite image from the stored image data in dependence on the value(s) of the one or more measured parameters and/or controlling the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters.
29. A method of claim 27 or claim 28, the method comprising selecting for use as the delayed image part, stored image data relating to an area of interest in a previously captured image which, at the time the composite image is generated, is determined would occupy a position relative to the real-time image part corresponding to the location of the delayed image part in the composite image, taking into account sensed movement of the vehicle since the previously captured image data was captured.
30. A method of claim 27 or claim 28 comprising using at least one motion sensor at least one position sensor and/or at least one distance sensor.
31. A method of any one of claims 21 to 30, wherein the delayed image part replaces the real-time image part within at least a portion of the field of view of the camera within which part of the vehicle is visible.
32. A program for a computer configured to perform the method according to any one of claims 21 to 31 when executed on a computer.
33. A data carrier comprising a computer program as claimed in claim 32.
PCT/EP2017/071205 2016-08-26 2017-08-23 A vehicle camera system WO2018037032A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1614551.8 2016-08-26
GB1614551.8A GB2553143A (en) 2016-08-26 2016-08-26 A Vehicle camera system

Publications (1)

Publication Number Publication Date
WO2018037032A1 true WO2018037032A1 (en) 2018-03-01

Family ID=57119800

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/071205 WO2018037032A1 (en) 2016-08-26 2017-08-23 A vehicle camera system

Country Status (2)

Country Link
GB (1) GB2553143A (en)
WO (1) WO2018037032A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10357715B2 (en) * 2017-07-07 2019-07-23 Buxton Global Enterprises, Inc. Racing simulation

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4156214B2 (en) * 2001-06-13 2008-09-24 株式会社デンソー Vehicle periphery image processing apparatus and recording medium
JP4593070B2 (en) * 2001-12-12 2010-12-08 株式会社エクォス・リサーチ Image processing apparatus for vehicle
JP4670463B2 (en) * 2005-04-28 2011-04-13 アイシン・エィ・ダブリュ株式会社 Parking space monitoring device
JP4815993B2 (en) * 2005-10-19 2011-11-16 アイシン・エィ・ダブリュ株式会社 Parking support method and parking support device
JP2008109283A (en) * 2006-10-24 2008-05-08 Nissan Motor Co Ltd Vehicle periphery display device and method for presenting visual information
EP2161195B1 (en) * 2008-09-08 2012-04-18 Thales Avionics, Inc. A system and method for providing a live mapping display in a vehicle
DE102014223941A1 (en) * 2014-11-25 2016-05-25 Robert Bosch Gmbh Method for marking camera images of a parking maneuver assistant

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020149673A1 (en) * 2001-03-29 2002-10-17 Matsushita Electric Industrial Co., Ltd. Image display method and apparatus for rearview system
GB2469438A (en) * 2009-03-09 2010-10-20 Applic Solutions Displaying movement of an object
GB2513393A (en) * 2013-04-26 2014-10-29 Jaguar Land Rover Ltd Vehicle hitch assistance system
DE102013207906A1 (en) * 2013-04-30 2014-10-30 Bayerische Motoren Werke Aktiengesellschaft Guided vehicle positioning for inductive charging with the help of a vehicle camera

Also Published As

Publication number Publication date
GB2553143A (en) 2018-02-28
GB201614551D0 (en) 2016-10-12

Similar Documents

Publication Publication Date Title
US11528413B2 (en) Image processing apparatus and image processing method to generate and display an image based on a vehicle movement
US9902323B2 (en) Periphery surveillance apparatus and program
US8441536B2 (en) Vehicle periphery displaying apparatus
US20160375831A1 (en) Hitching assist with pan/zoom and virtual top-view
CN107021018B (en) Visual system of commercial vehicle
JP4782963B2 (en) Device for monitoring the surroundings of a parked vehicle
CN103108796B (en) The method of assisting vehicle shut-down operation, driver assistance system and power actuated vehicle
US20160297362A1 (en) Vehicle exterior side-camera systems and methods
JP2019509204A (en) Vehicle-trailer retraction system with hitch angle detection and trailer shape learning that does not require a target
US20110228980A1 (en) Control apparatus and vehicle surrounding monitoring apparatus
JP5182137B2 (en) Vehicle periphery display device
WO2018159017A1 (en) Vehicle display control device, vehicle display system, vehicle display control method and program
GB2554427B (en) Method and device for detecting a trailer
WO2018150642A1 (en) Surroundings monitoring device
EP4070996B1 (en) Auto panning camera mirror system including image based trailer angle detection
US20140285665A1 (en) Apparatus and Method for Assisting Parking
JP2016175549A (en) Safety confirmation support device, safety confirmation support method
JP2010264945A (en) Parking support device, parking support method and parking support program
JP6720729B2 (en) Display controller
JP5083142B2 (en) Vehicle periphery monitoring device
WO2018037032A1 (en) A vehicle camera system
KR20160107529A (en) Apparatus and method for parking assist animated a car image
JP6439233B2 (en) Image display apparatus for vehicle and image processing method
JP5083137B2 (en) Driving assistance device
JP6961882B2 (en) Parking support device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17757532

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17757532

Country of ref document: EP

Kind code of ref document: A1