GB2553143A - A Vehicle camera system

Info

Publication number: GB2553143A
Application number: GB1614551.8A
Authority: GB (United Kingdom)
Legal status: Withdrawn
Prior art keywords: vehicle, image, camera, processing unit, delayed
Inventors: Adam Hussein Adwan, Giovanni Strano
Assignee: Jaguar Land Rover Ltd
Other versions: GB201614551D0 (en)
Application filed by Jaguar Land Rover Ltd
Priority to GB1614551.8A
Publication of GB201614551D0
Priority to PCT/EP2017/071205 (WO2018037032A1)
Publication of GB2553143A

Classifications

    • B60R1/00 Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/26 Real-time viewing arrangements for viewing an area outside the vehicle, with a predetermined field of view to the rear of the vehicle
    • B60R1/003 Arrangements specially adapted for covering the peripheral part of the vehicle, e.g. for viewing trailer hitches
    • B60R1/04 Rear-view mirror arrangements mounted inside the vehicle
    • B60R2001/1253 Mirror assemblies combined with cameras, video cameras or video screens
    • B60R2300/303 Viewing arrangements using cameras and displays characterised by image processing using joined images, e.g. multiple camera images
    • B60R2300/304 Viewing arrangements using cameras and displays characterised by image processing using merged images, e.g. merging camera image with stored images
    • B60R2300/808 Viewing arrangements characterised by the intended use: facilitating docking to a trailer

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

A vehicle camera system for use in a vehicle comprising: a camera 12, which may be rear facing, arranged to capture image data; a display 16 configured to show images formed from the image data captured by the camera; and a processing unit 14 operable to receive and store image data captured by the camera, and to generate a composite image to be shown on the display; wherein the composite image comprises a real-time image part and a delayed image part formed from previously captured stored image data. The composite image may be generated with the delayed image part positioned within the composite image at a predicted position of a tow point, and the delayed image part may comprise image data relating to an area, or an object within an area, of a previously captured image which is determined to have moved to an area proximal to the vehicle tow point. A method is also provided, which may include using at least one camera mounted on a vehicle to capture image data; storing image data captured by the at least one camera; and generating a composite image comprising a real-time image part and a delayed image part.

Description

(71) Applicant(s): Jaguar Land Rover Limited (Incorporated in the United Kingdom), Abbey Road, Whitley, Coventry, Warwickshire, CV3 4LF, United Kingdom
(72) Inventor(s): Adam Hussein Adwan; Giovanni Strano
(56) Documents Cited: EP 2161195 A; WO 2016/082961 A1; JP 2008109283 A; JP 2006311299 A; JP 2003244688 A; US 20070088474 A1; US 20030165255 A
(58) Field of Search: INT CL B60R, B62D, G08G; Other: WPI, EPODOC
(74) Agent and/or Address for Service: Jaguar Land Rover, Patents Department W/1/073, Abbey Road, Whitley, Coventry, Warwickshire, CV3 4LF, United Kingdom
(54) Title of the Invention: A Vehicle camera system
(57) Abstract Title: A vehicle camera system which uses a processor to produce a composite image comprising a real-time image and a delayed image on a display
A VEHICLE CAMERA SYSTEM
TECHNICAL FIELD
The present disclosure relates to a vehicle camera system. Aspects of the invention relate to a vehicle camera system suitable for assisting in the alignment of a vehicle tow point with a corresponding trailer coupling, a processing unit for a vehicle, a vehicle comprising a vehicle camera system, and a method of forming a composite image from a camera positioned within a vehicle which may improve or assist in the alignment of a vehicle tow point with a corresponding coupling.
BACKGROUND
It is known to provide a camera or cameras on the rear of a vehicle in order to capture images behind the vehicle. Such images are typically used to assist a driver when reversing the vehicle. Systems of this type are useful in situations wherein a driver is reversing a vehicle towards a trailer or the like in order to position a tow point on the vehicle proximal to the trailer coupling in order to allow the trailer to be hitched onto the vehicle for towing. To assist further, it is known to incorporate in a displayed image a predicted trajectory of the vehicle as a graphic which is overlaid onto the image obtained by the camera. In some known systems the projected graphic comprises a guide line which emanates from the tow point of the vehicle to display a predicted/recommended trajectory of the tow point to act as a guide in assisting the driver in positioning the vehicle tow point adjacent the trailer coupling ready for hitching the trailer.
In some known systems, the tow point on the rear of the vehicle is within the field of view of the rear facing camera. In such systems, the position of the tow point relative to the trailer hitch coupling can be seen. However, in some instances the tow point may not be within the field of view of the camera and therefore the position of the tow point relative to the trailer coupling cannot easily be determined. For instance, due to the relative positioning of the tow point and the camera on the rear of the vehicle, the tow point may be obscured in the image obtained by the camera by part of the bodywork of the vehicle. Alternatively, given that the tow point is typically located lower than the camera, the camera must be pointed downwards to capture the tow point, and keeping the tow point within the field of view of the camera inevitably leads to a reduction in the distance behind the vehicle which can be seen. In some instances this may not be desirable or acceptable.
To overcome this issue, it is known to project the position (or the estimated position) of the tow point onto the image captured by the camera. However, given that it is common for the tow point to be outside of the field of view of the camera, the position of the tow point can fall outside of the boundary of the captured image. This solution is therefore incomplete as the trailer coupling will also move outside the boundary of the captured image as it approaches the tow point, at which time its distance from, and relative movement to, the tow point cannot be determined. This can typically lead to a misalignment of the tow point with respect to the trailer coupling.
There is therefore a need to provide a system which maintains an acceptable field of view of the captured image whilst simultaneously allowing the area around the tow point of the vehicle to be visible.
It is an aim of the present invention to address disadvantages associated with the prior art.
SUMMARY OF THE INVENTION
Aspects and embodiments of the invention provide a vehicle camera system, a processing unit, a vehicle, a method, a computer program and a data carrier as claimed in the appended claims.
According to an aspect of the invention, there is provided a vehicle camera system comprising: at least one camera arranged to capture image data; a display configured to show images formed from the captured image data; and a processing unit operable to receive and store image data captured by the at least one camera and to generate a composite image to be shown on the display; wherein the composite image comprises a real-time image part formed from image data captured by the at least one camera at that point in time and a delayed image part formed from previously captured stored image data.
The system provides a means to generate and display a composite image including a delayed image part which may lie outside of the current field of view of the camera or is positioned within an area of the real-time image part which is obstructed in some way at a particular point in time. This is particularly advantageous where the camera is obstructed or misaligned resulting in an area which would otherwise be visible not being present in the real-time image part. The system of the invention provides a means to effectively eliminate obstructions within the real-time image part and/or to effectively increase the field of view of the camera.
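By way of illustration only, the compositing step can be sketched as a patch overlay. The following Python sketch is not taken from the patent; the array layout, function names and the fixed patch position are all assumptions made for the example.

```python
import numpy as np

def generate_composite(live_frame: np.ndarray,
                       delayed_patch: np.ndarray,
                       top_left: tuple[int, int]) -> np.ndarray:
    """Paste a previously captured patch into the live camera frame.

    live_frame    -- current frame from the camera, shape (H, W, 3)
    delayed_patch -- region cut from a stored frame, shape (h, w, 3)
    top_left      -- (row, col) where the delayed image part belongs,
                     e.g. the predicted tow-point region
    """
    composite = live_frame.copy()
    r, c = top_left
    h, w = delayed_patch.shape[:2]
    # The obstructed region is replaced by the delayed image part.
    composite[r:r + h, c:c + w] = delayed_patch
    return composite
```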
In some embodiments, the processing unit is operable in a moving vehicle to select, for use in the delayed image part, previously captured image data relating to an area of interest in a previously captured image which, at the time the composite image is generated, is determined would lie at the position, relative to the real-time image, at which the delayed image part is displayed.
In some embodiments the at least one camera comprises at least one rear facing camera configured to be positioned at the rear of a vehicle. The at least one camera may be arranged to capture image data to the rear of the vehicle. The system may comprise more than one camera, in which case at least two of the cameras may be rear facing.
The processing unit may be operable to generate the composite image with the delayed image part positioned at a set location within the composite image relative to the real-time image part. The delayed image part may in some embodiments be positioned within the composite image at the predicted position of a tow point of a vehicle in which the system is implemented.
The delayed image part may comprise image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated. The area or object present in the previously captured image may have moved due to the movement of the vehicle, the movement of the object within the area with respect to the vehicle or both the movement of the vehicle and the object.
In this way, the vehicle camera system may be suitable for assisting in the alignment of a vehicle tow point with a corresponding trailer coupling. In particular, the vehicle camera system of the invention may assist in the alignment of a vehicle tow point with a corresponding trailer coupling where the tow point of the vehicle is not within the field of view of the camera.
The processing unit may be operable to continuously update the delayed image part of the composite image. For example, each time the composite image is generated, the processing unit may be operable to form a delayed image part from any previously captured image data which is determined to be proximal to the tow point of the vehicle in which the system is implemented.
In use, previously captured image data determined to be proximal to the tow point of the vehicle will vary as the vehicle moves (or objects within the environment of the vehicle move). The processing unit may therefore be operable to determine which area of which previously captured image data is proximal to the tow point of the vehicle at any given time. The processing unit may be operable to determine this on the basis of the movement of the vehicle.
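For the simple case of straight-line reversing, determining which stored data is proximal to the tow point reduces to an odometry lookup. The sketch below assumes each frame is stored with an odometer reading and that a fixed distance (a "blind gap") separates the bottom of the camera's field of view from the tow point; these names and the buffer layout are illustrative assumptions, not details from the patent.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class StoredFrame:
    timestamp: float    # seconds at capture
    odometer: float     # metres travelled at capture
    frame: np.ndarray   # the captured image data

def select_delayed_frame(buffer: list[StoredFrame],
                         odometer_now: float,
                         blind_gap_m: float) -> StoredFrame:
    """Pick the stored frame whose bottom-edge ground area now lies at
    the tow point.  Ground visible at the bottom of the camera's field
    of view sits `blind_gap_m` metres from the tow point, so the right
    frame is the one captured `blind_gap_m` metres of travel ago
    (straight-line reversing assumed)."""
    target = odometer_now - blind_gap_m
    return min(buffer, key=lambda f: abs(f.odometer - target))
```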
The processing unit may be configured such that the rate at which the delayed image part of the composite image is updated may be equal to or substantially equal to the frame rate of the camera and hence the frame rate of the real-time image part of the composite image. In this way, the vehicle camera system of the invention provides a means to effectively show to a user a simulated “live” image of the area surrounding the tow point of the vehicle, even where the tow point is not within the field of view of the camera.
The camera system may comprise one or more sensors operable to measure one or more parameters relating to the position and/or motion of a vehicle in which the system is implemented. The one or more sensors may be communicable with the processing unit to be able to input data relating to the one or more parameters into the processing unit. In some embodiments the processing unit may be operable to generate the composite image in dependence on the values of the one or more measured parameters of the vehicle.
The processing unit may be operable to select a delayed image part to be presented in the composite image from the stored image data in dependence on the value(s) of the one or more measured parameters. Additionally or alternatively, the processing unit may be operable to control the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters.
In this way, the camera system of the invention provides a means to select a delayed image part from the stored image data and/or position the delayed image part in relation to the real-time image part in the composite image to show where an object or objects within the delayed image part are positioned in relation to the real-time image part. This is particularly advantageous where an obstructed region within the real-time image part can be replaced with the delayed image part in the composite image to effectively “see through” the obstruction, or where the delayed image part can be placed at or adjacent to a boundary of the real-time image part within the composite image to effectively increase the field of view of the camera.
At least one of the one or more sensors may comprise a motion sensor. In some embodiments the system may comprise one or more motion sensors operable to determine the speed at which a vehicle in which the system is implemented is travelling. The motion sensor may be operable to measure the speed of rotation of one or more wheels of the vehicle. In some embodiments the motion sensor comprises a speedometer within the vehicle.
At least one of the one or more sensors may comprise a position sensor. The, or each, position sensor may be operable to determine the position of one or more components of a vehicle within which the system is implemented or the position of the vehicle itself. For example, in some embodiments the system may comprise at least one position sensor operable to determine the angular position of the steering wheel of the vehicle. Additionally or alternatively, the system may comprise at least one position sensor operable to determine the angular position of one or both of the front wheels of the vehicle. In some embodiments the system may comprise at least one position sensor operable to determine the angular position of the vehicle relative to one or more objects identified in the captured image data. The angular position of the vehicle may be the yaw angle of the vehicle, for example.
At least one of the one or more sensors may comprise a distance sensor. The, or each, distance sensor may be operable to determine the distance between the sensor and one or more objects identified within the image data captured by the camera. When implemented in a vehicle the, or each, distance sensor may be located on the rear of a vehicle and be operable to determine the distance between the rear of the vehicle and the one or more objects identified in the captured image data. The, or each, distance sensor may comprise an ultrasonic sensor, an infra-red sensor or a radar sensor, for example.
This is advantageous as the distance information obtained by the one or more distance sensors may be used by the processing unit to determine the distance between any identified object and an obstructed area or a relevant boundary of the real-time image part.
In embodiments wherein the system comprises one or more distance sensors in conjunction with one or more motion and/or position sensors, the processing unit may be operable to calculate the time it may take any identified object to move to an obstructed area or to a relevant boundary of the real-time image part from its position when identified. In such embodiments, the processing unit may be operable to generate the composite image with the delayed image part containing the identified object positioned at the obstructed area or relevant boundary of the real-time image part after the calculated time has elapsed.
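In the straight-line case, the calculated time is simply the sensed distance divided by the closing speed. A minimal sketch with hypothetical names, which also reproduces the arithmetic of Example 1 later in the description:

```python
def time_to_towpoint(distance_m: float, speed_ms: float) -> float:
    """Time for an identified object to reach the obstructed tow-point
    region, assuming a constant closing speed in a straight line."""
    if speed_ms <= 0.0:
        raise ValueError("vehicle must be closing on the object")
    return distance_m / speed_ms

# e.g. a coupling sensed 5 m from the tow point with the vehicle
# reversing at 1 m/s: the stored image of the coupling is shown at the
# tow-point position 5 s after it was captured.
delay_s = time_to_towpoint(5.0, 1.0)   # -> 5.0
```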
In this way, the system provides a means to show to a user the position of an identified object even where that object has moved out of the field of view of the camera or behind an obstruction within the field of view of the camera. This is particularly beneficial in instances wherein the camera system is implemented on a vehicle wherein the tow point of the vehicle is out of the field of view of the camera or behind an obstruction within the field of view of the camera and the identified object is a relevant coupling or trailer to be coupled to the tow point of the vehicle.
In some embodiments the processing unit may be operable to overlay on the composite image a graphical representation of a suggested trajectory for a vehicle within which the system is implemented. For example, the processing unit may be operable to display a graphical representation on the composite image which comprises a line emanating from the tow point of the vehicle (or predicted position of the tow point of the vehicle within the composite image) which extends across the composite image to illustrate the trajectory of the tow point of the vehicle which needs to be taken to align the tow point with an identified object.
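The patent does not specify how such a trajectory is computed. One plausible sketch uses a constant-steering arc from a kinematic bicycle model to generate ground-plane points for the guide line, leaving the projection into image coordinates (e.g. via a calibrated homography) to the display pipeline; all names and the model choice below are assumptions.

```python
import numpy as np

def towpoint_guide_arc(speed_ms: float, steer_rad: float,
                       wheelbase_m: float, horizon_s: float = 5.0,
                       steps: int = 50) -> np.ndarray:
    """Predicted tow-point path on the ground plane (vehicle frame),
    using a kinematic bicycle model with constant speed and steering.
    Returns an (steps, 2) array of (x, y) points, x along the vehicle's
    direction of travel, y lateral."""
    yaw_rate = speed_ms * np.tan(steer_rad) / wheelbase_m
    t = np.linspace(0.0, horizon_s, steps)
    if abs(yaw_rate) < 1e-6:             # effectively straight ahead
        return np.stack([speed_ms * t, np.zeros_like(t)], axis=1)
    radius = speed_ms / yaw_rate         # turning radius of the arc
    x = radius * np.sin(yaw_rate * t)
    y = radius * (1.0 - np.cos(yaw_rate * t))
    return np.stack([x, y], axis=1)
```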
In some embodiments the system may comprise two or more cameras. In such embodiments, the processing unit may be operable to receive and store image data from each camera. The processing unit may be operable to form a single composite image from the image data from each of the two or more cameras. The delayed image part of the composite image may be formed from image data from at least one of the cameras. In some embodiments the delayed image part of the composite image may be formed from image data from each of the two or more cameras. Providing two or more cameras in the system may increase the field of view of the composite image.
In some embodiments the display may comprise an LCD or LED monitor, for example. The display may be configured to be positioned within the interior of a vehicle. In some embodiments the display is configured to be positioned within the dashboard of a vehicle. The display may be configured to be positioned within the centre console of a vehicle. In further embodiments, the display may be configured to form at least part of a mirror of a vehicle, which may be a rear-view mirror such as an interior mirror of a vehicle, for example.
According to an aspect of the invention, there is provided a vehicle camera system configured to be implemented within a vehicle comprising: at least one rear-facing camera arranged to capture image data to the rear of the vehicle; a display configured to show images formed from the image data captured by the at least one camera; and a processing unit operable to store image data captured by the at least one camera and to generate a composite image to be shown on the display; wherein the composite image comprises a real-time image part formed from image data captured at that point in time and a delayed image part formed from previously captured stored image data; wherein the delayed image part is positioned within the composite image with respect to the real-time image part at a predicted position of a tow point of the vehicle.
The vehicle camera system of this aspect of the invention may incorporate any or all of the features of the preceding aspect of the invention as desired or appropriate.
According to another aspect of the invention, there is provided a processing unit for a vehicle, the processing unit being configured to receive and store image data captured by at least one camera within or on the vehicle and to generate images to be shown on a display; wherein the processing unit is operable to generate a composite image on the display comprising a real-time image part formed from image data captured at that point in time and a delayed image part formed from previously captured stored image data.
The processing unit of the invention is able to generate and display a composite image including a delayed image part which may lie outside of the field of view of the camera or is positioned within an area of the real-time image part which is obstructed in some way at a particular point in time. This is particularly advantageous where the camera is obstructed or misaligned resulting in an area which would otherwise be visible not being present in the real-time image part. The processing unit of the invention provides a means to effectively eliminate obstructions within the real-time image part or to effectively increase the field of view of the camera.
The processing unit may be operable to generate the composite image with the delayed image part positioned at a set location within the composite image relative to the real-time image part. The delayed image part may in some embodiments be positioned within the composite image at the predicted position of a tow point of the vehicle.
The delayed image part may comprise image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated. The area or object present in the previously captured image may have moved due to the movement of the vehicle, the movement of the object within the area or both the movement of the vehicle and the object/area.
In this way, the processing unit may be used to assist in the alignment of a vehicle tow point with a corresponding trailer coupling where the tow point of the vehicle is not within the field of view of the camera.
The processing unit may be operable to continuously update the delayed image part of the composite image. For example, each time the composite image is generated, the processing unit may be operable to form a delayed image part from any previously captured image data which is determined to be proximal to the tow point of the vehicle.
In use, previously captured image data determined to be proximal to the tow point of the vehicle will vary as the vehicle moves (or objects within the environment of the vehicle move). The processing unit may therefore be operable to determine which area of which previously captured image data is proximal to the tow point of the vehicle at any given time. The processing unit may be operable to determine this on the basis of the movement of the vehicle.
The processing unit may be configured such that the rate at which the delayed image part of the composite image is updated may be equal to or substantially equal to the frame rate of the camera and hence the frame rate of the real-time image part of the composite image. In this way, the processing unit of the invention is able to effectively generate a “live” image of the area surrounding the tow point of the vehicle, even where the tow point is not within the field of view of the camera.
In some embodiments the processing unit may be communicable with one or more sensors. The one or more sensors may be operable to measure one or more parameters relating to the position and/or motion of the vehicle. In such embodiments, the processing unit may be configured to receive data from the one or more sensors relating to the one or more parameters of the vehicle. In some embodiments the processing unit may be operable to generate a composite image in dependence on the values of the one or more measured parameters of the vehicle.
The processing unit may be operable to select a delayed image part presented in the composite image from the stored image data in dependence on the value(s) of the one or more measured parameters. Additionally or alternatively, the processing unit may be operable to control the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters.
In this way, the processing unit of the invention is capable of selecting a delayed image part from the stored image data and/or positioning the delayed image part in relation to the real-time image part in the composite image to show where an object or objects within the delayed image part are positioned in relation to the real-time image part. This is particularly advantageous where an obstructed region within the real-time image part can be replaced with the delayed image part in the composite image to effectively “see through” the obstruction, or where the delayed image part can be placed at or adjacent to a boundary of the real-time image part within the composite image to effectively increase the field of view of the camera.
The processing unit may be communicable with at least one motion sensor which may be operable to determine the speed at which the vehicle is travelling.
This is advantageous as the speed information received by the processing unit from the or each motion sensor may be used by the processing unit to determine the closing speed between any identified object and the vehicle.
The processing unit may be communicable with at least one position sensor which may be operable to determine the position of one or more components of the vehicle or the position of the vehicle itself. For example, the one or more components of the vehicle may comprise the steering wheel and the, or each, position sensor may be operable to determine the angular position of the steering wheel.
This is advantageous as the position information received by the processing unit from the, or each, position sensor may be used by the processing unit to predict the trajectory of the vehicle.
The processing unit may be communicable with at least one distance sensor which may be operable to determine the distance between the sensor and one or more objects identified within the image data captured by the camera.
This is advantageous as the distance information received by the processing unit from the or each distance sensor may be used by the processing unit to determine the distance between any identified object and an obstructed area or a relevant boundary of the real-time image part.
In embodiments wherein the processing unit is communicable with one or more distance sensors and one or more motion and/or position sensors, the processing unit may be operable to calculate the time it may take any identified object to move to an obstructed area or to a relevant boundary of the real-time image part from its position when identified. In such embodiments, the processing unit may be operable to generate the composite image with the delayed image part containing the identified object positioned at the obstructed area or relevant boundary of the real-time image part after the calculated time has elapsed.
In this way, the processing unit is operable to generate a composite image to show to a user the position of an identified object even where that object has moved out of the field of view of the camera or behind an obstruction within the field of view of the camera. This is particularly beneficial in instances wherein the processing unit is implemented on a vehicle wherein the tow point of the vehicle is out of the field of view of the camera or behind an obstruction within the field of view of the camera and the identified object is a relevant coupling or trailer to be coupled to the tow point of the vehicle.
In some embodiments the processing unit may be operable to overlay on the composite image a graphical representation of a suggested trajectory for a vehicle within which the system is implemented. For example, the processing unit may be operable to display a graphical representation on the composite image which comprises a line emanating from the tow point of the vehicle (or predicted position of the tow point of the vehicle within the composite image) which extends across the composite image to illustrate a suggested trajectory of the tow point of the vehicle which needs to be taken in order to align the tow point with an identified object.
According to another aspect of the invention, there is provided a vehicle comprising a vehicle camera system or a processing unit in accordance with any of the preceding aspects of the present invention.
The vehicle may comprise a motor vehicle. The vehicle may comprise a road vehicle. The vehicle may be a car.
According to a further aspect of the invention, there is provided a method of forming a composite image from at least one camera mounted within or on a vehicle comprising:
using the at least one camera to capture image data;
storing image data captured by the at least one camera; and
generating a composite image from captured image data comprising a real-time image part formed from image data captured at that point in time and a delayed image part formed from stored captured image data.
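Read together, the three steps amount to a per-frame loop. The sketch below strings together the hypothetical helpers from the earlier sketches (generate_composite, StoredFrame, select_delayed_frame); the camera stub, crop window and blind-gap value are likewise assumptions made for illustration.

```python
import numpy as np

def read_camera() -> np.ndarray:
    # Stub standing in for the real camera interface (assumption).
    return np.zeros((480, 640, 3), dtype=np.uint8)

def run_once(buffer: list, odometer_now: float, clock_s: float) -> np.ndarray:
    """One iteration: capture -> store -> select delayed part -> composite."""
    frame = read_camera()
    buffer.append(StoredFrame(timestamp=clock_s,
                              odometer=odometer_now, frame=frame))
    chosen = select_delayed_frame(buffer, odometer_now, blind_gap_m=1.5)
    patch = chosen.frame[-120:, 260:380]       # assumed tow-point crop
    return generate_composite(frame, patch, top_left=(360, 260))
```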
The method of the invention provides a means to generate and display a composite image including a delayed image part which may lie outside of the field of view of the camera or is positioned within an area of the real-time image part which is obstructed in some way at a particular point in time. This is particularly advantageous where the camera is obstructed or misaligned resulting in an area which would otherwise be visible not being present in the real-time image part. The method of the invention provides a means to effectively eliminate obstructions within the real-time image part or to effectively increase the field of view of the camera.
In some embodiments the method comprises using at least one rear facing camera positioned at the rear of a vehicle. The method may comprise capturing image data to the rear of the vehicle. In this way, the method may be used to assist in the alignment of a vehicle tow point with a corresponding trailer coupling.
The method may comprise positioning the delayed image part at a set location within the composite image relative to the real-time image part. In some embodiments the method may comprise positioning the delayed image part at the predicted position of a tow point of the vehicle.
In some embodiments the method may comprise forming the delayed image part from image data relating to an area of a previously captured image, or object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated. The area or object present in the previously captured image may have moved due to the movement of the vehicle, the movement of the object within the area or both the movement of the vehicle and the object/area.
In this way, the method may be used to assist in the alignment of a vehicle tow point with a corresponding trailer coupling where the tow point of the vehicle is not within the field of view of the camera.
The method may comprise continuously updating the delayed image part of the composite image. For example, each time the composite image is generated, the method may comprise forming a delayed image part from any previously captured image data which is determined to be proximal to the tow point of the vehicle in which the system is implemented.
Previously captured image data determined to be proximal to the tow point of the vehicle will vary as the vehicle moves (or objects within the environment of the vehicle move). The method may therefore comprise determining which area of which previously captured image data is proximal to the tow point of the vehicle at any given time. The method may comprise determining which area of which previously captured image data is proximal to the tow point of the vehicle on the basis of the movement of the vehicle.
The method may comprise updating the delayed image part of the composite image at a rate which is equal to or substantially equal to the frame rate of the camera and hence the frame rate of the real-time image part of the composite image. In this way, the method may generate a “live” image of the area surrounding the tow point of the vehicle, even where the tow point is not within the field of view of the camera.
The method may comprise using one or more sensors to measure one or more parameters relating to the position and/or motion of the vehicle. In some embodiments the method may comprise generating a composite image in dependence on the values of the one or more measured parameters of the vehicle.
The method may comprise selecting a delayed image part presented in the composite image from the stored image data in dependence on the value(s) of the one or more measured parameters. Additionally or alternatively, the method may comprise controlling the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters. The method may comprise selecting as the delayed image part stored image data relating to an area of interest in a previously captured image which, at the time the composite image is generated, is determined would occupy a position relative to the real-time image corresponding to the location of the delayed image part in the composite image, taking into account sensed movement of the vehicle since the previously captured image data was captured.
In this way, the method may be used to select a delayed image part from the stored image data and/or position the delayed image part in relation to the real-time image part in the composite image to show where an object or objects within the delayed image part are positioned in relation to the real-time image part. This is particularly advantageous where an obstructed region within the real-time image part can be replaced with the delayed image part in the composite image to effectively “see through” the obstruction, or where the delayed image part can be placed at or adjacent to a boundary of the real-time image part within the composite image to effectively increase the field of view of the camera.
In some embodiments the method may comprise using at least one motion sensor. In such embodiments, the method may comprise using the, or each, motion sensor to determine the speed at which the vehicle is travelling.
The method may comprise using at least one position sensor. In such embodiments, the method may comprise using the, or each, position sensor to determine the position of one or more components of the vehicle or the position of the vehicle itself. For example, in some embodiments the method may comprise using at least one position sensor to determine the angular position of the steering wheel of the vehicle. Additionally or alternatively, the method may comprise using at least one position sensor to determine the angular position of one or both of the front wheels of the vehicle. In some embodiments the method may comprise using at least one position sensor to determine the angular position of the vehicle relative to one or more objects identified in the captured image data. The angular position of the vehicle may be the yaw angle of the vehicle, for example.
The method may comprise using at least one distance sensor. In such embodiments, the method may comprise using the, or each, distance sensor to determine the distance between the sensor and one or more objects identified within the image data captured by the camera. The or each distance sensor may be located on the rear of a vehicle and the method may comprise using the or each distance sensor to determine the distance between the rear of the vehicle and the one or more objects identified in the captured image data. The, or each, distance sensor may comprise an ultrasonic sensor, an infra-red sensor or a radar sensor, for example.
This is advantageous as the distance information obtained by the one or more distance sensors may be used by the processing unit to determine the distance between any identified object and an obstructed area or a relevant boundary of the real-time image part.
In embodiments wherein the method comprises using one or more distance sensors in conjunction with one or more motion and/or position sensors, the method may additionally comprise calculating the time it may take any identified object to move to an obstructed area or to a relevant boundary of the real-time image part from its position when identified. In such embodiments, the method may comprise generating the composite image with the delayed image part containing the identified object positioned at the obstructed area or relevant boundary of the real-time image part after the calculated time has elapsed.
In this way, the method may be used to show to a user the position of an identified object even where that object has moved out of the field of view of the camera or behind an obstruction within the field of view of the camera. This is particularly beneficial in instances wherein the tow point of the vehicle is out of the field of view of the camera or behind an obstruction within the field of view of the camera and the identified object is a relevant coupling or trailer to be coupled to the tow point of the vehicle.
In some embodiments the method may comprise overlaying a graphical representation of the predicted trajectory of the vehicle on the composite image. For example, the method may comprise forming a line emanating from the tow point of the vehicle (or predicted position of the tow point of the vehicle within the composite image) which extends across the generated composite image to illustrate the predicted trajectory of the tow point of the vehicle.
In some embodiments the method may comprise using two or more cameras. In such embodiments, the method may comprise receiving and storing image data from each camera. The method may comprise forming a single composite image from the image data from each of the two or more cameras. The delayed image part of the composite image may be formed from image data from at least one of the cameras. In some embodiments the delayed image part of the composite image may be formed from image data from each of the two or more cameras. Using two or more cameras may increase the field of view of the composite image.
In some embodiments the method comprises displaying the composite image on a display within the vehicle. The method may comprise displaying the composite image on a display positioned within the dashboard or centre console of a vehicle. In some embodiments the method may comprise displaying the composite image on a display which forms at least part of a mirror of a vehicle, which may be a rear-view mirror such as an interior mirror of a vehicle, for example.
According to a still further aspect of the invention, there is provided a computer program configured to perform the method of the previous aspect of the invention when executed on a computer and/or a data carrier comprising such a computer program. The computer may be the processing unit of the vehicle camera system according to an aspect of the invention.
According to yet another aspect of the invention, there is provided a non-transitory, computer-readable storage medium comprising a computer program according to the above aspect of the invention.
According to another aspect of the invention there is provided a data carrier comprising a computer program as defined above.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a schematic diagram of an embodiment of a camera system in accordance with an aspect of the invention;
Figure 2 is a schematic diagram of an embodiment of a camera system in accordance with an aspect of the invention;
Figure 3 is a schematic diagram of an embodiment of a camera system in accordance with an aspect of the invention;
Figures 4A and 4B are a series of schematic representations of a display illustrating the operational use of embodiments of a camera system in accordance with an aspect of the invention; and
Figure 5 is a schematic diagram of an embodiment of a vehicle in accordance with the invention illustrating the implementation of an embodiment of a camera system within the vehicle.
DETAILED DESCRIPTION
The Figures illustrate an embodiment of a vehicle camera system 10 in accordance with an aspect of the invention. In this embodiment, the system 10 is implemented as part of a hitch guidance system for assisting in the alignment of a tow point 38 of a vehicle 36 with a corresponding trailer coupling 34. However, it should be understood that a camera system in accordance with an aspect of the invention in its broadest sense is not necessarily limited to application in a hitch guidance system. Rather, the system can be adapted for a range of different purposes. Furthermore, it should be understood that the term “camera” is intended to encompass any suitable device for capturing image data.
Figure 1 is a schematic diagram of the camera system 10 and provides an overview of the components of the system 10. Specifically, the camera system 10 includes the camera 12, a processing unit 14 and a display 16. In the illustrated embodiment for use in the hitch guidance system, the camera 12 is arranged to capture image data of an area to the rear of the vehicle 36, as shown in Figure 5. It will though be appreciated that for use in other applications the camera need not be located at the rear of the vehicle and could, for example, be positioned towards the front, side or the underside of the vehicle, as is desired.
The display 16 is configured to show images formed from the image data captured by the camera 12. The display 16 is typically located within the dashboard of the vehicle 36, and may be in the centre console of the vehicle 36. However, the display 16 could be located at any suitable position within the vehicle. The display 16 may be an LCD or LED screen, for example.
The processing unit 14 is in communication with both the camera 12 and the display 16. The processing unit is operable to receive and store image data captured by the camera 12 and to generate a composite image to be shown on the display 16. As described in detail below, the composite image comprises a real-time image part formed from live image data from the camera 12 and a delayed image part formed from previously captured image data from the camera stored by the processing unit 14. Typically, the processing unit 14 will include one or more processors programmed to carry out the processes and methods described herein.
As is shown in Figures 2, 3 and 5, the system 10 additionally includes a series of sensors 18a, 18b, 18c operable to obtain and input data to the processing unit 14. The sensors 18a, 18b, 18c are communicable with the processing unit 14 and the data from the sensors 18a, 18b, 18c input into the processing unit 14 is used to determine which image data stored in the processing unit 14 is to be used to form the delayed image part of the composite image.
Typically, the sensors 18a, 18b, 18c are operable to detect one or more parameters relating to a vehicle’s trajectory, speed and/or the relative distance and position of the vehicle with respect to an object of interest. For example, in the illustrated embodiment, sensor 18a comprises a sensor operable to determine the speed at which a vehicle 36 in which the system 10 is implemented is travelling (see Figure 5). Sensor 18b comprises a position sensor operable to determine the angular position of the steering wheel 40 of the vehicle 36. Data from the speed sensor 18a and position sensor 18b can be used in combination to calculate the trajectory of the vehicle in motion. Sensors 18c comprise distance sensors, data from which can be used to determine the distance of objects identified within the image data captured by the camera 12. Data from a number of distance sensors 18c spaced apart on the vehicle can be used to determine the position of an object of interest relative to the vehicle 36. Distance sensors 18c may be ultrasonic, infra-red or radar sensors, for example. It will be appreciated that alternative sensor arrangements can be used either in addition to those described or as an alternative in order to provide the processing unit 14 with the data required to generate the composite image.
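One common way to realise such a trajectory calculation from the speed and steering-angle sensors is dead reckoning with a kinematic bicycle model. The sketch below is an assumed form of what the vehicle state estimator could compute per timestep; the patent itself does not prescribe a model.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0      # metres, in the frame of the starting pose
    y: float = 0.0
    yaw: float = 0.0    # radians

def dead_reckon(pose: Pose, speed_ms: float, steer_rad: float,
                wheelbase_m: float, dt: float) -> Pose:
    """Advance the vehicle pose one timestep from wheel-speed and
    steering-angle sensor data (kinematic bicycle model)."""
    yaw_rate = speed_ms * math.tan(steer_rad) / wheelbase_m
    return Pose(
        x=pose.x + speed_ms * math.cos(pose.yaw) * dt,
        y=pose.y + speed_ms * math.sin(pose.yaw) * dt,
        yaw=pose.yaw + yaw_rate * dt,
    )
```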
As illustrated in Figure 3, the processing unit 14 includes a server 20 for storing image data obtained from the camera 12 and a composite image generator 22 operable to generate the composite image from a real-time image part (formed from image data taken straight from the camera 12) and delayed image part (formed from stored image data within the server 20). The processing unit 14 also includes a vehicle state estimator 24 communicable with the sensors 18a, 18b, 18c to determine the state of the vehicle 36 (which may include information relating to the position or speed of the vehicle, for example) from the one or more parameters measured by the sensors 18a, 18b, 18c, and a vehicle-object state estimator 26 operable to determine how the position of an identified object or area of interest in a previously captured image will vary over time relative to the vehicle 36 on the basis of the estimation made by the vehicle state estimator 24.
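The server 20 is essentially a bounded, time-indexed frame store. A minimal sketch, assuming frames are stored alongside the odometry captured with them and that old frames can simply be discarded; the capacity and field layout are illustrative assumptions.

```python
from collections import deque

class FrameServer:
    """Bounded store of timestamped frames plus the odometry captured
    with them, so the composite image generator can look frames up by
    how far the vehicle has moved since capture."""

    def __init__(self, capacity: int = 300):     # e.g. ~10 s at 30 fps
        self._frames = deque(maxlen=capacity)    # oldest frames drop off

    def put(self, timestamp: float, odometer: float, frame) -> None:
        self._frames.append((timestamp, odometer, frame))

    def nearest_by_odometer(self, target_m: float):
        """Stored entry captured closest to a given odometer reading."""
        return min(self._frames, key=lambda f: abs(f[1] - target_m))
```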
The estimation made by the vehicle-object state estimator 26 is used by the composite image generator 22 to determine which previously captured image data will be used to form the delayed image part of the composite image. In the illustrated embodiment, the delayed image part of the composite image is positioned in the composite image at the estimated position of the tow point 38 of the vehicle 36. Accordingly, the prediction made by the vehicle-object state estimator 26 is used by the composite image generator 22 to determine which area of interest in previously captured image data is proximal to the position of the tow point in the real-time image 28 at the point in time when the composite image is generated.
Figures 4A and 4B are schematic representations of images shown on the display 16 to illustrate the operational use of the camera system 10. The images on the display 16 include a real-time image part 28 which is a live feed taken directly from the camera 12. The real-time image part 28 forms part of the generated composite image. However, as schematically illustrated, the real-time image part 28 does not usefully fill all of the available screen space on the display 16 and there is an obstructed portion 30, represented at the bottom of the image by cross hatching. The obstructed portion 30 may be present due to misalignment of the camera 12 so that the area around the tow point is not visible or may be caused by a physical obstruction present within the field of view of the camera 12. For example, part of the bodywork of the vehicle 36, such as the rear bumper, may obscure the vehicle tow point 38 from the camera’s view.
The presence of the obstructed portion 30 is problematic as it prevents the user from being able to view the tow point 38 on the vehicle 36 directly in the real-time image 28. This makes alignment of the vehicle tow point 38 with a corresponding trailer coupling when reversing difficult. The problem is further compounded when the tow point 38 is brought close to the trailer coupling, as the trailer coupling will also become obscured from view by the camera 12.
In accordance with an aspect of the invention, this problem is reduced by generating a composite image comprising a real-time image part 28 and a delayed image part 32 which is used to replace the obstructed portion 30, or at least a portion thereof. The composite image is generated by the processing unit 14 in which the delayed image part 32 is made up from previously captured image data relating to an area of interest to the rear of the vehicle which the processing unit 14 has calculated would be located, at the time of display, in the obscured portion of the real-time image being replaced. In the present example, the delayed image part 32 is used to replace the obstructed portion 30 in the region occupied by the tow point so that the composite image creates a virtual display in which the area about the tow point 38 is shown as if it is part of the real-time image, even though it is not within the field of view of the camera 12 or is otherwise obscured.
Example 1
By way of an example, Figure 4A shows a real-time image taken from the camera 12 at a time of t=0s. This image data is stored by the server 20. Within the image is an object 34, which in this example is a coupling of a trailer, that is determined by means of the distance sensors 18c to be at a position 5m away (in a straight line) from the tow point 38 of the vehicle 36 at that point in time.
If the vehicle 36 travels at a constant speed of 1 m/s in a straight line towards the object 34, then the object 34 will be located proximal to the tow point of the vehicle, in the obstructed portion of the real-time image, at a time of t=5s. The speed and trajectory of the vehicle are determined by the vehicle state estimator 24 using data from the sensors 18a, 18b, 18c, and this information is input into the vehicle-object state estimator 26, which determines that the object 34 will be at the position of the tow point 38 after 5 s. This data is input into the composite image generator 22 which, at time t=5s, generates a composite image as shown in Figure 4B, formed from a real-time image part 28 produced from live image data received by the camera 12 at t=5s and a delayed image part 32 formed from the image data captured at time t=0s and stored in the server 20. In the illustrated example, the delayed image part 32 is the area of the stored image around the identified object 34. As shown in Figure 4B, the delayed image part 32 is positioned in the composite image, within the obstructed portion 30, at the approximate position of the tow point 38 of the vehicle 36 with respect to the real-time image 28.
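The timing in Example 1 reduces to a one-line calculation, sketched below with illustrative variable names: at constant speed, the stored area of interest reaches the tow point after distance divided by speed.

```python
# Illustrative variable names only; reproduces the arithmetic of Example 1.
initial_distance = 5.0   # m, distance to the trailer coupling at t=0s
speed = 1.0              # m/s, constant, straight-line reversing

time_at_tow_point = initial_distance / speed
print(time_at_tow_point)  # 5.0 s: the image stored at t=0s is shown at t=5s
```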
It is envisaged that the delayed image part 32 of the composite image will be continually updated with image data stored in the server 20. It is not necessary for an object of interest, such as object 34, to have been identified before the delayed image part 32 is generated. In this way, the system 10 of the invention provides a way to effectively increase the field of view of the camera 12 by continually updating the delayed image part 32 of the composite image to show what is (or is calculated to be) proximal to the tow point 38 of the vehicle 36 at all times. For example, in the example described above, at time t=1s the composite image generator 22 could generate a composite image formed from a real-time image part 28 and a delayed image part 32 formed from stored image data relating to an area of interest determined to be 1 m away from the tow point of the vehicle at time t=0s, whilst at time t=2s the delayed image part 32 could use stored image data relating to an area of interest which was 2 m away from the tow point at time t=0s, or which was 1 m away at time t=1s.
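Under the constant-speed assumption, the selection rule implied by this paragraph can be sketched as follows: the area of interest to show at display time t is the one whose distance from the tow point at capture equals the distance the vehicle has travelled since that capture. The function name below is hypothetical.

```python
def area_distance_at_capture(capture_time: float, now: float,
                             speed: float) -> float:
    """Distance (m) from the tow point, at capture time, of the area that
    sits at the tow point at display time `now` (constant speed assumed)."""
    return speed * (now - capture_time)

speed = 1.0  # m/s
# At t=1s, use the area that was 1 m from the tow point in the t=0s frame:
print(area_distance_at_capture(0.0, 1.0, speed))  # 1.0
# At t=2s, either 2 m away in the t=0s frame or 1 m away in the t=1s frame:
print(area_distance_at_capture(0.0, 2.0, speed))  # 2.0
print(area_distance_at_capture(1.0, 2.0, speed))  # 1.0
```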
At any given point in time whilst the system 10 is in use, the processing unit 14 may be able to select from a number of different sets of stored image data which could be used to form the delayed image part 32. In this case, the processing unit 14 may be configured to select the stored image data which best fits the real-time image data to form a realistic composite image. For example, in the first example, where the vehicle is moving at a constant speed of 1 m/s, when creating a composite image at time t=5s, rather than using stored image data of the area of interest 32 captured at time t=0s when it was 5 m from the tow point, the delayed image part could use image data of the area of interest captured at time t=4s when it was 1 m from the tow point. In this case, the image data captured at t=4s may be selected if it is a better fit with the real-time image than the image data captured at t=0s.
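The patent does not specify how the best fit is judged, so the following sketch assumes one plausible metric: the mean absolute pixel difference along the seam where the delayed patch meets the visible part of the live image. The function names and the metric itself are assumptions.

```python
import numpy as np

def seam_mismatch(live_visible: np.ndarray, patch: np.ndarray) -> float:
    """Lower is better: compare the last visible row of the live image with
    the first row of the candidate patch, where the two meet in the composite."""
    live_edge = live_visible[-1, :, :].astype(np.int32)
    patch_edge = patch[0, :, :].astype(np.int32)
    return float(np.abs(live_edge - patch_edge).mean())

def pick_best_patch(live_visible: np.ndarray, candidates: list) -> np.ndarray:
    return min(candidates, key=lambda p: seam_mismatch(live_visible, p))

# Dummy usage: the newer capture (t=4s) is the better fit than t=0s here.
live_visible = np.full((400, 640, 3), 100, dtype=np.uint8)
patch_t0 = np.full((80, 640, 3), 160, dtype=np.uint8)
patch_t4 = np.full((80, 640, 3), 105, dtype=np.uint8)
best = pick_best_patch(live_visible, [patch_t0, patch_t4])
assert best is patch_t4
```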
In the above examples, the vehicle 36 travels in a straight line at a constant speed for simplicity. However, the system 10 is capable of taking into account changes in the speed and trajectory of the vehicle when determining which part of the stored image data is to be displayed in the delayed image portion 32, using suitable algorithms and based on information provided to the vehicle state estimator 24 and the vehicle-object state estimator 26 by the sensors. Changes in vehicle speed and/or trajectory are estimated by the vehicle state estimator 24 and the vehicle-object state estimator 26 based on data from the sensors and/or other inputs, and these estimators provide instructions to the composite image generator 22 regarding which area of which stored image should be used as the delayed image part 32 in any generated composite image.
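Where speed and trajectory vary, the displacement since a frame was captured can be obtained by integrating the sensor samples. The patent refers only to suitable algorithms, so the simple two-dimensional dead-reckoning sketch below, with hypothetical names, is an assumption.

```python
import math

def integrate_motion(samples):
    """samples: iterable of (dt_seconds, speed_m_per_s, heading_radians).
    Returns the total (dx, dy) displacement in the ground frame."""
    x = y = 0.0
    for dt, speed, heading in samples:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# Vehicle slows and turns slightly while reversing toward the coupling:
samples = [(1.0, 1.0, 0.00), (1.0, 0.8, 0.05), (1.0, 0.5, 0.10)]
dx, dy = integrate_motion(samples)
print(f"displacement since capture: {math.hypot(dx, dy):.2f} m")
```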
The composite image may be generated at any feasible rate, depending on the frame rate of the camera 12 and the processing speed of the processing unit 14. In the example discussed above, the composite image is generated at 1 frame per second, or at least 1 frame of the delayed image part 32 per second. It is, however, envisaged that the composite image be generated at a much faster rate, such that the delayed image part 32 effectively appears to be a live image of the area around the tow point 38 of the vehicle 36.
The above embodiments are described by way of example only. Many variations are possible without departing from the scope of the invention as defined in the appended claims. For example, more than one camera 12 can be used to capture image data from which the composite image is generated. Furthermore, the delayed image part 32 could be used to fill the whole of the obscured portion of the real-time image portion 28 in the display 16. Indeed, the delayed image portion could be positioned anywhere within the real-time image depending on the requirements of the particular application.

Claims (31)

1. A vehicle camera system comprising:
at least one camera arranged to capture image data;
a display configured to show images formed from the image data captured by the at least one camera; and
a processing unit operable to receive and store image data captured by the at least one camera and to generate a composite image to be shown on the display;
wherein the composite image comprises a real-time image part and a delayed image part formed from previously captured stored image data.
2. A vehicle camera system as claimed in claim 1 wherein the camera comprises at least one rear facing camera configured to be positioned at the rear of a vehicle.
3. A vehicle camera system as claimed in claim 2 wherein the processing unit is operable to generate the composite image with the delayed image part positioned within the composite image at the predicted position of a tow point of a vehicle in which the system is implemented.
4. A vehicle camera system as claimed in claim 3 wherein the delayed image part comprises image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated.
5. A vehicle camera system as claimed in any preceding claim wherein the processing unit is operable to continuously update the delayed image part of the composite image.
6. A vehicle camera system as claimed in claim 5 wherein the processing unit is configured such that the rate at which the delayed image part of the composite image is updated is equal to the frame rate of the camera.
7. A vehicle camera system as claimed in any preceding claim comprising one or more sensors operable to measure one or more parameters relating to the position and/or motion of a vehicle in which the system is implemented.
8. A vehicle camera system as claimed in claim 7 wherein the processing unit is operable to generate the composite image in dependence on the values of the one or more measured parameters of the vehicle.
9. A vehicle camera system as claimed in claim 8 wherein the processing unit is operable to select a delayed image part to be presented in the composite image from the stored image data and/or control the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters.
10. A vehicle camera system as claimed in any of claims 7 to 9 wherein at least one of the one or more sensors comprises a motion sensor, a position sensor and/or a distance sensor.
11. A processing unit for a vehicle, the processing unit being configured to receive and store image data captured by at least one camera within or on the vehicle and to generate images to be shown on a display; wherein the processing unit is operable to generate a composite image on the display comprising a real-time image part and a delayed image part formed from previously captured stored image data.
12. A processing unit as claimed in claim 11 operable to generate the composite image with the delayed image part positioned at the predicted position of a tow point of the vehicle.
13. A processing unit of claim 12 wherein the delayed image part comprises image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated.
14. A processing unit as claimed in any of claims 11 to 13 operable to continuously update the delayed image part of the composite image.
15. A processing unit as claimed in claim 14 configured such that the rate at which the delayed image part of the composite image is updated is equal to the frame rate of the camera.
16. A processing unit as claimed in any of claims 11 to 15 wherein the processing unit is communicable with one or more sensors operable to measure one or more parameters relating to the position and/or motion of the vehicle and is operable to generate a composite image in dependence on the values of the one or more measured parameters of the vehicle.
17. A processing unit as claimed in claim 16 wherein the processing unit is communicable with at least one motion sensor, at least one position sensor and/or at least one distance sensor.
18. A vehicle comprising the vehicle camera system as claimed in any one of claims 1 to 10 or the processing unit as claimed in any one of claims 11 to 17.
19. A method of forming a composite image from at least one camera mounted within or on a vehicle comprising:
using the at least one camera to capture image data;
storing image data captured by the at least one camera; and
generating a composite image from image data comprising a real-time image part and a delayed image part formed from the stored image data.
20. A method of claim 19 comprising using at least one rear facing camera positioned at the rear of a vehicle to capture the image data.
21. A method of claim 20 comprising positioning the delayed image part at the predicted position of a tow point of the vehicle.
22. A method as claimed in claim 21 comprising forming the delayed image part from image data relating to an area of a previously captured image, or an object within an area of a previously captured image, which is determined to have moved to a position wherein it is proximal to the tow point of the vehicle at the time the composite image is generated.
23. A method of any one of claims 19 to 22 comprising continuously updating the delayed image part of the composite image.
24. A method of claim 23 comprising updating the delayed image part of the composite image at a rate which is equal to the frame rate of the camera.
25. A method of any of claims 19 to 24 comprising using one or more sensors to measure one or more parameters relating to the position and/or motion of the vehicle and generating a composite image in dependence on the values of the one or more measured parameters of the vehicle.
26. A method of claim 25 comprising selecting a delayed image part to be presented in the composite image from the stored image data in dependence on the value(s) of the one or more measured parameters and/or controlling the position of the delayed image part within the composite image relative to the real-time image part in dependence on the value(s) of the one or more measured parameters.
27. A method of claim 25 or claim 26, the method comprising selecting for use as the delayed image part, stored image data relating to an area of interest in a previously captured image which, at the time the composite image is generated, is determined would occupy a position relative to the real-time image part corresponding to the location of the delayed image part in the composite image, taking into account sensed movement of the vehicle since the previously captured image data was captured.
28. A method of claim 25 or claim 26 comprising using at least one motion sensor, at least one position sensor and/or at least one distance sensor.
29. A program for a computer configured to perform the method according to any one of claims 19 to 28 when executed on a computer.
30. A data carrier comprising a computer program as claimed in claim 29.
31. A system, a processing unit, a vehicle, a method, or a computer program substantially as described herein with reference to the accompanying drawings.
Intellectual Property Office, Application No: GB1614551.8, Examiner: Mr Hitesh Kerai
GB1614551.8A 2016-08-26 2016-08-26 A Vehicle camera system Withdrawn GB2553143A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1614551.8A GB2553143A (en) 2016-08-26 2016-08-26 A Vehicle camera system
PCT/EP2017/071205 WO2018037032A1 (en) 2016-08-26 2017-08-23 A vehicle camera system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1614551.8A GB2553143A (en) 2016-08-26 2016-08-26 A Vehicle camera system

Publications (2)

Publication Number Publication Date
GB201614551D0 GB201614551D0 (en) 2016-10-12
GB2553143A true GB2553143A (en) 2018-02-28

Family

ID=57119800

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1614551.8A Withdrawn GB2553143A (en) 2016-08-26 2016-08-26 A Vehicle camera system

Country Status (2)

Country Link
GB (1) GB2553143A (en)
WO (1) WO2018037032A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11484790B2 (en) * 2017-07-07 2022-11-01 Buxton Global Enterprises, Inc. Reality vs virtual reality racing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003244688A (en) * 2001-12-12 2003-08-29 Equos Research Co Ltd Image processing system for vehicle
US20030165255A1 (en) * 2001-06-13 2003-09-04 Hirohiko Yanagawa Peripheral image processor of vehicle and recording medium
JP2006311299A (en) * 2005-04-28 2006-11-09 Aisin Aw Co Ltd Parking section monitoring device
US20070088474A1 (en) * 2005-10-19 2007-04-19 Aisin Aw Co., Ltd. Parking assist method and a parking assist apparatus
JP2008109283A (en) * 2006-10-24 2008-05-08 Nissan Motor Co Ltd Vehicle periphery display device and method for presenting visual information
EP2161195A1 (en) * 2008-09-08 2010-03-10 Thales Avionics, Inc. A system and method for providing a live mapping display in a vehicle
WO2016082961A1 (en) * 2014-11-25 2016-06-02 Robert Bosch Gmbh Method for characterizing camera images of a parking assistant

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002359839A (en) * 2001-03-29 2002-12-13 Matsushita Electric Ind Co Ltd Method and device for displaying image of rearview camera
GB2469438B (en) * 2009-03-09 2014-04-09 Applic Solutions Electronics & Vision Ltd Display movement of an object
GB2513393B (en) * 2013-04-26 2016-02-03 Jaguar Land Rover Ltd Vehicle hitch assistance system
DE102013207906A1 (en) * 2013-04-30 2014-10-30 Bayerische Motoren Werke Aktiengesellschaft Guided vehicle positioning for inductive charging with the help of a vehicle camera

Also Published As

Publication number Publication date
WO2018037032A1 (en) 2018-03-01
GB201614551D0 (en) 2016-10-12

Similar Documents

Publication Publication Date Title
US11528413B2 (en) Image processing apparatus and image processing method to generate and display an image based on a vehicle movement
US9296422B2 (en) Trailer angle detection target plausibility
CN106462314B (en) Help the dynamic camera view of trailer attachment
CN107021018B (en) Visual system of commercial vehicle
US20160297362A1 (en) Vehicle exterior side-camera systems and methods
GB2552282A (en) Vehicle control system
CN104608692A (en) Parking auxiliary system and the method
WO2014174037A1 (en) System for a towing vehicle
US20200286244A1 (en) Image processing method and apparatus
JP2018144526A (en) Periphery monitoring device
JP6760122B2 (en) Peripheral monitoring device
JP2018142885A (en) Vehicular display control apparatus, vehicular display system, vehicular display control method, and program
GB2554427A (en) Method and device for detecting a trailer
WO2010134240A1 (en) Parking assistance device, parking assistance method, and parking assistance program
WO2016152000A1 (en) Safety confirmation assist apparatus, safety confirmation assist method
CN105793909B (en) The method and apparatus for generating warning for two images acquired by video camera by vehicle-periphery
EP3681151A1 (en) Image processing device, image processing method, and image display system
CN108422932B (en) Driving assistance system, method and vehicle
JP6720729B2 (en) Display controller
JP5083142B2 (en) Vehicle periphery monitoring device
GB2553143A (en) A Vehicle camera system
JP2008213647A (en) Parking assist method and parking assist system
US20210086695A1 (en) Method and apparatus for invisible vehicle underbody view
KR20160107529A (en) Apparatus and method for parking assist animated a car image
US20230125045A1 (en) Trailer end tracking in a camera monitoring system

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)