WO2019202317A1 - Method and system of articulation angle measurement - Google Patents

Method and system of articulation angle measurement

Info

Publication number
WO2019202317A1
Authority
WO
WIPO (PCT)
Prior art keywords
trailer
image data
vehicle
camera
processing unit
Prior art date
Application number
PCT/GB2019/051091
Other languages
French (fr)
Inventor
David Cebon
Christopher DE SAXE
Original Assignee
Cambridge Enterprise Limited
Priority date
Filing date
Publication date
Application filed by Cambridge Enterprise Limited filed Critical Cambridge Enterprise Limited
Publication of WO2019202317A1 publication Critical patent/WO2019202317A1/en

Classifications

    • G06T 7/73 — Image analysis; determining position or orientation of objects or cameras using feature-based methods
    • B62D 15/02 — Steering position indicators; steering position determination; steering aids
    • G01B 11/26 — Measuring arrangements characterised by the use of optical techniques, for measuring angles or tapers or for testing the alignment of axes
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/30252 — Subject of image: vehicle exterior; vicinity of vehicle

Definitions

  • the present invention relates to a method and system for measuring an articulation angle between a vehicle and trailer, more particularly the measurement of an articulation angle using a vehicle-mounted camera.
  • HGVs: Heavy Goods Vehicles
  • HCVs: High Capacity Vehicles
  • Articulation angle sensing refers to the measurement of the yaw angle between a truck and trailer, or between subsequent trailers, and is a core requirement for autonomous reversing, jackknife prevention, and combined braking and steering control technologies.
  • Various articulation sensors exist either commercially or for research and development work. These are generally trailer-based “contact-type” sensors, and require non-standard communication links with the tractor. In particular, they often rely on trailer-based articulation angle sensors, with non-standard or experimental communication links. Furthermore, they often require significant modifications to the trailer kingpin and/or require a physical connection between vehicle and trailer. They also suffer from insufficient resolution for active-trailer steering applications. Furthermore, for semi-trailers the fifth wheel is subjected to high static and dynamic loads and is a dirt- and grease-prone environment, with potentially negative effects on the longevity of these sensors.
  • tractor and trailer units are generally designed to be interchangeable so that any tractor can pull any trailer
  • trailer-based articulation sensors require installation onto each trailer to be pulled by the vehicle and therefore increase the cost and burden in widespread installation.
  • Some vehicle based systems have been developed but these technologies generally require knowledge of the trailer states and employ non-standard sensors. Such systems are therefore not well suited to commercial use where trailers are interchanged with a certain tractor as the systems will often need to be adjusted to suit the changing trailer geometry and/or have sensors fitted to the trailer.
  • the present invention seeks to provide a system and method for the measurement of an articulation angle between a vehicle and trailer. It is a particular aim to provide a non-contact, vehicle-based articulation angle sensor which is compatible with multiple trailer combinations. A further aim is to provide a low cost system which is robust to the use of various interchangeable trailers and can measure the articulation angle to a sufficient degree of precision without prolonged calibration procedures.
  • A system for measuring the articulation angle of a trailer pulled by a vehicle comprising: a camera mounted, in use, in a fixed orientation on the vehicle so as to image the trailer; and a processing unit arranged to receive image data from the camera; wherein the processing unit is configured to: receive initial image data from the camera from multiple orientations relative to the trailer; identify common feature points in the initial image data to compute and store a three-dimensional map; receive further image data as the trailer is pulled by the vehicle; identify and track feature points in the image data relative to the stored map to estimate the pose of the camera relative to the trailer; and calculate an articulation angle based on the estimated pose.
  • the present invention therefore provides a non-contact system for articulation angle measurement by processing the image data received by a camera mounted so as to image the trailer.
  • a vehicle installed with the system can be used with any trailer without modification and therefore improves the interoperability of vehicles and trailers. It also prevents the wear associated with sensors positioned on the trailer and avoids the need for communications links between the trailer and vehicle.
  • the components required are inexpensive and straightforward to install such that it is low cost and practical to implement across a large number of vehicles.
  • the image processing unit of the present invention does not require any knowledge about the imaged trailer, such as specific dimensions or known patterns in the structure of the trailer, nor does it require physical markers to be implemented.
  • the degree of pre-calibration and configuration is therefore markedly reduced allowing for complete interoperability with various trailers.
  • the above system makes no assumptions regarding planar surfaces and is theoretically applicable to arbitrarily non-planar scenes. This means that the algorithm would be applicable to theoretically all trailer shapes, including box-type trailers, tankers with domed fronts, trailers with front-mounted refrigeration units, and abnormal load trailers or car-carriers. As more of the trailer is viewed with increasing articulation angles, additional feature points can be added to the three-dimensional map of the trailer, allowing for efficient recall when viewed again.
  • the separation of the mapping and tracking elements of the method mean it is computationally efficient, reducing the processing requirement and allowing the continuous calculation of an articulation angle in real time.
  • the camera may be a camera with a wide angle lens such that, in use, most or all of the opposing face of the trailer is captured in the field of view. In this way, feature points may be selected across a wide area of the trailer to provide more accurate tracking and/or mapping of the feature points.
  • The term “vehicle” or “towing unit” is used to refer to any type of automobile configured to pull a trailer.
  • Such vehicles include a “truck”, which is generally a freight transport vehicle having a rigid structure, and a “tractor”, which is a vehicle which couples to a trailer via a fifth wheel coupling.
  • the vehicle or towing unit might also itself be singularly or multiply articulated.
  • the truck/tractor and the directly connected trailer together can be considered the “vehicle” or “towing unit”, which is configured to pull one or more additional trailers.
  • the camera can be mounted on the truck/tractor (i.e. the driving unit) or the immediately connected trailer, so as to image the second trailer, towed by the first trailer.
  • the truck or tractor and the front two or more trailers can be considered the “vehicle” or “towing unit” as this falls within the above definition of “any automobile configured to pull a trailer”.
  • the camera can therefore be mounted on the truck/tractor or an intermediate trailer pulled by the truck vehicle, as long as it is orientated to image the trailer pulled by the truck and one or more intermediate trailers. In this way, the invention can measure the articulation angle between a truck/tractor and trailer or between adjacent connected trailers.
  • The term “vehicle” or “towing unit” also covers other types of vehicle configured to pull a trailer, for example the front unit of an articulated bus or the front unit of articulated construction equipment.
  • The term “feature points” is used to refer to distinct points of interest in the image data which are identifiable by the processing unit. These are locations and intensities of pixels or groups of pixels in the image data. Features are usually detected based on common groupings of pixel intensities, or particular discontinuities in pixel intensity gradients. Common image features include edges, corners and ‘blobs’. A feature point may have associated with it a feature descriptor, which enables the effective matching of similar feature points from one image to another. Examples of feature descriptors include, but are not limited to, Binary Robust Independent Elementary Features (BRIEF), the Gradient Location and Orientation Histogram (GLOH), or a vector of the pixel intensities of the pixels in an area of pre-defined size around the location of the feature.
  • BRIEF: Binary Robust Independent Elementary Features
  • GLOH: Gradient Location and Orientation Histogram
  • The processing unit may be configured to use one or more feature detector algorithms to identify feature points in the image. For example, convolving an image with a Gaussian kernel, Laplacian of Gaussian (LoG) kernel, or a Difference of Gaussian (DoG) kernel may be used to detect peaks, troughs or zero-crossings of pixel intensity gradients, from which various features are defined. Examples of feature detectors that may be used include the Canny edge detector, the Harris corner detector, and the DoG blob detector. A possible corner detector which may be implemented is the Features from Accelerated Segment Test (FAST), in which the variation in pixel intensities in a circle around a point is used to determine whether or not the point is a corner.
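As an illustration of this detection step, the sketch below uses OpenCV's FAST detector with BRIEF descriptors (the xfeatures2d module requires the opencv-contrib package). The patent does not prescribe a particular library, and the threshold value is an assumption.

```python
import cv2

def detect_features(grey_image):
    """Detect FAST corners and compute BRIEF descriptors for later matching."""
    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
    keypoints = fast.detect(grey_image, None)
    keypoints, descriptors = brief.compute(grey_image, keypoints)
    return keypoints, descriptors
```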
  • FAST: Features from Accelerated Segment Test
  • The phrase “three-dimensional map” is used to refer to the record of the locations of all identified feature points in three-dimensional space, together with the appropriate feature descriptors when used.
  • The phrase “pose of the camera” is used to refer to the position and orientation of the camera.
  • the pose may therefore have 6 degrees of freedom.
  • the pose of the camera may be output in the form of a 3x1 translation vector and a 3x3 rotation matrix.
  • the articulation angle refers to the yaw angle of the trailer relative to the vehicle or drawbar.
  • The articulation angle may be calculated by decomposition of the rotation matrix into a combination of sequential rotations about each axis, known as Euler angles: roll (φ), yaw (ψ) and pitch (θ), where the yaw angle corresponds to the articulation angle assuming the pitch and roll angles are zero.
  • In preferable embodiments the initial image data comprises two images from two different orientations of the camera relative to the trailer. For example, two images may be taken as the vehicle and trailer are driven through a manoeuvre which provides a change in orientation of the trailer. This process may be referred to as “stereo initialisation”. These two images are defined as “keyframes” - frames on which the current map is based. Additional “keyframes” may be added to update the map during processing of further image data.
  • The processing unit is configured to update the stored three-dimensional map from the received further image data.
  • In this way, the currently stored three-dimensional map, relative to which the pose of the camera is estimated, may be updated and optimised as further image data is received. The accuracy of the calculated articulation angle is therefore improved.
  • Additional images taken from the further image data to update the map are defined as further “keyframes”.
  • The processing unit is configured to only update the stored three-dimensional map if certain conditions are met, wherein the conditions comprise one or more of: the time since the last update exceeds a certain threshold; the orientation of the camera relative to the trailer has changed by more than a certain threshold; and feature points in the image data are being tracked successfully. In this way, the processing requirement is reduced since the stored map is only updated when required, providing a more computationally efficient method.
  • the processing unit is configured to run two parallel threads: a mapping thread which updates the stored three-dimensional map from the further image data; and a tracking thread which tracks feature points in the further image data relative to the latest stored three-dimensional map and updates the estimated pose of the camera.
  • the parallel formulation results in excellent computational efficiency, while allowing for computationally expensive pose and map refinement to take place when other mapping tasks are not needed.
  • the processing unit is a dual core processor wherein one core carries out the mapping processes and a second core carries out the parallel tracking processes.
  • the processing unit is further configured to identify one or more regions of interest in the image data.
  • This region of interest may be chosen to be an area which is readily identifiable to the processing unit and provides a more reliable calculation of camera pose and accordingly articulation angle.
  • the region of interest may correspond to a portion of the trailer or drawbar.
  • the processing unit is configured to identify two regions of interest in the image data, a first region of interest corresponding to at least a portion of the trailer and a second region of interest corresponding to at least a portion of the drawbar. In this way two independent articulation angles may be extracted from a drawbar-trailer combination having one articulation angle between the vehicle and the drawbar and the second articulation angle between the drawbar and the trailer.
  • the processing unit is either configured to: run four parallel threads, two threads corresponding to the mapping and tracking of the trailer region of interest respectively and two threads corresponding to the mapping and tracking of the draw bar region of interest respectively; or run two parallel threads wherein the mapping and tracking of the trailer region of interest and draw bar region of interest are performed alternately.
  • the tracking and mapping of the two regions of interest may be performed in a computationally efficient way to provide the two articulation angles in real time. This may equally be applied to regions of interest other than a trailer and drawbar or extended to three or more regions of interest.
  • the processing unit is configured to determine a zero value of the articulation angle by collecting image data while the vehicle is driven in a straight line.
  • Zeroing the articulation angle in this way provides a more accurate measurement during processing since, after the initial map is stored based on the initial image data (stereo initialisation), the reference co-ordinate frame is not necessarily aligned with the longitudinal axis or yaw plane of the vehicle, depending on the features found or the orientation of the trailer between the initial images. In this way, calibration is performed during one simple initial straight line manoeuvre over a few seconds.
  • Zeroing is performed by a user-initiated or automated command when the trailer is known to be travelling straight with both articulation angles approximately zero.
  • Subsequent rotation matrices may be post-multiplied by this reference matrix to adjust the co-ordinate frame accordingly, before performing Euler decomposition to extract the articulation angle.
  • the processing unit is configured to estimate the pose by calculating a rotation matrix, the rotation matrix describing the relative motion between the vehicle and trailer, wherein the rotation matrix is calculated from the location of the identified feature points relative to the location of the corresponding feature points in the stored three-dimensional map.
  • the system provides two primary outputs: a 3-D map of feature point locations, and the camera pose in the form of a 3x1 translation vector and a 3x3 rotation matrix.
  • the rotation matrix can be defined relative to the dominant plane observed during initialisation and the translation vector represents the motion of the camera relative to its original location.
  • Euler decomposition may be used to extract the roll, pitch and yaw angles of the trailer, wherein the yaw angle corresponds to the articulation angle, assuming the pitch and roll angles are zero.
  • the camera is mounted, in use, on a vehicle towing a full-trailer or dolly and semi-trailer having two articulation angles, one articulation angle between the vehicle and the drawbar and the second articulation angle between the drawbar and the trailer; and the camera is mounted, in use, in a fixed orientation on the vehicle such that the draw-bar and trailer are imaged; wherein the processing unit is further configured to calculate the two articulation angles.
  • This allows the calculation of two articulation angles using a single camera and processing unit.
  • The multiple articulation angles of various HGV combinations may be calculated, for example truck and full-trailer; and truck, converter dolly and semi-trailer (“Nordic combination”).
  • the camera may be mounted at a raised position above the vehicle to capture multiple trailers in the field of view such that the articulation angle of each may be calculated.
  • The multiple articulation angles of additional HGV combinations may be calculated, which include but are not limited to: B-double; B-triple; A-double; A-triple; AB-triple.
  • the feature points are arbitrary points in the image data selected by the image processing unit such that no physical markers need be implemented. In this way, no pre-configuration or preparation of a trailer need be carried out.
  • Features may be detected based on common groupings of pixel intensities, or particular discontinuities in pixel intensity gradients.
  • Common image features include edges, corners and ‘blobs’. Edges are not well-suited to motion detection, but can be used to identify shapes (using Hough transforms, for example). Corners and blobs are more useful for motion detection.
  • the processing unit is configured to receive the initial image data as the vehicle is driven through a manoeuvre such that relative motion between the vehicle and trailer is generated to provide the multiple orientations.
  • the initial map can be generated in a straightforward manner via a simple manoeuvre and without significant pre-configuration.
  • two images of the trailer are captured during the manoeuvre to provide a moderate translation between them. These two images are the first two keyframes forming the initial image data and are used to generate the initial three dimensional map.
  • the initial three dimensional map can be stored offline and then subsequently used to recognise when the same trailer is coupled to the vehicle again. In this way it is not necessary to repeat the initialisation process when it has been performed previously.
  • Figure 1A is a schematic illustration of a system according to the present invention.
  • Figure 1B is an illustration of the system according to the present invention in use on a vehicle
  • Figure 2 is a diagram showing the steps performed by the processing unit of a system according to the present invention.
  • Figure 3A illustrates the stereo initialisation step performed by the processing unit of a system according to the present invention
  • Figure 3B illustrates the features on a trailer face detected by the processing unit of a system according to the present invention
  • Figure 3C illustrates the same features tracked by the processing unit of a system according to the present invention as the articulation angle varies
  • Figure 3D illustrates the same features tracked by the processing unit of a system according to the present invention as the articulation angle varies under low light conditions
  • Figure 3E illustrates the 3D map of the feature points of the trailer stored by the processing unit of a system according to the present invention as the articulation angle varies;
  • Figure 3F illustrates a side view of the 3D map of the feature points of the trailer stored by the processing unit of a system according to the present invention as the articulation angle varies;
  • Figure 4A illustrates a system according to the present invention implemented on a tractor semi-trailer combination
  • Figure 4B illustrates a system according to the present invention implemented on a rigid truck and rigid drawbar trailer combination
  • Figure 4C illustrates a system according to the present invention implemented on a rigid truck and full-trailer combination
  • Figure 4D illustrates a system according to the present invention implemented on a rigid truck, dolly and semi-trailer combination (“Nordic combination”);
  • Figure 4E illustrates a system according to the present invention implemented on a B-double combination
  • Figure 4F illustrates a system according to the present invention implemented on a B-triple combination
  • Figure 4G illustrates a system according to the present invention implemented on an A-double combination
  • Figure 4H illustrates a system according to the present invention implemented on an A-triple combination
  • Figures 5A and 5B illustrate the segmentation of the image data into regions of interest in order to calculate two articulation angles.
  • Figures 1A and 1B illustrate a system 100 for measuring the articulation angle of a trailer 300 pulled by a vehicle 200, according to the present invention.
  • the example of a rigid truck and full-trailer with two articulation angles is illustrated.
  • the system 100 comprises a camera 110 mounted in a fixed orientation on the vehicle 200 so that the trailer 300 is within the field of view.
  • the system 100 also includes a processing unit 120 which is connected to the camera 110 such that it can receive image data from the camera.
  • the processing unit 120 is configured to receive initial image data from the camera 110 from multiple orientations relative to the trailer 300; identify common feature points in the initial image data to compute and store a three-dimensional map; receive further image data as the trailer 300 is pulled by the vehicle 200; identify and track feature points in the image data relative to the stored map to estimate the pose of the camera 110 relative to the trailer 300; and calculate an articulation angle A based on the estimated pose.
  • the camera may be a digital camera configured to take images of an appropriate field of view, such that it includes a significant proportion of the trailer face, and is capable of connection to the processing unit.
  • a digital camera is fitted to a bracket behind the tractor cabin, facing the front of the semi-trailer.
  • the camera is mounted centrally relative to the sides of the tractor cab and at an arbitrary height above the hitch while maintaining a reasonable view of the front of the trailer.
  • the optical axis of the camera may be aligned by eye with the lateral centre of the trailer.
  • the lens used is a wide-angle lens with a focal length of approximately 3 mm.
  • The camera may be colour or greyscale. In this example greyscale images are captured via USB 3.0 at 20 fps at a resolution of 640x480 pixels.
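A minimal capture sketch matching this example configuration (640x480 greyscale frames at roughly 20 fps) is given below, assuming an OpenCV-accessible USB camera; the device index and backend are assumptions, not details from the patent.

```python
import cv2

cap = cv2.VideoCapture(0)                       # hypothetical device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 20)

ret, frame = cap.read()
if ret:
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # greyscale frame for processing
cap.release()
```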
  • The connection means between the camera and processing unit is a USB 3.0 cable but may equally be provided by another physical communications link or a wireless connection.
  • Figure 1B illustrates how the system is used to monitor the articulation angle A of a semi-trailer 300 being pulled by a vehicle 200.
  • the camera 110 is mounted in a fixed orientation on the rear face 210 of the vehicle 200 such that it faces the opposing, front face 310 of the trailer 300.
  • Two images are taken with the trailer 300 in differing orientations relative to the vehicle 200 (and accordingly also to the camera 110). For example, a first image may be taken as the vehicle 200 is making a right-hand turn as shown in Figure 1B and a further image may be taken as it takes a subsequent left-hand turn.
  • Image data corresponding to these initial two images is received by the processing unit 120 (not shown in this view) which processes them to produce a three-dimensional map of the imaged portion of the front face 310 of the trailer 300.
  • further images are received and processed to determine the rotation of the current image relative to the stored map. In this way the pose of the camera 110 may be estimated from which the articulation angle A may be calculated.
  • the processing method used implements tracking of image features relative to an initial 3D map and updating of the 3D map as parallel processes to obtain gains in computing efficiency.
  • the method can be summarised as follows:
  • Parallel tracking and mapping consists of two parallel processing threads: a mapping thread and a tracking thread. Assuming a map of 3-D feature points has already been generated (the initial map is created in the stereo initialisation step using the two “keyframes” of the initial image data), the tracking thread is responsible for matching detected features in the current frame with features observed in the previously stored keyframes, and thereby updating the camera’s pose in the current frame. Thus the motion of the camera is tracked in a known 3-D scene. Tracking is performed at frame rate (i.e. at each new frame).
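The following is a structural sketch of that two-thread arrangement, not the authors' implementation: estimate_pose and needs_new_keyframe are stand-ins for the tracking and keyframe-selection steps described in the text.

```python
import threading
import queue

keyframe_queue = queue.Queue()                  # frames handed from tracking to mapping
map_lock = threading.Lock()                     # protects the shared 3-D map
shared_map = {"points_3d": [], "keyframes": []}

def estimate_pose(frame, current_map):
    """Stand-in: match features in the frame to the map and solve for the camera pose."""
    return None, 1.0                            # (pose, tracking quality)

def needs_new_keyframe(pose, quality, current_map):
    """Stand-in: the distance, frame-count and quality criteria discussed in the text."""
    return False

def tracking_loop(frames):
    """Tracking thread: runs at frame rate against the latest stored map."""
    for frame in frames:
        with map_lock:
            current_map = dict(shared_map)
        pose, quality = estimate_pose(frame, current_map)
        if needs_new_keyframe(pose, quality, current_map):
            keyframe_queue.put((frame, pose))

def mapping_loop(stop_event):
    """Mapping thread: expands the map with new keyframes, refines it otherwise."""
    while not stop_event.is_set():
        try:
            frame, pose = keyframe_queue.get(timeout=0.05)
            with map_lock:
                shared_map["keyframes"].append((frame, pose))   # expand the map
        except queue.Empty:
            with map_lock:
                pass    # 'free time': bundle adjustment / re-measurement would run here
```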
  • The mapping and tracking tasks of one particular example of the processing system are illustrated in Figure 2.
  • Figure 2 illustrates the data processing method 400 according to the present invention.
  • First, stereo initialisation is performed, in which initial image data is received from the camera in the form of a plurality of images from different orientations relative to the trailer.
  • Two images are taken as the vehicle is driven through a manoeuvre to provide a variation in orientation of the trailer 300 relative to the camera 110.
  • These two images are the first two “keyframes” - that is, images on which the 3D map is based.
  • the initial map is computed and stored based on the initial image data.
  • This stereo initialisation step is important, as this defines the initial map upon which all updates of the map are based.
  • Features are detected in an initial image, making up the first keyframe, and these features are then continuously tracked. A small translation of the camera relative to the scene is then required before the second keyframe is taken, capturing the new locations of these features.
  • the resultant Cartesian map is only accurate up to a scale factor. Rotations are independent of this assumption.
  • the articulation angle is zeroed.
  • An additional initialisation step may be used to zero the articulation measurements during processing. This can be carried out in a straightforward manner during real-time processing.
  • This is needed because the reference co-ordinate frame is not necessarily aligned with the longitudinal axis or yaw plane of the truck, depending on the features found or the orientation of the trailer between the first two keyframes.
  • To zero the measurement, the processing unit obtains the instantaneous rotation matrix of the pose. This may be initiated by a user entering a command on a user interface, or may be automated. Subsequent rotation matrices may then be post-multiplied by this reference matrix to adjust the co-ordinate frame accordingly, before performing Euler decomposition to extract the articulation angle.
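The sketch below illustrates this zeroing step, assuming rotation matrices are 3x3 numpy arrays. The reference stored here is the inverse (transpose) of the rotation observed while driving straight, so that post-multiplying by it gives the identity at zero articulation; the exact sign convention is an assumption rather than a detail taken from the patent.

```python
import numpy as np

R_ref = None    # set once, by user command or automatically, while driving straight

def zero_reference(R_straight):
    """Store the inverse of the rotation seen at (approximately) zero articulation."""
    global R_ref
    R_ref = R_straight.T            # for a rotation matrix the transpose is the inverse

def adjusted_rotation(R_current):
    """Post-multiply by the reference matrix before Euler decomposition."""
    return R_current @ R_ref if R_ref is not None else R_current
```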
  • Steps 410 to 430 represent the initialisation processes which are complete at point 440 in Figure 2, at which stage the parallel tracking 460 and mapping 470 threads carried out by the processing unit are started.
  • the image data is received continuously with tracking carried out at the frame rate and mapping carried out as needed.
  • the tracking and mapping threads are started immediately following stereo initialisation and then zeroing is performed at a later point during tracking to zero the measurements.
  • the mapping thread is responsible for building the 3-D map of feature points relative to the global reference frame defined during initialisation. This need not run at frame rate but rather performs more intensive map updates when necessary, independently of the tracking thread.
  • the latest map is recalled by the processing unit and provides the basis for the parallel tracking and mapping threads.
  • the map is based solely on the two keyframes of the initial image data.
  • the map is updated such that it is based on N keyframes, where N is the total number of keyframes taken from the image data.
  • In step 470 the processing unit checks whether a further keyframe is required.
  • Mapping is not updated at every frame but is limited only to certain ‘keyframes’ where various criteria have been met. This allows for a computationally more expensive but more accurate method of mapping to be performed since it does not need to be performed at the frame rate. In addition, by taking mapping out of the tracking thread, more computational time is made available for the tracking task, resulting in more accurate camera pose estimation.
  • a further keyframe may be recorded only if sufficient camera translation has occurred for the mapping update to be meaningful. In this way a keyframe is only added if a minimum distance from other keyframes is exceeded. Similarly a minimum number of frames may be selected between keyframes such that, for example, new keyframes are only added provided at least 20 frames have passed since the last keyframe.
  • A further criterion may be that the tracking thread is operating successfully, such that a new keyframe is only recorded if the tracking thread is indicating that tracking quality is above a threshold value. This threshold quality value is described in more detail below.
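A sketch of the combined keyframe test described in the last two points is given below; the threshold values are illustrative assumptions rather than figures taken from the patent.

```python
import numpy as np

MIN_FRAMES_BETWEEN_KEYFRAMES = 20    # e.g. at least 20 frames since the last keyframe
MIN_TRANSLATION = 0.05               # minimum distance from existing keyframes (map units)
MIN_TRACKING_QUALITY = 0.3           # fraction of predicted features actually observed

def should_add_keyframe(frames_since_last, t_current, keyframe_translations, quality):
    """Return True only when all keyframe-addition conditions are satisfied."""
    if frames_since_last < MIN_FRAMES_BETWEEN_KEYFRAMES:
        return False
    if quality < MIN_TRACKING_QUALITY:
        return False                 # tracking must be operating successfully
    if not keyframe_translations:
        return True
    distances = [np.linalg.norm(t_current - t_k) for t_k in keyframe_translations]
    return min(distances) > MIN_TRANSLATION
```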
  • If no new keyframe is required, step 471 is performed in which the map is refined in this “free time” when mapping is not required.
  • The mapping thread uses this ‘free time’ to refine the existing map. This may be done using global bundle adjustment, which is too computationally expensive to be calculated in real-time.
  • Further refinements may be made by making new measurements in old keyframes, to measure newly created features in older keyframes or to re-measure outlier measurements. These data association refinements are given a low priority in the mapping thread such that they are only performed if there are no higher ranked tasks, such as when a new keyframe is required.
  • If a new keyframe is required, step 472 is performed in which the map stored at 450 is expanded and refined with the addition of the new keyframe.
  • Local bundle adjustment may be performed to iteratively optimise re-projection errors over all map points and keyframes.
  • the tracking system may have only measured a subset of the potentially visible features in the keyframe so the mapping thread re-projects and measures the remaining map features. Triangulation is performed with other keyframes to extract depth information.
  • While the mapping update is only performed as needed when certain conditions are met, the tracking thread proceeds at frame rate, with each frame of the image data processed by the processing unit.
  • In step 460, features in the currently received frame are detected in the image and tracked relative to the currently saved map 450.
  • Features are detected in the current image at four image pyramid levels (i.e. four versions of the same image, each down-sampled by a factor of two relative to the preceding level).
  • A small image patch around each feature point is also recorded, assuming the feature to be locally planar.
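A sketch of the pyramid and patch extraction is given below; the 8x8 patch size is an assumption, as the text states only that a small, locally planar patch is recorded.

```python
import cv2

def build_pyramid(grey_image, levels=4):
    """Four pyramid levels, each down-sampled by a factor of two."""
    pyramid = [grey_image]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

def feature_patch(image, x, y, half_size=4):
    """Record a small patch around a feature point, assumed locally planar."""
    return image[y - half_size:y + half_size, x - half_size:x + half_size].copy()
```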
  • A ‘prior’ estimate of the camera’s pose is then calculated based on its previous position using a decaying velocity motion model (similar to an alpha-beta filter).
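A sketch of such a decaying velocity model follows, with the pose simplified to a 6-vector (a full implementation would work on SE(3)) and an assumed decay factor.

```python
import numpy as np

class DecayingVelocityModel:
    def __init__(self, decay=0.9):
        self.decay = decay
        self.velocity = np.zeros(6)   # frame-to-frame motion [dx, dy, dz, droll, dpitch, dyaw]

    def predict(self, last_pose):
        """'Prior' pose estimate: extrapolate the decayed previous motion."""
        return last_pose + self.decay * self.velocity

    def update(self, last_pose, new_pose):
        """Blend the observed motion into the stored velocity (alpha-beta style)."""
        observed = new_pose - last_pose
        self.velocity = self.decay * 0.5 * (self.velocity + observed)
```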
  • Known 3-D map points are projected into the current image frame.
  • A pin-hole model may be used and distortion may be accounted for using a FOV-model.
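The projection step might look like the sketch below, using a pin-hole model with FOV-model (Devernay-Faugeras) radial distortion; the intrinsics fx, fy, cx, cy and the distortion parameter omega are assumed calibration values.

```python
import numpy as np

def project_point(X_cam, fx, fy, cx, cy, omega):
    """Project a point in camera co-ordinates into pixel co-ordinates."""
    x, y = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]                 # pin-hole projection
    r_u = np.hypot(x, y)
    if r_u > 1e-9 and omega > 1e-9:
        r_d = np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega    # FOV distortion
        x, y = x * r_d / r_u, y * r_d / r_u
    return fx * x + cx, fy * y + cy
```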
  • the planar feature patches are warped to accommodate the change in camera pose.
  • These projected points are matched to detected points in the current image using a fixed-range search around their predicted locations.
  • an initial coarse search is done with a limited number of (for example 50) feature points.
  • the camera pose is updated.
  • the camera pose may be updated using the matches of the limited number of feature points with a further update of the pose performed using a greater number of points (for example up to 1000), searching over a smaller window.
  • The tracking thread also estimates the tracking quality at every frame using the fraction of the predicted features which have been successfully observed. This fraction can be used as a measure of quality, and a threshold may be set below which no new keyframes are sent to the mapping thread.
  • The updated camera pose and feature information is fed back into the start point 440 of the mapping/tracking threads and this is used in determining whether a new keyframe is required and the refining/updating processes of the map.
  • the updated camera pose is used to calculate a corresponding trailer pose.
  • The camera pose is output in the form of a 3x1 translation vector and a 3x3 rotation matrix describing the orientation of the identified feature points relative to the corresponding points in the map.
  • The translation vector represents the motion of the camera, with respect to the map, relative to its original location.
  • The 3x3 rotation matrix can be decomposed into a combination of sequential rotations about each axis, known as the Euler angles: roll (φ), yaw (ψ) and pitch (θ). This is summarised as follows:
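One standard Z-Y-X (yaw-pitch-roll) decomposition consistent with the definitions above is shown below; the exact axis convention used by the authors may differ, so this should be read as an illustrative reconstruction rather than the patent's own formula.

```latex
R = R_z(\psi)\, R_y(\theta)\, R_x(\phi), \qquad
\psi = \operatorname{atan2}(R_{21}, R_{11}), \quad
\theta = -\arcsin(R_{31}), \quad
\phi = \operatorname{atan2}(R_{32}, R_{33})
```

Here ψ is the yaw angle, taken as the articulation angle when pitch and roll are small.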
  • Euler angles may be extracted from the rotation matrix using a combination of the above definitions and logic to reject multiple solutions.
  • the above rotations are in the camera co-ordinate frame.
  • the yaw angle may be taken to be the articulation angle of the trailer relative to the tractor provided pitch and roll variations are small.
  • For pitch and roll, however, one must take care to note the effect of the transformation from camera to trailer co-ordinates. For example, assume that the trailer is pitched at 3° about its own lateral axis. In the camera reference frame this would register as a 3° pitch angle only when the articulation angle is zero. At an articulation angle of 90°, this would register as a 3° roll angle.
  • In calculating the trailer pose, the processing unit therefore corrects pitch and roll measurements based on the measured yaw angle.
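A sketch of such a correction is given below; the simple rotation of the (pitch, roll) pair by the yaw angle and its sign convention are assumptions consistent with the 3° example above, not formulas taken from the patent.

```python
import numpy as np

def trailer_pitch_roll(cam_pitch, cam_roll, yaw):
    """Rotate camera-frame (pitch, roll) by the yaw angle to approximate trailer-frame values (radians)."""
    pitch = cam_pitch * np.cos(yaw) + cam_roll * np.sin(yaw)
    roll = -cam_pitch * np.sin(yaw) + cam_roll * np.cos(yaw)
    return pitch, roll
```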
  • Figures 3A - 3F illustrate the processing of image frames using the parallel tracking and mapping method.
  • Figure 3A shows the initialisation method in which features are detected in an initial image, making up the first keyframe, and these features are then continuously tracked through a small change in orientation of the camera relative to the trailer before the second keyframe is taken, capturing the new locations of these features.
  • the tracked features are shown overlaid as trajectories 501 in the image.
  • Figure 3B shows the identified features on the front face 310 of the trailer 300. Most detected features are located on the clear visual features of the attached visual texture, though some have been detected on the bare trailer face 310.
  • Figure 3C shows the features as the vehicle is turned to provide a change in articulation angle. All features are tracked effectively as the articulation angle increases as shown in Figure 3C and also under variations in lighting intensity as shown in Figure 3D (most features are retained from Figure 3B).
  • the generated scene map of feature points is shown in Figure 3E where the reference plane has been fixed to the features on the planar trailer face 310 (the plane is also shown in Figures 3A to 3D), and the path of relative camera motion is shown as an arc in front of this plane.
  • a view perpendicular to the plane of the trailer front is shown in Figure 3F in which the feature points can be observed in the detected plane of the trailer face 310.
  • LCVs: Long Combination Vehicles
  • For such vehicles, articulation angle measurements at multiple points of articulation are required.
  • Various configurations of articulated vehicles are shown in Figures 4A - 4H, together with how the system may be modified to measure the articulation angles.
  • The above described embodiment of the invention may be applied to the tractor semi-trailer combination shown in Figure 4A and the rigid truck and rigid drawbar trailer shown in Figure 4B by mounting the rear facing camera on a rear face of the tractor or truck respectively. Extending the invention to other forms of articulated vehicle requires a change in position of the mounted camera on the vehicle.
  • Common LCVs include the ‘B-double’, illustrated in Figure 4E, and ‘truck and full-trailer’ combinations, illustrated in Figure 4C, but many other combinations are used, as shown in the arrangements of Figure 4.
  • As shown in Figure 4E, if the camera is mounted to the rear of the tractor 200 of a B-double, the system would not function since the first trailer 301 (the ‘B-link’) would obscure the view of the second trailer 302 (the semi-trailer).
  • Alternative camera mounting options may be used to overcome these issues such that both trailers are in the field of view.
  • An elevated mount above the tractor 111 or cameras mounted to the side mirrors 112 may be employed, as shown in Figures 4E - 4H.
  • a first camera may be mounted on the tractor/truck 200 so as to image the first trailer 301 and a second camera may be mounted on the rear of the first trailer 301 so as to image the second trailer 302.
  • Image data from the first camera can be processed to determine the articulation angle between the truck/tractor 200 and the first trailer 301, and image data from the second camera can be processed to determine the articulation angle between the first trailer 301 and the second trailer 302.
  • Extending the applicability of the system to rigid truck and trailer combinations can be achieved by mounting the camera 113 at the rear of the rigid truck, which provides a field of view which incorporates both the drawbar (or “dolly”) 303 and the drawbar trailer/full-trailer/semi-trailer 302 without obstruction, so that the articulation angles of both can be calculated. This is illustrated in Figures 4B-4D.
  • The image data from the camera 113 is partitioned into two regions of interest: a first region of interest 601 containing at least a portion of the trailer 302 and a second region of interest 602 containing at least a portion of the drawbar 303.
  • These partitions 601, 602 are shown in Figures 5A and 5B during use of the system with a truck and full trailer (“Nordic”) combination.
  • The size of the partitions may be chosen as necessary to contain a sufficient portion of the relevant object; in this case the sizes of the partitions were 440x270 and 560x180 pixels for the semi-trailer and drawbar respectively, and were centred laterally.
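A sketch of partitioning a 640x480 frame into these two laterally-centred regions follows; the vertical placement is assumed, since the text gives only the region sizes and the lateral centring.

```python
def centred_roi(frame, width, height, top):
    """Crop a region of the given size, centred laterally, starting at row 'top'."""
    frame_w = frame.shape[1]
    left = (frame_w - width) // 2
    return frame[top:top + height, left:left + width]

def partition_frame(frame):
    trailer_roi = centred_roi(frame, 440, 270, top=0)     # first region: trailer
    drawbar_roi = centred_roi(frame, 560, 180, top=300)   # second region: drawbar
    return trailer_roi, drawbar_roi
```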
  • the image data from the two partitions is then processed in the manner described above to extract an estimate for each articulation angle - in the case of Figure 5 for the drawbar 303 and trailer 302.
  • The regions may be processed simultaneously by introducing two more processing threads.
  • an additional tracking thread and an additional mapping thread may be introduced for the second partition. This can be realised by using a processor with four cores to provide the four threads of parallel processing.
  • the two regions can be processed sequentially at each frame.
  • the small lag caused by the additional processing does not significantly delay processing and can be mitigated by introducing suitable delay compensation techniques.
  • the processing demand varies proportionately with the number of features. As such, with the same number or fewer features, the total processing demand in a sequential processing routine yields comparable framerates to full-frame processing.
  • The drawbar view is slightly different in comparison to the trailer view in that the dominant plane is horizontal instead of vertical. However, this does not influence the accuracy of the output. What is important about the application of the system to a drawbar is the proximity of the camera to the point of rotation. If the camera were located on the axis of rotation, stereo initialisation would not be possible, as this requires relative translation between the two initial keyframes. Therefore the camera must be sufficiently offset from the axis of rotation to ensure that stereo initialisation of the drawbar is possible in all cases.
  • the zeroing procedure may be carried out as described above when the truck and full-trailer is being driven in a straight line.
  • A second camera may be added to provide a stereo camera pair, which enables to-scale measurements of distances. This can be used to detect the location of a trailer hitch, relative to a truck hitch, and hence aid in coupling the trailer. The algorithm can then be used to guide the driver towards coupling (by planning a trajectory, etc.).
  • the system can measure trailer parameters during operation, such as length, wheelbase, number of axles etc.
  • a reliable real time measure of the articulation angle of a vehicle and trailer can be provided which may be integrated on existing vehicles to facilitate the move towards more carbon efficient larger combination vehicles.
  • the invention uses an image processing technique such that the system may be installed on the vehicle with no adjustments to the trailer required such that it may be used commercially where often different trailers are used with one vehicle.
  • the processing method is such as to allow a computationally efficient mapping and tracking procedure that can provide an accurate measure of articulation angle in real time.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A system for measuring the articulation angle of a trailer (300) pulled by a vehicle (200) is described. The system comprises: a camera (110) mounted, in use, in a fixed orientation on the vehicle (200) so as to image the trailer (300); and a processing unit (120) arranged to receive image data from the camera (110). The processing unit (120) is configured to: - receive initial image data from the camera (110) from multiple orientations relative to the trailer (300); - identify common feature points in the initial image data to compute and store a three-dimensional map; - receive further image data as the trailer (300) is pulled by the vehicle (200); - identify and track feature points in the image data relative to the stored map to estimate the pose of the camera (110) relative to the trailer (300); and calculate an articulation angle based on the estimated pose. With the system according to the present invention a reliable real time measure of the articulation angle of a vehicle (200) and trailer (300) can be provided which may be integrated on existing vehicles with little or no modification.

Description

METHOD AND SYSTEM OF ARTICULATION ANGLE MEASUREMENT
TECHNICAL FIELD
The present invention relates to a method and system for measuring an articulation angle between a vehicle and trailer, more particularly the measurement of an articulation angle using a vehicle-mounted camera.
BACKGROUND
Heavy Goods Vehicles, or HGVs, play a vital role in sustaining and growing modern
economies. In the UK in 2014, inland freight transport amounted to 185 billion tonne kilometres, and of this 74% was transported by road on HGVs. The growth of freight transport demand on limited infrastructure, coupled with ambitious CO2 emission reduction targets, has driven technology and policy development to investigate and implement measures to improve the efficiency of road freight transport.
It is possible to reduce vehicle emissions by 11-19% with relatively minor barriers to introduction by permitting the use of longer and/or heavier truck combinations, carrying more freight with fewer vehicles. These vehicles are often referred to as ‘High Capacity Vehicles’, or HCVs. This solution can utilise existing trucks and trailers, requires no additional infrastructure, and reduces the number of HGVs on the roads. Furthermore, these vehicles are as safe as, and often safer than, conventional HGVs when operated within a suitable regulatory framework. The effectiveness and safety of such vehicles has been proved in implementations and trials in a number of countries.
However, manoeuvrability is a particular limitation for long HGVs or HCVs with two or more articulation points. These will tend to exhibit large cut-in behaviour when turning, and are inherently unstable in reverse. Although a professional driver can reverse singly articulated vehicles (into a loading bay for example), reversing combinations with two or more articulation points is nearly impossible without technological intervention.
Existing technologies intended to improve the manoeuvrability and hence road network access of HCVs include active trailer steering, autonomous reversing, combined braking and steering control and anti-jackknife control. A notable challenge preventing the widespread uptake of such technologies is the requirement for additional vehicle sensors and instrumentation. Specialised sensing equipment is fundamental to most active vehicle control technologies and high levels of accuracy and robustness are usually required. The cost and practicality of sensors is equally important.
A particularly significant barrier to the commercial adoption of such manoeuvrability-improving technologies is the difficulty in implementing reliable articulation angle sensing. Articulation angle sensing refers to the measurement of the yaw angle between a truck and trailer, or between subsequent trailers, and is a core requirement for autonomous reversing, jackknife prevention, and combined braking and steering control technologies.
Various articulation sensors exist either commercially or for research and development work. These are generally trailer-based “contact-type” sensors, and require non-standard communication links with the tractor. In particular, they often rely on trailer-based articulation angle sensors, with non-standard or experimental communication links. Furthermore, they often require significant modifications to the trailer kingpin and/or require a physical connection between vehicle and trailer. They also suffer from insufficient resolution for active-trailer steering applications. Furthermore, for semi-trailers the fifth wheel is subjected to high static and dynamic loads and is a dirt- and grease-prone environment, with potentially negative effects on the longevity of these sensors.
Since tractor and trailer units are generally designed to be interchangeable so that any tractor can pull any trailer, trailer-based articulation sensors require installation onto each trailer to be pulled by the vehicle and therefore increase the cost and burden in widespread installation. Some vehicle based systems have been developed but these technologies generally require knowledge of the trailer states and employ non-standard sensors. Such systems are therefore not well suited to commercial use where trailers are interchanged with a certain tractor as the systems will often need to be adjusted to suit the changing trailer geometry and/or have sensors fitted to the trailer.
A substantial improvement in the commercial feasibility of these technologies is achieved if all sensing can be conducted remotely on the same vehicle unit as the controller and actuation, without the need for modifications to or pre-existing knowledge of the other vehicle unit. This would permit these technologies to be completely developed, optimised and pre-fitted, mitigating the risk of incorrect fitment of sensors or modifications to the other vehicle unit which would impact performance.
There have been some attempts to measure articulation angle via non-contact measurement, for example using camera-based methods. However such systems generally require extensive knowledge of the type of trailer used and/or physical markers of some form to be placed on the trailer to be identified in the images. Such methods therefore do not overcome the above issues with the requirement to be robust to interchanging of various different trailer types. Furthermore the accuracy with which an articulation angle can be detected with such systems is limited and the data processing requirements are too high to allow the commercial introduction of such systems at an appropriate cost.
Accordingly, in order to facilitate the wider uptake of HCVs, there exists a need for a system for measuring the articulation angle between a truck and trailer which makes progress in overcoming the above problems with known systems.
In particular, there is a need for a system which can be installed on the vehicle and does not require modifications to the trailer. The system should be low cost and practical, should be sufficiently accurate for control effectiveness, should not require significant pre-configuration or knowledge of specific vehicle/trailer type, and should be robust to the heavy duty operating environment of HGVs.
SUMMARY OF THE INVENTION
The present invention seeks to provide a system and method for the measurement of an articulation angle between a vehicle and trailer. It is a particular aim to provide a non-contact, vehicle-based articulation angle sensor which is compatible with multiple trailer combinations. A further aim is to provide a low cost system which is robust to the use of various interchangeable trailers and can measure the articulation angle to a sufficient degree of precision without prolonged calibration procedures.
According to a first aspect of the invention, there is provided a system for measuring the articulation angle of a trailer pulled by a vehicle, the system comprising: a camera mounted, in use, in a fixed orientation on the vehicle so as to image the trailer; and a processing unit arranged to receive image data from the camera; wherein the processing unit is configured to: receive initial image data from the camera from multiple orientations relative to the trailer; identify common feature points in the initial image data to compute and store a three-dimensional map; receive further image data as the trailer is pulled by the vehicle; identify and track feature points in the image data relative to the stored map to estimate the pose of the camera relative to the trailer; and calculate an articulation angle based on the estimated pose.
The present invention therefore provides a non-contact system for articulation angle measurement by processing the image data received by a camera mounted so as to image the trailer. In this way, a vehicle installed with the system can be used with any trailer without modification and therefore improves the interoperability of vehicles and trailers. It also prevents the wear associated with sensors positioned on the trailer and avoids the need for communications links between the trailer and vehicle. The components required are inexpensive and straightforward to install such that it is low cost and practical to implement across a large number of vehicles. The image processing unit of the present invention does not require any knowledge about the imaged trailer, such as specific dimensions or known patterns in the structure of the trailer, nor does it require physical markers to be implemented. The degree of pre-calibration and configuration is therefore markedly reduced allowing for complete interoperability with various trailers. In particular the above system makes no assumptions regarding planar surfaces and is theoretically applicable to arbitrarily non-planar scenes. This means that the algorithm would be applicable to theoretically all trailer shapes, including box-type trailers, tankers with domed fronts, trailers with front-mounted refrigeration units, and abnormal load trailers or car-carriers. As more of the trailer is viewed with increasing articulation angles, additional feature points can be added to the three-dimensional map of the trailer, allowing for efficient recall when viewed again.
Unlike basic vision-based methods based on geometric models, no pre-formulation of a geometric model is required (for example based on specific physical parameters of the trailer); instead the model is generated automatically by constructing a three-dimensional map of identified feature points in the image data and tracking feature points relative to the stored three-dimensional map. By tracking feature points relative to the stored three-dimensional map it is also possible to measure the relative roll and pitch angles of the trailer, in addition to the relative yaw angle (corresponding to the articulation angle). This is a significant advantage over prior art methods which generally require additional sensors placed on the trailer to measure roll and pitch angles, meaning the system is not solely vehicle-based but also requires the trailer to be configured before use. The processing steps of the current invention mean pitch, roll and yaw can be calculated solely from the vehicle-mounted camera, meaning a trailer can be exchanged without having to mount additional sensors.
Furthermore, unlike other vision-based methods, the separation of the mapping and tracking elements of the method mean it is computationally efficient, reducing the processing requirement and allowing the continuous calculation of an articulation angle in real time. The camera may be a camera with a wide angle lens such that, in use, most or all of the opposing face of the trailer is captured in the field of view. In this way, feature points may be selected across a wide area of the trailer to provide more accurate tracking and/or mapping of the feature points.
The term “vehicle” or “towing unit” is used to refer to any type of automobile configured to pull a trailer. Such vehicles include a “truck”, which is generally a freight transport vehicle having a rigid structure, and a “tractor”, which is a vehicle which couples to a trailer via a fifth wheel coupling. The vehicle or towing unit might also itself be singularly or multiply articulated. For example, in the case of a truck or tractor pulling two trailers (as shown in Figure 4 (e) and (g)), the truck/tractor and the directly connected trailer together can be considered the “vehicle” or “towing unit”, which is configured to pull one or more additional trailers. In the latter situation the camera can be mounted on the truck/tractor (i.e. the driving unit) or the immediately connected trailer, so as to image the second trailer, towed by the first trailer. Clearly this extends to three or more trailers (see for example Figure 4 (f) and (h)) wherein the truck or tractor and the front two or more trailers can be considered the “vehicle” or “towing unit”, as this falls within the above definition of “any automobile configured to pull a trailer”. The camera can therefore be mounted on the truck/tractor or an intermediate trailer pulled by the truck vehicle, as long as it is orientated to image the trailer pulled by the truck and one or more intermediate trailers. In this way, the invention can measure the articulation angle between a truck/tractor and trailer or between adjacent connected trailers.
The above definition of the term“vehicle” or“towing unit” also covers other types of vehicle configured to pull a trailer, for example the front unit of an articulated bus or the front unit of articulated construction equipment.
The term“feature points” is used to refer to distinct points of interest in the image data which are identifiable by the processing unit. These are locations and intensities of pixels or groups of pixels in the image data. Features are usually detected based on common groupings of pixel intensities, or particular discontinuities in pixel intensity gradients. Common image features include edges, corners and‘blobs’. A feature point may have associated with it a feature descriptor, which enables the effective matching of similar feature points from one image to another. Examples of feature descriptors include, but are not limited to, the Binary Robust Independent Elementary Features (BRIEF), Gradient Location and Orientation Histogram (GLOH), or a vector of the pixel intensities of the pixels in an area of pre-defined size around the location of the feature.
The processing unit may be configured to use one or more feature detector algorithms to identify feature points in the image. For example, convolving an image with a Gaussian kernel, Laplacian of Gaussian (LoG) kernel, or a Difference of Gaussian (DoG) kernel may be used to detect peaks, troughs or zero-crossings of pixel intensity gradients, from which various features are defined. Examples of feature detectors that may be used include the Canny edge detector, the Harris corner detector, and the DoG blob detector. A possible corner detector which may be implemented is the Features from Accelerated Segment Test (FAST), in which the variation in pixel intensities in a circle around a point is used to determine whether or not the point is a corner.
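By way of illustration only, a minimal sketch of FAST corner detection combined with BRIEF descriptors is given below, assuming the OpenCV library (with its contrib modules) is used; the invention is not limited to any particular library, function or parameter values.

```python
import cv2

def detect_trailer_features(frame_grey):
    # Detect FAST corners in a greyscale frame of the trailer face.
    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
    keypoints = fast.detect(frame_grey, None)

    # Compute BRIEF descriptors (available via opencv-contrib) so that the
    # same physical points can be matched between successive frames.
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
    keypoints, descriptors = brief.compute(frame_grey, keypoints)
    return keypoints, descriptors
```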
The phrase “three-dimensional map” is used to refer to the record of the locations of all identified feature points in three-dimensional space, together with the appropriate feature descriptors when used.
The phrase“pose of the camera” is used to refer to the position and orientation of the camera. The pose may therefore have 6 degrees of freedom. The pose of the camera may be output in the form of a 3x1 translation vector and a 3x3 rotation matrix.
The articulation angle refers to the yaw angle of the trailer relative to the vehicle or drawbar. The articulation angle may be calculated by decomposition of the rotation matrix into a combination of sequential rotations about each axis, known as Euler angles: roll (φ), yaw (ψ) and pitch (θ), where the yaw angle corresponds to the articulation angle assuming the pitch and roll angles are zero. In preferable embodiments the initial image data comprises two images from two different orientations of the camera relative to the trailer. For example, two images may be taken as the vehicle and trailer is driven through a manoeuvre which provides a change in orientation of the trailer. This process may be referred to as “stereo initialisation”. These two images are defined as “keyframes” - frames on which the current map is based. Additional “keyframes” may be added to update the map during processing of further image data.
Preferably the processing unit is configured to update the stored three-dimensional map from the received further image data. In this way, the currently stored three-dimensional map, relative to which the pose of the camera is estimated, may be updated and optimised as further image data is received. The accuracy of the calculated articulation angle is therefore improved.
Additional images taken from the further image data to update the map are defined as further“keyframes”.
In certain embodiments the processing unit is configured to only update the stored three-dimensional map if certain conditions are met, wherein the conditions comprise one or more of:
the time since the last update exceeds a certain threshold;
the orientation of the camera relative to the trailer has changed by more than a certain threshold;
feature points in the image data are being tracked successfully.
In this way, the processing requirement is reduced since the stored map is only updated when required, providing a more computationally efficient method.
Preferably the processing unit is configured to run two parallel threads: a mapping thread which updates the stored three-dimensional map from the further image data; and a tracking thread which tracks feature points in the further image data relative to the latest stored three-dimensional map and updates the estimated pose of the camera. This allows the use of computationally expensive techniques not usually associated with real-time operation allowing the articulation angle to be tracked at the frame rate. Furthermore, since mapping and tracking are run independently, it is not necessary to use every video frame for mapping such that the system does not waste time processing redundant information but only processes a smaller number of frames which contain new, useful information which can be used for mapping. This also means more time is gained for processing each frame for mapping, reducing the computational burden associated with meeting strict real-time processing requirements.
The parallel formulation results in excellent computational efficiency, while allowing for computationally expensive pose and map refinement to take place when other mapping tasks are not needed.
In some embodiments the processing unit is a dual core processor wherein one core carries out the mapping processes and a second core carries out the parallel tracking processes.
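The following sketch shows one possible arrangement of the two threads in Python; the helper functions are placeholders standing in for the tracking, keyframe-selection and triangulation steps described elsewhere in this document, and none of the names or data structures are taken from the patent.

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=1)   # most recent camera frame, for tracking
keyframe_queue = queue.Queue()         # frames selected as keyframes, for mapping
shared_map = {"points": [], "keyframes": []}
map_lock = threading.Lock()

# Placeholder implementations so the structure is runnable in isolation.
def track_against_map(frame, current_map): return None        # would return a pose
def needs_new_keyframe(pose, current_map): return False
def triangulate_new_points(frame, pose, current_map): return []
def publish_articulation_angle(pose): pass

def tracking_thread():
    # Runs at frame rate: estimate the camera pose for every incoming frame.
    while True:
        frame = frame_queue.get()
        with map_lock:
            current_map = dict(shared_map)            # snapshot of the latest map
        pose = track_against_map(frame, current_map)
        publish_articulation_angle(pose)
        if needs_new_keyframe(pose, current_map):
            keyframe_queue.put((frame, pose))

def mapping_thread():
    # Runs only when a new keyframe is available: expand and refine the map.
    while True:
        frame, pose = keyframe_queue.get()
        new_points = triangulate_new_points(frame, pose, shared_map)
        with map_lock:
            shared_map["keyframes"].append((frame, pose))
            shared_map["points"].extend(new_points)

threading.Thread(target=tracking_thread, daemon=True).start()
threading.Thread(target=mapping_thread, daemon=True).start()
```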
Preferably the processing unit is further configured to identify one or more regions of interest in the image data. In this way, a more computationally efficient system is provided since only a subsection of the image data is continually processed. This region of interest may be chosen to be an area which is readily identifiable to the processing unit and provides a more reliable calculation of camera pose and accordingly articulation angle. For example the region of interest may correspond to a portion of the trailer or drawbar. In certain embodiments the processing unit is configured to identify two regions of interest in the image data, a first region of interest corresponding to at least a portion of the trailer and a second region of interest corresponding to at least a portion of the drawbar. In this way two independent articulation angles may be extracted from a drawbar-trailer combination having one articulation angle between the vehicle and the drawbar and the second articulation angle between the drawbar and the trailer.
In some embodiments the processing unit is either configured to: run four parallel threads, two threads corresponding to the mapping and tracking of the trailer region of interest respectively and two threads corresponding to the mapping and tracking of the draw bar region of interest respectively; or run two parallel threads wherein the mapping and tracking of the trailer region of interest and draw bar region of interest are performed alternately. In this way the tracking and mapping of the two regions of interest may be performed in a computationally efficient way to provide the two articulation angles in real time. This may equally be applied to regions of interest other than a trailer and drawbar or extended to three or more regions of interest.
Preferably the processing unit is configured to determine a zero value of the articulation angle by collecting image data while the vehicle is driven in a straight line. Zeroing the articulation angle in this way provides a more accurate measurement during processing since, after the initial map is stored based on the initial image data (stereo initialisation), the reference co-ordinate frame is not necessarily aligned with the longitudinal axis or yaw plane of the vehicle, depending on the features found or the orientation of the trailer between the initial images. In this way, calibration is performed during one simple initial straight line manoeuvre over a few seconds.
In certain embodiments zeroing is performed by a user-initiated or automated command when the trailer is known to be travelling straight with both articulation angles approximately zero. Subsequent rotation matrices may be post-multiplied by this reference matrix to adjust the co-ordinate frame accordingly, before performing Euler decomposition to extract the articulation angle.
Preferably the processing unit is configured to estimate the pose by calculating a rotation matrix, the rotation matrix describing the relative motion between the vehicle and trailer, wherein the rotation matrix is calculated from the location of the identified feature points relative to the location of the corresponding feature points in the stored three-dimensional map.
In certain embodiments the system provides two primary outputs: a 3-D map of feature point locations, and the camera pose in the form of a 3x1 translation vector and a 3x3 rotation matrix. The rotation matrix can be defined relative to the dominant plane observed during initialisation and the translation vector represents the motion of the camera relative to its original location. Euler decomposition may be used to extract the roll, pitch and yaw angles of the trailer, wherein the yaw angle corresponds to the articulation angle, assuming the pitch and roll angles are zero.
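As an illustration only, the Euler angles may be extracted from the rotation matrix as sketched below; this assumes one common rotation order and the NumPy library, neither of which is prescribed by the patent.

```python
import numpy as np

def euler_from_rotation(R):
    # Decompose a 3x3 rotation matrix (assumed R = Rz(yaw) @ Ry(pitch) @ Rx(roll))
    # into roll, pitch and yaw angles in radians.
    pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

# The yaw angle is taken as the articulation angle when pitch and roll are small.
```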
In certain embodiments the camera is mounted, in use, on a vehicle towing a full-trailer or dolly and semi-trailer having two articulation angles, one articulation angle between the vehicle and the drawbar and the second articulation angle between the drawbar and the trailer; and the camera is mounted, in use, in a fixed orientation on the vehicle such that the draw-bar and trailer are imaged; wherein the processing unit is further configured to calculate the two articulation angles. This allows the calculation of two articulation angles using a single camera and processing unit. In this way, the multiple articulation angles of various HGV combinations may be calculated, for example truck and full-trailer; and truck, converter dolly and semi-trailer (“Nordic combination”). In some embodiments the camera may be mounted at a raised position above the vehicle to capture multiple trailers in the field of view such that the articulation angle of each may be calculated. In such an embodiment, the multiple articulation angles of additional HGV combinations may be calculated, which include but are not limited to: B-double; B-triple; A-double; A-triple; AB-triple.
Preferably the feature points are arbitrary points in the image data selected by the image processing unit such that no physical markers need be implemented. In this way, no pre-configuration or preparation of a trailer need be carried out.
Features may be detected based on common groupings of pixel intensities, or particular discontinuities in pixel intensity gradients. Examples of image features include edges, corners and ‘blobs’. Edges are not well-suited to motion detection, but can be used to identify shapes (using Hough transforms for example). Corners and blobs are more useful for motion detection.
Preferably the processing unit is configured to receive the initial image data as the vehicle is driven through a manoeuvre such that relative motion between the vehicle and trailer is generated to provide the multiple orientations. In this way, the initial map can be generated in a straightforward manner via a simple manoeuvre and without significant pre-configuration. Preferably two images of the trailer are captured during the manoeuvre to provide a moderate translation between them. These two images are the first two keyframes forming the initial image data and are used to generate the initial three dimensional map. In certain embodiments the initial three dimensional map can be stored offline and then subsequently used to recognise when the same trailer is coupled to the vehicle again. In this way it is not necessary to repeat the initialisation process when it has been performed previously.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1A is a schematic illustration of a system according to the present invention;
Figure 1B is an illustration of the system according to the present invention in use on a vehicle;
Figure 2 is a diagram showing the steps performed by the processing unit of a system according to the present invention;
Figure 3A illustrates the stereo initialisation step performed by the processing unit of a system according to the present invention;
Figure 3B illustrates the features on a trailer face detected by the processing unit of a system according to the present invention;
Figure 3C illustrates the same features tracked by the processing unit of a system according to the present invention as the articulation angle varies;
Figure 3D illustrates the same features tracked by the processing unit of a system according to the present invention as the articulation angle varies under low light conditions;
Figure 3E illustrates the 3D map of the feature points of the trailer stored by the processing unit of a system according to the present invention as the articulation angle varies;
Figure 3F illustrates a side view of the 3D map of the feature points of the trailer stored by the processing unit of a system according to the present invention as the articulation angle varies;
Figure 4A illustrates a system according to the present invention implemented on a tractor semi-trailer combination;
Figure 4B illustrates a system according to the present invention implemented on a rigid truck and rigid drawbar trailer combination;
Figure 4C illustrates a system according to the present invention implemented on a rigid truck and full-trailer combination;
Figure 4D illustrates a system according to the present invention implemented on a rigid truck, dolly and semi-trailer combination (“Nordic combination”);
Figure 4E illustrates a system according to the present invention implemented on a B-double combination;
Figure 4F illustrates a system according to the present invention implemented on a B-triple combination;
Figure 4G illustrates a system according to the present invention implemented on an A-double combination;
Figure 4H illustrates a system according to the present invention implemented on an A-triple combination;
Figures 5A and 5B illustrate the segmentation of the image data into regions of interest in order to calculate two articulation angles.
DETAILED DESCRIPTION
Figures 1A and 1B illustrate a system 100 for measuring the articulation angle of a trailer 300 pulled by a vehicle 200, according to the present invention. The example of a rigid truck and full-trailer with two articulation angles is illustrated. The system 100 comprises a camera 110 mounted in a fixed orientation on the vehicle 200 so that the trailer 300 is within the field of view. The system 100 also includes a processing unit 120 which is connected to the camera 110 such that it can receive image data from the camera. As described in greater detail below, the processing unit 120 is configured to receive initial image data from the camera 110 from multiple orientations relative to the trailer 300; identify common feature points in the initial image data to compute and store a three-dimensional map; receive further image data as the trailer 300 is pulled by the vehicle 200; identify and track feature points in the image data relative to the stored map to estimate the pose of the camera 110 relative to the trailer 300; and calculate an articulation angle A based on the estimated pose.
The camera may be a digital camera configured to take images of an appropriate field of view, such that it includes a significant proportion of the trailer face, and is capable of connection to the processing unit. In this example a digital camera is fitted to a bracket behind the tractor cabin, facing the front of the semi-trailer. The camera is mounted centrally relative to the sides of the tractor cab and at an arbitrary height above the hitch while maintaining a reasonable view of the front of the trailer. The optical axis of the camera may be aligned by eye with the lateral centre of the trailer. The lens used is a wide-angle lens with a focal length of approximately 3 mm. The camera may be colour or greyscale. In this example greyscale images are captured via USB 3.0 at 20 fps at a resolution of 640x480 pixels.
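For illustration only, a capture configuration matching the example above might look as follows with OpenCV; the device index is hypothetical, and the extent to which the camera honours the requested properties depends on the hardware and driver.

```python
import cv2

cap = cv2.VideoCapture(0)                      # hypothetical device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 20)

ok, frame = cap.read()
if ok:
    # Convert to greyscale, as used in this example.
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```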
In this example, the connection means between camera and processing unit is a USB 3.0 cable but may equally be provided by another physical communications link or a wireless connection.
Figure 1B illustrates how the system is used to monitor the articulation angle A of a semi-trailer 300 being pulled by a vehicle 200. In this example, the camera 110 is mounted in a fixed orientation on the rear face 210 of the vehicle 200 such that it faces the opposing, front face 310 of the trailer 300. As will be described in more detail below, to initialise the system, two images are taken with the trailer 300 in differing orientations relative to the vehicle 200 (and accordingly also to the camera 110). For example a first image may be taken as the vehicle 200 is making a right-hand turn as shown in Figure 1B and a further image may be taken as it takes a subsequent left-hand turn. Image data corresponding to these initial two images is received by the processing unit 120 (not shown in this view) which processes them to produce a three-dimensional map of the imaged portion of the front face 310 of the trailer 300. As the vehicle continues to travel, pulling the trailer 300, further images are received and processed to determine the rotation of the current image relative to the stored map. In this way the pose of the camera 110 may be estimated from which the articulation angle A may be calculated.
Processing method
The method implemented by the processing unit 120 to determine the articulation angle from the received image data will now be described in more detail. The processing method used implements tracking of image features relative to an initial 3D map and updating of the 3D map as parallel processes to obtain gains in computing efficiency. The method can be summarised as follows:
Parallel tracking and mapping (PTAM) consists of two parallel processing threads: a mapping thread and a tracking thread. Assuming a map of 3-D feature points has already been generated (the initial map is created in the stereo initialisation step using the two “keyframes” of the initial image data), the tracking thread is responsible for matching features detected in the current frame with features observed in the previously stored keyframes, and thereby updating the camera’s pose in the current frame. Thus the motion of the camera is tracked in a known 3-D scene. Tracking is performed at frame rate (i.e. at each new frame).
Specific details of the mapping and tracking tasks of one particular example of the processing system are illustrated in Figure 2.
Figure 2 illustrates the data processing method 400 according to the present invention.
At step 410, stereo initialisation is performed in which initial image data is received from the camera in the form of a plurality of images from different orientations relative to the trailer.
For example, two images are taken as the vehicle is driven through a manoeuvre to provide a variation in orientation of the trailer 300 relative to the camera 110. These two images are the first two “keyframes” - that is, images on which the 3D map is based.
At step 420 the initial map is computed and stored based on the initial image data.
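A minimal sketch of how such an initial map might be computed is given below, assuming OpenCV is used, that matched feature point coordinates from the two keyframes are available, and that the camera intrinsics K are known; none of these choices or names are prescribed by the patent.

```python
import cv2
import numpy as np

def stereo_initialise(pts1, pts2, K):
    # pts1, pts2: Nx2 float32 arrays of matched feature locations in the two keyframes.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate an initial 3-D map; the map is accurate only up to an
    # unknown scale factor because the true camera translation is unknown.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first keyframe at the origin
    P2 = K @ np.hstack([R, t])                           # second keyframe pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    map_points = (pts4d[:3] / pts4d[3]).T
    return R, t, map_points
```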
This stereo initialisation step is important, as it defines the initial map upon which all updates of the map are based. Features are detected in an initial image, making up the first keyframe, and these features are then continuously tracked. A small translation of the camera relative to the scene is then required before the second keyframe is taken, capturing the new locations of these features. As the exact magnitude of this translation is not known and can vary, the resultant Cartesian map is only accurate up to a scale factor. Rotations are independent of this assumption.

At step 430 the articulation angle is zeroed.
An additional initialisation step may be used to zero the articulation measurements during processing. This can be carried out in a straightforward manner during real-time processing. After stereo initialisation, the reference co- ordinate frame is not necessarily aligned with the longitudinal axis or yaw plane of the truck, depending on the features found or the orientation of the trailer between the first two keyframes.
When the trailer is known to be travelling straight, with the articulation angle approximately zero, the processing unit obtains the instantaneous rotation matrix of the pose. This may be initiated by a user entering a command on a user interface, or may be automated. Subsequent rotation matrices may then be post-multiplied by this reference matrix to adjust the co-ordinate frame accordingly, before performing Euler decomposition to extract the articulation angle.
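A hedged sketch of this zeroing step follows; here the reference matrix is stored as the inverse (transpose) of the straight-ahead rotation so that the post-multiplication described above returns zero yaw when driving straight. This detail is an assumption, since the published text does not give the exact formulation.

```python
import numpy as np

R_ref = np.eye(3)   # reference matrix, identity until zeroing is performed

def zero_articulation(R_straight):
    # Called (by user command or automatically) while the combination is known
    # to be travelling straight: store the inverse of the current rotation.
    global R_ref
    R_ref = R_straight.T

def zeroed_rotation(R_current):
    # Post-multiply subsequent rotations by the reference matrix before
    # performing the Euler decomposition.
    return R_current @ R_ref
```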
Steps 410 to 430 represent the initialisation processes which are complete at point 440 in Figure 2, at which stage the parallel tracking 460 and mapping 470 threads carried out by the processing unit are started. The image data is received continuously with tracking carried out at the frame rate and mapping carried out as needed. In some embodiments the tracking and mapping threads are started immediately following stereo initialisation and then zeroing is performed at a later point during tracking to zero the measurements.
The mapping thread is responsible for building the 3-D map of feature points relative to the global reference frame defined during initialisation. This need not run at frame rate but rather performs more intensive map updates when necessary, independently of the tracking thread.
At step 450 the latest map is recalled by the processing unit and provides the basis for the parallel tracking and mapping threads. Immediately following stereo initialisation the map is based solely on the two keyframes of the initial image data. After further keyframes are recorded (as discussed below) the map is updated such that it is based on N keyframes, where N is the total number of keyframes taken from the image data.
At step 470 the processing unit checks whether a further keyframe is required.
Mapping is not updated at every frame but is limited only to certain ‘keyframes’ where various criteria have been met. This allows for a computationally more expensive but more accurate method of mapping to be performed since it does not need to be performed at the frame rate. In addition, by taking mapping out of the tracking thread, more computational time is made available for the tracking task, resulting in more accurate camera pose estimation.
Various criteria may be used to decide whether a further keyframe is to be recorded. For example a further keyframe may be recorded only if sufficient camera translation has occurred for the mapping update to be meaningful. In this way a keyframe is only added if a minimum distance from other keyframes is exceeded. Similarly a minimum number of frames may be selected between keyframes such that, for example, new keyframes are only added provided at least 20 frames have passed since the last keyframe. A further criterion may be that the tracking thread is operating successfully, such that a new keyframe is only recorded if the tracking thread indicates that tracking quality is above a threshold value. This threshold quality value is described in more detail below.
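By way of example, the keyframe decision could be implemented as sketched below; the threshold values are illustrative only and are not taken from the patent (the map scale is arbitrary, so the translation threshold has no physical unit).

```python
import numpy as np

MIN_FRAMES_BETWEEN_KEYFRAMES = 20
MIN_TRANSLATION = 0.1        # in map units (the map scale is arbitrary)
MIN_TRACKING_QUALITY = 0.3   # fraction of predicted features successfully observed

def needs_new_keyframe(frames_since_keyframe, t_current, t_last_keyframe, tracking_quality):
    # All three criteria must hold before a new keyframe is sent to the mapping thread.
    far_enough = np.linalg.norm(t_current - t_last_keyframe) > MIN_TRANSLATION
    enough_frames = frames_since_keyframe >= MIN_FRAMES_BETWEEN_KEYFRAMES
    tracking_ok = tracking_quality > MIN_TRACKING_QUALITY
    return far_enough and enough_frames and tracking_ok
```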
If the processor determines that the relevant criteria are not met and as such a new keyframe from the image data is not required, step 471 is performed in which the map is refined in this“free time” when mapping is not required.
In particular, when no new keyframes are being added, and the camera is viewing previously explored sections of the map, the mapping thread uses this ‘free time’ to refine the existing map. This may be done using global bundle adjustment, which is too computationally expensive to be calculated in real-time.
Further refinements may be made by making new measurements in old keyframes, to measure newly created features in older keyframes or to re-measure outlier measurements. These data association refinements are given a low priority in the mapping thread such that they are only performed if there are no higher ranked tasks, such as the addition of a new keyframe.
If the criteria for processing a new keyframe are met then step 472 is performed in which the map stored at 450 is updated with the new keyframe.
In step 472 the map is expanded and refined with the addition of a new keyframe. Local bundle adjustment may be performed to iteratively optimise re-projection errors over all map points and keyframes. The tracking system may have only measured a subset of the potentially visible features in the keyframe, so the mapping thread re-projects and measures the remaining map features. Triangulation is performed with other keyframes to extract depth information.
Following the alternative refining 471 or updating 472 steps at step 473 the refined/updated map is saved at 450 to be used in the parallel tracking process, as will now be described.
Whereas the above-described mapping update is only performed as needed when certain conditions are met, the tracking thread proceeds at frame rate with each frame of the image data processed by the processing unit.
At step 460 features in the currently received frame are detected in the image and tracked relative to the currently saved map 450.
In particular, in one example features are detected in the current image at four image pyramid levels (i.e. four versions of the same image, each down-sampled by a factor of two relative to the preceding level). A small image patch around each feature point is also recorded, assuming the feature to be locally planar.
A ‘prior’ estimate of the camera’s pose (position and orientation) is then calculated based on its previous position using a decaying velocity motion model (similar to an alpha-beta filter). Known 3-D map points are projected into the current image frame. A pin-hole model may be used and distortion may be accounted for using a FOV-model. The planar feature patches are warped to accommodate the change in camera pose.
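A simplified sketch of the pose prediction and map-point projection is given below; for brevity only the translation component of the decaying-velocity model is shown and lens distortion is ignored, so this should be read as an assumption-laden illustration rather than the full formulation.

```python
import numpy as np

DECAY = 0.9   # illustrative decay factor for the velocity model

def predict_translation(t_prev, t_prev2):
    # Decaying-velocity prior: extrapolate the last inter-frame motion, attenuated.
    velocity = t_prev - t_prev2
    return t_prev + DECAY * velocity

def project_map_points(points_3d, R, t, K):
    # Pin-hole projection of known 3-D map points into the current image frame.
    cam = R @ points_3d.T + t.reshape(3, 1)
    uv = K @ cam
    return (uv[:2] / uv[2]).T   # Nx2 array of predicted pixel locations
```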
These projected points are matched to detected points in the current image using a fixed-range search around their predicted locations. In this example, an initial coarse search is done with a limited number of (for example 50) feature points.
At step 461 the camera pose is updated.
The camera pose may be updated using the matches of the limited number of feature points with a further update of the pose performed using a greater number of points (for example up to 1000), searching over a smaller window.
The tracking thread also estimates the tracking quality at every frame using the fraction of the predicted features which have been successfully observed. This fraction can be used as a measure of quality, and a threshold may be set below which no new keyframes are sent to the mapping thread.
At step 462 the updated camera pose and feature information is fed back into the start point 440 of the mapping/tracking threads, where it is used in determining whether a new keyframe is required and in the refining/updating processes of the map.
At step 480 the updated camera pose is used to calculate a corresponding trailer pose.
The camera pose is output in the form of a 3x1 translation vector and a 3x3 rotation matrix describing the orientation of the identified feature points relative to the corresponding points in the map. The translation vector represents the motion of the camera relative to its original location. The 3x3 rotation matrix can be decomposed into a combination of sequential rotations about each axis, known as the Euler angles: roll (φ), yaw (ψ) and pitch (θ). This is summarised as follows:
[Equation images not reproduced in the text: the rotation matrix is expressed as the product of the elementary rotations about each axis, where the elementary rotation matrices are defined in terms of the roll, pitch and yaw angles.]
Euler angles may be extracted from the rotation matrix using a combination of the above definitions and logic to reject multiple solutions. The above rotations are in the camera co-ordinate frame. The yaw angle may be taken to be the articulation angle of the trailer relative to the tractor provided pitch and roll variations are small. In the case of pitch and roll however, one must take care to note the effect of the transformation from camera to trailer co-ordinates. For example, assume that the trailer is pitched at 3° about its own lateral axis. In the camera reference frame this would register as a 3° pitch angle only when the articulation angle is zero. At an articulation angle of 90°, this would register as a 3° roll angle.
More generally, this can be accounted for as follows:
[Equation image not reproduced in the text: the camera-frame pitch and roll angles are transformed into the trailer frame as a function of the yaw (articulation) angle.]
In calculating the trailer pose the processing unit therefore corrects pitch and roll measurements based on the measured yaw angle.
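Since the corresponding equation is reproduced only as an image in the published text, the sketch below is an assumed reconstruction of that correction: camera-frame pitch and roll are rotated through the yaw (articulation) angle to give pitch and roll about the trailer's own axes, with signs depending on the chosen conventions.

```python
import numpy as np

def trailer_pitch_roll(pitch_cam, roll_cam, yaw):
    # Rotate the camera-frame pitch/roll pair by the articulation (yaw) angle.
    pitch_trailer = pitch_cam * np.cos(yaw) + roll_cam * np.sin(yaw)
    roll_trailer = -pitch_cam * np.sin(yaw) + roll_cam * np.cos(yaw)
    return pitch_trailer, roll_trailer

# Example: a 3 degree roll seen by the camera at 90 degrees of articulation
# corresponds to a 3 degree pitch of the trailer about its own lateral axis.
```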
Figures 3A - 3F illustrate the processing of image frames using the parallel tracking and mapping method.
Figure 3A shows the initialisation method in which features are detected in an initial image, making up the first keyframe, and these features are then continuously tracked through a small change in orientation of the camera relative to the trailer before the second keyframe is taken, capturing the new locations of these features. The tracked features are shown overlaid as trajectories 501 in the image.
Figure 3B shows the identified features on the front face 310 of the trailer 300. Most detected features are located on the clear visual features of the attached visual texture, though some have been detected on the bare trailer face 310.
Figure 3C shows the features as the vehicle is turned to provide a change in articulation angle. All features are tracked effectively as the articulation angle increases as shown in Figure 3C and also under variations in lighting intensity as shown in Figure 3D (most features are retained from Figure 3B). The generated scene map of feature points is shown in Figure 3E where the reference plane has been fixed to the features on the planar trailer face 310 (the plane is also shown in Figures 3A to 3D), and the path of relative camera motion is shown as an arc in front of this plane. A view perpendicular to the plane of the trailer front is shown in Figure 3F in which the feature points can be observed in the detected plane of the trailer face 310.

Long Combination Vehicles
Thus far the present invention has been described in the context of a truck and semi-trailer arrangement in which there is only one articulation angle. However the present invention may equally be applied to various types of articulated vehicles, including those with multiple articulation angles.
Long Combination Vehicles (LCVs), with more than one articulation point, are increasing in use in Europe and are already common in countries including Sweden, Australia, New Zealand and South Africa. It is hence valuable to extend the applicability of the articulation sensor to include these combinations. In order for technologies such as reversing assist or jackknife control to be compatible with LCVs, articulation angle measurements at multiple points of articulation are required.
Various configurations of articulated vehicles are shown in Figure 4A - 4H together with how the system may be modified to measure the articulation angles. The above described embodiment of the invention may be applied to tractor semi-trailer shown in Figure 4A and the rigid truck and rigid drawbar trailer shown in Figure 4B by mounting the rear facing camera on a rear face of the tractor or truck respectively. Extending the invention to other forms of articulated vehicle requires a change in position of the mounted camera on the vehicle.
Common LCVs include ‘B-double’, illustrated in Figure 4E, and ‘truck and full-trailer’ combinations, illustrated in Figure 4C, but many other combinations are used as shown in the arrangements of Figure 4. As shown in Figure 4E, if the camera is mounted to the rear of the tractor 200 of a B-double, the system would not function since the first trailer 301 (the ‘B-link’) would obscure the view of the second trailer 302 (the semi-trailer). Alternative camera mounting options may be used to overcome these issues such that both trailers are in the field of view. For example, an elevated mount above the tractor 111 or cameras mounted to the side mirrors 112 may be employed as shown in Figures 4E - 4H. Alternatively a first camera may be mounted on the tractor/truck 200 so as to image the first trailer 301 and a second camera may be mounted on the rear of the first trailer 301 so as to image the second trailer 302. In this way, image data from the first camera can be processed to determine the articulation angle between the truck/tractor 200 and the first trailer 301 and image data from the second camera can be processed to determine the articulation angle between the first trailer 301 and the second trailer 302.
Extending the applicability of the system to rigid truck and trailer combinations (as shown in Figures 4B - 4D) can be achieved by mounting the camera 113 at the rear of the rigid truck, which provides a field of view which incorporates both the drawbar (or“dolly”) 303 and the drawbar trailer/full-trailer/semi-trailer 302 without obstruction, so that the articulation angles of both can be calculated. This is illustrated in Figures 4B-4D.
In order to extract two independent articulation angles from a single camera, the image data from the camera 113 is partitioned into two regions of interest: a first region of interest 601 containing at least a portion of the trailer 302 and a second region of interest 602 containing at least a portion of the drawbar 303. These partitions 601, 602 are shown in Figures 5A and 5B during use of the system with a truck and full-trailer (“Nordic”) combination. The size of the partitions may be chosen as necessary to contain a sufficient portion of the relevant object; in this case the sizes of the partitions were 440x270 and 560x180 pixels for the semi-trailer and drawbar respectively, and were centred laterally.
The precise dimensions of these regions may be chosen by trial and error to give a good field of view of the target object within the range of articulation, whilst minimising unnecessary background visuals. An optimum size of these partitions can be found which is suitable for most or all anticipated trailers and geometries.
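For illustration, the partitioning of a 640x480 frame into the two laterally centred regions of interest might be implemented as below; the vertical placement of each region is an assumption, as the text above specifies only the partition sizes and the lateral centring.

```python
FRAME_W, FRAME_H = 640, 480

def crop_centred(frame, width, height, top):
    # Crop a laterally centred window of the given size starting at row 'top'.
    left = (FRAME_W - width) // 2
    return frame[top:top + height, left:left + width]

def partition_regions(frame):
    trailer_roi = crop_centred(frame, 440, 270, top=40)    # upper part of the frame
    drawbar_roi = crop_centred(frame, 560, 180, top=290)   # lower part of the frame
    return trailer_roi, drawbar_roi
```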
The image data from the two partitions is then processed in the manner described above to extract an estimate for each articulation angle - in the case of Figure 5, for the drawbar 303 and trailer 302. There are two possible ways in which the processing of the partitioned image data may be carried out. Firstly, the regions may be processed simultaneously by introducing two more processing threads. In particular an additional tracking thread and an additional mapping thread may be introduced for the second partition. This can be realised by using a processor with four cores to provide the four threads of parallel processing.
Alternatively, the two regions can be processed sequentially at each frame. The small lag caused by the additional processing does not significantly delay processing and can be mitigated by introducing suitable delay compensation techniques. This is because the total number of features tracked in both semi-trailer and drawbar regions of interest is likely to be equal to or less than when the full frame is used. For the same number of keyframes, the processing demand varies proportionately with the number of features. As such, with the same number or fewer features, the total processing demand in a sequential processing routine yields comparable framerates to full-frame processing.
The drawbar view is slightly different in comparison to the trailer view in that the dominant plane is horizontal instead of vertical. However, this does not influence the accuracy of the output. What is important about the application of the system to a drawbar is the proximity of the camera to the point of rotation. If the camera were located on the axis of rotation, stereo initialisation would not be possible, as this requires relative translation between the two initial keyframes. Therefore the camera must be sufficiently offset from the axis of rotation to ensure that stereo initialisation of the drawbar is possible in all cases.
The zeroing procedure may be carried out as described above when the truck and full-trailer is being driven in a straight line.
Further possible functionality
In some further examples of the present invention a second camera may be added to provide a stereo camera pair, which enables to-scale measurements of distances. This can be used to detect the location of a trailer hitch, relative to a truck hitch, and hence aid in coupling the trailer. The algorithm can then be used to guide the driver towards coupling (by planning a trajectory, etc.).
In embodiments implementing elevated or side mounted cameras, and possibly also stereo cameras, the system can measure trailer parameters during operation, such as length, wheelbase, number of axles etc.
With the system according to the present invention a reliable real time measure of the articulation angle of a vehicle and trailer can be provided which may be integrated on existing vehicles to facilitate the move towards more carbon efficient larger combination vehicles.
The invention uses an image processing technique such that the system may be installed on the vehicle with no adjustments to the trailer required such that it may be used commercially where often different trailers are used with one vehicle. Unlike previous camera based systems, the processing method is such as to allow a computationally efficient mapping and tracking procedure that can provide an accurate measure of articulation angle in real time.

Claims

1. A system for measuring the articulation angle of a trailer pulled by a vehicle, the system comprising:
a camera mounted, in use, in a fixed orientation on the vehicle so as to image the trailer; and
a processing unit arranged to receive image data from the camera; wherein the processing unit is configured to:
receive initial image data from the camera from multiple orientations relative to the trailer;
identify common feature points in the initial image data to compute and store a three-dimensional map;
receive further image data as the trailer is pulled by the vehicle;
identify and track feature points in the image data relative to the stored map to estimate the pose of the camera relative to the trailer; and
calculate an articulation angle based on the estimated pose.
2. The system of claim 1 wherein the processing unit is configured to update the stored three-dimensional map from the received further image data.
3. The system of claim 2 wherein the processing unit is configured to only update the stored three-dimensional map if certain conditions are met, wherein the conditions comprise one or more of:
the time since the last update exceeds a certain threshold;
the orientation of the camera relative to the trailer has changed by more than a certain threshold;
greater than a threshold proportion of feature points in the image data are being tracked successfully.
4. The system of any preceding claim wherein the processing unit is configured to run two parallel threads:
a mapping thread which updates the stored three-dimensional map from the further image data; and
a tracking thread which tracks feature points in the further image data relative to the latest stored three-dimensional map and updates the estimated pose of the camera.
5. The system of any preceding claim wherein the processing unit is further configured to identify one or more regions of interest in the image data.
6. The system of claim 5 wherein the processing unit is configured to identify two regions of interest in the image data, a first region of interest corresponding to at least a portion of the trailer and a second region of interest corresponding to at least a portion of a drawbar.
7. The system of claim 6 wherein the processing unit is either configured to: run four parallel threads, two threads corresponding to the mapping and tracking of the trailer region of interest respectively and two threads corresponding to the mapping and tracking of the draw bar region of interest respectively; or
run two parallel threads wherein the mapping and tracking of the trailer region of interest and draw bar region of interest are performed alternately.
8. The system of any preceding claim wherein the processing unit is configured to determine a zero value of the articulation angle by collecting image data while the vehicle is driven in a straight line.
9. The system of any preceding claim wherein the processing unit is configured to estimate the pose by calculating a rotation matrix, the rotation matrix describing the relative motion between the vehicle and trailer, wherein the rotation matrix is calculated from the location of the identified feature points relative to the location of the corresponding feature points in the stored three- dimensional map.
10. The system of claim 9 wherein the rotation matrix is used to calculate the roll, pitch and yaw angles of the trailer.
11. The system of any preceding claim wherein the camera is mounted, in use, on a vehicle towing a draw-bar trailer having two articulation angles, one articulation angle between the vehicle and the drawbar and the second articulation angle between the drawbar and the trailer; and
the camera is mounted, in use, in a fixed orientation on the vehicle such that the draw-bar and trailer are imaged; wherein the processing unit is further configured to calculate the two articulation angles.
12. The system of any preceding claim wherein the camera is mounted at a raised position above the vehicle so as to capture multiple trailers in the field of view.
13. The system of claim 12 wherein multiple articulation angles are calculated, each articulation angle corresponding to one of the multiple trailers.
14. The system of any preceding claim wherein the feature points are arbitrary points in the image data identified by the image processing unit such that no physical markers need be implemented.
15. The system of any preceding claim wherein the processing unit is configured to receive the initial image data as the vehicle is driven through a manoeuvre such that relative motion between the vehicle and trailer is generated to provide the multiple orientations.
PCT/GB2019/051091 2018-04-17 2019-04-17 Method and system of articulation angle measurement WO2019202317A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1806279.4 2018-04-17
GBGB1806279.4A GB201806279D0 (en) 2018-04-17 2018-04-17 Method and system of articulation angle measurement

Publications (1)

Publication Number Publication Date
WO2019202317A1 true WO2019202317A1 (en) 2019-10-24

Family

ID=62203313

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2019/051091 WO2019202317A1 (en) 2018-04-17 2019-04-17 Method and system of articulation angle measurement

Country Status (2)

Country Link
GB (1) GB201806279D0 (en)
WO (1) WO2019202317A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160049020A1 (en) * 2014-08-13 2016-02-18 Bendix Commercial Vehicle Systems Llc Cabin and trailer body movement determination with camera at the back of the cabin
US20170174128A1 (en) * 2015-12-17 2017-06-22 Ford Global Technologies, Llc Centerline method for trailer hitch angle detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHRISTOPHER CHARLES DE SAXE: "Vision-based trailer pose estimation for articulated vehicles", 8 November 2017 (2017-11-08), XP055517775, Retrieved from the Internet <URL:https://www.repository.cam.ac.uk/bitstream/handle/1810/268169/deSaxe-2017-PhD.pdf?sequence=1&isAllowed=y> [retrieved on 20181022], DOI: 10.17863/CAM.14370 *
FUCHS C ET AL: "3D pose estimation for articulated vehicles using Kalman-filter based tracking", PATTERN RECOGNITION. IMAGE ANALYSIS, ALLEN PRESS, LAWRENCE, KS, US, vol. 26, no. 1, 6 April 2016 (2016-04-06), pages 109 - 113, XP035660274, ISSN: 1054-6618, [retrieved on 20160406], DOI: 10.1134/S1054661816010077 *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11820424B2 (en) 2011-01-26 2023-11-21 Magna Electronics Inc. Trailering assist system with trailer angle detection
US11175677B2 (en) * 2018-05-01 2021-11-16 Continental Automotive Systems, Inc. Tow vehicle and trailer alignment
US11661006B2 (en) 2019-09-27 2023-05-30 Magna Electronics Inc. Vehicular trailering assist system
US11852730B2 (en) 2019-12-09 2023-12-26 Plusai, Inc. System and method for collaborative calibration via landmark
US11875682B2 (en) 2019-12-09 2024-01-16 Plusai, Inc. System and method for coordinating collaborative sensor calibration
CN113033280A (en) * 2019-12-09 2021-06-25 智加科技公司 System and method for trailer attitude estimation
US11682139B2 (en) 2019-12-09 2023-06-20 Plusai, Inc. System and method for trailer pose estimation
US11087496B2 (en) 2019-12-09 2021-08-10 Plusai Limited System and method for trailer pose estimation
US11835967B2 (en) 2019-12-09 2023-12-05 Plusai, Inc. System and method for assisting collaborative sensor calibration
US11372406B2 (en) 2019-12-09 2022-06-28 Plusai, Inc. System and method for collaborative sensor calibration
EP3836086A1 (en) * 2019-12-09 2021-06-16 PlusAI Corp System and method for trailer pose estimation
US11635762B2 (en) 2019-12-09 2023-04-25 Plusai, Inc. System and method for collaborative sensor calibration
US11579632B2 (en) 2019-12-09 2023-02-14 Plusai, Inc. System and method for assisting collaborative sensor calibration
US11630454B2 (en) 2019-12-09 2023-04-18 Plusai, Inc. System and method for coordinating landmark based collaborative sensor calibration
FR3106560A1 (en) * 2020-01-28 2021-07-30 Continental Automotive System for determining the angular position of a vehicle with two pivot points
US12039629B2 (en) 2020-02-04 2024-07-16 Volvo Truck Corporation Method for adapting an overlaid image of an area located rearwards and along a vehicle side
WO2021155914A1 (en) * 2020-02-04 2021-08-12 Volvo Truck Corporation Method for adapting an overlaid image of an area located rearwards and along a vehicle side
WO2021160637A1 (en) * 2020-02-12 2021-08-19 Saf-Holland Gmbh Method and system for ascertaining an orientation of a trailer relative to a tractor vehicle
JP7493051B2 (en) 2020-03-31 2024-05-30 コンチネンタル・オートナマス・モビリティ・ジャーマニー・ゲゼルシャフト・ミト・ベシュレンクテル・ハフツング Vehicle trailer angle calculation method and system
CN111637852A (en) * 2020-05-27 2020-09-08 中国汽车技术研究中心有限公司 System and method for measuring articulation angle of full-trailer automobile train
EP4012330A1 (en) 2020-12-10 2022-06-15 Volvo Truck Corporation A system and method for estimating an articulation angle of an articulated vehicle combination
US11993305B2 (en) 2020-12-10 2024-05-28 Volvo Truck Corporation System and method for estimating an articulation angle of an articulated vehicle combination
US20220258800A1 (en) * 2021-02-17 2022-08-18 Robert Bosch Gmbh Method for ascertaining a spatial orientation of a trailer
US12084107B2 (en) * 2021-02-17 2024-09-10 Robert Bosch Gmbh Method for ascertaining a spatial orientation of a trailer
EP4070996A1 (en) * 2021-04-05 2022-10-12 Stoneridge, Inc. Auto panning camera mirror system including image based trailer angle detection
US11890988B2 (en) 2021-04-05 2024-02-06 Stoneridge, Inc. Auto panning camera mirror system including image based trailer angle detection
US12049172B2 (en) 2021-10-19 2024-07-30 Stoneridge, Inc. Camera mirror system display for commercial vehicles including system for identifying road markings
EP4170604A1 (en) * 2021-10-19 2023-04-26 Stoneridge, Inc. Camera mirror system display for commercial vehicles including system for identifying road markings
US11897469B2 (en) * 2021-10-20 2024-02-13 Continental Autonomous Mobility US, LLC System and method for adjusting trailer reverse assist parameters based upon estimated trailer position
US20230119562A1 (en) * 2021-10-20 2023-04-20 Continental Automotive Systems, Inc. System and method for adjusting trailer reverse assist parameters based upon estimated trailer position
WO2023066610A1 (en) * 2021-10-22 2023-04-27 Continental Automotive Gmbh Method for determining the angular position of a complex vehicle with two axes of rotation, and system configured to implement such a method
FR3128524A1 (en) * 2021-10-22 2023-04-28 Continental Automotive Gmbh Method for determining the angular position of a complex vehicle with two axes of rotation and system configured to implement such a method
EP4202836A1 (en) * 2021-12-21 2023-06-28 Volvo Truck Corporation A method and a system for determining an articulation angle of a vehicle combination
EP4290876A1 (en) * 2022-06-06 2023-12-13 Stoneridge, Inc. Auto panning camera monitoring system including image based trailer angle detection
CN116823954A (en) * 2023-08-29 2023-09-29 深圳魔视智能科技有限公司 Pose estimation method and device of articulated vehicle, vehicle and storage medium
CN116823954B (en) * 2023-08-29 2023-12-08 深圳魔视智能科技有限公司 Pose estimation method and device of articulated vehicle, vehicle and storage medium
CN117622322A (en) * 2024-01-26 2024-03-01 杭州海康威视数字技术股份有限公司 Corner detection method, device, equipment and storage medium
CN117622322B (en) * 2024-01-26 2024-04-26 杭州海康威视数字技术股份有限公司 Corner detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
GB201806279D0 (en) 2018-05-30

Similar Documents

Publication Publication Date Title
WO2019202317A1 (en) Method and system of articulation angle measurement
JP7124114B2 (en) Apparatus and method for determining the center of a trailer tow coupler
EP3787909B1 (en) Coupler and tow-bar detection for automated trailer hitching via cloud points
US11433812B2 (en) Hitching maneuver
US10984553B2 (en) Real-time trailer coupler localization and tracking
EP3787913B1 (en) Trailer detection and autonomous hitching
US20210078634A1 (en) Vehicular trailering assist system
US10035457B2 (en) Vehicle hitch assistance system
US7204504B2 (en) Process for coupling a trailer to a motor vehicle
EP3787912B1 (en) Tow vehicle and trailer alignment
CN112752700A (en) Automatic reversing by selection of target position
US11403767B2 (en) Method and apparatus for detecting a trailer, tow-ball, and coupler for trailer hitch assistance and jackknife prevention
CN105678787A (en) Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera
US20230294767A1 (en) Vehicle Trailer Angle Estimation via Projective Geometry
EP4188775B1 (en) Long-term visual trailer tracker for vehicle-trailer angle estimation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19719596

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19719596

Country of ref document: EP

Kind code of ref document: A1