WO2020160927A1 - Vehicle control system and method - Google Patents

Vehicle control system and method

Info

Publication number
WO2020160927A1
Authority
WO
WIPO (PCT)
Prior art keywords
control system
vehicle
wheel
dimensional data
rut
Application number
PCT/EP2020/051683
Other languages
French (fr)
Inventor
Jithesh KOTTERI
Neenu Issac
Jim Kelly
Vishnu DHARMAJAN SHEELA
Original Assignee
Jaguar Land Rover Limited
Priority claimed from GB1901749.0A (GB2584383B)
Priority claimed from GB1902191.4A (GB2581954B)
Application filed by Jaguar Land Rover Limited
Priority to DE112020000735.9T (DE112020000735T5)
Publication of WO2020160927A1

Classifications

    • H04N 13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/271: Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • G01C 11/06: Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C 11/36: Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
    • G06T 7/593: Depth or shape recovery from multiple images, from stereo images
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06T 2207/10021: Stereoscopic video; stereoscopic image sequence
    • G06T 2207/30256: Lane; road marking
    • G06T 2207/30261: Obstacle
    • H04N 2013/0081: Depth or disparity estimation from stereoscopic image signals

Definitions

  • the present disclosure relates to a vehicle control system and method. Aspects of the invention relate to a control system for identifying one or more rut in a surface and to a control system for predicting a vertical position of one or more wheel of the vehicle.
  • a rut may be formed in a ground surface by the wheels of a vehicle, particularly if the ground is composed of a deformable medium, such as mud.
  • the rut is usually in the form of an elongated open channel.
  • the wheels of the vehicle may form left and right ruts which extend substantially parallel to each other.
  • the rut(s) may present an obstacle to a following vehicle and it may be appropriate to configure the powertrain and/or the suspension of the following vehicle to aid progress along the rut(s) or traversal of the rut(s).
  • the detection of ruts may prove problematic due to limitations in sensor perception. For example, optical sensors operating in very bright or very dark conditions may generate false positives.
  • a vertical position of a vehicle wheel may be measured based on a suspension travel, for example as the vehicle traverses a terrain comprising changes in height.
  • the vertical position of the vehicle wheel may be used to control a vehicle system, such as a suspension assembly, to control dynamic behaviour of the vehicle.
  • control of the vehicle system is reactive.
  • the vehicle system cannot be pre-configured to anticipate changes in the terrain in a direction of travel of the vehicle.
  • the present invention seeks to overcome or address at least some of the limitations associated with known systems.
  • according to an aspect of the present invention there is provided a control system for identifying one or more rut in a surface, the control system comprising one or more controllers, the control system being configured to: receive image data representing an imaging region; and analyse the image data to generate three dimensional data relating to the imaging region.
  • the control system may be configured to analyse the three dimensional data to identify one or more elongate section having a vertical offset relative to an adjacent section.
  • the control system may be configured to identify the elongate section as having a vertical height which is below that of the adjacent section.
  • the three dimensional data generated by the control system may represent a ground surface (i.e. a surface of the ground within the imaging region).
  • the control system may be installed in a host vehicle.
  • the control system may output a rut identification signal for identifying each identified elongate section as corresponding to a rut.
  • the rut identification signal may be output to one or more vehicle system, for example via a communication network.
  • the one or more vehicle system may be controlled in dependence on the rut identification signal.
  • the control system may enable advance detection of the one or more rut (i.e. before the vehicle encounters the rut).
  • the one or more vehicle system may be pre-configured to facilitate progress, for example to enable progress of the vehicle within the identified rut(s), or traversal of the identified rut(s).
  • the vehicle powertrain and/or the vehicle suspension may be pre-configured in dependence on the rut identification signal.
  • the rut identification signal may comprise rut data defining one or more characteristic of each identified rut.
  • the rut data may comprise one or more of the following: the location of the rut; a profile of the rut in plan elevation; a depth profile of the rut; and a width profile of the rut.
  • the rut data may be used to generate a graphical representation of the rut, for example to display the rut in relation to the vehicle.
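  • By way of illustration only, rut data along these lines might be organised as the following structure. This is a hedged sketch: the field names and types are assumptions chosen for illustration and are not defined by the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RutData:
    """One record per identified rut (illustrative structure only)."""
    location: Tuple[float, float]             # position relative to the vehicle (m)
    plan_profile: List[Tuple[float, float]]   # centreline points in plan elevation
    depth_profile: List[float]                # depth at each centreline point (m)
    width_profile: List[float]                # width at each centreline point (m)
```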
  • the controller may comprise a processor having an input for receiving the image data; and a memory coupled to the processor and having instructions stored thereon for controlling operation of the processor.
  • the processor may be configured to analyse the image data to generate the three dimensional data.
  • the processor may identify the one or more elongate section having a vertical offset relative to an adjacent section.
  • the control system may be configured to identify an elongate section having a vertical offset relative to a first adjacent section disposed on a first side thereof; and/or having a vertical offset relative to a second adjacent section disposed on a second side thereof.
  • the identified elongate section may be located at a lower height than the first adjacent section and/or the second adjacent section.
  • the control system may be configured to analyse the three dimensional data to identify said one or more elongate section by identifying a step change in vertical height relative to the adjacent section.
  • the control system may be configured to analyse the three dimensional data to identify said one or more elongate section by identifying a vertical offset greater than or equal to a predetermined threshold value.
  • the control system may be configured to analyse the three dimensional data to identify said one or more elongate section having a width less than a predefined threshold width; and/or a length greater than or equal to a predefined threshold length.
  • the control system may be configured to analyse the three dimensional data to identify said one or more elongate section having a substantially continuous profile in plan elevation.
  • the elongate section may comprise a curved section and/or a rectilinear section.
  • the three dimensional data may comprise a plurality of cells.
  • the control system may be configured to analyse the three dimensional data to identify said one or more elongate section by identifying a sequence composed of a plurality of cells. Each cell in the sequence may be vertically offset from at least one adjacent cell.
  • the control system may be configured to identify first and second said elongate sections as corresponding to first and second ruts.
  • the first and second ruts may form a vehicle track, for example on an unmetalled surface.
  • the identification of said first and second elongate sections may comprise identifying elongate sections which are substantially parallel to each other.
  • the identification of said first and second elongate sections may comprise identifying elongate sections having at least substantially the same depth and/or at least substantially the same width.
  • the identification of said first and second elongate sections may comprise identifying elongate sections having a predetermined spacing therebetween; or having a spacing therebetween which is within a predetermined range.
  • the identification of the elongate section may comprise identifying each cell having first and second adjacent cells (disposed on opposing sides thereof) which are at a greater height.
  • the identification of a plurality of said cells forming a continuous or substantially continuous line may represent a rut. This configuration may be indicative of the profile of a rut in a transverse direction.
  • the control system may be configured to identify a sequence of cells representing a substantially planar surface extending in a horizontal plane. This functionality may be used in conjunction with the other techniques described herein, for example to identify first and second sequences representing respective planar surfaces which extend substantially parallel to each other.
  • the processor could optionally assess whether the first and second sequences represent surfaces at the same vertical height (which may be indicative of first and second ruts in liquid communication with each other).
  • the control system may be configured to analyse the three dimensional data to determine the vertical offset between the elongate section and the adjacent section to determine a depth of the corresponding rut.
  • the control system may be configured to output an alert if the determined vertical offset is determined to be greater than or equal to a predetermined threshold.
  • the image data may be received from first and second imaging sensors.
  • the first and second imaging sensors may, for example, each comprise an optical camera, for example a video camera.
  • the image data may comprise video image data.
  • the imaging sensors may capture the image data at least substantially in real time.
  • the three dimensional data may comprise data received from a lidar sensor or a radar sensor.
  • the image data may be received from a suitable sensor array.
  • according to a further aspect of the present invention there is provided a control system for identifying first and second ruts in a surface, the control system comprising one or more controllers, the control system being configured to: receive image data representing an imaging region; analyse the image data to generate three dimensional data relating to the imaging region; analyse the three dimensional data to identify first and second elongate sections each having a vertical offset relative to an adjacent section; and output a rut identification signal for identifying each identified elongate section as corresponding to a rut.
  • a vehicle comprising a control system as described herein.
  • according to a further aspect of the present invention there is provided a method of identifying one or more rut in a surface comprising: receiving image data representing an imaging region; analysing the image data to generate three dimensional data relating to the imaging region; and analysing the three dimensional data to identify one or more elongate section having a vertical offset relative to an adjacent section.
  • the method may comprise identifying said one or more elongate section by identifying a step change in vertical height.
  • the one or more elongate section may each have a substantially continuous profile in plan elevation.
  • the three dimensional data may comprise a plurality of cells.
  • the identification of said one or more elongate section may comprise identifying a sequence composed of a plurality of said cells.
  • the cells in the sequence may each be vertically offset from at least one adjacent cell.
  • the method may comprise identifying first and second said elongate sections corresponding to first and second ruts.
  • the method may comprise identifying first and second elongate sections which are substantially parallel to each other.
  • the method may comprise identifying elongate sections having a predetermined spacing therebetween.
  • the method may comprise determining a vertical offset between the elongate section and the adjacent section to determine a depth of the corresponding rut.
  • the method may comprise generating an alert if the determined vertical offset is greater than or equal to a predetermined threshold.
  • the method may comprise receiving the image data from first and second imaging sensors.
  • a non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method described herein.
  • a control system for predicting a vertical position of at least one wheel of a vehicle, the control system comprising one or more controllers, the control system being configured to: receive image data representing an imaging region; and analyse the image data to generate three dimensional data relating to the imaging region.
  • the control system may be configured to receive an indication of a predicted position of the at least one wheel; and predict the vertical position of the at least one wheel at the predicted position in dependence on the three dimensional data.
  • the control system may optionally output a signal in dependence on the predicted vertical position.
  • the control system may determine the predicted position of the at least one wheel.
  • the three dimensional data may comprise topographical terrain relief data for representing terrain relief within the imaging region.
  • the three dimensional data may comprise or consist of relief features of a ground surface (i.e. a surface of the ground within the imaging region).
  • the three dimensional data may comprise one or more obstacle within the imaging region.
  • the obstacle may be a vertical projection, such as a rock or a tree; or a depression, such as a hole.
  • the control system can predict or anticipate changes in the vertical position of at least one wheel.
  • the control system may pre-configure one or more vehicle systems in dependence on the predicted vertical position of the at least one wheel, for example to facilitate traversal of the terrain.
  • the control system may be configured to predict the vertical position of the at least one wheel relative to a reference point on the vehicle.
  • the reference point may, for example, define an origin of a vehicle co-ordinate system.
  • the reference point may be disposed on a centreline of the vehicle.
  • the reference point may, for example, define a centre position of a rear axle of the vehicle.
  • the controller may comprise a processor having an input for receiving the image data; and a memory coupled to the processor and having instructions stored thereon for controlling operation of the processor.
  • the processor may be configured to analyse the image data to generate the three dimensional data.
  • the processor may predict the vertical position of the at least one wheel for a given location of the vehicle.
  • the control signal may control one or more of the following: a throttle response; a drivetrain; a vehicle transmission (for example to select a particular gear ratio); a transfer case (for example to select a high or low ratio); an electrical power steering unit (for example to modify a steering ratio and/or to change feedback from the steering wheel); and a suspension system (for example to adjust suspension travel and/or to adjust a damping setting).
  • the position of the at least one wheel may be predicted for a given geospatial position of the vehicle.
  • the geospatial position of the vehicle may be defined at a position on a planned or projected route of the vehicle.
  • the position of the at least one wheel may be predicted when the vehicle is at the defined geospatial position.
  • the geospatial position of the vehicle may be defined in a reference plane, for example a horizontal reference plane or a reference plane of the vehicle.
  • the vehicle route may be determined in dependence on a current steering angle of the vehicle.
  • the steering angle may be measured by a steering wheel angular position sensor.
  • a wheel path may be determined for each wheel along the vehicle route.
  • the wheel path may be determined in dependence on the vehicle route, for example referencing a predefined vehicle geometry.
  • the vehicle geometry may comprise the wheel track and/or the wheel base of the vehicle.
  • One or more wheel may be provided on a first axle.
  • Two or more wheels may be provided on the first axle.
  • the control system may be configured to predict the vertical position of each wheel on the first axle.
  • the first axle may be a single component, for example a beam axle, a rigid axle or a solid axle.
  • the first axle may comprise a pair of stub axles supported by independent suspension assemblies disposed on opposing sides of the vehicle.
  • first and second wheels may be provided on opposite ends of the first axle.
  • the control system may determine an articulation angle of each stub axle.
  • the control system may be configured to determine a first articulation angle in dependence on the predicted vertical position of each wheel on the first axle.
  • the first articulation angle may represent an angle of a first reference axis which extends between the centres of the wheels on the first axle and a horizontal axis.
  • One or more wheel may be provided on a second axle. Two or more wheels may be provided on the second axle.
  • the control system may be configured to predict the vertical position of each wheel on the second axle.
  • the second axle may be a single component, for example a beam axle, a rigid axle or a solid axle. Alternatively, the second axle may comprise a pair of stub axles supported by independent suspension assemblies disposed on opposing sides of the vehicle.
  • the control system may be configured to determine a second articulation angle in dependence on the predicted vertical position of each wheel on the second axle.
  • the second articulation angle may represent an angle of a second reference axis which extends between the centres of the wheels on the second axle and a horizontal axis.
  • the control system may be configured to predict a vehicle roll angle and/or a vehicle pitch angle.
  • the vehicle roll angle and/or the vehicle pitch angle may be predicted in dependence on the predicted vertical position of the wheels on the first axle relative to the predicted vertical position of the wheels on the second axle.
  • the control system may be configured to predict the vertical position of the at least one wheel in a plurality of predicted positions.
  • a vehicle data set may define a relative position of each wheel on the vehicle.
  • the vehicle data set may, for example, be stored in memory.
  • the control system may be configured to map each wheel of the vehicle to the three dimensional data to predict the vertical position of each wheel.
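  • As a minimal sketch of this mapping, assuming the three dimensional data is held as a horizontal grid of terrain heights (the cell size, grid extents and function name are illustrative assumptions):

```python
import numpy as np

def predicted_wheel_height(heights, x, y, cell=0.25, x0=0.0, y0=-10.0):
    # heights: 2D grid of terrain heights in vehicle coordinates; (x, y) is
    # the predicted ground-contact point of a wheel. Nearest-cell lookup.
    i = int(round((x - x0) / cell))
    j = int(round((y - y0) / cell))
    return heights[i, j]  # predicted vertical position of the wheel

# Example: a 25 m x 20 m patch of terrain at 0.25 m resolution.
grid = np.zeros((100, 80))
z_front_left = predicted_wheel_height(grid, x=6.0, y=-0.8)
```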
  • the control system may be configured to determine a route of the vehicle.
  • the predicted position of the at least one wheel may be determined for a given position of the vehicle on the route.
  • the control system may be configured to generate a route for the vehicle in dependence on the predicted vertical position of the at least one wheel.
  • the image data may comprise image data received from first and second imaging sensors.
  • the control system may be configured to output a vehicle control signal in dependence on the predicted vertical position of the at least one wheel of the vehicle.
  • a method of predicting a vertical position of at least one wheel of a vehicle comprising receiving image data relating to an imaging region; and analysing the image data to generate three dimensional data relating to the imaging region.
  • the method may comprise predicting a position of the at least one wheel; and predicting the vertical position of the at least one wheel at the predicted position in dependence on the three dimensional data.
  • the method may optionally comprise outputting a signal in dependence on the predicted vertical position.
  • the method may comprise predicting the vertical position of each wheel on a first axle.
  • the method may comprise determining a first articulation angle in dependence on the predicted vertical position of each wheel on the first axle.
  • the method may comprise predicting the vertical position of each wheel on a second axle.
  • the method may comprise determining a second articulation angle in dependence on the predicted vertical position of each wheel on the second axle.
  • the method may comprise predicting a vehicle roll angle and/or a vehicle pitch angle.
  • the vehicle roll angle and/or the vehicle pitch angle may be determined in dependence on the predicted vertical position of the wheels on the first axle relative to the predicted vertical position of the wheels on the second axle.
  • the method may comprise predicting the vertical position of the at least one wheel in a plurality of predicted positions.
  • the method may comprise mapping each wheel of the vehicle to the three dimensional data and predicting the vertical position of each wheel.
  • the method may comprise determining a route of the vehicle.
  • the predicted position of the at least one wheel may be determined for a given position of the vehicle on the route.
  • the method may comprise generating a route for the vehicle in dependence on the predicted vertical position of the at least one wheel.
  • the image data may be received from first and second imaging sensors.
  • the method may comprise outputting a vehicle control signal in dependence on the predicted vertical position of the at least one wheel of the vehicle.
  • a non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method described herein.
  • Any control unit or controller described herein may suitably comprise a computational device having one or more electronic processors.
  • the system may comprise a single control unit or electronic controller or alternatively different functions of the controller may be embodied in, or hosted in, different control units or controllers.
  • the term “control unit” or “controller” will be understood to include both a single control unit or controller and a plurality of control units or controllers collectively operating to provide any stated control functionality.
  • a suitable set of instructions may be provided which, when executed, cause said control unit or computational device to implement the control techniques specified herein.
  • the set of instructions may suitably be embedded in said one or more electronic processors. Alternatively, the set of instructions may be provided as software saved on one or more memory associated with said controller to be executed on said computational device.
  • the control unit or controller may be implemented in software run on one or more processors.
  • One or more other control unit or controller may be implemented in software run on one or more processors, optionally the same one or more processors as the first controller. Other suitable arrangements may also be used.
  • Figure 1 shows a schematic representation of a vehicle comprising a control system in accordance with an embodiment of the present invention;
  • Figure 2 shows a schematic representation of a scanning region of an imaging device provided on the vehicle shown in Figure 1;
  • Figure 3 shows an image captured by the imaging device shown schematically in Figure 2;
  • Figure 4 shows an elevation map generated by identifying disparities in the images captured by the imaging device;
  • Figure 5 shows a schematic representation of the elevation map shown in Figure 4 differentiating between traversable and un-traversable terrain features;
  • Figure 6 shows a schematic representation of the elevation map shown in Figure 5 incorporating a route of the vehicle;
  • Figure 7 shows a second image captured by the imaging device having a first graphical overlay representing paths of opposing wheels of the vehicle;
  • Figure 8 shows a third image captured by the imaging device according to a first embodiment of the present invention having a graphical overlay representing the predicted paths of the left and right wheels of the vehicle;
  • Figure 9A shows a multi-level surface map generated by analysing the third image shown in Figure 8;
  • Figure 9B shows first and second elongate sequences extracted from the multi-level surface map shown in Figure 9A;
  • Figure 10 shows a graphical overlay representing the topographical relief of the ground surface in the third image shown in Figure 8;
  • Figure 11 is a block diagram representing the implementation of the method of the first embodiment described herein;
  • Figure 12 shows a second image captured by the imaging device according to a second embodiment of the invention having a second graphical overlay representing the predicted positions of the wheels on the paths;
  • Figure 13 shows a third image captured by the imaging device according to the second embodiment of the invention having a third graphical overlay representing the predicted positions of the wheels on the paths and a determined articulation angle of the front and rear axles at the predicted positions; and
  • Figure 14 is a block diagram representing the implementation of the method of the third embodiment of the invention described herein.
  • a control system 1 for a vehicle 2 in accordance with an embodiment of the present invention will now be described with reference to the accompanying figures.
  • the vehicle 2 in the present embodiment is an automobile, but it will be understood that the control system 1 may be used in other types of land vehicle.
  • the vehicle 2 is described herein with reference to a reference frame comprising a longitudinal axis X, a transverse axis Y and a vertical axis Z.
  • the vehicle 2 comprises four (4) wheels W1-4, four suspension assemblies S1-4 (each associated with a respective wheel W1-4) and a vehicle body 4.
  • the wheels W1-4 are provided on front and rear axles 5, 6.
  • the first wheel W1 is a front left wheel
  • the second wheel W2 is a front right wheel
  • the third wheel W3 is a rear left wheel
  • the fourth wheel W4 is a rear right wheel.
  • the vehicle 2 comprises a drivetrain comprising an internal combustion engine 7 drivingly connected to the front axle 5 for transmitting a traction torque to the first and second wheels W1, W2.
  • the internal combustion engine 7 could be drivingly connected to the rear axle 6 for transmitting a traction torque to the third and fourth wheels W3, W4.
  • the drivetrain may comprise an electric propulsion unit instead of, or in addition to the internal combustion engine 7.
  • the control system 1 is operable to identify localised relief features formed in a ground surface SRF.
  • the ground surface SRF comprises or consists of the surface of a section of ground over which the vehicle 2 is travelling, such as the surface of an unmetalled road or an off-road track.
  • the control system 1 in the present embodiment is operable to identify relief features comprising a first rut R1 and/or a second rut R2.
  • the first and second ruts R1, R2 each comprise an elongated relief feature, typically in the form of a channel, formed in the ground surface SRF.
  • the first and second ruts R1, R2 may be formed by one or more land vehicle travelling over the ground surface SRF.
  • the ground surface SRF may be particularly susceptible to the formation of first and second ruts R1, R2 if the underlying ground is composed of a deformable medium, such as mud or sand.
  • the first and second ruts R1, R2 in the present embodiment are formed by the left and right wheels of a vehicle traversing the ground surface SRF. Since the transverse distance between the left and right wheels is fixed, the first and second ruts R1, R2 are at least substantially parallel to each other.
  • a spacing between the first and second ruts R1, R2 (in a transverse direction) at least substantially corresponds to an axle (wheel) track (i.e. the transverse distance between the wheels) of the vehicle which formed them.
  • the depth and/or the width of the first and second ruts R1, R2 may increase as a result of the passage of more than one vehicle.
  • the control system 1 is operable to estimate a wheel height of each wheel W1-4 of the vehicle 2 and/or to determine an articulation angle of the wheels W1-4.
  • a front articulation angle is determined in respect of the wheels W1, W2 on the front axle 5; and a rear articulation angle is determined in respect of the wheels W3, W4 on the rear axle 6.
  • the front articulation angle is an angle of a central axis joining the first and second wheels W1, W2 on the front axle 5 relative to a horizontal axis.
  • the rear articulation angle is an angle of a central axis joining the wheels W3, W4 on the rear axle 6 relative to a horizontal axis.
  • the vehicle 2 comprises an inertial measurement unit (IMU) 8 for determining an orientation of the vehicle body 4.
  • the IMU 8 comprises one or more accelerometer and/or one or more gyroscope.
  • the IMU 8 in the present embodiment determines a pitch angle of the vehicle body 4 about the transverse axis Y and outputs a pitch angle signal S1 to a communication network (not shown) provided in the vehicle 2.
  • the IMU 8 may optionally also determine a roll angle of the vehicle 2 about the longitudinal axis X and output a roll angle signal.
  • a steering wheel sensor 9 is provided for determining a steering angle of the steering wheel (not shown) in the vehicle 2.
  • the steering wheel sensor 9 outputs a steering angle signal S2 to the communication network.
  • the control system 1 is configured to determine a topographical relief of the ground surface SRF.
  • the control system 1 may model the topographical relief of the ground surface in front of the vehicle 2.
  • the vehicle 2 comprises an imaging device 10 for capturing image data DIMG representing an imaging region RIMG external to the vehicle 2.
  • the imaging device 10 may be operable to capture the image data DIMG at least substantially in real time.
  • the imaging device 10 may capture a predefined number of frames of image data DIMG per second, for example twenty-four (24) frames per second.
  • the captured image data DIMG is composed of data relating to real-world features within the imaging region RIMG.
  • the imaging region RIMG in the present embodiments extends from 5m to 25m in front of the vehicle 2 in the direction of vehicle travel.
  • a first image IMG1 captured by the imaging device 10 is shown in Figure 3 by way of example.
  • the imaging device 10 is configured such that the imaging region RIMG comprises a region of the surface SRF over which the vehicle 2 is travelling.
  • the captured image data DIMG comprises the ground surface SRF proximal to the vehicle 2 and optionally also the surface(s) of one or more obstacle.
  • the captured image data DIMG may include one or more obstacle which may impede or prevent vehicle progress.
  • the imaging device 10 in the present embodiment is forward-facing and the imaging region RIMG is located in front of the vehicle 2.
  • the imaging device 10 may be mounted proximal an upper edge of a front windshield, for example behind a rear-view mirror (not shown).
  • the imaging device 10 in the present embodiment comprises a stereo camera 11 comprising first and second imaging sensors 11-1, 11-2, as shown in Figure 1.
  • the first and second imaging sensors 11-1, 11-2 are respective first and second optical cameras in the present embodiment.
  • the image data DIMG comprises a first set of image data DIMG-1 captured by the first camera 11-1, and a second set of image data DIMG-2 captured by the second camera 11-2.
  • the first and second cameras 11-1, 11-2 are spatially separated from each other but have overlapping fields of view FOV.
  • the first and second cameras 11-1, 11-2 operate in the visible spectrum.
  • the first and second cameras 11-1, 11-2 may operate in the non-visible spectrum, for example comprising infrared light.
  • the imaging device 10 may comprise or consist of a radar imaging device.
  • the control system 1 comprises a controller 12 for receiving the captured image data DIMG.
  • the controller 12 includes a processor 13 and a memory 14.
  • a set of computational instructions is stored on the memory 14. When executed, the computational instructions cause the processor 13 to perform the method(s) described herein.
  • the processor 13 is configured to implement an image processing algorithm to analyse the first and second sets of image data DIMG-1, DIMG-2 to determine characteristics of the ground surface SRF within the imaging region RIMG.
  • the processor 13 identifies disparities between the first and second sets of image data DIMG-1, DIMG-2 and performs range imaging to determine the distance to features within the imaging region RIMG.
  • with reference to known parameters of the stereo camera 11, such as the spatial separation of the first and second cameras 11-1, 11-2, the processor 13 generates three dimensional (3D) data in the form of a point cloud 15 in dependence on the first and second sets of image data DIMG-1, DIMG-2.
  • the point cloud 15 is composed of a plurality of discrete points located on the external surfaces of objects and features within the imaging region RIMG.
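  • As a rough illustration of this range-imaging step (not the patented implementation), a minimal sketch using OpenCV, assuming rectified camera frames and a reprojection matrix Q from stereo calibration; the matcher parameters are placeholders:

```python
import cv2
import numpy as np

def point_cloud_from_stereo(left_img, right_img, Q):
    # Semi-global block matching on rectified greyscale frames from the
    # first and second cameras (11-1, 11-2); parameters are illustrative.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left_img, right_img).astype(np.float32) / 16.0

    # Reproject pixels with a valid disparity into 3D camera coordinates.
    points = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0
    return points[valid]  # (N, 3) array standing in for the point cloud 15
```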
  • a transformation is applied to move an origin of the point cloud 15 to a predefined reference point.
  • the transformation moves the point cloud origin from a centre position CP1 of the stereo camera 11 to a reference point defining an origin of a vehicle co-ordinate system.
  • the reference point is a centre position CP2 of a rear axle (i.e. the position on vehicle centreline) which is coincident with the centre of the rear wheels.
  • the centre position CP2 defines a common centre point of turning of the vehicle 2.
  • the transformation is predefined in dependence on the relative location of the centre positions CP1, CP2.
  • the modified point cloud 15 thereby defines the vertical height of the points relative to the centre of the rear axle of the vehicle 2.
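  • The change of origin from CP1 to CP2 is a rigid-body transform. A minimal sketch, assuming the camera-to-vehicle rotation R and translation t are known from the mounting geometry:

```python
import numpy as np

def to_vehicle_frame(points, R_cam_to_veh, t_cam_to_veh):
    # points: (N, 3) cloud in camera coordinates (origin at CP1).
    # R (3x3) and t (3,) express the camera pose relative to the rear-axle
    # centre CP2, so the returned cloud is in the vehicle co-ordinate system.
    return points @ R_cam_to_veh.T + t_cam_to_veh
```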
  • the processor 13 determines the pitch angle of the vehicle 2 in dependence on the pitch angle signal S1 output by the IMU 8.
  • the processor 13 utilises the vehicle pitch angle and the modified point cloud 15 to form an elevation map corresponding to the imaging region RIMG.
  • the elevation map provides a representation of localised relief features formed in a ground surface.
  • the ground surface forms the surface of a section of ground over which the vehicle 2 is travelling, such as the surface of an unmetalled road or an off-road track.
  • the elevation map is referred to herein as a Multi-Level Surface (MLS) map 17.
  • An example of an MLS map 17 generated from the image data DIMG is shown in Figure 4.
  • the MLS map 17 provides terrain geometry within the imaging region RIMG.
  • the MLS map 17 is composed of a grid comprising a plurality of two-dimensional (2D) cells 18 arranged in a horizontal plane.
  • the processor 13 generates the MLS map 17 in dependence on the three-dimensional spatial distribution of the points of the modified point cloud 15 within each cell 18.
  • the processor 13 may, for example, generate the MLS map 17 in dependence on a mean vertical height of the points of the modified point cloud 15 within each cell 18, or in dependence on a maximum or minimum vertical height of the points within the modified point cloud 15.
  • a distribution of the modified point cloud 15 within each cell 18 may provide an indication of a localised change in a vertical height of the ground surface SRF.
  • the MLS map 17 may comprise data representing the distribution of the modified point cloud 15 within each cell 18, for example representing a statistical analysis of the vertical distribution of points of the modified point cloud 15 within each cell 18.
  • the cells 18 each measure 25cm x 25cm.
  • the resolution of the MLS map 17 may be increased or decreased by changing the dimensions of the cells 18.
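  • One way to realise this binning (an illustrative sketch, not the patented implementation) is to scatter the transformed points into a horizontal grid of 25cm cells and keep a per-cell mean height; the grid extents are assumptions:

```python
import numpy as np

def build_mls_map(points, cell=0.25, x_range=(0.0, 25.0), y_range=(-10.0, 10.0)):
    # points: (N, 3) cloud in vehicle coordinates (x forward, y left, z up).
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    sums = np.zeros((nx, ny))
    counts = np.zeros((nx, ny))

    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)

    # Accumulate the mean vertical height of the points falling in each cell.
    np.add.at(sums, (ix[ok], iy[ok]), points[ok, 2])
    np.add.at(counts, (ix[ok], iy[ok]), 1)

    heights = np.full((nx, ny), np.nan)       # NaN = cell received no points
    heights[counts > 0] = sums[counts > 0] / counts[counts > 0]
    return heights
```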
  • the processor 13 may be configured to determine a gradient (positive or negative) of the terrain in each cell 18.
  • the MLS map 17 may comprise a low-poly model of the terrain in the imaging region.
  • the processor 13 in the present embodiment is configured to refine the MLS map 17 by identifying overhang features, such as a branch of a tree or a space under another vehicle, present within the imaging region RIMG.
  • the processor 13 may identify an overhang by identifying two or more points within the modified point cloud 15 having different vertical heights but at least substantially the same horizontal position. If an overhang feature is identified, the processor 13 refines the MLS map 17 based on a vehicle traversability analysis using the difference between the vertical heights: if traversability is positive (i.e. the processor 13 determines that the feature is traversable), the point (or points) corresponding to the overhang feature are omitted from the MLS map 17.
  • the control system 1 is configured to analyse the image data DIMG to identify obstacles within the imaging region RIMG.
  • an obstacle may be classified as a physical feature or object which will impede progress of the vehicle 2 or which is deemed to be un-traversable by the vehicle 2.
  • the processor 13 is configured to identify any such obstacles within the MLS map 17. In the present embodiment, the processor 13 identifies an obstacle as a feature which results in a change in terrain height between adjacent cells 18 within the MLS map 17.
  • if the processor 13 identifies a change in terrain height between two or more adjacent cells 18 exceeding a predefined vertical threshold, the processor 13 characterises the identified cell as representing an obstacle.
  • the predefined vertical threshold may, for example, be 25cm or 50cm.
  • the processor 13 could optionally be configured to implement a route planning algorithm for planning a vehicle route in dependence on the determined position and/or size of any identified obstacle(s). It will be understood that the grading of the cells 18 may be refined, for example by defining a plurality of vertical thresholds or classifying the cells 18 in direct proportion to a detected change in terrain height between two or more adjacent cells 18.
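  • A sketch of the obstacle test described above, assuming the MLS map is held as a height grid: any step between edge-sharing cells exceeding the vertical threshold flags both cells as obstacle cells. The threshold default is one of the example values given above:

```python
import numpy as np

def classify_obstacles(heights, threshold=0.25):
    # heights: 2D MLS height grid (NaN = unobserved); returns a boolean
    # mask marking cells adjacent to a height step above the threshold.
    obstacle = np.zeros(heights.shape, dtype=bool)
    step_x = np.abs(np.diff(heights, axis=0)) > threshold  # row-adjacent steps
    step_y = np.abs(np.diff(heights, axis=1)) > threshold  # column-adjacent steps
    obstacle[:-1, :] |= step_x
    obstacle[1:, :] |= step_x
    obstacle[:, :-1] |= step_y
    obstacle[:, 1:] |= step_y
    return obstacle
```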
  • an image representing the image data DIMG is shown in Figure 3.
  • the image data DIMG shows an unsurfaced track 19 along which the vehicle 2 is travelling and a tree 20 adjacent to the track 19.
  • the track 19 comprises a dip in which water has collected to form a pool 21.
  • the processor 13 analyses the image data DIMG captured by the imaging device 10 and generates a point cloud 15 which is used to generate the MLS map 17 shown in Figure 4.
  • the features identified through analysis of the image data DIMG are labelled in the MLS map 17 shown in Figure 4.
  • the pool 21 is identified as a region which is at least substantially empty in the image data DIMG.
  • the region behind the tree 20 is obscured from view and is identified in the MLS map 17 as a contiguous extension thereof.
  • the processor 13 analyses the MLS map 17 to identify obstacles.
  • an MLS map 17 is shown in Figure 5 with the cells 18 marked to represent the determination of the processor 13.
  • the cells 18 outside of a field of view FOV of the imaging device 10 are shown unshaded.
  • the cells 18 inside the field of view FOV which are identified as corresponding to traversable terrain (terrain cells) are shown having an intermediate shading.
  • the cells 18 inside the field of view FOV which are identified as corresponding to an obstacle (such as the tree 20 shown in Figure 3) are shown having a dark shading (obstacle cells).
  • the processor 13 is configured to model a route R for the vehicle 2.
  • the vehicle route R may, for example, be modelled in dependence on the current (i.e. instantaneous) steering angle of the first and second wheels W1, W2.
  • Other implementations of the control system 1 may model the vehicle route R in dependence on a user-specified route and/or a route planning algorithm.
  • the processor 13 determines left and right wheel paths P1, P2 along which the left wheels W1, W3 and the right wheels W2, W4 will travel respectively.
  • the left and right wheel paths P1, P2 are overlaid onto the MLS map 17 in Figure 6.
  • the processor 13 may take account of changes in the vertical height of the terrain when determining the left and right wheel paths P1, P2.
  • the processor 13 may be configured only to analyse the image data DIMG captured by the imaging device 10 in a region along or proximal to the route R to generate the MLS map, optionally discarding image data DIMG distal from the route R.
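  • A minimal sketch of deriving the left and right wheel paths P1, P2 from the instantaneous steering angle, using a simple kinematic bicycle model and the stored vehicle geometry; the model and the numeric defaults are assumptions, not taken from the disclosure:

```python
import numpy as np

def wheel_paths(steer_angle, wheelbase=2.9, track=1.6, step=0.25, n_steps=100):
    # Propagate a constant-steer arc from the rear-axle centre CP2
    # (x forward, y left); steer_angle is the road-wheel angle in radians.
    xs, ys, hs = [0.0], [0.0], [0.0]
    for _ in range(n_steps):
        h = hs[-1] + step * np.tan(steer_angle) / wheelbase  # kinematic bicycle model
        xs.append(xs[-1] + step * np.cos(h))
        ys.append(ys[-1] + step * np.sin(h))
        hs.append(h)
    centre = np.column_stack([xs, ys])
    normal = np.column_stack([-np.sin(hs), np.cos(hs)])  # unit left-normal per pose
    return centre + 0.5 * track * normal, centre - 0.5 * track * normal  # P1, P2
```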
  • a second image IMG2 captured by the imaging device 10 is shown in Figure 7 by way of example.
  • the change in relative height of the left and right wheel paths P1, P2 may be determined as the vehicle 2 progresses along the vehicle route R.
  • a third image IMG3 captured by the imaging device 10 is shown in Figure 8 by way of example.
  • the third image IMG3 comprises an unmetalled track having first and second ruts R1, R2.
  • the left and right wheel paths P1, P2 are overlaid onto the third image IMG3 to show the predicted positions of the left and right wheels W1-4 of the vehicle 2 in relation to the first and second ruts R1, R2.
  • the MLS map 17 generated through analysis of the third image IMG3 is shown in Figure 9A.
  • the MLS map 17 represents the topographical relief of the ground surface SRF identified within the third image IMG3.
  • the processor 13 applies a transform to project the MLS map 17 in a plan elevation, as shown in Figure 9A.
  • the processor 13 analyses the MLS map 17 by performing a height differential analysis.
  • the height differential analysis comprises comparing the height of each cell 18 with the height of each adjacent cell 18 within the MLS map 17.
  • the processor 13 identifies each cell 18 having a height which is offset vertically relative to one or more adjacent cell 18 by a vertical distance greater than or equal to a predefined vertical offset threshold.
  • the processor 13 is configured to identify each cell 18 having a height below that of one or more adjacent cell 18 by at least the vertical offset threshold.
  • the cells 18 identified by the processor 13 as a result of the height differential analysis are referred to herein as step-change cells 18’.
  • the vertical offset threshold is defined as 5cm, but larger or smaller vertical offset thresholds may be defined.
  • the step-change cells 18’ each represent a step change (i.e. an abrupt height change over a relatively small distance) in the vertical height of the ground surface SRF, as approximated by the MLS map 17.
  • the processor 13 generates a step-change map 22 comprising each of the step-change cells 18’.
  • a step-change map 22 is shown in Figure 9B representing the results of a height differential analysis of the MLS map 17 shown in Figure 9A.
  • the step-change map 22 also represents the height differential between adjacent cells 18 and characterises each cell 18 as having a LOW, MEDIUM or HIGH height differential.
  • the processor 13 flags each cell 18 identified in the MLS map 17 as having a HIGH height differential (i.e. a vertical offset greater than or equal to 5cm) and the step-change cells 18’ are represented in the map shown in Figure 9B.
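  • The height differential analysis reduces to an eight-neighbour comparison per cell. An illustrative sketch, assuming the MLS map is a height grid; cells lying at least the vertical offset threshold (5cm here) below a neighbour become step-change cells 18’:

```python
import numpy as np

def step_change_cells(heights, threshold=0.05):
    # Returns a mask of cells lying >= threshold below at least one of
    # their eight neighbours in the MLS height grid.
    h, w = heights.shape
    padded = np.pad(heights, 1, constant_values=np.nan)
    mask = np.zeros((h, w), dtype=bool)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            neighbour = padded[1 + dx:1 + dx + h, 1 + dy:1 + dy + w]
            mask |= (neighbour - heights) >= threshold  # neighbour is higher
    return mask
```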
  • the ruts R1, R2 typically comprise left and right channels (which are formed by the left and right wheels of one or more vehicles).
  • the control system 1 is configured to analyse the step-change map 22 to identify elongate sequences having a profile which at least substantially matches the expected features and characteristics of the ruts R1, R2.
  • the processor 13 analyses the step-change map 22 to identify first and second elongate sections 23A, 23B corresponding to the first and second ruts R1, R2 respectively.
  • the first and second elongate sections 23A, 23B are shown in Figure 10 which shows a graphical overlay 24 on the third image IMG3.
  • the processor 13 analyses the step-change map 22 to identify a plurality of the step- change cells 18’ arranged in one or more of the following: a continuous sequence; a substantially continuous sequence; or an interrupted sequence.
  • the continuous sequence may comprise a plurality of the step-change cells 18’ arranged in an uninterrupted sequence (i.e. composed of contiguous step-change cells 18’).
  • the substantially continuous sequence may comprise a plurality of step-change cells 18’ which are offset from each other in a diagonal direction and/or which are separated from each other by a distance less than or equal to a predefined distance threshold (for example a separation of less than or equal to n cells 18, where n is a whole number less than or equal to one, two or three).
  • the interrupted sequence may comprise one or more continuous sequences and/or one or more substantially continuous sequences which are separated from each other by a distance greater than or equal to a predefined distance threshold (for example a separation of greater than or equal to n cells 18, where n is a whole number greater than or equal to three, four or five).
  • the processor 13 in the present embodiment is configured to apply a pattern detection algorithm to identify each elongate section forming a continuous line in plan elevation.
  • the processor 13 applies a curve detection algorithm to detect each sequence (continuous, substantially continuous, or interrupted) of the step-change cells 18’ which forms a curve within the MLS map 17.
  • the processor 13 could be configured to identify a curved sequence of the step-change cells 18’ as corresponding to one of the first and second ruts R1, R2.
  • the processor 13 is configured to analyse the MLS map 17 to identify pairs of curved sequences corresponding to the respective first and second ruts R1, R2.
  • the processor 13 identifies first and second elongate sections forming first and second curves which are at least substantially parallel to each other.
  • the first and second elongate sections identified within the MLS map 17 as being at least substantially parallel to each other are identified as the first and second ruts R1, R2.
  • the first and second ruts R1, R2 are typically spaced apart from each other by a distance corresponding to a wheel track of a vehicle.
  • an upper wheel track threshold and/or a lower wheel track threshold may be defined.
  • the processor 13 may optionally determine a distance between the first and second elongate sections identified within the MLS map 17.
  • the processor 13 may identify the first and second elongate sections as corresponding to the first and second ruts R1, R2 only if the distance between the first and second elongate sections is less than the upper wheel track threshold and/or greater than the lower wheel track threshold.
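  • An illustrative check along these lines (a sketch under assumed numeric thresholds, not the patented implementation): fit each candidate sequence of step-change cells with a low-order curve, then accept the pair as ruts only if the lateral spacing is near-constant (parallelism) and falls within the wheel track window:

```python
import numpy as np
from numpy.polynomial import Polynomial

def is_rut_pair(sec1, sec2, track_min=1.3, track_max=1.9, spread_tol=0.15):
    # sec1, sec2: (N, 2) arrays of (x, y) centres of step-change cells 18'
    # forming two candidate elongate sections (x ahead of vehicle, y lateral).
    lo = max(sec1[:, 0].min(), sec2[:, 0].min())
    hi = min(sec1[:, 0].max(), sec2[:, 0].max())
    if hi <= lo:
        return False  # no longitudinal overlap between the sections

    xs = np.linspace(lo, hi, 20)
    f1 = Polynomial.fit(sec1[:, 0], sec1[:, 1], deg=2)  # needs >= 3 cells
    f2 = Polynomial.fit(sec2[:, 0], sec2[:, 1], deg=2)
    gap = np.abs(f1(xs) - f2(xs))  # lateral spacing along the overlap

    parallel = gap.std() < spread_tol              # near-constant spacing
    spaced = track_min <= gap.mean() <= track_max  # within wheel track window
    return parallel and spaced
```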
  • the processor 13 is configured to output a rut identification signal RSIG in dependence on identification of the first and second ruts R1, R2.
  • the rut identification signal RSIG may, for example, be output to a vehicle communication network.
  • One or more vehicle systems may be controlled in dependence on the output of the rut identification signal RSIG.
  • one or more of the following vehicle systems may be controlled: a throttle response; a drivetrain; a vehicle transmission (for example to select a particular gear ratio); a transfer case (for example to select a high or low ratio); an electrical power steering unit (for example to modify a steering ratio and/or to change feedback from the steering wheel); and a suspension system (for example to adjust suspension travel and/or to adjust a damping setting).
  • the processor 13 may output a steering control signal to control a steering angle of the vehicle 2 so as to match or follow the profile of the first and second ruts R1, R2.
  • the operation of the control system 1 is illustrated in a flow diagram 100 shown in Figure 11.
  • the imaging device 10 is provided to capture image data DIMG (BLOCK 105) corresponding to an imaging region RIMG in front of the vehicle 2.
  • the first and second cameras 11-1, 11-2 of the imaging device 10 capture respective first and second sets of image data DIMG-1, DIMG-2 (BLOCK 110).
  • the processor 13 generates a disparity image in dependence on the first and second sets of image data DIMG-1, DIMG-2 (BLOCK 115).
  • the processor 13 retrieves known parameters of the imaging device (BLOCK 120) and generates a point cloud 15 in dependence on the disparity image (BLOCK 125).
  • the processor 13 reads the pitch angle signal S1 output by the IMU 8 and determines the pitch angle of the vehicle body 4 (BLOCK 130).
  • the MLS map 17 is generated in dependence on the point cloud 15 and the determined pitch of the vehicle 2 (BLOCK 135).
  • the processor 13 analyses the MLS map 17 to classify the constituent cells 18 as corresponding to either an obstacle (i.e. cannot be traversed by the vehicle 2) or a traversable section of terrain (BLOCK 140).
  • the cells 18 corresponding to an obstacle may optionally be discarded from subsequent analysis.
  • the processor 13 reads the steering angle signal S2 published by the steering wheel sensor 9 and determines the current steering angle of the vehicle 2 (BLOCK 145).
  • a vehicle route R is determined in dependence on the current steering angle and the left and right wheel paths P1, P2 are determined (BLOCK 150).
  • a height comparison is performed to compare the height of each cell 18 along the left and right wheel paths P1, P2 with the eight adjacent cells 18.
  • the processor 13 determines if the height differential is greater than or less than the predefined vertical offset threshold (BLOCK 155). If the height differential of a cell 18 is less than the predefined vertical offset threshold, the cell 18 is discarded (BLOCK 160). If the height differential for a cell 18 is greater than the predefined vertical offset threshold, the cell 18 is flagged as a step-change cell 18’ (BLOCK 165).
  • a step-change map 22 is created by projecting each step-change cell 18’ identified by the processor 13 (BLOCK 170). The step-change map 22 provides a two-dimensional representation of the topographical relief of the ground surface SRF.
  • the processor 13 utilises a curve detection algorithm to detect sequences of the step-change cells 18’ which form a curve (BLOCK 175). The processor 13 then analyses the detected curved sequences of step-change cells 18’ to identify sequence pairs which are at least substantially parallel to each other (BLOCK 180). If the processor 13 does not identify a pair of sequences which are at least substantially parallel to each other, a determination is made that first and second ruts R1, R2 are not present in the captured image data (BLOCK 185). If the processor 13 identifies a pair of sequences which are at least substantially parallel to each other, a determination is made that first and second ruts R1, R2 are present in the captured image data (BLOCK 190). The processor 13 may output a rut detection signal RSIG in dependence on this determination.
  • the processor 13 may be configured to determine further features of the identified first and second ruts R1, R2. For example, the processor 13 may analyse the MLS map 17 to determine the depth and/or the width of the first and second ruts R1, R2. If the depth of one or both of the first and second ruts R1, R2 exceeds a predefined threshold, the processor 13 may output a notification, for example to warn a driver of a potential risk that the vehicle 2 will become stranded.
  • the processor 13 may be configured to identify where the depth of one or both of the first and second ruts R1, R2 is less than a predefined threshold, for example to identify a location for entry into, or exit from, the first and second ruts R1, R2.
  • the processor 13 may be configured to determine the height of a (central) ridge between the first and second ruts R1, R2 relative to the first and second ruts R1, R2. If the relative height of the ridge exceeds a predefined threshold, the processor 13 may output a notification, for example to warn a driver of a potential scenario in which the vehicle 2 may be high-centred.
  • the processor 13 may optionally supplement this functionality by detecting one or more obstacle, such as a rock, on the ridge between the first and second ruts R1, R2.
  • the processor 13 has been described herein as identifying first and second curves which are substantially parallel to each other. Alternatively, or in addition, the processor 13 may identify first and second curved sequences which are spaced apart from each other by a distance within a predefined wheel track range.
  • the wheel track range may, for example, define upper and lower wheel track thresholds.
  • the upper and lower wheel track thresholds may be defined in dependence on the wheel track of the vehicle 2.
• the control system 1 has been described herein with reference to the identification of first and second ruts R1, R2. It will be understood that the control system 1 can be modified to identify a single rut R1. Alternatively, or in addition, the control system 1 may be configured to identify larger channels, such as a ditch or a gulley, formed in the ground surface SRF. The techniques described herein in relation to the analysis of the MLS map 17 to identify a rut R1, R2 may be modified to identify the ditch or gulley. For example, the processor 13 may identify a series of step-change cells 18’ representing a V-shaped or U-shaped channel. Alternatively, or in addition, the processor 13 may be configured to identify the sides and/or the bottom of the channel within the MLS map 17.
• the control system 1 may be configured to analyse the MLS map 17 to identify a ridge or raised region in the ground surface SRF.
  • a vehicle geometry is stored in the system memory 14 as a vehicle data set.
  • the vehicle geometry comprises a wheel track and a wheel base of the vehicle 2.
  • the processor 13 is configured to predict a vertical height of each wheel W1-4 as the vehicle 2 travels along the vehicle route R.
  • a third image IMG3 captured by the imaging device 10 is shown in Figure 12 by way of example.
• each wheel W1-4 on the left and right wheel paths P1, P2 is illustrated in the third image IMG3 at a position on the vehicle route R in which the front wheels W1, W2 are a predetermined distance in front of their current location.
• the predetermined distance may, for example, be defined as a number ‘n’ of meters from the current location of the vehicle 2 (where ‘n’ is greater than zero).
• the predetermined distance may be user-selected, for example in dependence on a user input.
  • the topographical relief of the terrain is defined by the MLS map 17. In dependence on the MLS map 17, the processor 13 may determine the height of each wheel W1-4 for any given vehicle position.
  • the processor 13 may predict the height values to reduce errors associated with the MLS map 17. For example, the processor 13 may compare the height of the first and second wheels W1 , W2 when the vehicle 2 is at a first location on the vehicle route R and predict a front articulation angle a1 (i.e. the angle of a central axis joining the first and second wheels W1 , W2 relative to a horizontal axis).
• the processor 13 may compare the height of the third and fourth wheels W3, W4 when the vehicle 2 is at the first location on the vehicle route R and predict a rear articulation angle a2 (i.e. the angle of a central axis joining the third and fourth wheels W3, W4 relative to a horizontal axis).
  • the processor 13 may repeat this analysis along the vehicle route R to model changes in the front and rear articulation angles a1 , a2.
• the changes in the front and rear articulation angles a1, a2 may thereby be determined along the vehicle route R.
  • An articulation angle threshold may be predefined (for the front axle 5 and/or the rear axle 6).
• the processor 13 may determine if one or both of the front and rear articulation angles a1, a2 exceeds the predefined articulation angle threshold.
• the processor 13 may generate an advance warning to notify a driver of the vehicle 2 that the articulation angle threshold will be exceeded if the vehicle 2 proceeds along the vehicle route R.
  • the processor 13 predicts the front and rear articulation angles a1 , a2, thereby allowing the warning to be generated in advance.
• the processor 13 may modify the vehicle route R such that the front and rear articulation angles a1, a2 are less than the predefined articulation angle threshold.
  • the processor 13 may output a vehicle control signal for controlling one or more vehicle systems in dependence on the determined front and rear articulation angles a1 , a2.
• the control signal may control one or more of the following: a throttle response; a drivetrain; a vehicle transmission (for example to select a particular gear ratio); a transfer case (for example to select a high or low ratio); an electrical power steering unit (for example to modify a steering ratio and/or to change feedback from the steering wheel); and a suspension system (for example to adjust suspension travel and/or to adjust a damping setting).
  • a fourth image IMG4 captured by the imaging device 10 is shown in Figure 13 by way of example.
  • the determination of the front and rear articulation angles a1 , a2 for a vehicle route R is illustrated in the fourth image IMG4.
  • the imaging device 10 captures the image data DIMG for an imaging region RIMG in front of the vehicle 2.
  • the processor 13 analyses the image data DIMG to generate the MLS map 17.
  • the left and right wheel paths P1 , P2 are determined for the vehicle route R.
  • the position of each wheel W1-4 is determined for a predicted position of the vehicle 2 on the vehicle route R.
• the vertical position of each wheel W1-4 may be determined relative to a reference point, such as the centre position CP2, on the vehicle 2.
  • the front and rear articulation angles a1 , a2 are then determined for the predicted position.
• in the example shown, the front articulation angle a1 is -15.7° and the rear articulation angle a2 is -11.5°.
• the front and rear articulation angles a1, a2 can be output to a display screen, for example to provide a graphical representation of the predicted orientation of the vehicle 2.
  • the operation of the control system 1 is illustrated in a flow diagram 100 shown in Figure 14.
  • the imaging device 10 is provided to capture image data DIMG (BLOCK 105) corresponding to an imaging region RIMG in front of the vehicle 2.
• the first and second cameras 11-1, 11-2 of the imaging device 10 capture respective first and second sets of image data DIMG-1, DIMG-2 (BLOCK 110).
  • the processor 13 generates a disparity image in dependence on the first and second sets of image data DIMG-1 , DIMG-2 (BLOCK 115).
  • the processor 13 retrieves known parameters of the imaging device (BLOCK 120) and generates a point cloud 15 in dependence on the disparity image (BLOCK 125).
• the processor 13 reads the pitch angle signal S1 output by the IMU 8 and determines the pitch angle of the vehicle body 4 (BLOCK 130).
  • the MLS map 17 is generated in dependence on the point cloud 15 and the determined pitch of the vehicle 2 (BLOCK 135).
  • the processor 13 analyses the MLS map 17 to classify the constituent cells 18 as corresponding to either an obstacle (i.e. cannot be traversed by the vehicle 2) or a traversable section of terrain (BLOCK 140).
• the cells 18 corresponding to an obstacle may optionally be discarded from subsequent analysis.
  • the processor 13 reads the steering angle signal S2 and determines the current steering angle of the vehicle 2 (BLOCK 145), for example by reading a steering signal published by a steering angle sensor.
• a vehicle route R is determined in dependence on the current steering angle, and the left and right wheel paths P1, P2 are determined (BLOCK 150).
• the processor 13 receives a distance metric ‘n’ identifying a location along the vehicle route R (BLOCK 155); and determines the location of each wheel W1-4 on the left and right wheel paths P1, P2 at the identified location (BLOCK 160).
  • the processor 13 estimates the height of each wheel W1-4 at the identified location in dependence on the MLS map 17 (BLOCK 165).
• the front and rear articulation angles a1, a2 are determined in dependence on the estimated height of each wheel W1-4 (BLOCK 170); a worked sketch of this computation is provided after this list.
• the processor 13 described herein utilises the pitch angle of the vehicle body 4 to generate the MLS map 17.
  • the processor 13 may optionally also utilise the roll angle of the vehicle body 4 to generate the MLS map 17.
• the processor 13 described herein may be configured to receive suspension travel signals for indicating a travel (or height) of each of the suspension assemblies S1-4. An uneven loading of the vehicle 2 may result in a change in the pitch angle or the roll angle of the vehicle body 4.
  • the processor 13 may apply a correction factor to compensate for any such variations.
  • the correction factor may be determined in dependence on the suspension travel signals.
  • the processor 13 may utilise the MLS map 17 as a kinematic model for determining the orientation of the vehicle 2. For example, the processor 13 may use the MLS map 17 to estimate a roll angle and/or a pitch angle of the vehicle body 4.
• the imaging device 10 has been described herein as comprising first and second imaging sensors 11-1, 11-2.
• the first and second imaging sensors 11-1, 11-2 have been described as comprising first and second optical cameras. It will be understood that different types of sensor may be used to generate the image data from which the three dimensional data is derived for predicting the vertical position of the at least one wheel.
  • the imaging system may, for example, comprise or consist of a lidar (Light Detection and Ranging) system for generating the three dimensional data.
  • the lidar system may comprise a laser transmitter and sensor array.
  • the imaging device 10 may comprise a radar system operable to generate the three dimensional data.
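By way of illustration only, the following Python sketch shows one possible realisation of the wheel-height and articulation-angle prediction referred to in the list above (BLOCKS 155-170). It assumes the MLS map 17 is available as a 2-D array of cell heights in metres; the grid indexing convention, the vehicle geometry values and all function names are assumptions made for the sketch rather than details taken from this disclosure, and a straight-ahead vehicle route R is assumed for brevity.

```python
import numpy as np

def cell_height(mls, x_m, y_m, cell_size=0.25):
    """Look up the MLS height (m) at a ground position (x_m lateral,
    y_m longitudinal) from the map origin; no bounds checking for brevity."""
    row = int(round(y_m / cell_size))
    col = int(round(x_m / cell_size)) + mls.shape[1] // 2  # x = 0 at centre column
    return mls[row, col]

def articulation_angle(h_left, h_right, wheel_track):
    """Angle (degrees) of the axis joining two wheel centres relative to a
    horizontal axis, from the predicted height of each wheel."""
    return np.degrees(np.arctan2(h_left - h_right, wheel_track))

def predict_angles(mls, n, wheel_track=1.6, wheel_base=2.9):
    """Predict the front and rear articulation angles a1, a2 with the front
    axle a distance 'n' metres ahead ('n' must exceed the wheel base so the
    rear axle also falls inside the mapped region)."""
    half = wheel_track / 2.0
    a1 = articulation_angle(cell_height(mls, -half, n),
                            cell_height(mls, +half, n), wheel_track)
    a2 = articulation_angle(cell_height(mls, -half, n - wheel_base),
                            cell_height(mls, +half, n - wheel_base), wheel_track)
    return a1, a2
```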

Abstract

The present disclosure relates to a control system (1) for identifying one or more rut (R1, R2) in a surface (SRF). The control system (1) has a controller (12) configured to receive image data (DIMG1, DIMG2) representing an imaging region (RIMG). The controller (12) analyses the image data (DIMG1, DIMG2) to generate three dimensional data relating to the imaging region (RIMG). The three dimensional data is analysed to identify one or more elongate section (23A, 23B) having a vertical offset relative to an adjacent section. A rut identification signal is output to identify each identified elongate section (23A, 23B) as corresponding to a rut (R1, R2). The present disclosure also relates to a vehicle (2) having a control system (1) of the type described herein. The present disclosure also relates to a method of identifying one or more rut (R1, R2) in a surface (SRF).

Description

VEHICLE CONTROL SYSTEM AND METHOD
TECHNICAL FIELD
The present disclosure relates to a vehicle control system and method. Aspects of the invention relate to a control system for identifying one or more rut in a surface and to a control system for predicting a vertical position of one or more wheel of the vehicle.
BACKGROUND
A rut may be formed in a ground surface by the wheels of a vehicle, particularly if the ground is composed of a deformable medium, such as mud. The rut is usually in the form of an elongated open channel. Depending on the local conditions, the wheels of the vehicle may form left and right ruts which extend substantially parallel to each other. The rut(s) may present an obstacle to a following vehicle and it may be appropriate to configure the powertrain and/or the suspension of the following vehicle to aid progress along the rut(s) or traversal of the rut(s). However, the detection of ruts may prove problematic due to limitations in sensor perception. For example, optical sensors operating in very bright or very dark conditions may result in generation of false positives.
A vertical position of a vehicle wheel may be measured based on a suspension travel, for example as the vehicle traverses a terrain comprising changes in height. The vertical position of the vehicle wheel may be used to control a vehicle system, such as a suspension assembly, to control dynamic behaviour of the vehicle. However, since the vertical position of the vehicle wheel is measured, control of the vehicle system is reactive. Thus, the vehicle system cannot be pre-configured to anticipate changes in the terrain in a direction of travel of the vehicle.
At least in certain embodiments, the present invention seeks to overcome or address at least some of the limitations associated with known systems.
SUMMARY OF THE INVENTION
Aspects and embodiments of the invention provide a control system, a vehicle, and a non-transitory computer-readable medium as claimed in the appended claims.
According to an aspect of the present invention there is provided a control system for identifying one or more rut in a surface, the control system comprising one or more controllers, the control system being configured to:
receive image data representing an imaging region; and
analyse the image data to generate three dimensional data relating to the imaging region. The control system may be configured to analyse the three dimensional data to identify one or more elongate section having a vertical offset relative to an adjacent section. The control system may be configured to identify the elongate section as having a vertical height which is below that of the adjacent section. The three dimensional data generated by the control system may represent a ground surface (i.e. a surface of the ground within the imaging region). The control system may be installed in a host vehicle.
The control system may output a rut identification signal for identifying each identified elongate section as corresponding to a rut. The rut identification signal may be output to one or more vehicle system, for example via a communication network. The one or more vehicle system may be controlled in dependence on the rut identification signal. At least in certain embodiments, the control system may enable advance detection of the one or more rut (i.e. before the vehicle encounters the rut). The one or more vehicle system may be pre-configured to facilitate progress, for example to enable progress of the vehicle within the identified rut(s), or traversal of the identified rut(s). By way of example, the vehicle powertrain and/or the vehicle suspension may be pre-configured in dependence on the rut identification signal.
Alternatively, or in addition, the rut identification signal may comprise rut data defining one or more characteristic of each identified rut. The rut data may comprise one or more of the following: the location of the rut; a profile of the rut in plan elevation; a depth profile of the rut; and a width profile of the rut. The rut data may be used to generate a graphical representation of the rut, for example to display the rut in relation to the vehicle.
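By way of illustration only, the rut data carried by the rut identification signal might be organised as in the Python sketch below. This is an assumed message shape for the sketch; the field names and types are not specified in this disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RutData:
    """One identified rut: its plan-elevation profile plus per-point
    depth and width (all units metres; names are illustrative)."""
    centreline: List[Tuple[float, float]]  # (x, y) location/profile in plan elevation
    depth_profile: List[float]             # depth at each centreline point
    width_profile: List[float]             # width at each centreline point

@dataclass
class RutIdentificationSignal:
    """Payload for the rut identification signal, e.g. RSIG."""
    ruts: List[RutData] = field(default_factory=list)
```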
The controller may comprise a processor having an input for receiving the image data; and a memory coupled to the processor and having instructions stored thereon for controlling operation of the processor. The processor may be configured to analyse the image data to generate the three dimensional data. The processor may identify the one or more elongate section having a vertical offset relative to an adjacent section.
The control system may be configured to identify an elongate section having a vertical offset relative to a first adjacent section disposed on a first side thereof; and/or having a vertical offset relative to a second adjacent section disposed on a second side thereof. The identified elongate section may be located at a lower height than the first adjacent section and/or the second adjacent section.
The control system may be configured to analyse the three dimensional data to identify said one or more elongate section by identifying a step change in vertical height relative to the adjacent section. The control system may be configured to analyse the three dimensional data to identify said one or more elongate section by identifying a vertical offset greater than or equal to a predetermined threshold value. The control system may be configured to analyse the three dimensional data to identify said one or more elongate section having a width less than a predefined threshold width; and/or a length greater than or equal to a predefined threshold length. The control system may be configured to analyse the three dimensional data to identify said one or more elongate section having a substantially continuous profile in plan elevation. The elongate section may comprise a curved section and/or a rectilinear section.
The three dimensional data may comprise a plurality of cells. The control system may be configured to analyse the three dimensional data to identify said one or more elongate section by identifying a sequence composed of a plurality of cells. Each cell in the sequence may be vertically offset from at least one adjacent cell.
The control system may be configured to identify first and second said elongate sections as corresponding to first and second ruts. The first and second ruts may form a vehicle track, for example on an unmetalled surface.
The identification of said first and second elongate sections may comprise identifying elongate sections which are substantially parallel to each other. Alternatively, or in addition, the identification of said first and second elongate sections may comprise identifying elongate sections having at least substantially the same depth and/or at least substantially the same width.
The identification of said first and second elongate sections may comprise identifying elongate sections having a predetermined spacing therebetween; or having a spacing therebetween which is within a predetermined range. The identification of the elongate section may comprise identifying each cell having first and second adjacent cells (disposed on opposing sides thereof) which are at a greater height. The identification of a plurality of said cells forming a continuous or substantially continuous line may represent a rut. This configuration may be indicative of the profile of a rut in a transverse direction.
The control system may be configured to identify a sequence of cells representing a substantially planar surface extending in a horizontal plane. This functionality may be used in conjunction with the other techniques described herein, for example to identify first and second sequences representing respective planar surfaces which extend substantially parallel to each other. The processor could optionally assess whether the first and second sequences represent surfaces at the same vertical height (which may be indicative of first and second ruts in fluid communication with each other).
The control system may be configured to analyse the three dimensional data to determine the vertical offset between the elongate section and the adjacent section to determine a depth of the corresponding rut.
The control system may be configured to output an alert if the determined vertical offset is determined to be greater than or equal to a predetermined threshold.
The image data may be received from first and second imaging sensors. The first and second imaging sensors may, for example, each comprise an optical camera, for example a video camera. The image data may comprise video image data. The imaging sensors may capture the image data at least substantially in real time. Alternatively, or in addition, the three dimensional data may comprise data received from a lidar sensor or a radar sensor. The image data may be received from a suitable sensor array.
According to a further aspect of the present invention there is provided a control system for identifying first and second ruts in a surface, the control system comprising one or more controllers, the control system being configured to:
receive image data representing an imaging region;
analyse the image data to generate three dimensional data relating to the imaging region;
analyse the three dimensional data to identify first and second elongate sections which are substantially parallel to each other; and
output a rut identification signal for identifying each identified elongate section as corresponding to a rut.
According to a further aspect of the present invention there is provided a vehicle comprising a control system as described herein.
According to a further aspect of the present invention there is provided a method of identifying one or more rut in a surface, the method comprising:
receiving image data representing an imaging region;
analysing the image data to generate three dimensional data relating to the imaging region;
analysing the three dimensional data to identify one or more elongate section having a vertical offset relative to an adjacent section; and
outputting a rut identification signal for identifying each identified elongate section as corresponding to a rut. The method may comprise identifying said one or more elongate section by identifying a step change in vertical height.
The one or more elongate section may each have a substantially continuous profile in plan elevation.
The three dimensional data may comprise a plurality of cells. The identification of said one or more elongate section may comprise identifying a sequence composed of a plurality of said cells. The cells in the sequence may each be vertically offset from at least one adjacent cell.
The method may comprise identifying first and second said elongate sections corresponding to first and second ruts.
The method may comprise identifying first and second elongate sections which are substantially parallel to each other.
The method may comprise identifying elongate sections having a predetermined spacing therebetween.
The method may comprise determining a vertical offset between the elongate section and the adjacent section to determine a depth of the corresponding rut.
The method may comprise generating an alert if the determined vertical offset is greater than or equal to a predetermined threshold.
The method may comprise receiving the image data from first and second imaging sensors.
According to a further aspect of the present invention there is provided a non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method described herein.
According to an aspect of the present invention there is provided a control system for predicting a vertical position of at least one wheel of a vehicle, the control system comprising one or more controllers, the control system being configured to: receive image data representing an imaging region; and analyse the image data to generate three dimensional data relating to the imaging region. The control system may be configured to receive an indication of a predicted position of the at least one wheel; and predict the vertical position of the at least one wheel at the predicted position in dependence on the three dimensional data. The control system may optionally output a signal in dependence on the predicted vertical position. In certain embodiments, the control system may determine the predicted position of the at least one wheel. The three dimensional data may comprise topographical terrain relief data for representing terrain relief within the imaging region. The three dimensional data may comprise or consist of relief features of a ground surface (i.e. a surface of the ground within the imaging region). The three dimensional data may comprise one or more obstacle within the imaging region. The obstacle may be a vertical projection, such as a rock or a tree; or a depression, such as a hole. By using the imaging data relating to the imaging region to generate the three dimensional data, the control system can predict or anticipate changes in the vertical position of at least one wheel. The control system may pre-configure one or more vehicle systems in dependence on the predicted vertical position of the at least one wheel, for example to facilitate traversal of the terrain. The control system may be configured to predict the vertical position of the at least one wheel relative to a reference point on the vehicle. The reference point may, for example, define an origin of a vehicle co-ordinate system. The reference point may be disposed on a centreline of the vehicle. The reference point may, for example, define a centre position of a rear axle of the vehicle.
The controller may comprise a processor having an input for receiving the image data; and a memory coupled to the processor and having instructions stored thereon for controlling operation of the processor. The processor may be configured to analyse the image data to generate the three dimensional data. The processor may predict the vertical position of the at least one wheel for a given location of the vehicle.
The control signal may control one or more of the following: a throttle response; a drivetrain; a vehicle transmission (for example to select a particular gear ratio); a transfer case (for example to select a high or low ratio); an electrical power steering unit (for example to modify a steering ratio and/or to change feedback from the steering wheel); and a suspension system (for example to adjust suspension travel and/or to adjust a damping setting).
The position of the at least one wheel may be predicted for a given geospatial position of the vehicle. The geospatial position of the vehicle may be defined at a position on a planned or projected route of the vehicle. The position of the at least one wheel may be predicted when the vehicle is at the defined geospatial position. The geospatial position of the vehicle may be defined in a reference plane, for example a horizontal reference plane or a reference plane of the vehicle. The vehicle route may be determined in dependence on a current steering angle of the vehicle. The steering angle may be measured by a steering wheel angular position sensor.
A wheel path may be determined for each wheel along the vehicle route. The wheel path may be determined in dependence on the vehicle route, for example referencing a predefined vehicle geometry. The vehicle geometry may comprise the wheel track and/or the wheel base of the vehicle.
One or more wheel may be provided on a first axle. Two or more wheels may be provided on the first axle. The control system may be configured to predict the vertical position of each wheel on the first axle. The first axle may be a single component, for example a beam axle, a rigid axle or a solid axle. Alternatively, the first axle may comprise a pair of stub axles supported by independent suspension assemblies disposed on opposing sides of the vehicle. For example, first and second wheels may be provided on opposite ends of the first axle. The control system may determine an articulation angle of each stub axle.
The control system may be configured to determine a first articulation angle in dependence on the predicted vertical position of each wheel on the first axle. The first articulation angle may represent an angle of a first reference axis which extends between the centres of the wheels on the first axle and a horizontal axis.
One or more wheel may be provided on a second axle. Two or more wheels may be provided on the second axle. The control system may be configured to predict the vertical position of each wheel on the second axle. The second axle may be a single component, for example a beam axle, a rigid axle or a solid axle. Alternatively, the second axle may comprise a pair of stub- axles supported by independent suspension assemblies disposed on opposing sides of the vehicle. The control system may be configured to determine a second articulation angle in dependence on the predicted vertical position of each wheel on the second axle. The second articulation angle may represent an angle of a second reference axis which extends between the centres of the wheels on the second axle and a horizontal axis.
The control system may be configured to predict a vehicle roll angle and/or a vehicle pitch angle. The vehicle roll angle and/or the vehicle pitch angle may be predicted in dependence on the predicted vertical position of the wheels on the first axle relative to the predicted vertical position of the wheels on the second axle.
The control system may be configured to predict the vertical position of the at least one wheel in a plurality of predicted positions.
A vehicle data set may define a relative position of each wheel on the vehicle. The vehicle data set may, for example, be stored in memory. The control system may be configured to map each wheel of the vehicle to the three dimensional data to predict the vertical position of each wheel.
The control system may be configured to determine a route of the vehicle. The predicted position of the at least one wheel may be determined for a given position of the vehicle on the route. The control system may be configured to generate a route for the vehicle in dependence on the predicted vertical position of the at least one wheel. The image data may comprise image data received from first and second imaging sensors. The control system may be configured to output a vehicle control signal in dependence on the predicted vertical position of the at least one wheel of the vehicle.
According to a further aspect of the present invention there is provided a method of predicting a vertical position of at least one wheel of a vehicle, the method comprising receiving image data relating to an imaging region; and analysing the image data to generate three dimensional data relating to the imaging region. The method may comprise predicting a position of the at least one wheel; and predicting the vertical position of the at least one wheel at the predicted position in dependence on the three dimensional data. The method may optionally comprise outputting a signal in dependence on the predicted vertical position. The method may comprise predicting the vertical position of each wheel on a first axle. The method may comprise determining a first articulation angle in dependence on the predicted vertical position of each wheel on the first axle. The method may comprise predicting the vertical position of each wheel on a second axle. The method may comprise determining a second articulation angle in dependence on the predicted vertical position of each wheel on the second axle. The method may comprise predicting a vehicle roll angle and/or a vehicle pitch angle. The vehicle roll angle and/or the vehicle pitch angle may be determined in dependence on the predicted vertical position of the wheels on the first axle relative to the predicted vertical position of the wheels on the second axle. The method may comprise predicting the vertical position of the at least one wheel in a plurality of predicted positions. The method may comprise mapping each wheel of the vehicle to the three dimensional data and predicting the vertical position of each wheel. The method may comprise determining a route of the vehicle. The predicted position of the at least one wheel may be determined for a given position of the vehicle on the route. The method may comprise generating a route for the vehicle in dependence on the predicted vertical position of the at least one wheel. The image data may be received from first and second imaging sensors. The method may comprise outputting a vehicle control signal in dependence on the predicted vertical position of the at least one wheel of the vehicle.
According to a further aspect of the present invention there is provided a non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method described herein.
Any control unit or controller described herein may suitably comprise a computational device having one or more electronic processors. The system may comprise a single control unit or electronic controller or alternatively different functions of the controller may be embodied in, or hosted in, different control units or controllers. As used herein the term “controller” or “control unit” will be understood to include both a single control unit or controller and a plurality of control units or controllers collectively operating to provide any stated control functionality. To configure a controller or control unit, a suitable set of instructions may be provided which, when executed, cause said control unit or computational device to implement the control techniques specified herein. The set of instructions may suitably be embedded in said one or more electronic processors. Alternatively, the set of instructions may be provided as software saved on one or more memory associated with said controller to be executed on said computational device.
The control unit or controller may be implemented in software run on one or more processors. One or more other control unit or controller may be implemented in software run on one or more processors, optionally the same one or more processors as the first controller. Other suitable arrangements may also be used.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the present invention will now be described, by way of example only, with reference to the accompanying figures, in which:
Figure 1 shows a schematic representation of a vehicle comprising a control system in accordance with an embodiment of the present invention;
Figure 2 shows a schematic representation of a scanning region of an imaging device provided on the vehicle shown in Figure 1 ;
Figure 3 shows an image captured by the imaging device shown schematically in Figure 2;
Figure 4 shows an elevation map generated by identifying disparities in the images captured by the imaging device;
Figure 5 shows a schematic representation of the elevation map shown in Figure 4 differentiating between traversable and un-traversable terrain features;
Figure 6 shows a schematic representation of the elevation map shown in Figure 5 incorporating a route of the vehicle;
Figure 7 shows a second image captured by the imaging device having a first graphical overlay representing paths of opposing wheels of the vehicle;
Figure 8 shows a third image captured by the imaging device according to a first embodiment of the present invention having a graphical overlay representing the predicted paths of the left and right wheels of the vehicle;
Figure 9A shows a multi-level surface map generated by analysing the third image shown in Figure 8;
Figure 9B shows first and second elongate sequences extracted from the multi-level surface map shown in Figure 9A;
Figure 10 shows a graphical overlay representing the topographical relief of the ground surface in the third image shown in Figure 8;
Figure 11 is a block diagram representing the implementation of the method of the first embodiment described herein;
Figure 12 shows a second image captured by the imaging device according to a second embodiment of the invention having a second graphical overlay representing the predicted positions of the wheels on the paths;
Figure 13 shows a third image captured by the imaging device according to the second embodiment of the invention having a third graphical overlay representing the predicted positions of the wheels on the paths and a determined articulation angle of the front and rear axles at the predicted positions; and
Figure 14 is a block diagram representing the implementation of the method of the second embodiment of the invention described herein.
DETAILED DESCRIPTION
A control system 1 for a vehicle 2 in accordance with an embodiment of the present invention will now be described with reference to the accompanying figures. The vehicle 2 in the present embodiment is an automobile, but it will be understood that the control system 1 may be used in other types of land vehicle. The vehicle 2 is described herein with reference to a reference frame comprising a longitudinal axis X, a transverse axis Y and a vertical axis Z.
As illustrated in Figure 1, the vehicle 2 comprises four (4) wheels W1-4, four suspension assemblies S1-4 (each associated with a respective wheel W1-4) and a vehicle body 4. The wheels W1-4 are provided on front and rear axles 5, 6. The first wheel W1 is a front left wheel; the second wheel W2 is a front right wheel; the third wheel W3 is a rear left wheel; and the fourth wheel W4 is a rear right wheel. The vehicle 2 comprises a drivetrain comprising an internal combustion engine 7 drivingly connected to the front axle 5 for transmitting a traction torque to the first and second wheels W1, W2. It will be understood that the internal combustion engine 7 could instead be drivingly connected to the rear axle 6 for transmitting a traction torque to the third and fourth wheels W3, W4. In alternative implementations, the drivetrain may comprise an electric propulsion unit instead of, or in addition to, the internal combustion engine 7.
As described herein, the control system 1 is operable to identify localised relief features formed in a ground surface SRF. The ground surface SRF comprises or consists of the surface of a section of ground over which the vehicle 2 is travelling, such as the surface of an unmetalled road or an off-road track. The control system 1 in the present embodiment is operable to identify relief features comprising a first rut R1 and/or a second rut R2. The first and second ruts R1 , R2 each comprise an elongated relief feature, typically in the form of a channel, formed in the ground surface SRF. The first and second ruts R1 , R2 may be formed by one or more land vehicle travelling over the ground surface SRF. The ground surface SRF may be particularly susceptible to the formation of first and second ruts R1 , R2 if the underlying ground is composed of a deformable medium, such as mud or sand. The first and second ruts R1 , R2 in the present embodiment are formed by the left and right wheels of a vehicle traversing the ground surface SRF. Since the transverse distance between the left and right wheels is fixed, the first and second ruts R1 , R2 are at least substantially parallel to each other. A spacing between the first and second ruts R1 , R2 (in a transverse direction) at least substantially corresponds to an axle (wheel) track (i.e. the transverse distance between the wheels) of the vehicle which formed them. The depth and/or the width of the first and second ruts R1 , R2 may increase as a result of the passage of more than one vehicle.
As also described herein, the control system 1 is operable to estimate a wheel height of each wheel W1-4 of the vehicle 2 and/or to determine an articulation angle of the wheels W1-4. A front articulation angle is determined in respect of the wheels W1, W2 on the front axle 5; and a rear articulation angle is determined in respect of the wheels W3, W4 on the rear axle 6. The front articulation angle is an angle of a central axis joining the first and second wheels W1, W2 on the front axle 5 relative to a horizontal axis. The rear articulation angle is an angle of a central axis joining the wheels W3, W4 on the rear axle 6 relative to a horizontal axis.
The vehicle 2 comprises an inertial measurement unit (IMU) 8 for determining an orientation of the vehicle body 4. The IMU 8 comprises one or more accelerometer and/or one or more gyroscope. The IMU 8 in the present embodiment determines a pitch angle of the vehicle body 4 about the transverse axis Y and outputs a pitch angle signal S1 to a communication network (not shown) provided in the vehicle 2. The IMU 8 may optionally also determine a roll angle of the vehicle 2 about the longitudinal axis X and output a roll angle signal. A steering wheel sensor 9 is provided for determining a steering angle of the steering wheel (not shown) in the vehicle 2. The steering wheel sensor 9 outputs a steering angle signal S2 to the communication network.
As described herein, the control system 1 is configured to determine a topographical relief of the ground surface SRF. The control system 1 may model the topographical relief of the ground surface in front of the vehicle 2. As illustrated in Figure 2, the vehicle 2 comprises an imaging device 10 for capturing image data DIMG representing an imaging region RIMG external to the vehicle 2. The imaging device 10 may be operable to capture the image data DIMG at least substantially in real time. The imaging device 10 may capture a predefined number of frames of image data DIMG per second, for example twenty-four (24) frames per second. The captured image data DIMG is composed of data relating to real-world features within the imaging region RIMG. The imaging region RIMG in the present embodiments extends from 5m to 25m in front of the vehicle 2 in the direction of vehicle travel. A first image IMG1 captured by the imaging device 10 is shown in Figure 3 by way of example. The imaging device 10 is configured such that the imaging region RIMG comprises a region of the surface SRF over which the vehicle 2 is travelling. Thus, the captured image data DIMG comprises the ground surface SRF proximal to the vehicle 2 and optionally also the surface(s) of one or more obstacle. The captured image data DIMG may include one or more obstacle which may impede or prevent vehicle progress. The imaging device 10 in the present embodiment is forward-facing and the imaging region RIMG is located in front of the vehicle 2. The imaging device 10 may be mounted proximal an upper edge of a front windshield, for example behind a rear-view mirror (not shown).
The imaging device 10 in the present embodiment comprises a stereo camera 11 comprising first and second imaging sensors 11-1, 11-2, as shown in Figure 1. The first and second imaging sensors 11-1, 11-2 are respective first and second optical cameras in the present embodiment. The image data DIMG comprises a first set of image data DIMG-1 captured by the first camera 11-1, and a second set of image data DIMG-2 captured by the second camera 11-2. The first and second cameras 11-1, 11-2 are spatially separated from each other but have overlapping fields of view FOV. In the present embodiment, the first and second cameras 11-1, 11-2 operate in the visible spectrum. Alternatively, or in addition, the first and second cameras 11-1, 11-2 may operate in the non-visible spectrum, for example comprising infrared light. Alternatively, or in addition, the imaging device 10 may comprise or consist of a radar imaging device.
The control system 1 comprises a controller 12 for receiving the captured image data DIMG. As shown schematically in Figure 1 , the controller 12 includes a processor 13 and a memory 14. A set of computational instructions is stored on the memory 14. When executed, the computational instructions cause the processor 13 to perform the method(s) described herein. The processor 13 is configured to implement an image processing algorithm to analyse the first and second sets of image data DIMG-1 , DIMG-2 to determine characteristics of the ground surface SRF within the imaging region RIMG. The processor 13 identifies disparities between the first and second sets of image data DIMG-1 , DIMG-2 and performs range imaging to determine the distance to features within the imaging region RIMG. With reference to known parameters of the stereo camera 11 , such as the spatial separation of the first and second cameras 11-1 , 11-2, the processor 13 generates three dimensional (3D) data in the form of a point cloud 15 in dependence on the first and second sets of image data DIMG-1 , DIMG-2. The point cloud 15 is composed of a plurality of discrete points located on the external surfaces of objects and features within the imaging region RIMG.
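By way of illustration only, the disparity-image and range-imaging steps could be realised with an off-the-shelf stereo matcher, as in the Python sketch below. OpenCV is an assumed implementation choice, and the matcher settings and the reprojection matrix Q (obtained from calibration of the stereo camera 11) are illustrative inputs rather than values specified in this disclosure.

```python
import cv2
import numpy as np

def stereo_point_cloud(img_left, img_right, Q):
    """Compute a point cloud from rectified grayscale stereo images.
    Q is the 4x4 disparity-to-depth matrix from stereo calibration."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    # compute() returns fixed-point disparities scaled by 16
    disparity = matcher.compute(img_left, img_right).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3, camera frame
    valid = disparity > 0                          # keep matched pixels only
    return points[valid]                           # N x 3 point cloud
```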
A transformation is applied to move an origin of the point cloud 15 to a predefined reference point. The transformation moves the point cloud origin from a centre position CP1 of the stereo camera 11 to a reference point defining an origin of a vehicle co-ordinate system. In the present embodiment the reference point is a centre position CP2 of a rear axle (i.e. the position on vehicle centreline) which is coincident with the centre of the rear wheels. The centre position CP2 defines a common centre point of turning of the vehicle 2. The transformation is predefined in dependence on the relative location of the centre positions CP1 , CP2. The modified point cloud 15 thereby defines the vertical height of the points relative to a centre of the vehicle rear wheel.
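By way of illustration only, the predefined transformation from the camera centre position CP1 to the rear-axle centre position CP2 is a fixed rigid-body transform. In the sketch below the rotation and translation are placeholders, not mounting values from this disclosure.

```python
import numpy as np

# Placeholder camera-to-vehicle mounting: the camera is assumed axis-aligned
# with the vehicle frame and offset from the rear-axle centre CP2
# (illustrative values, y forward, z up, metres).
R_CAM_TO_VEH = np.eye(3)
T_CAM_TO_VEH = np.array([0.0, 2.0, 1.4])  # position of CP1 relative to CP2

def to_vehicle_frame(points_cam):
    """Re-express an N x 3 point cloud about the rear-axle centre CP2."""
    return points_cam @ R_CAM_TO_VEH.T + T_CAM_TO_VEH
```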
The processor 13 determines the pitch angle of the vehicle 2 in dependence on the pitch angle signal S1 output by the IMU 8. The processor 13 utilises the vehicle pitch angle and the modified point cloud 15 to form an elevation map corresponding to the imaging region RIMG. The elevation map provides a representation of localised relief features formed in a ground surface. The ground surface forms the surface of a section of ground over which the vehicle 2 is travelling, such as the surface of an unmetalled road or an off-road track. The elevation map is referred to herein as a Multi-Level Surface (MLS) map 17. An example of an MLS map 17 generated from the image data DIMG is shown in Figure 4. The MLS map 17 provides terrain geometry within the imaging region RIMG. The MLS map 17 is composed of a grid comprising a plurality of two-dimensional (2D) cells 18 arranged in a horizontal plane. The processor 13 generates the MLS map 17 in dependence on the three-dimensional spatial distribution of the points of the modified point cloud 15 within each cell 18. The processor 13 may, for example, generate the MLS map 17 in dependence on a mean vertical height of the points of the modified point cloud 15 within each cell 18, or in dependence on a maximum or minimum vertical height of the points within the modified point cloud 15. A distribution of the modified point cloud 15 within each cell 18 may provide an indication of a localised change in a vertical height of the ground surface SRF. The MLS map 17 may comprise data representing the distribution of the modified point cloud 15 within each cell 18, for example representing a statistical analysis of the vertical distribution of points of the modified point cloud 15 within each cell 18. In the present embodiment, the cells 18 each measure 25cm x 25cm. The resolution of the MLS map 17 may be increased or decreased by changing the dimensions of the cells 18. In a variant, the processor 13 may be configured to determine a gradient (positive or negative) of the terrain in each cell 18. In a variant, the MLS map 17 may comprise a low-poly model of the terrain in the imaging region.
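By way of illustration only, the following Python sketch bins the transformed point cloud 15 into 25cm x 25cm cells 18 and takes the mean vertical height per cell, one of the aggregation options described above. The axis convention (x lateral, y longitudinal, z vertical) and the grid extents are assumptions made for the sketch.

```python
import numpy as np

def build_mls(points, cell=0.25, x_range=(-5.0, 5.0), y_range=(0.0, 25.0)):
    """Build a simple MLS-style height grid from an N x 3 point cloud."""
    cols = int((x_range[1] - x_range[0]) / cell)
    rows = int((y_range[1] - y_range[0]) / cell)
    heights = np.full((rows, cols), np.nan)
    sums = np.zeros((rows, cols))
    counts = np.zeros((rows, cols))
    c = ((points[:, 0] - x_range[0]) / cell).astype(int)
    r = ((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (c >= 0) & (c < cols) & (r >= 0) & (r < rows)
    np.add.at(sums, (r[ok], c[ok]), points[ok, 2])   # accumulate z per cell
    np.add.at(counts, (r[ok], c[ok]), 1)
    filled = counts > 0
    heights[filled] = sums[filled] / counts[filled]  # mean height per cell
    return heights   # NaN where a cell received no points
```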
The processor 13 in the present embodiment is configured to refine the MLS map 17 by identifying overhang features, such as a branch of a tree or a space under another vehicle, present within the imaging region RIMG. The processor 13 may identify an overhang by identifying two or more points within the modified point cloud 15 having different vertical heights but at least substantially the same horizontal position. If an overhang feature is identified, the processor 13 refines the MLS map 17 based on a vehicle traversability analysis using the difference between the vertical heights. If traversability is positive (i.e. the processor 13 determines that the feature is traversable), the points corresponding to overhang features are omitted. If traversability is negative (i.e. the processor 13 determines that the feature cannot be traversed), the points in the two vertical patches are combined and the cell 18 is characterised as representing an obstacle. The control system 1 is configured to analyse the image data DIMG to identify obstacles within the imaging region RIMG. In the context of the present application, an obstacle may be classified as a physical feature or object which will impede progress of the vehicle 2 or which is deemed to be un-traversable by the vehicle 2. The processor 13 is configured to identify any such obstacles within the MLS map 17. In the present embodiment, the processor 13 identifies an obstacle as a feature which results in a change in terrain height between adjacent cells 18 within the MLS map 17. If the processor 13 identifies a change in terrain height between two or more adjacent cells 18 exceeding a predefined vertical threshold, the processor 13 characterises the identified cell as representing an obstacle. The predefined vertical threshold may, for example, be 25cm or 50cm. The processor 13 could optionally be configured to implement a route planning algorithm for planning a vehicle route in dependence on the determined position and/or size of any identified obstacle(s). It will be understood that the grading of the cells 18 may be refined, for example by defining a plurality of vertical thresholds or classifying the cells 18 in direct proportion to a detected change in terrain height between two or more adjacent cells 18.
By way of example, an image representing the image data DIMG is shown in Figure 3. The image data DIMG shows an unsurfaced track 19 along which the vehicle 2 is travelling and a tree 20 adjacent to the track 19. The track 19 comprises a dip in which water has collected to form a pool 21. The processor 13 analyses the image data DIMG captured by the imaging device 10 and generates a point cloud 15 which is used to generate the MLS map 17 shown in Figure 4. The features identified through analysis of the image data DIMG are labelled in the MLS map 17 shown in Figure 4. The pool 21 is identified as a region which is at least substantially empty in the image data DIMG. The region behind the tree 20 is obscured from view and is identified in the MLS map 17 as a contiguous extension thereof.
The processor 13 analyses the MLS map 17 to identify obstacles. By way of example, an MLS map 17 is shown in Figure 5 with the cells 18 marked to represent the determination of the processor 13. The cells 18 outside of a field of view FOV of the imaging device 10 are shown unshaded. The cells 18 inside the field of view FOV which are identified as corresponding to traversable terrain (terrain cells) are shown having an intermediate shading. The cells 18 inside the field of view FOV which are identified as corresponding to an obstacle (such as the tree 20 shown in Figure 3) are shown having a dark shading (obstacle cells).
The processor 13 is configured to model a route R for the vehicle 2. The vehicle route R may, for example, be modelled in dependence on the current (i.e. instantaneous) steering angle of the first and second wheels W1, W2. Other implementations of the control system 1 may model the vehicle route R in dependence on a user-specified route and/or a route planning algorithm. The processor 13 determines left and right wheel paths P1, P2 along which the left and right wheels W1-4 will travel respectively. The left and right wheel paths P1, P2 are overlaid onto the MLS map 17 in Figure 6. The processor 13 may take account of changes in the vertical height of the terrain when determining the left and right wheel paths P1, P2. In a variant, the processor 13 may be configured only to analyse the image data DIMG captured by the imaging device 10 in a region along or proximal to the route R to generate the MLS map, optionally discarding image data DIMG distal from the route R.
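By way of illustration only, the left and right wheel paths P1, P2 could be projected from the instantaneous steering angle using a kinematic single-track (bicycle) model, as in the Python sketch below. The wheel base and wheel track values are placeholders, and a constant steering angle is assumed over the projection horizon.

```python
import numpy as np

def wheel_paths(steer_deg, wheel_base=2.9, wheel_track=1.6,
                n_pts=50, horizon=25.0):
    """Project left/right wheel paths (x lateral, y forward, metres)
    for a constant steering angle."""
    s = np.linspace(0.0, horizon, n_pts)           # arc length travelled
    if abs(steer_deg) < 1e-3:                      # straight ahead
        centre = np.stack([np.zeros_like(s), s], axis=1)
    else:
        R = wheel_base / np.tan(np.radians(steer_deg))  # signed turn radius
        theta = s / R
        centre = np.stack([R - R * np.cos(theta), R * np.sin(theta)], axis=1)
    # Offset the centreline by half the wheel track normal to the heading.
    heading = np.gradient(centre, axis=0)
    heading /= np.linalg.norm(heading, axis=1, keepdims=True)
    normal = np.stack([-heading[:, 1], heading[:, 0]], axis=1)
    p1 = centre + 0.5 * wheel_track * normal       # left wheel path P1
    p2 = centre - 0.5 * wheel_track * normal       # right wheel path P2
    return p1, p2
```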
A second image IMG2 captured by the imaging device 10 is shown in Figure 7 by way of example. As shown in the second image IMG2, the change in relative height of the left and right wheel paths P1, P2 may be determined as the vehicle 2 progresses along the vehicle route R. According to the first embodiment, a third image IMG3 captured by the imaging device 10 is shown in Figure 8 by way of example. The third image IMG3 comprises an unmetalled track having first and second ruts R1, R2. The left and right wheel paths P1, P2 are overlaid onto the third image IMG3 to show the predicted positions of the left and right wheels W1-4 of the vehicle 2 in relation to the first and second ruts R1, R2.
The MLS map 17 generated through analysis of the third image IMG3 is shown in Figure 9A. The MLS map 17 represents the topographical relief of the ground surface SRF identified within the third image IMG3. The processor 13 applies a transform to project the MLS map 17 in a plan elevation, as shown in Figure 9A. The processor 13 analyses the MLS map 17 by performing a height differential analysis. The height differential analysis comprises comparing the height of each cell 18 with the height of each adjacent cell 18 within the MLS map 17. The processor 13 identifies each cell 18 having a height which is offset vertically relative to one or more adjacent cell 18 by a vertical distance greater than or equal to a predefined vertical offset threshold. In the present embodiment, the processor 13 is configured to identify each cell 18 having a height below that of one or more adjacent cell 18 by at least the vertical offset threshold. The cells 18 identified by the processor 13 as a result of the height differential analysis are referred to herein as step-change cells 18’. In the present embodiment, the vertical offset threshold is defined as 5cm, but larger or smaller vertical offset thresholds may be defined. The step-change cells 18’ each represent a step change (i.e. an abrupt height change over a relatively small distance) in the vertical height of the ground surface SRF, as approximated by the MLS map 17. The processor 13 generates a step-change map 22 comprising each of the step-change cells 18’. By way of example, a step-change map 22 is shown in Figure 9B representing the results of a height differential analysis of the MLS map 17 shown in Figure 9A. In the present embodiment, the step-change map 22 also represents the height differential between adjacent cells 18 and characterises each cell 18 as having a LOW, MEDIUM or HIGH height differential. The processor 13 flags each cell 18 identified in the MLS map 17 as having a HIGH height differential (i.e. a vertical offset greater than or equal to 5cm) and the step-change cells 18’ are represented in the map shown in Figure 9B.
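By way of illustration only, the height differential analysis can be expressed as a comparison of each cell 18 against its eight neighbours, flagging cells which lie at least the 5cm vertical offset threshold below a neighbour as step-change cells 18’. The Python sketch below assumes the MLS map 17 is available as a 2-D array of cell heights (NaN where a cell is empty).

```python
import numpy as np

def step_change_map(mls, threshold=0.05):
    """Flag cells lying >= threshold (m) below any of their 8 neighbours."""
    rows, cols = mls.shape
    flagged = np.zeros((rows, cols), dtype=bool)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            # Shifted copy so neighbour[r, c] holds an adjacent cell's height.
            neighbour = np.full_like(mls, np.nan)
            neighbour[max(dr, 0):rows + min(dr, 0),
                      max(dc, 0):cols + min(dc, 0)] = \
                mls[max(-dr, 0):rows + min(-dr, 0),
                    max(-dc, 0):cols + min(-dc, 0)]
            with np.errstate(invalid="ignore"):
                # NaN comparisons evaluate to False, so empty cells never flag.
                flagged |= (neighbour - mls) >= threshold
    return flagged
```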
The ruts R1, R2 typically comprise left and right channels (which are formed by the left and right wheels of one or more vehicles). The control system 1 is configured to analyse the step-change map 22 to identify elongate sequences having a profile which at least substantially matches the expected features and characteristics of the ruts R1, R2. The processor 13 analyses the step-change map 22 to identify first and second elongate sections 23A, 23B corresponding to the first and second ruts R1, R2 respectively. The first and second elongate sections 23A, 23B are shown in Figure 10 which shows a graphical overlay 24 on the third image IMG3. The processor 13 analyses the step-change map 22 to identify a plurality of the step-change cells 18’ arranged in one or more of the following: a continuous sequence; a substantially continuous sequence; or an interrupted sequence. The continuous sequence may comprise a plurality of the step-change cells 18’ arranged in an uninterrupted sequence (i.e. composed of contiguous step-change cells 18’). The substantially continuous sequence may comprise a plurality of step-change cells 18’ which are offset from each other in a diagonal direction and/or which are separated from each other by a distance less than or equal to a predefined distance threshold (for example a separation of less than or equal to n cells 18, where n is a whole number less than or equal to one, two or three). The interrupted sequence may comprise one or more continuous sequences and/or one or more substantially continuous sequences which are separated from each other by a distance greater than or equal to a predefined distance threshold (for example a separation of greater than or equal to n cells 18, where n is a whole number greater than or equal to three, four or five).
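By way of illustration only, the continuous and substantially continuous sequences could be extracted by labelling the flagged step-change cells 18’ with 8-connectivity (which captures diagonal offsets) and discarding short groups, as in the Python sketch below. The minimum sequence length is an assumed tuning value.

```python
import numpy as np
from scipy import ndimage

def extract_sequences(flagged, min_cells=8):
    """Group flagged cells into candidate sequences of step-change cells."""
    # structure of ones gives 8-connectivity (diagonal neighbours included)
    labels, n = ndimage.label(flagged, structure=np.ones((3, 3), dtype=int))
    sequences = []
    for i in range(1, n + 1):
        cells = np.argwhere(labels == i)   # (row, col) pairs for this group
        if len(cells) >= min_cells:        # discard short, spurious groups
            sequences.append(cells)
    return sequences
```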
The processor 13 in the present embodiment is configured to apply a pattern detection algorithm to identify each elongate section forming a continuous line in plan elevation. In particular, the processor 13 applies a curve detection algorithm to detect each sequence (continuous, substantially continuous, or interrupted) of the step-change cells 18' which forms a curve within the MLS map 17. The processor 13 could be configured to identify a curved sequence of the step-change cells 18' as corresponding to one of the first and second ruts R1, R2. In the present embodiment, however, the processor 13 is configured to analyse the MLS map 17 to identify pairs of curved sequences corresponding to the respective first and second ruts R1, R2. In particular, the processor 13 identifies first and second elongate sections forming first and second curves which are at least substantially parallel to each other. The first and second elongate sections identified within the MLS map 17 as being at least substantially parallel to each other are identified as the first and second ruts R1, R2.
The first and second ruts R1, R2 are typically spaced apart from each other by a distance corresponding to a wheel track of a vehicle. To facilitate identification of the first and second ruts R1, R2, an upper wheel track threshold and/or a lower wheel track threshold may be defined. The processor 13 may optionally determine a distance between the first and second elongate sections identified within the MLS map 17. The processor 13 may identify the first and second elongate sections as corresponding to the first and second ruts R1, R2 only if the distance between the first and second elongate sections is less than the upper wheel track threshold and/or greater than the lower wheel track threshold.
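A combined sketch of the curve fitting, parallelism test and wheel track gate might look as follows. The quadratic fit, the 1.6 m track figure, the tolerance and the coordinate convention (row index longitudinal, column index lateral) are illustrative assumptions, not values taken from the application.

```python
import numpy as np

def find_rut_pair(sequences, track=1.6, tolerance=0.3, cell_size=0.1):
    """Fit a quadratic curve to each candidate sequence and return the
    indices of the first pair whose lateral separation is roughly
    constant (substantially parallel) and close to a wheel track."""
    fits = []
    for cells in sequences:
        xs = np.array([c[0] for c in cells], dtype=float)  # longitudinal
        ys = np.array([c[1] for c in cells], dtype=float)  # lateral
        if len(cells) < 5:                 # too short to be a rut
            continue
        fits.append((np.polyfit(xs, ys, 2), xs.min(), xs.max()))
    for i in range(len(fits)):
        for j in range(i + 1, len(fits)):
            (pi, lo_i, hi_i), (pj, lo_j, hi_j) = fits[i], fits[j]
            lo, hi = max(lo_i, lo_j), min(hi_i, hi_j)
            if hi <= lo:
                continue                   # no longitudinal overlap
            xs = np.linspace(lo, hi, 20)
            sep = (np.polyval(pi, xs) - np.polyval(pj, xs)) * cell_size
            # substantially parallel: separation varies little ...
            parallel = np.ptp(sep) < tolerance
            # ... and the mean separation matches a plausible wheel track
            in_track = abs(abs(sep.mean()) - track) < tolerance
            if parallel and in_track:
                return i, j
    return None
```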
The processor 13 is configured to output a rut identification signal RSIG in dependence on identification of the first and second ruts R1, R2. The rut identification signal RSIG may, for example, be output to a vehicle communication network. One or more vehicle systems may be controlled in dependence on the output of the rut identification signal RSIG. By way of example, one or more of the following vehicle systems may be controlled: a throttle response; a drivetrain; a vehicle transmission (for example to select a particular gear ratio); a transfer case (for example to select a high or low ratio); an electrical power steering unit (for example to modify a steering ratio and/or to change feedback from the steering wheel); and a suspension system (for example to adjust suspension travel and/or to adjust a damping setting). For example, the processor 13 may output a steering control signal to control a steering angle of the vehicle 2 so as to match or follow the profile of the first and second ruts R1, R2.
The operation of the control system 1 is illustrated in a flow diagram 100 shown in Figure 11. The imaging device 10 is provided to capture image data DIMG (BLOCK 105) corresponding to an imaging region RIMG in front of the vehicle 2. The first and second cameras 11-1, 11-2 of the imaging device 10 capture respective first and second sets of image data DIMG-1, DIMG-2 (BLOCK 110). The processor 13 generates a disparity image in dependence on the first and second sets of image data DIMG-1, DIMG-2 (BLOCK 115). The processor 13 retrieves known parameters of the imaging device (BLOCK 120) and generates a point cloud 15 in dependence on the disparity image (BLOCK 125). The processor 13 reads the pitch angle signal S1 output by the IMU 8 and determines the pitch angle of the vehicle body 4 (BLOCK 130). The MLS map 17 is generated in dependence on the point cloud 15 and the determined pitch of the vehicle 2 (BLOCK 135). The processor 13 analyses the MLS map 17 to classify the constituent cells 18 as corresponding to either an obstacle (i.e. cannot be traversed by the vehicle 2) or a traversable section of terrain (BLOCK 140). The cells 18 corresponding to an obstacle may optionally be discarded from subsequent analysis. The processor 13 reads the steering angle signal S2 and determines the current steering angle of the vehicle 2 (BLOCK 145), for example by reading a steering signal published by a steering angle sensor. A vehicle route R is determined in dependence on the current steering angle and the left and right wheel paths P1, P2 are determined (BLOCK 150).
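Purely as an indication of how BLOCKs 110 to 125 might be realised, a disparity image and point cloud can be produced from a calibrated stereo pair with OpenCV as below. The matcher parameters, file names and the reprojection matrix Q are placeholders; in practice these come from calibration of the imaging device 10.

```python
import cv2
import numpy as np

# Hypothetical captured frames standing in for DIMG-1 and DIMG-2.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; settings are illustrative only.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Stand-in for the calibrated disparity-to-depth reprojection matrix.
Q = np.eye(4, dtype=np.float32)
points = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 point cloud
valid = points[disparity > 0]                  # drop unmatched pixels
```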
A height comparison is performed to compare the height of each cell 18 along the left and right wheel paths P1, P2 with the eight adjacent cells 18. The processor 13 determines if the height differential is greater than or less than the predefined vertical offset threshold (BLOCK 155). If the height differential of a cell 18 is less than the predefined vertical offset threshold, the cell 18 is discarded (BLOCK 160). If the height differential for a cell 18 is greater than the predefined vertical offset threshold, the cell 18 is flagged as a step-change cell 18' (BLOCK 165). A step-change map 22 is created by projecting each step-change cell 18' identified by the processor 13 (BLOCK 170). The step-change map 22 provides a two-dimensional representation of the topographical relief of the ground surface SRF. The processor 13 utilises a curve detection algorithm to detect sequences of the step-change cells 18' which form a curve (BLOCK 175). The processor 13 then analyses the detected curved sequences of step-change cells 18' to identify sequence pairs which are at least substantially parallel to each other (BLOCK 180). If the processor 13 does not identify a pair of sequences which are at least substantially parallel to each other, a determination is made that first and second ruts R1, R2 are not present in the captured image data (BLOCK 185). If the processor 13 identifies a pair of sequences which are at least substantially parallel to each other, a determination is made that first and second ruts R1, R2 are present in the captured image data (BLOCK 190). The processor 13 may output a rut identification signal RSIG in dependence on this determination.
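The decision logic of BLOCKs 155 to 190 can then be summarised by chaining the sketches given earlier; again this is illustrative only, not the application's implementation.

```python
def detect_ruts(heights, threshold=0.05):
    """Illustrative end-to-end pass over the height grid, chaining the
    helper sketches above. Returns True when a substantially parallel
    pair of curved sequences is found, i.e. ruts are taken as present."""
    mask = find_step_change_cells(heights, threshold)  # BLOCKs 155-170
    sequences = group_sequences(mask, gap=1)           # BLOCK 175
    pair = find_rut_pair(sequences)                    # BLOCK 180
    return pair is not None                            # BLOCKs 185/190
```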
The processor 13 may be configured to determine further features of the identified first and second ruts R1, R2. For example, the processor 13 may analyse the MLS map 17 to determine the depth and/or the width of the first and second ruts R1, R2. If the depth of one or both of the first and second ruts R1, R2 exceeds a predefined threshold, the processor 13 may output a notification, for example to warn a driver of a potential risk that the vehicle 2 will become stranded. Alternatively, or in addition, the processor 13 may be configured to identify where the depth of one or both of the first and second ruts R1, R2 is less than a predefined threshold, for example to identify a location for entry into, or exit from, the first and second ruts R1, R2. Alternatively, or in addition, the processor 13 may be configured to determine the height of a (central) ridge between the first and second ruts R1, R2 relative to the first and second ruts R1, R2. If the relative height of the ridge exceeds a predefined threshold, the processor 13 may output a notification, for example to warn a driver of a potential scenario in which the vehicle 2 may be high-centred. The processor 13 may optionally supplement this functionality by detecting one or more obstacles, such as a rock, on the ridge between the first and second ruts R1, R2.
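A rough sketch of the depth and ridge measurements might read as follows. The use of medians, of a single surround height, and the assumption that the left rut occupies lower column indices than the right rut are all simplifications made to keep the example short.

```python
import numpy as np

def rut_depth_and_ridge(heights, left_cells, right_cells):
    """Illustrative depth and ridge measurements. left_cells/right_cells
    are the (row, col) cells of the two detected ruts."""
    left_floor = np.median([heights[r, c] for r, c in left_cells])
    right_floor = np.median([heights[r, c] for r, c in right_cells])
    surround = np.nanmedian(heights)  # crude stand-in for local surface height
    lo = max(c for _, c in left_cells)    # inner edge of the left rut
    hi = min(c for _, c in right_cells)   # inner edge of the right rut
    ridge = np.nanmedian(heights[:, lo + 1:hi]) if hi > lo + 1 else surround
    ridge_height = ridge - (left_floor + right_floor) / 2.0
    return surround - left_floor, surround - right_floor, ridge_height
```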
The processor 13 has been described herein as identifying first and second curves which are substantially parallel to each other. Alternatively, or in addition, the processor 13 may identify first and second curved sequences which are spaced apart from each other by a distance within a predefined wheel track range. The wheel track range may, for example, define upper and lower wheel track thresholds. The upper and lower wheel track thresholds may be defined in dependence on the wheel track of the vehicle 2.
It will be appreciated that various modifications may be made to the embodiment(s) described herein without departing from the scope of the appended claims.
The control system 1 has been described herein with reference to the identification of first and second ruts R1, R2. It will be understood that the control system 1 can be modified to identify a single rut R1. Alternatively, or in addition, the control system 1 may be configured to identify larger channels, such as a ditch or a gulley, formed in the ground surface SRF. The techniques described herein in relation to the analysis of the MLS map 17 to identify the ruts R1, R2 may be modified to identify the ditch or gulley. For example, the processor 13 may identify a series of step-change cells 18' representing a V-shaped or U-shaped channel. Alternatively, or in addition, the processor 13 may be configured to identify the sides and/or the bottom of the channel within the MLS map 17. Conversely, the control system 1 may be configured to analyse the MLS map 17 to identify a ridge or raised region in the ground surface SRF.

In a second embodiment, a vehicle geometry is stored in the system memory 14 as a vehicle data set. The vehicle geometry comprises a wheel track and a wheel base of the vehicle 2. In dependence on the stored vehicle geometry, the processor 13 is configured to predict a vertical height of each wheel W1-4 as the vehicle 2 travels along the vehicle route R. A third image IMG3 captured by the imaging device 10 is shown in Figure 12 by way of example. The position of each wheel W1-4 on the left and right wheel paths P1, P2 is illustrated in the third image IMG3 at a position on the vehicle route R in which the front wheels W1, W2 are a predetermined distance in front of their current location. The predetermined distance may, for example, be defined as a number 'n' of meters from the current location of the vehicle 2 (where 'n' is greater than zero). The predetermined distance may be user-selected, for example in dependence on a user input. The topographical relief of the terrain is defined by the MLS map 17. In dependence on the MLS map 17, the processor 13 may determine the height of each wheel W1-4 for any given vehicle position. Any cells 18 of the MLS map 17 classified as corresponding to an obstacle are avoided when determining the height of the wheels W1-4. If there is a large deviation between the determined heights of the wheels W1-4, the processor 13 may predict the height values to reduce errors associated with the MLS map 17. For example, the processor 13 may compare the height of the first and second wheels W1, W2 when the vehicle 2 is at a first location on the vehicle route R and predict a front articulation angle a1 (i.e. the angle of a central axis joining the first and second wheels W1, W2 relative to a horizontal axis).
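The articulation angle prediction reduces to simple trigonometry on the predicted wheel heights. A minimal sketch, assuming heights in metres and the wheel track taken from the vehicle data set:

```python
import math

def articulation_angle(height_left, height_right, wheel_track):
    """Angle (degrees) of the axis joining the two wheels on one axle
    relative to horizontal, from their predicted vertical heights and
    the wheel track. Applies to a1 (front) and a2 (rear) alike."""
    return math.degrees(math.atan2(height_left - height_right, wheel_track))
```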
Alternatively, or in addition, the processor 13 may compare the height of the third and fourth wheels W3, W4 when the vehicle 2 is at the first location on the vehicle route R and predict a rear articulation angle a2 (i.e. the angle of a central axis joining the third and fourth wheels W3, W4 relative to a horizontal axis). The processor 13 may repeat this analysis along the vehicle route R to model changes in the front and rear articulation angles a1, a2. The changes in the front and rear articulation angles a1, a2 may thereby be determined along the vehicle route R. An articulation angle threshold may be predefined (for the front axle 5 and/or the rear axle 6). The processor 13 may determine if one or both of the front and rear articulation angles a1, a2 exceeds the predefined articulation angle threshold. The processor 13 may generate an advance warning to notify a driver of the vehicle 2 that the articulation angle threshold will be exceeded if the vehicle 2 proceeds along the vehicle route R. The processor 13 predicts the front and rear articulation angles a1, a2, thereby allowing the warning to be generated in advance. Alternatively, or in addition, the processor 13 may modify the vehicle route R such that the front and rear articulation angles a1, a2 are less than the predefined articulation angle threshold.
Alternatively, or in addition, the processor 13 may output a vehicle control signal for controlling one or more vehicle systems in dependence on the determined front and rear articulation angles a1, a2. The control signal may control one or more of the following: a throttle response; a drivetrain; a vehicle transmission (for example to select a particular gear ratio); a transfer case (for example to select a high or low ratio); an electrical power steering unit (for example to modify a steering ratio and/or to change feedback from the steering wheel); and a suspension system (for example to adjust suspension travel and/or to adjust a damping setting).
A fourth image IMG4 captured by the imaging device 10 is shown in Figure 13 by way of example. The determination of the front and rear articulation angles a1, a2 for a vehicle route R is illustrated in the fourth image IMG4. The imaging device 10 captures the image data DIMG for an imaging region RIMG in front of the vehicle 2. The processor 13 analyses the image data DIMG to generate the MLS map 17. The left and right wheel paths P1, P2 are determined for the vehicle route R. The position of each wheel W1-4 is determined for a predicted position of the vehicle 2 on the vehicle route R. The vertical position of each wheel W1-4 may be determined relative to a reference point, such as the centre position CP2, on the vehicle 2. The front and rear articulation angles a1, a2 are then determined for the predicted position. In the illustrated example, the front articulation angle a1 is -15.7°; and the rear articulation angle a2 is -11.5°. The front and rear articulation angles a1, a2 can be output to a display screen, for example to provide a graphical representation of the predicted orientation of the vehicle 2.
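The quoted figures can be sanity-checked with the relation sketched above; the 1.6 m wheel track used here is an assumed value for illustration, not one taken from the application.

```python
from math import atan2, degrees

track = 1.6  # assumed wheel track in metres, for illustration only
# a1 of -15.7 degrees implies one front wheel ~0.45 m below the other:
print(degrees(atan2(-0.45, track)))    # ~ -15.7
# a2 of -11.5 degrees implies ~0.32 m height difference at the rear:
print(degrees(atan2(-0.325, track)))   # ~ -11.5
```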
The operation of the control system 1 is illustrated in a flow diagram 100 shown in Figure 14. The imaging device 10 is provided to capture image data DIMG (BLOCK 105) corresponding to an imaging region RIMG in front of the vehicle 2. The first and second cameras 11-1, 11-2 of the imaging device 10 capture respective first and second sets of image data DIMG-1, DIMG-2 (BLOCK 110). The processor 13 generates a disparity image in dependence on the first and second sets of image data DIMG-1, DIMG-2 (BLOCK 115). The processor 13 retrieves known parameters of the imaging device (BLOCK 120) and generates a point cloud 15 in dependence on the disparity image (BLOCK 125). The processor 13 reads the pitch angle signal S1 output by the IMU 8 and determines the pitch angle of the vehicle body 4 (BLOCK 130). The MLS map 17 is generated in dependence on the point cloud 15 and the determined pitch of the vehicle 2 (BLOCK 135). The processor 13 analyses the MLS map 17 to classify the constituent cells 18 as corresponding to either an obstacle (i.e. cannot be traversed by the vehicle 2) or a traversable section of terrain (BLOCK 140). The cells 18 corresponding to an obstacle may optionally be discarded from subsequent analysis. The processor 13 reads the steering angle signal S2 and determines the current steering angle of the vehicle 2 (BLOCK 145), for example by reading a steering signal published by a steering angle sensor. A vehicle route R is determined in dependence on the current steering angle and the left and right wheel paths P1, P2 are determined (BLOCK 150). The processor 13 receives a distance metric 'n' identifying a location along the vehicle route R (BLOCK 155); and determines the location of each wheel W1-4 on the left and right wheel paths P1, P2 at the identified location (BLOCK 160). The processor 13 then estimates the height of each wheel W1-4 at the identified location in dependence on the MLS map 17 (BLOCK 165). The front and rear articulation angles a1, a2 are determined in dependence on the estimated height of each wheel W1-4 (BLOCK 170). It will be appreciated that various modifications may be made to the embodiment(s) described herein without departing from the scope of the appended claims.
The processor 13 described herein utilises the pitch angle of the vehicle body 4 to generate the MLS map 17. The processor 13 may optionally also utilise the roll angle of the vehicle body 4 to generate the MLS map 17.

The processor 13 described herein may be configured to receive suspension travel signals for indicating a travel (or height) of each of the suspension assemblies S1-4. An uneven loading of the vehicle 2 may result in a change in the pitch angle or the roll angle of the vehicle body 4. When determining the MLS map 17, the processor 13 may apply a correction factor to compensate for any such variations. The correction factor may be determined in dependence on the suspension travel signals.

The processor 13 may utilise the MLS map 17 as a kinematic model for determining the orientation of the vehicle 2. For example, the processor 13 may use the MLS map 17 to estimate a roll angle and/or a pitch angle of the vehicle body 4.
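One conceivable form of such a correction factor, assuming the four suspension travel signals are available in metres, is the extra body pitch implied by the difference between mean front and mean rear travel. The relation and the signal layout below are assumptions for illustration only, not taken from the application.

```python
import math

def pitch_correction(travel, wheelbase):
    """Illustrative correction: body pitch (degrees) implied by uneven
    suspension travel, from mean front vs mean rear travel (metres).
    `travel` maps the suspension assemblies S1-4 to their travel values
    (S1/S2 front, S3/S4 rear; an assumed layout)."""
    front = (travel["S1"] + travel["S2"]) / 2.0
    rear = (travel["S3"] + travel["S4"]) / 2.0
    return math.degrees(math.atan2(rear - front, wheelbase))
```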
The imaging device 10 has been described herein as comprising first and second imaging sensors 11-1, 11-2. The first and second imaging sensors 11-1, 11-2 have been described as comprising first and second optical cameras. It will be understood that different types of sensors may be used to generate the image data used to generate the three dimensional data used in the prediction of the vertical position of the at least one wheel. The imaging device 10 may, for example, comprise or consist of a lidar (Light Detection and Ranging) system for generating the three dimensional data. The lidar system may comprise a laser transmitter and sensor array. Alternatively, or in addition, the imaging device 10 may comprise a radar system operable to generate the three dimensional data.

Claims

CLAIMS:
1. A control system for identifying one or more rut in a surface, the control system comprising one or more controllers, the control system being configured to:
receive image data representing an imaging region;
analyse the image data to generate three dimensional data relating to the imaging region;
analyse the three dimensional data to identify one or more elongate section having a vertical offset relative to an adjacent section; and
output a rut identification signal for identifying each identified elongate section as corresponding to a rut.
2. A control system as claimed in claim 1, wherein the control system is configured to analyse the three dimensional data to identify said one or more elongate section by identifying a step change in vertical height relative to the adjacent section.
3. A control system as claimed in claim 1 or claim 2, wherein the control system is configured to analyse the three dimensional data to identify said one or more elongate section having a substantially continuous profile in plan elevation.
4. A control system as claimed in any one of claims 1 to 3, wherein the three dimensional data comprises a plurality of cells, the control system being configured to analyse the three dimensional data to identify said one or more elongate section by identifying a sequence composed of a plurality of said cells, each of the cells in the sequence being vertically offset from at least one adjacent cell.
5. A control system as claimed in any one of the preceding claims, wherein the control system is configured to identify first and second said elongate sections as corresponding to first and second ruts, and optionally wherein identifying said first and second elongate sections comprises identifying elongate sections which are substantially parallel to each other.
6. A control system as claimed in any one of the preceding claims, wherein the control system is configured to analyse the three dimensional data to determine a vertical offset between the elongate section and the adjacent section to determine a depth of the corresponding rut, and optionally wherein the control system is configured to output an alert if the determined vertical offset is determined to be greater than or equal to a predetermined threshold.
7. A method of identifying one or more rut in a surface, the method comprising:
receiving image data representing an imaging region;
analysing the image data to generate three dimensional data relating to the imaging region;
analysing the three dimensional data to identify one or more elongate section having a vertical offset relative to an adjacent section; and
outputting a rut identification signal for identifying each identified elongate section as corresponding to a rut.
8. A control system for predicting a vertical position of at least one wheel of a vehicle, the control system comprising one or more controllers, the control system being configured to:
receive image data representing an imaging region;
analyse the image data to generate three dimensional data relating to the imaging region;
receive an indication of a predicted position of the at least one wheel; and
predict the vertical position of the at least one wheel at the predicted position in dependence on the three dimensional data.
9. A control system as claimed in claim 8, wherein the control system is configured to predict the vertical position of each wheel on a first axle and/or second axle, and optionally wherein the control system is configured to determine a first and/or second articulation angle in dependence on the predicted vertical position of each wheel on the first and/or second axle.
10. A control system as claimed in claim 9, wherein the control system is configured to predict a vehicle roll angle and/or a vehicle pitch angle in dependence on the predicted vertical position of the wheels on the first axle relative to the predicted vertical position of the wheels on the second axle.
11. A control system as claimed in any of claims 8-10, wherein the control system is configured to predict the vertical position of the at least one wheel in a plurality of predicted positions.
12. A control system as claimed in any one of the preceding claims, wherein a vehicle data set defines a relative position of each wheel on the vehicle, the control system being configured to map each wheel of the vehicle to the three dimensional data to predict the vertical position of each wheel.
13. A control system as claimed in any one of claims 8-12, wherein the control system is configured to determine a route of the vehicle, the predicted position of the at least one wheel being determined for a given position of the vehicle on the route.
14. A control system as claimed in any one of claims 8-13, wherein the control system is configured to output a vehicle control signal in dependence on the predicted vertical position of the at least one wheel of the vehicle.
15. A method of predicting a vertical position of at least one wheel of a vehicle, the method comprising:
receiving image data relating to an imaging region;
analysing the image data to generate three dimensional data relating to the imaging region;
predicting a position of the at least one wheel; and
predicting the vertical position of the at least one wheel at the predicted position in dependence on the three dimensional data.
16. A non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method claimed in claim 7 or the method claimed in claim 15.
17. A vehicle comprising a control system as claimed in any of claims 1-6 or any of claims 8-14.
PCT/EP2020/051683 2019-02-08 2020-01-23 Vehicle control system and method WO2020160927A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE112020000735.9T DE112020000735T5 (en) 2019-02-08 2020-01-23 VEHICLE CONTROL SYSTEM AND METHODS

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB1901749.0A GB2584383B (en) 2019-02-08 2019-02-08 Vehicle control system and method
GB1901749.0 2019-02-08
GB1902191.4 2019-02-18
GB1902191.4A GB2581954B (en) 2019-02-18 2019-02-18 Vehicle control system and method

Publications (1)

Publication Number Publication Date
WO2020160927A1 true WO2020160927A1 (en) 2020-08-13

Family

ID=69192080

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/051683 WO2020160927A1 (en) 2019-02-08 2020-01-23 Vehicle control system and method

Country Status (2)

Country Link
DE (1) DE112020000735T5 (en)
WO (1) WO2020160927A1 (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3285485A1 (en) * 2016-08-16 2018-02-21 Samsung Electronics Co., Ltd Stereo camera-based autonomous driving method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BAJRACHARYA, MAX et al.: "High fidelity day/night stereo mapping with vegetation and negative obstacle detection for vision-in-the-loop walking", 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 3 November 2013, pages 3663-3670, ISSN 2153-0858, DOI: 10.1109/IROS.2013.6696879 *
SOLANKI, SANJAY C.: "Development of sensor component for terrain evaluation and obstacle detection for an unmanned autonomous vehicle", AFRL-RX-TY-TM-2009-4554, Air Force Research Laboratory, Airbase Technologies Division, 1 January 2007, retrieved from the Internet: <URL:https://ia803108.us.archive.org/2/items/DTIC_ADA506844/DTIC_ADA506844.pdf> *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329150A (en) * 2020-11-19 2021-02-05 湖北汽车工业学院 Optimization design method for non-independent suspension
CN112329150B (en) * 2020-11-19 2022-06-17 湖北汽车工业学院 Optimization design method for dependent suspension
CN112937588A (en) * 2021-04-01 2021-06-11 吉林大学 Vehicle stability analysis method for ice and snow track road condition
CN112937588B (en) * 2021-04-01 2022-03-25 吉林大学 Vehicle stability analysis method for ice and snow track road condition

Also Published As

Publication number Publication date
DE112020000735T5 (en) 2022-03-03

Similar Documents

Publication Publication Date Title
CN109212542B (en) Calibration method for autonomous vehicle operation
CN111442776B (en) Method and equipment for sequential ground scene image projection synthesis and complex scene reconstruction
KR102558055B1 (en) Suboptimal estimation method
US9773177B2 (en) Surrounding environment recognition device
US8428305B2 (en) Method for detecting a clear path through topographical variation analysis
JP4856656B2 (en) Vehicle detection device
JP2017109740A (en) Vehicle control system and control method
CN113329927A (en) Laser radar based trailer tracking
GB2577485A (en) Control system for a vehicle
US20210354725A1 (en) Control system for a vehicle
US20210012119A1 (en) Methods and apparatus for acquisition and tracking, object classification and terrain inference
JP2015125760A (en) Mine work machine
WO2020160927A1 (en) Vehicle control system and method
GB2571589A (en) Terrain inference method and apparatus
WO2019031137A1 (en) Roadside object detection device, roadside object detection method, and roadside object detection system
CN114442101A (en) Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar
CN110053625B (en) Distance calculation device and vehicle control device
CN113432615B (en) Detection method and system based on multi-sensor fusion drivable area and vehicle
CN113569778A (en) Pavement slippery area detection and early warning method based on multi-mode data fusion
GB2584383A (en) Vehicle control system and method
CN103213579A (en) Lane departure early warning method independent of camera parameters and vehicle system
GB2581954A (en) Vehicle control system and method
CN115769286A (en) Image processing apparatus
CN114599567A (en) Vehicle-mounted cluster tracking system
JP5452518B2 (en) Vehicle white line recognition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20701993

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20701993

Country of ref document: EP

Kind code of ref document: A1