GB2595983A - Acquisition and tracking method and apparatus - Google Patents

Info

Publication number: GB2595983A
Application number: GB2110577.0A
Authority: GB (United Kingdom)
Prior art keywords: image, target, vehicle, movement vector, components
Other versions: GB202110577D0 (en); GB2595983B (en)
Inventors: Boyd Robin; Shamshiri Navid; Raveendran Arun
Original and current assignee: Jaguar Land Rover Ltd
Application filed by Jaguar Land Rover Ltd
Legal status: Granted; Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30241: Trajectory
    • G06T2207/30248: Vehicle exterior or interior
    • G06T2207/30252: Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261: Obstacle

Abstract

The present disclosure relates to a target object tracking system for a vehicle. The target object tracking system includes a processor for receiving image data captured by one or more sensors disposed on the vehicle. The processor is configured to analyse the image data to identify image components IMC(1-3) and to determine a movement vector V(1-3) of each image component IMC(1-3). A first set of image components having a first movement vector is formed and its members are classified as non-target image components; a second set of image component(s) having a different movement vector is formed and its members are classified as target image components. The target object is then acquired in dependence on the image components of the second set.

Description

ACQUISITION AND TRACKING METHOD AND APPARATUS
TECHNICAL FIELD
The present disclosure relates to an acquisition and tracking method and apparatus. More particularly, but not exclusively, the present disclosure relates to a target object tracking method and apparatus; and a target object acquisition method and apparatus. The present disclosure has particular application in a vehicle, such as an automobile.
BACKGROUND
It is known to provide a host vehicle with an object detection system for detecting an obstacle or a target vehicle proximal to the host vehicle. Known object detection systems are often used to provide assistance features in cruise or traffic-jam situations, maintaining a distance to the target vehicle, typically the vehicle in front. The object detection systems are usually optimised for road conditions, where it is possible to make a number of assumptions with relative certainty. For example, it may be assumed that the host vehicle and the target vehicle are both travelling on a predominantly continuous surface and, accordingly, that the position of the target vehicle will change in a progressive manner between frames of the image data. However, these assumptions cannot be made when operating in an off-road environment, where the host vehicle and/or the target vehicle may experience sharp displacements in any direction (due to surface irregularities, for example). Under such operating conditions, the assumptions relied on by known object detection systems are no longer valid, and it can become difficult for a system to establish a valid target and extract it from the surrounding environment.
At least in certain embodiments, the present invention seeks to provide an improved tracking and acquisition apparatus and method.
SUMMARY OF THE INVENTION
Aspects of the present invention relate to a target object tracking method and apparatus; a target object acquisition method and apparatus; a non-transitory computer-readable medium and a vehicle as claimed in the appended claims.
According to a further aspect of the present invention there is provided a target object tracking system for a vehicle, the target object tracking system comprising: a processor for receiving image data captured by one or more sensor disposed on the vehicle, wherein the processor is configured to: analyse the image data to identify image components; determine a movement vector of each image component, the movement vectors each comprising a magnitude and a direction; classify at least one of the image components as a target image component relating to the target object and at least one of the remaining image components as a non-target image component; modify the movement vector of the at least one target image component in dependence on the movement vector of the or each non-target image component; and track the target object in dependence on the modified movement vector of the at least one target image component.
The non-target image component may correspond to a static or stationary feature. The target object tracking system modifies the movement vector of the at least one target image component in dependence on the movement vectors of the non-target image components. At least in certain embodiments, this modification may at least partially correct for changes in the position and/or orientation of the sensing means, for example as a result of movements of the vehicle. Applying this correction to any potential target image components may improve the object detection system, for example over a rough surface. The modified movement vector may provide more accurate positioning information of the target object relative to the vehicle.
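By way of a non-limiting illustration, the correction described above may be sketched as follows: the mean movement vector of the non-target (background) image components approximates the sensor ego-motion between frames, and subtracting it isolates the target's own motion. All names and values here are illustrative assumptions, not part of the disclosed implementation.

```python
# Illustrative sketch: subtract the mean movement vector of the non-target
# (background) image components from the target component's raw vector.
# The background consensus approximates sensor ego-motion, e.g. caused by
# the host vehicle pitching over rough terrain.

def mean_vector(vectors):
    """Component-wise mean of a list of (dx, dy) movement vectors."""
    n = len(vectors)
    return (sum(v[0] for v in vectors) / n, sum(v[1] for v in vectors) / n)

def correct_target_vector(target_v, non_target_vs):
    """Remove the background (ego-motion) estimate from the target vector."""
    ego_dx, ego_dy = mean_vector(non_target_vs)
    return (target_v[0] - ego_dx, target_v[1] - ego_dy)

# Background features all shifted by roughly (-2, +5) pixels between frames,
# e.g. because the camera moved; the target's raw vector contains that shift.
background = [(-2.0, 5.0), (-1.5, 5.5), (-2.5, 4.5)]
print(correct_target_vector((3.0, 5.0), background))  # → (5.0, 0.0)
```

The corrected vector (5.0, 0.0) then reflects only the target's motion relative to the scene, which is the quantity used for tracking.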
The processor may be configured to form at least a first set of said non-target image components. The first set may comprise a plurality of said non-target image components identified as having movement vectors in a first direction. The processor may form said first set by comparing the movement vectors of the image components and identifying at least one image component having a first movement vector comprising a first direction and/or a first magnitude. The processor may be configured to compare a rate of change of the movement vectors of the image components. For example, the processor may compare the rate of change of the magnitude and/or the direction of the movement vectors. The processor may be configured to identify at least one image component having a first movement vector comprising a first direction changing at a first rate and/or a first magnitude changing at a first rate. Thus, the first set may be formed of non-target image components having at least substantially the same direction.
The processor may be configured to compare the magnitude of the movement vectors of the non-target image components. The non-target image components in the first set may have substantially the same magnitude. Thus, the first set may be formed of non-target image components having at least substantially the same magnitude.
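A minimal sketch of how the first set might be formed by comparing the direction and magnitude of the movement vectors is given below. The similarity tolerances and function names are invented for the example; the disclosure does not prescribe particular thresholds.

```python
# Hedged sketch: group image components whose movement vectors share
# substantially the same direction and magnitude; the largest group is
# taken as the first set of non-target (background) components.
import math

def similar(v1, v2, mag_tol=0.5, ang_tol=0.2):
    """True if two (dx, dy) vectors match in magnitude and direction."""
    m1, m2 = math.hypot(*v1), math.hypot(*v2)
    a1, a2 = math.atan2(v1[1], v1[0]), math.atan2(v2[1], v2[0])
    return abs(m1 - m2) <= mag_tol and abs(a1 - a2) <= ang_tol

def form_first_set(comps):
    """Group (name, vector) components; return the largest group."""
    groups = []
    for name, vec in comps:
        for group in groups:
            if similar(vec, group[0][1]):
                group.append((name, vec))
                break
        else:
            groups.append([(name, vec)])
    return max(groups, key=len)

components = [
    ("IMC1", (0.0, 4.0)),   # background feature, e.g. a tree
    ("IMC2", (0.1, 4.1)),   # background feature, e.g. a rock
    ("IMC3", (3.0, 0.5)),   # candidate target vehicle
]
print([name for name, _ in form_first_set(components)])  # → ['IMC1', 'IMC2']
```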
The processor may be configured to determine a correction factor in dependence on the movement vector of the non-target image components in said first set. Alternatively, or in addition, the processor may be configured to modify the movement vector of the at least one target image component by subtracting the movement vector of the non-target image components in said first set.
The processor may be configured to identify image components which are spatially separated from each other. For example, the processor may be configured to identify image components that are distal from each other within the image.
The image data may be video image data captured by one or more image sensors disposed on the vehicle. The processor may be configured to identify the or each image component as a persistent image component. A persistent image component is an image component which may be identified for a predetermined period of time, for example over successive frames of the video image.
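The persistence criterion can be sketched as follows, assuming components carry stable identifiers across frames; the spatial matching that would assign those identifiers is outside this sketch.

```python
# Minimal sketch: a component qualifies as "persistent" once it has been
# identified in at least `min_frames` frames. For simplicity this counts
# appearances across the supplied frames rather than enforcing strict
# succession; a production system would also match components spatially.
from collections import Counter

def persistent_components(frames, min_frames=2):
    """frames: iterable of per-frame sets of component identifiers."""
    counts = Counter(cid for frame in frames for cid in frame)
    return sorted(cid for cid, n in counts.items() if n >= min_frames)

frames = [{"IMC1", "IMC2"}, {"IMC1", "IMC2", "IMC3"}, {"IMC1"}]
print(persistent_components(frames))  # → ['IMC1', 'IMC2']
```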
The target object tracking system may be configured to track a moving target object. The target object may be a pedestrian or cyclist, for example. Alternatively, the target object may be a target vehicle. The target vehicle may be a wheeled vehicle, such as an automobile.
According to a further aspect of the present invention there is provided a vehicle comprising a target object acquisition system as described herein. The vehicle may comprise sensing means for generating the image data. The sensing means may comprise one or more image sensors, such as a camera. The vehicle may be a wheeled vehicle, such as an automobile.
According to a further aspect of the present invention there is provided a method of tracking a target object from a vehicle in dependence on image data captured by one or more sensor disposed on the vehicle; wherein the method comprises: analysing the image data to identify image components; determining a movement vector of each image component, the movement vectors each comprising a magnitude and a direction; classifying at least one of the image components as a target image component relating to the target object and at least one of the remaining image components as a non-target image component; modifying the movement vector of the at least one target image component in dependence on the movement vector of the or each non-target image component; and tracking the target object in dependence on the modified movement vector of the at least one target image component.
The non-target image component may correspond to a static or stationary feature. The method may comprise forming at least a first set of said non-target image components. The first set may comprise a plurality of said non-target image components identified as having movement vectors in a first direction. The method may comprise forming said first set by comparing the movement vectors of the image components. The method may comprise identifying at least one image component having a first movement vector comprising a first direction and/or a first magnitude. The method may comprise forming said first set by comparing the rate of change of the movement vectors of the image components. For example, the method may comprise comparing the rate of change of the magnitude and/or the direction of the movement vectors. The method may comprise identifying at least one image component having a first movement vector comprising a first direction changing at a first rate and/or a first magnitude changing at a first rate.
The method may comprise comparing the magnitude of the movement vectors of the non-target image components. The non-target image components in the first set may have substantially the same magnitude.
The method may comprise modifying the movement vector of the at least one target image component by subtracting the movement vector of the non-target image components in said first set.
The method may comprise identifying image components in the image data which are spatially separated from each other.
The image data may be video image data captured by one or more image sensors disposed on the vehicle; and the or each image component is a persistent image component. A persistent image component is an image component which may be identified for a predetermined period of time, for example over successive frames of the video image.
The method may comprise tracking a moving target object. The target object may be a pedestrian or cyclist, for example. Alternatively, the target object may be a target vehicle. The target vehicle may be a wheeled vehicle, such as an automobile.
According to a further aspect of the present invention there is provided a non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method(s) described herein.
According to a further aspect of the present invention there is provided a target object acquisition system for a vehicle, the target object acquisition system comprising: a processor for receiving image data captured by one or more sensor disposed on the vehicle, wherein the processor is configured to: analyse the image data to identify image components; determine a movement vector of each identified image component, the movement vectors each having a magnitude and a direction; form a first set comprising a plurality of said image components having a first movement vector, and classifying the image components in said first set as non-target image components; form a second set comprising an image component having a second movement vector, the second movement vector being different from the first movement vector, and classifying the or each image component in said second set as a target image component relating to the target object; and acquire the target object in dependence on the target image component in said second set.
The non-target image component may correspond to a static or stationary feature. The first set may comprise a plurality of image components; and the second set consists of a single image component.
The processor may form said first set by comparing the movement vectors of the image components and identifying at least one image component having a first movement vector comprising a first direction and/or a first magnitude. The processor may be configured to compare a rate of change of the movement vectors of the image components. For example, the processor may compare the rate of change of the magnitude and/or the direction of the movement vectors. The processor may be configured to identify at least one image component having a first movement vector comprising a first direction changing at a first rate and/or a first magnitude changing at a first rate.
The processor may form said second set by comparing the movement vectors of the image components and identifying at least one image component having a second movement vector comprising a second direction and/or a second magnitude. The processor may be configured to compare a rate of change of the movement vectors of the image components. For example, the processor may compare the rate of change of the magnitude and/or the direction of the movement vectors. The processor may be configured to identify at least one image component having a second movement vector comprising a second direction changing at a second rate and/or a second magnitude changing at a second rate.
The first direction and the second direction may be different from each other; and/or the first magnitude and the second magnitude may be different from each other.
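The acquisition logic described above can be sketched as a simple consensus split: the most common movement vector among the image components is taken as the first (non-target) vector, and any component whose vector differs from it falls into the second set, from which the target object is acquired. The names, rounding and tolerance below are invented for the example.

```python
# Illustrative acquisition sketch: partition components by movement-vector
# consensus into a non-target (first) set and a target (second) set.
from collections import Counter

def acquire_target(comps, tol=1.0):
    """Split (name, (dx, dy)) components into (non_target, target) name lists."""
    rounded = [(round(v[0]), round(v[1])) for _, v in comps]
    first_vector, _ = Counter(rounded).most_common(1)[0]
    non_target, target = [], []
    for (name, _), rv in zip(comps, rounded):
        dist = ((rv[0] - first_vector[0]) ** 2 + (rv[1] - first_vector[1]) ** 2) ** 0.5
        (non_target if dist <= tol else target).append(name)
    return non_target, target

components = [("IMC1", (0.2, 4.9)), ("IMC2", (-0.1, 5.1)), ("IMC3", (3.8, 1.2))]
print(acquire_target(components))  # → (['IMC1', 'IMC2'], ['IMC3'])
```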
The image components identified in the image data may be spatially separated from each other. For example, the processor may be configured to identify image components that are distal from each other within the image.
The techniques described herein for correcting the movement vector of the at least one target image component are applicable to the target object acquisition system. The processor may be configured to modify the movement vector of the at least one target image component in dependence on the movement vector of the or each non-target image component.
The image data may be video image data captured by one or more image sensors disposed on the vehicle. The or each image component may be a persistent image component. A persistent image component is an image component which may be identified for a predetermined period of time, for example over successive frames of the video image.
The processor may be configured to acquire a moving target object. The target object may be a pedestrian or cyclist, for example. Alternatively, the target object may be a target vehicle.
The target vehicle may be a wheeled vehicle, such as an automobile.
According to a further aspect of the present invention there is provided a vehicle comprising a target object tracking system as described herein. The vehicle may comprise sensing means for generating the image data. The sensing means may comprise one or more image sensors, such as a camera. The vehicle may be a wheeled vehicle, such as an automobile.
According to a further aspect of the present invention there is provided a method of acquiring a target object from a vehicle in dependence on image data captured by one or more sensor disposed on the vehicle; wherein the method comprises: analysing the image data to identify image components; determining a movement vector of each identified image component, the movement vectors each having a magnitude and a direction; forming a first set comprising a plurality of said image components having a first movement vector, and classifying the image components in said first set as non-target image components; forming a second set comprising an image component having a second movement vector, the second movement vector being different from the first movement vector, and classifying the or each image component in said second set as a target image component relating to the target object; and acquiring the target object in dependence on the target image component in said second set.
The non-target image component may correspond to a static or stationary feature. The first set may comprise a plurality of image components. The second set may consist of a single image component.
The method may comprise forming said first set by comparing the movement vectors of the image components. The method may comprise identifying at least one image component having a first movement vector comprising a first direction and/or a first magnitude. The method may comprise forming said first set by comparing the rate of change of the movement vectors of the image components. For example, the method may comprise comparing the rate of change of the magnitude and/or the direction of the movement vectors. The method may comprise identifying at least one image component having a first movement vector comprising a first direction changing at a first rate and/or a first magnitude changing at a first rate.
The method may comprise forming said second set by comparing the movement vectors of the image components. The method may comprise identifying at least one image component having a second movement vector comprising a second direction and/or a second magnitude. The method may comprise forming said second set by comparing the rate of change of the movement vectors of the image components. For example, the method may comprise comparing the rate of change of the magnitude and/or the direction of the movement vectors. The method may comprise identifying at least one image component having a second movement vector comprising a second direction changing at a second rate and/or a second magnitude changing at a second rate.
The first direction and the second direction may be different from each other. The first magnitude and the second magnitude may be different from each other.
The method may comprise identifying image components in the image data which are spatially separated from each other.
The method may comprise modifying the movement vector of the at least one target image component in dependence on the movement vector of the or each non-target image component.
The image data may be video image data captured by one or more image sensors disposed on the vehicle. The or each image component is a persistent image component.
According to a further aspect of the present invention there is provided a non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method(s) described herein.
The host vehicle may be a land vehicle. The target vehicle may be a land vehicle. The term "land vehicle" is used herein to refer to a vehicle configured to apply steering and drive (traction) forces against the ground. The vehicle may, for example, be a wheeled vehicle or a tracked vehicle.
The term "location" is used herein to refer to the relative position of an object on the surface of the earth. Unless indicated to the contrary, either explicitly or implied by the context, references herein to the location of an object refer to the geospatial location of that object.
It is to be understood that by the term 'type of terrain' is meant the material comprised by the terrain over which the vehicle is driving such as asphalt, grass, gravel, snow, mud, rock and/or sand. By 'off-road' is meant a surface traditionally classified as off-road, being surfaces other than asphalt, concrete or the like. For example, off-road surfaces may be relatively compliant surfaces such as mud, sand, grass, earth, gravel or the like. Alternatively, or in addition off-road surfaces may be relatively rough, for example stony, rocky, rutted or the like. Accordingly in some arrangements an off-road surface may be classified as a surface that has a relatively high roughness and/or compliance compared with a substantially flat, smooth asphalt or concrete road surface.
Any control unit or controller described herein may suitably comprise a computational device having one or more electronic processors. The system may comprise a single control unit or electronic controller or alternatively different functions of the controller may be embodied in, or hosted in, different control units or controllers. As used herein the term "controller" or "control unit" will be understood to include both a single control unit or controller and a plurality of control units or controllers collectively operating to provide any stated control functionality. To configure a controller or control unit, a suitable set of instructions may be provided which, when executed, cause said control unit or computational device to implement the control techniques specified herein. The set of instructions may suitably be embedded in said one or more electronic processors. Alternatively, the set of instructions may be provided as software saved on one or more memory associated with said controller to be executed on said computational device. The control unit or controller may be implemented in software run on one or more processors. One or more other control unit or controller may be implemented in software run on one or more processors, optionally the same one or more processors as the first controller. Other suitable arrangements may also be used.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the present invention will now be described, by way of example only, with reference to the accompanying figures, in which: Figure 1 shows a plan view of a host vehicle incorporating a target acquisition and tracking system in accordance with an embodiment of the present invention; Figure 2 shows a side elevation of the following vehicle shown in Figure 1 incorporating the target acquisition and tracking system in accordance with an embodiment of the present invention; Figure 3 shows a schematic representation of the target acquisition and tracking system incorporated into the following vehicle shown in Figures 1 and 2; and Figure 4 illustrates the operation of the target acquisition and tracking system to compare the movement vectors of image components identified in an image captured by an optical system on the host vehicle.
DETAILED DESCRIPTION
A target acquisition and tracking system 1 in accordance with an embodiment of the present invention will now be described with reference to the accompanying figures.
As illustrated in Figures 1 and 2, the target acquisition and tracking system 1 is installed in a host vehicle 2. The host vehicle 2 is a wheeled vehicle, such as an automobile or an off-road vehicle. The target acquisition and tracking system 1 is operable to acquire and/or to track a target vehicle 3 which in the present embodiment is another wheeled vehicle, such as an automobile or an off-road vehicle. The target vehicle 3 may, for example, be a vehicle travelling in front of the host vehicle 2. For example, the target vehicle 3 may be a lead vehicle or a vehicle in front of the host vehicle 2 in a convoy. In this scenario, the host vehicle 2 may be a following vehicle which is travelling along the same route as the target vehicle 3. The target acquisition and tracking system 1 is described herein with reference to a host vehicle reference frame comprising a longitudinal axis X, a transverse axis Y and a vertical axis Z. In certain embodiments, the target acquisition and tracking system 1 may be operable partially or completely to control the host vehicle 2 particularly, but not exclusively, in an off-road driving scenario.
The host vehicle 2 comprises four wheels W1-4. A torque is transmitted to the wheels W1-4 to apply a tractive force to propel the host vehicle 2. The torque is generated by one or more torque generating machine, such as an internal combustion engine or an electric traction machine, and transmitted to the driven wheels W1-4 via a vehicle powertrain. The host vehicle 2 in the present embodiment has four-wheel drive and, in use, torque is transmitted selectively to each of said wheels W1-4. It will be understood that the target acquisition and tracking system 1 could also be installed in a host vehicle 2 having two-wheel drive. The host vehicle 2 in the present embodiment is an automobile having off-road driving capabilities. For example, the host vehicle 2 may be capable of driving on an un-metalled road, such as a dirt road or track. The host vehicle 2 may, for example, be a sports utility vehicle (SUV) or a utility vehicle, but it will be understood that the target acquisition and tracking system 1 may be installed in other types of vehicle. The target acquisition and tracking system 1 may be installed in other types of wheeled vehicles, such as light, medium or heavy trucks. The target vehicle 3 may have the same configuration as the host vehicle 2 or may have a different configuration.
A schematic representation of the target acquisition and tracking system 1 installed in the host vehicle 2 is shown in Figure 3. The target acquisition and tracking system 1 comprises a controller 4 having at least one electronic processor 5 and a memory 6. The processor 5 is operable to receive a data signal Si from a sensing means 7. As described herein, the processor 5 is operable to process the image data signal Si. In the present embodiment, the processor 5 is configured to implement an image processing module 8 to analyse the image data signal Si to acquire and/or to track the target vehicle 3. The processor 5 may optionally also control operation of the host vehicle 2 in dependence on the relative location of the target vehicle 3. For example, the processor 5 may be operable to control a target follow distance D1 between the host vehicle 2 and the target vehicle 3. The processor 5 may, for example, output a target follow distance signal SW to a cruise control module 9. The cruise control module 9 may be selectively operable in a follow mode suitable for controlling a target speed of the host vehicle 2 to maintain the target follow distance D1 between the host vehicle 2 and the target vehicle 3. The cruise control module 9 may output a target speed signal SV1 to an engine control module 10 which controls the output torque transmitted to the wheels W1-4. The cruise control module 9 may also generate a brake control signal for controlling a braking torque applied to said wheels W1-4. The processor 5 may optionally also output a steering control signal SW to control an electronic power assisted steering module (not shown) to control a steering angle of the host vehicle 2.
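The follow-mode behaviour described above might, purely for illustration, be realised as a simple proportional law on the gap error. The patent does not specify a control law; the gain, speed limits and function name below are invented for the example.

```python
# Hypothetical sketch of a follow mode: derive a target speed from the
# measured gap to the target vehicle so as to close the error toward the
# target follow distance D1. All parameters are illustrative assumptions.

def follow_mode_speed(current_speed, measured_gap, target_gap, gain=0.5,
                      min_speed=0.0, max_speed=30.0):
    """Return a clamped target speed (m/s) that reduces the gap error."""
    error = measured_gap - target_gap  # positive: too far behind, speed up
    target_speed = current_speed + gain * error
    return max(min_speed, min(max_speed, target_speed))

print(follow_mode_speed(current_speed=10.0, measured_gap=28.0, target_gap=20.0))  # → 14.0
print(follow_mode_speed(current_speed=10.0, measured_gap=14.0, target_gap=20.0))  # → 7.0
```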
As illustrated in Figure 2, the sensing means 7 is mounted in a forward-facing orientation to establish a detection region in front of the host vehicle 2. The sensing means 7 comprises at least one optical sensor 11 mounted to the host vehicle 2. The sensing means 7 may comprise a single camera. Alternatively, the sensing means 7 may comprise a stereoscopic camera. The at least one optical sensor 11 may be mounted at the front of the vehicle, for example incorporated into a front bumper or engine bay grille; or may be mounted within the vehicle cabin, for example in front of a rear-view mirror. The at least one optical sensor 11 has a field of view FOV having a central optical axis VX extending substantially parallel to a longitudinal axis X of the host vehicle 2. The field of view FOV is generally conical in shape and extends in horizontal and vertical directions. The at least one optical sensor 11 comprises a digital imaging sensor for capturing image data. The image data comprises an image IMG1 corresponding to a scene within the field of view FOV of the at least one optical sensor 11.
The image data is captured substantially in real-time, for example at 30 frames per second.
The at least one optical sensor 11 in the present embodiment is operable to detect light in the visible spectrum of light. The sensing means 7 comprises optics (not shown) for directing the incident light onto an imaging sensor, such as a charge-coupled device (CCD), operable to generate image data for transmission in the image data signal Si. Alternatively, or in addition, the sensing means 7 may be operable to detect light outside of the visible light spectrum, for example in the infra-red range to generate a thermographic image. Alternatively, or in addition, the sensing means 7 may comprise a Lidar sensor for projecting a laser light in front of the host vehicle 2. Other types of sensor are also contemplated.
The sensing means 7 is connected to the controller 4 over a communication bus 12 provided in the host vehicle 2. The image data signal Si is published to the communication bus 12 by the sensing means 7. In the present embodiment, the connection between the sensing means 7 and the controller 4 comprises a wired connection. In alternative embodiments, the connection between the sensing means 7 and the controller 4 may comprise a wireless connection, for example to enable remote positioning of the sensing means 7. By way of example, the sensing means 7 may be provided in a remote targeting system, such as a drone vehicle. The processor 5 is operable to read the image data signal Si from the communication bus 12. The processor 5 extracts the image data from the image data signal Si. The image processing module 8 implements an image processing algorithm to acquire the target vehicle 3 within the image data. The operation of the image processing module 8 will now be described in more detail.
The image processing module 8 analyses the image data to identify one or more image components IMC(n) within the image IMG1. The image components IMC(n) are preferably persistent features within the image IMG1 detectable within the image data for at least a predetermined time period or over a predetermined number of frames, for example two or more successive frames. In certain embodiments, the image components IMC(n) may comprise an identifiable feature or element contained within the image IMG1, for example comprising a plurality of pixels which are present in successive frames. Alternatively, or in addition, the image components IMC(n) may comprise an identified shape or pattern within the image data, for example identified using pattern matching techniques. An embodiment in which the image processing module 8 employs pattern matching techniques to identify the image components IMC(n) will now be described.
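The persistence criterion above can be sketched as follows. This is a minimal illustration, not the module's actual implementation; each frame is represented simply as the set of component identifiers detected in it, and the function name is an assumption.

```python
def persistent_components(frames, min_frames=2):
    """Return the ids of components detected in at least `min_frames`
    successive frames; `frames` is a sequence of sets of component ids."""
    persistent, run = set(), {}
    for frame in frames:
        # A component's run length resets to 1 whenever it reappears
        # after an absence, because absent ids are dropped from `run`.
        run = {c: run.get(c, 0) + 1 for c in frame}
        persistent |= {c for c, n in run.items() if n >= min_frames}
    return persistent
```

With the default of two successive frames, a feature seen in frames 1-3 qualifies, while one seen only in frame 1 does not.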
The image processing module 8 may implement an edge detection algorithm to detect edges within the image data. The image processing algorithm may, for example, be configured to identify points where the image brightness comprises discontinuities, particularly those points arranged into linear or curved line segments which may correspond to an edge. The image processing module 8 may apply a brightness threshold (which may be a predetermined threshold or a dynamic threshold) to identify the edges of the image components IMC(n) within the image IMG1. The identified edge(s) may be incomplete, for example in regions where image discontinuities are less pronounced. The image processing module 8 may complete the edges, for example utilising a morphological closing technique, to form a closed region. The or each closed region is identified as a discrete image component IMC(n). By repeating this process, the image processing algorithm may identify each image component IMC(n) contained within the image data.
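The brightness-discontinuity step can be sketched in a few lines. This is an illustrative, dependency-free version operating on a list-of-rows grayscale image; it marks pixels whose horizontal or vertical brightness difference exceeds a threshold, and omits the subsequent morphological closing stage. The function name and threshold are assumptions for the example.

```python
def edge_mask(image, threshold):
    """Mark pixels where the horizontal or vertical brightness
    discontinuity exceeds `threshold`; `image` is a list of rows
    of brightness values."""
    h, w = len(image), len(image[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Forward differences to the right-hand and lower neighbours.
            right = abs(image[y][x] - image[y][x + 1]) if x + 1 < w else 0
            down = abs(image[y][x] - image[y + 1][x]) if y + 1 < h else 0
            if max(right, down) > threshold:
                mask[y][x] = True
    return mask
```

On a 3x3 image with a bright right-hand column, only the middle column (where the brightness jumps) is flagged as an edge.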
The image processing module 8 is configured to determine if any of the identified image components IMC(n) correspond or potentially correspond to the target vehicle 3. The image processing module 8 uses pattern matching techniques to determine if any of the discrete image components IMC(n) identified in the image data (partially or completely) match one or more predefined patterns. The predefined patterns may, for example, comprise an object model defined in two-dimensions (2-D) or three-dimensions (3-D). The predefined patterns may be stored in the memory 6 and accessed by the image processing module 8. Known pattern matching techniques may be used to perform the comparative analysis. The predefined patterns may, for example, correspond to a shape and/or profile of the target vehicle 3. Optionally, the predefined patterns may define a colour of the target vehicle 3, for example specified by a user or identified during an initial calibration procedure. The image processing module 8 uses the pattern matching techniques to classify each discrete image component IMC(n) which corresponds to the target vehicle 3 as a target image component. In the exemplary image IMG1 shown in Figure 4, a first discrete image component IMC(1) is identified as the target image component. The image processing module 8 classifies each of the remaining discrete image components IMC(n) (i.e. the discrete image component(s) IMC(n) which do not correspond to the target vehicle 3 or which cannot be identified) as a non-target image component. The non-target image component(s) correspond to a static feature having a fixed geospatial location. In the exemplary image IMG1 shown in Figure 4, a second discrete image component IMC(2) and a third discrete image component IMC(3) are identified as non-target image components. The image processing module 8 may be operative to characterise the second and third image components IMC(2), IMC(3).
By way of example, in the image IMG1 shown in Figure 4 the image processing module 8 may use pattern matching techniques to determine that the second and third image components IMC(2), IMC(3) correspond to a tree and a rock respectively. It will be understood that it is not essential that the image processing module 8 characterises the second and third image components IMC(2), IMC(3).
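The classification step can be illustrated with a toy matcher. The feature choice (bounding-box aspect ratio and fill ratio) and the tolerance are assumptions introduced purely for the example; the patent contemplates any known pattern matching technique, including 2-D/3-D object models.

```python
def classify_component(component, patterns, tol=0.15):
    """Toy pattern matcher: label a component as 'target' if its
    width/height aspect ratio and fill ratio each fall within `tol`
    of any stored pattern, else 'non-target'."""
    for p in patterns:
        if (abs(component["aspect"] - p["aspect"]) <= tol
                and abs(component["fill"] - p["fill"]) <= tol):
            return "target"
    return "non-target"
```

A squat, nearly rectangular component close to a stored vehicle model is classified as the target image component; a tall thin one (e.g. a tree) falls through to non-target.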
The image processing module 8 is configured to track the movements of each of the image components IMC(n) in the image IMG1. In particular, the image processing module 8 determines a movement vector V(n) for each discrete image component IMC(n). The movement vectors V(n) each comprise a magnitude and a direction. The image processing module 8 may optionally also determine a rate of change of the magnitude and/or the direction of the movement vectors V(n) (representative of linear acceleration and/or rotational acceleration). In accordance with the present invention, the image processing module 8 applies a correction factor to the movement vector V(n) of the target image component in dependence on the movement vector(s) V(n) of one or more non-target image components.
In the present embodiment, the image processing module 8 compares the movement vectors V(n) of a plurality of the non-target image components. If the movement vectors V(n) of multiple non-target image components are identified as having the same direction and/or the same magnitude, the image processing module 8 groups these non-target image components in a first set. As the non-target image components in the first set are determined as having moved in concert or in unison, the image processing module 8 considers these non-target image components as having a fixed geospatial location (i.e. they are static or stationary features) and that their movement in the image IMG1 is due to local movement of the optical sensor 11, for example as a result of movement of the host vehicle 2. The image processing module 8 applies the movement vector V(n) of the non-target image components as a correction factor to the movement vector V(n) of the target image component to compensate for the movement of the optical sensor 11. In the present embodiment, the movement vector V(n) of the non-target image components is subtracted from the movement vector V(n) of the target image component. Applying this correction to any potential target image components IMC(n) may improve the object detection system, for example over a rough surface. The target vehicle 3 may be tracked in dependence on the corrected movement vector V(n).
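The subtraction described above can be expressed directly. This sketch assumes 2-D image-plane vectors and, as one reasonable choice, averages the first-set vectors before subtracting; the patent simply subtracts "the" common vector, and the function name is an assumption.

```python
def correct_target_vector(target_v, background_vs):
    """Subtract the (averaged) movement vector of the static background
    components from the target's vector, cancelling the apparent motion
    induced by movement of the camera/host vehicle."""
    n = len(background_vs)
    bx = sum(v[0] for v in background_vs) / n
    by = sum(v[1] for v in background_vs) / n
    return (target_v[0] - bx, target_v[1] - by)
```

If the background appears to drift left by 2 units per frame because the host vehicle pitched or turned, a target observed at (1.0, 0.5) is corrected to (3.0, 0.5): its true motion relative to the ground.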
The acquisition of the target vehicle 3 within the image IMG1 enables identification of the location of the target vehicle 3 relative to the host vehicle 2. By correcting for local movement of the optical sensor 11, the image processing module 8 may more accurately determine the relative location of the target vehicle 3. The target acquisition and tracking system 1 may determine the geospatial position of the target vehicle 3 by referencing a known position of the host vehicle 2, for example by referencing an on-board global positioning system (GPS).
The image processing module 8 may track the target vehicle 3 by monitoring its relative location with respect to time. The target acquisition and tracking system 1 may thereby determine a route or path along which the target vehicle 3 is travelling.
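Determining the travelled route can be sketched as integrating successive displacements. This assumes per-frame target displacements already expressed in ground coordinates (e.g. after the ego-motion correction); the function name is illustrative.

```python
def accumulate_route(start, vectors):
    """Integrate per-frame target displacements into a travelled path,
    beginning at the known starting position `start`."""
    path = [start]
    x, y = start
    for dx, dy in vectors:
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```

The resulting path could then serve as a target route for the host vehicle to follow.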
The image processing module 8 may compare the movement vectors V(n) of the non-target image components with the movement vector V(n) of the target image component. The image processing module 8 may form the first set with non-target image components having movement vectors V(n) which are sufficiently different from the movement vector V(n) of the target image component. When comparing said movement vectors V(n), the image processing module 8 may apply one or more of the following set: a magnitude threshold, a rate of change of magnitude threshold, a direction threshold and a rate of change of direction threshold.
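The thresholded comparison above can be sketched as follows. The specific tolerance values are assumptions (the patent leaves them unspecified), and rate-of-change thresholds are omitted for brevity.

```python
import math

def vectors_differ(v1, v2, mag_tol=0.5, dir_tol_deg=10.0):
    """True if two movement vectors are 'sufficiently different':
    their magnitudes differ by more than `mag_tol`, or their
    directions differ by more than `dir_tol_deg` degrees."""
    m1, m2 = math.hypot(*v1), math.hypot(*v2)
    if abs(m1 - m2) > mag_tol:
        return True
    a1 = math.degrees(math.atan2(v1[1], v1[0]))
    a2 = math.degrees(math.atan2(v2[1], v2[0]))
    diff = abs(a1 - a2) % 360
    # Compare the smaller of the two angular separations.
    return min(diff, 360 - diff) > dir_tol_deg
```

Non-target candidates whose vectors differ sufficiently from the target's vector would then be admitted to the first set.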
The comparison of the movement vectors V(n) of the image components IMC(n) within the image IMG1 may facilitate identification of the target image component. The image processing module 8 may, for example, form a second set comprising image components IMC(n) having movement vectors V(n) which are different from the movement vectors V(n) of the first set. The image components IMC(n) in the second set may be classified as target image components relating to the target vehicle 3. The image processing module 8 may compare the movement vectors V(n) for each of the image components IMC(n) to acquire the target image component IMC(n). The image processing module 8 may seek to acquire the target image component IMC(n) by identifying which of the image components IMC(n) have a different movement vector (i.e. a different direction and/or magnitude). The image processing module 8 may form a first set consisting of a plurality of image components IMC(n) each having movement vectors V(n) at least substantially in the same direction and/or having the same magnitude. The image processing module 8 may form a second set consisting of a plurality of image components IMC(n) each having movement vectors V(n) at least substantially in the same direction and/or having the same magnitude. For example, the first set may consist of a plurality of image components IMC(n) each having movement vectors V(n) in a first direction; and the second set may consist of a single image component IMC(n) having a movement vector V(n) in a second direction. If the first and second directions are different from each other, the image processing module 8 may classify the image components IMC(n) in the second set as corresponding to the target vehicle 3. The image processing module 8 may perform spatial distribution analysis of the image components IMC(n) within the image IMG1 to determine whether the first set or the second set corresponds to the target image component.
For example, if the image components IMC(n) in the first set are distributed throughout the image IMG1 or in different regions of the image IMG1, the second set is more likely to correspond to objects having a fixed geospatial location and the image processing module 8 classifies these image components IMC(n) as non-target image components. Conversely, if the image components IMC(n) in the second set are grouped together, or the second set consists of one image component IMC(n), the second set is more likely to correspond to a moving object within the image IMG1 and the image processing module 8 classifies these image components IMC(n) as target image components.
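The spatial distribution heuristic can be illustrated with a one-dimensional sketch: the set whose components are spread more widely across the image is taken as the static background. Using only horizontal positions and max-min spread is a simplifying assumption for the example.

```python
def likely_background(set_a_positions, set_b_positions):
    """Pick the set with the wider spatial spread as the static
    background: fixed features tend to be distributed across the
    image, while a moving target's component(s) are grouped together.
    Positions are horizontal pixel coordinates; returns 'A' or 'B'."""
    def spread(xs):
        return max(xs) - min(xs) if xs else 0.0
    return "A" if spread(set_a_positions) >= spread(set_b_positions) else "B"
```

Components scattered at x = 10, 300 and 620 would thus be classed as background against a tight cluster at x = 330-340.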
The operation of the image processing module 8 will now be described with reference to the exemplary image IMG1 shown in Figure 4. The image processing module 8 is operable to analyse the image IMG1 to identify a plurality of the image components IMC(n). The image processing module 8 implements the pattern matching algorithm to identify the image component IMC(n) corresponding to the target vehicle 3. In the illustrated arrangement, a first image component IMC(1) is classified as the target image component; and second and third image components IMC(2), IMC(3) are classified as non-target image components. The image processing module 8 determines movement vectors V(n) for each of the image components IMC(n). A first movement vector V(1) is calculated for the target image component IMC(1); and second and third movement vectors V(2), V(3) are calculated for the non-target image components IMC(2), IMC(3). As illustrated in Figure 4, the first movement vector V(1) is in a first direction; and the second and third movement vectors V(2), V(3) are both in a second direction, the first and second directions being different. In the present case, the second and third movement vectors V(2), V(3) are substantially equal to each other. In order to improve the acquisition and/or tracking of the target vehicle 3, the image processing module 8 subtracts one of the second and third movement vectors V(2), V(3) from the first movement vector V(1). This correction may allow at least partially for movements of the optical sensor 11 on the host vehicle 2. Thus, the corrected first movement vector V(1) may provide more accurate positioning information of the target vehicle 3 relative to the host vehicle 2.
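The Figure 4 scenario can be worked end-to-end with a small sketch: the most common vector among the components is taken as the camera's ego-motion, components sharing it are treated as non-targets, and the remaining component's vector is corrected by subtraction. Treating the modal vector as the background motion is a simplifying assumption; the patent groups vectors by direction and/or magnitude rather than by exact equality.

```python
from collections import Counter

def acquire_and_correct(vectors):
    """`vectors` maps component id -> (dx, dy). The most common vector
    is taken as the apparent ego-motion; components sharing it are
    non-targets, and each remaining component's vector is corrected
    by subtracting the common vector."""
    common, _ = Counter(vectors.values()).most_common(1)[0]
    corrected = {i: (v[0] - common[0], v[1] - common[1])
                 for i, v in vectors.items() if v != common}
    return common, corrected
```

With V(1) = (1, 0) and V(2) = V(3) = (-2, 0), the ego-motion is identified as (-2, 0) and the corrected target vector becomes (3, 0), i.e. the target vehicle is actually moving away faster than the raw image-plane vector suggests.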
The target acquisition and tracking system 1 may determine the route taken by the target vehicle 3 and generate a corresponding target route for the host vehicle 2. At least in certain embodiments, the image processing module 8 may calculate the speed and/or the trajectory of the target vehicle 3. The calculated speed and/or trajectory at a given location may be defined as a movement vector Vn having a magnitude (representing the target vehicle speed) and direction (representing the trajectory of the target vehicle 3).
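Deriving such a speed-and-trajectory vector from two successive position fixes can be sketched as follows. Planar coordinates and a simple heading convention (degrees anticlockwise from the x-axis) are assumptions for the example.

```python
import math

def movement_vector(p0, p1, dt):
    """Speed (magnitude, units/s) and heading (direction, degrees)
    of the target between two position fixes `dt` seconds apart."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    speed = math.hypot(dx, dy) / dt
    heading = math.degrees(math.atan2(dy, dx))
    return speed, heading
```

A target that moves from (0, 0) to (3, 4) metres in one second has a speed of 5 m/s on a heading of about 53 degrees.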
The target acquisition and tracking system 1 has particular application in an off-road environment. When the host vehicle 2 is travelling off-road, the host vehicle 2 may be subject to sudden changes in direction and/or orientation that make the acquisition and tracking of the target vehicle 3 more challenging. The target acquisition and tracking system 1 may be selectively activated when the host vehicle 2 is travelling off-road, for example in response to a user input or automatically when an off-road driving mode is selected.
It will be appreciated that various modifications may be made to the embodiment(s) described herein without departing from the scope of the appended claims. The present invention has been described with particular reference to sensing means 7 which is forward facing to enable acquisition and tracking of a target vehicle 3 in front of the host vehicle 2. It will be understood that the invention may be implemented in other configurations, for example comprising sensing means 7 which is side-facing or rear-facing.
The target acquisition and tracking system 1 has been described with particular reference to identifying a single target vehicle 3. It will be understood that the target acquisition and tracking system 1 may be operable to identify more than one target vehicle, for example to identify a plurality of target vehicles 3 travelling in front of the host vehicle 2 in a convoy.
Various further aspects and features of the present invention are set out in the following numbered clauses:

CLAUSES:

1. A target object tracking system for a vehicle, the target object tracking system comprising: a processor for receiving image data captured by one or more sensors disposed on the vehicle, wherein the processor is configured to: analyse the image data to identify image components IMC(n); determine a movement vector V(n) of each image component IMC(n), the movement vectors V(n) each comprising a magnitude and a direction; classify at least one of the image components IMC(n) as a target image component relating to the target object and at least one of the remaining image components IMC(n) as a non-target image component; modify the movement vector V(n) of the at least one target image component in dependence on the movement vector V(n) of the or each non-target image component; and track the target object in dependence on the modified movement vector V(n) of the at least one target image component.

2. A target object tracking system as per clause 1, wherein the processor is configured to form at least a first set of said non-target image components, the first set comprising a plurality of said non-target image components identified as having movement vectors V(n) in a first direction.

3. A target object tracking system as per clause 2, wherein the processor is configured to compare the magnitude of the movement vectors V(n) of the non-target image components, the non-target image components in the first set having substantially the same magnitude.

4. A target object tracking system as per clause 2 or clause 3, wherein the processor is configured to modify the movement vector V(n) of the at least one target image component by subtracting the movement vector V(n) of the non-target image components in said first set.

5. A target object tracking system as per any one of clauses 1 to 4, wherein the processor is configured to identify image components IMC(n) which are spatially separated from each other.

6. A target object tracking system as per any one of the preceding clauses, wherein the image data is video image data captured by one or more image sensors disposed on the vehicle; and the processor is configured to acquire the or each image component IMC(n) as a persistent image component.
7. A target object tracking system as per any one of the preceding clauses, wherein the target object is a moving target.
8. A vehicle comprising a target object tracking system as per any one of the preceding clauses.

9. A method of tracking a target object from a vehicle in dependence on image data captured by one or more sensors disposed on the vehicle; wherein the method comprises: analysing the image data to acquire image components IMC(n); determining a movement vector V(n) of each image component IMC(n), the movement vectors V(n) each comprising a magnitude and a direction; classifying at least one of the image components IMC(n) as a target image component relating to the target object and at least one of the remaining image components IMC(n) as a non-target image component; modifying the movement vector V(n) of the at least one target image component in dependence on the movement vector V(n) of the or each non-target image component; and tracking the target object in dependence on the modified movement vector V(n) of the at least one target image component.

10. A method as per clause 9 comprising forming at least a first set of said non-target image components, the first set comprising a plurality of said non-target image components identified as having movement vectors V(n) in a first direction.

11. A method as per clause 10 comprising comparing the magnitude of the movement vectors V(n) of the non-target image components, the non-target image components in the first set having substantially the same magnitude.
12. A method as per clause 10 or clause 11, wherein modifying the movement vector of the at least one target image component comprises subtracting the movement vector of the non-target image components in said first set.
13. A method as per any one of clauses 9 to 12 comprising identifying image components in the image data which are spatially separated from each other.
14. A method as per any one of clauses 9 to 13, wherein the image data is video image data captured by one or more image sensors disposed on the vehicle; and the or each image component IMC(n) is a persistent image component.

15. A method as per any one of clauses 9 to 14, wherein the target object is a target vehicle.
16. A non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method claimed in any one of clauses 9 to 15.
17. A target object acquisition system for a vehicle, the target object acquisition system comprising: a processor for receiving image data captured by one or more sensors disposed on the vehicle, wherein the processor is configured to: analyse the image data to identify image components IMC(n); determine a movement vector V(n) of each identified image component IMC(n), the movement vectors V(n) each having a magnitude and a direction; form a first set comprising a plurality of said image components IMC(n) having a first movement vector, and classifying the image components in said first set as non-target image components; form a second set comprising an image component IMC(n) having a second movement vector, the second movement vector being different from the first movement vector, and classifying the or each image component IMC(n) in said second set as a target image component relating to the target object; and acquire the target object in dependence on the target image component in said second set.

18. A target object acquisition system as per clause 17, wherein said first set comprises a plurality of image components IMC(n); and the second set consists of a single image component IMC(n).

19. A target object acquisition system as per clause 17 or clause 18, wherein forming said first set comprises comparing the movement vectors V(n) of the image components and identifying at least one image component IMC(n) having a first movement vector comprising a first direction and/or a first magnitude.

20. A target object acquisition system as per any one of clauses 17, 18 or 19, wherein forming said second set comprises comparing the movement vectors V(n) of the image components and identifying at least one image component having a second movement vector V(n) comprising a second direction and/or a second magnitude.
21. A target object acquisition system as per clause 19 and clause 20, wherein the first direction and the second direction are different from each other; and/or the first magnitude and the second magnitude are different from each other.
22. A target object acquisition system as per any one of clauses 17 to 21, wherein the image components IMC(n) identified in the image data are spatially separated from each other.

23. A target object acquisition system as per any one of clauses 17 to 22 wherein the image data is video image data captured by one or more image sensors disposed on the vehicle; and the or each image component IMC(n) is a persistent image component.
24. A vehicle comprising a target object acquisition system as claimed in any one of the clauses 17 to 23.
25. A method of acquiring a target object from a vehicle in dependence on image data captured by one or more sensors disposed on the vehicle; wherein the method comprises: analysing the image data to acquire image components IMC(n); determining a movement vector V(n) of each identified image component IMC(n), the movement vectors V(n) each having a magnitude and a direction; forming a first set comprising a plurality of said image components IMC(n) having a first movement vector, and classifying the image components in said first set as non-target image components; forming a second set comprising an image component IMC(n) having a second movement vector, the second movement vector being different from the first movement vector, and classifying the or each image component in said second set as a target image component relating to the target object; and acquiring the target object in dependence on the target image component in said second set.

26. A method as per clause 25, wherein said first set comprises a plurality of image components IMC(n); and the second set consists of a single image component IMC(n).

27. A method as per clause 25 or clause 26, wherein forming said first set comprises comparing the movement vectors V(n) of the image components and identifying at least one image component IMC(n) having a first movement vector comprising a first direction and/or a first magnitude.

28. A method as per any one of clauses 25, 26 or 27, wherein forming said second set comprises comparing the movement vectors V(n) of the image components IMC(n) and identifying at least one image component having a second movement vector comprising a second direction and/or a second magnitude.
29. A method as per clause 27 and clause 28, wherein the first direction and the second direction are different from each other; and/or the first magnitude and the second magnitude are different from each other.
30. A method as per any one of clauses 25 to 29, wherein the image components IMC(n) identified in the image data are spatially separated from each other.

31. A method as per any one of clauses 25 to 30, wherein the image data is video image data captured by one or more image sensors disposed on the vehicle; and the or each image component IMC(n) is a persistent image component.
32. A non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method claimed in any one of clauses 25 to 31.

Claims (16)

CLAIMS

1. A target object acquisition system for a vehicle, the target object acquisition system comprising: a processor for receiving image data captured by one or more sensors disposed on the vehicle, wherein the processor is configured to: analyse the image data to identify image components IMC(n); determine a movement vector V(n) of each identified image component IMC(n), the movement vectors V(n) each having a magnitude and a direction; form a first set comprising a plurality of said image components IMC(n) having a first movement vector, and classifying the image components in said first set as non-target image components; form a second set comprising an image component IMC(n) having a second movement vector, the second movement vector being different from the first movement vector, and classifying the or each image component IMC(n) in said second set as a target image component relating to the target object; and acquire the target object in dependence on the target image component in said second set.

2. A target object acquisition system as claimed in claim 1, wherein said first set comprises a plurality of image components IMC(n); and the second set consists of a single image component IMC(n).

3. A target object acquisition system as claimed in claim 1 or claim 2, wherein forming said first set comprises comparing the movement vectors V(n) of the image components and identifying at least one image component IMC(n) having a first movement vector comprising a first direction and/or a first magnitude.

4. A target object acquisition system as claimed in any one of claims 1, 2 or 3, wherein forming said second set comprises comparing the movement vectors V(n) of the image components and identifying at least one image component having a second movement vector V(n) comprising a second direction and/or a second magnitude.

5. A target object acquisition system as claimed in claim 3 and claim 4, wherein the first direction and the second direction are different from each other; and/or the first magnitude and the second magnitude are different from each other.

6. A target object acquisition system as claimed in any one of claims 1 to 5, wherein the image components IMC(n) identified in the image data are spatially separated from each other.

7. A target object acquisition system as claimed in any one of claims 1 to 6 wherein the image data is video image data captured by one or more image sensors disposed on the vehicle; and the or each image component IMC(n) is a persistent image component.

8. A vehicle comprising a target object acquisition system as claimed in any one of claims 1 to 7.

9. A method of acquiring a target object from a vehicle in dependence on image data captured by one or more sensors disposed on the vehicle; wherein the method comprises: analysing the image data to acquire image components IMC(n); determining a movement vector V(n) of each identified image component IMC(n), the movement vectors V(n) each having a magnitude and a direction; forming a first set comprising a plurality of said image components IMC(n) having a first movement vector, and classifying the image components in said first set as non-target image components; forming a second set comprising an image component IMC(n) having a second movement vector, the second movement vector being different from the first movement vector, and classifying the or each image component in said second set as a target image component relating to the target object; and acquiring the target object in dependence on the target image component in said second set.

10. A method as claimed in claim 9, wherein said first set comprises a plurality of image components IMC(n); and the second set consists of a single image component IMC(n).

11. A method as claimed in claim 9 or claim 10, wherein forming said first set comprises comparing the movement vectors V(n) of the image components and identifying at least one image component IMC(n) having a first movement vector comprising a first direction and/or a first magnitude.

12. A method as claimed in any one of claims 9, 10 or 11, wherein forming said second set comprises comparing the movement vectors V(n) of the image components IMC(n) and identifying at least one image component having a second movement vector comprising a second direction and/or a second magnitude.

13. A method as claimed in claim 11 and claim 12, wherein the first direction and the second direction are different from each other; and/or the first magnitude and the second magnitude are different from each other.

14. A method as claimed in any one of claims 9 to 13, wherein the image components IMC(n) identified in the image data are spatially separated from each other.

15. A method as claimed in any one of claims 9 to 14, wherein the image data is video image data captured by one or more image sensors disposed on the vehicle; and the or each image component IMC(n) is a persistent image component.

16. A non-transitory computer-readable medium having a set of instructions stored therein which, when executed, cause a processor to perform the method claimed in any one of claims 9 to 15.
GB2110577.0A 2018-03-01 2018-04-24 Acquisition and tracking method and apparatus Active GB2595983B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201811007657 2018-03-01
GB1806626.6A GB2571586B (en) 2018-03-01 2018-04-24 Acquisition and tracking method and apparatus

Publications (3)

Publication Number Publication Date
GB202110577D0 GB202110577D0 (en) 2021-09-08
GB2595983A 2021-12-15
GB2595983B GB2595983B (en) 2022-09-07




