WO2020049089A1 - Methods and systems for determining the position of a vehicle - Google Patents


Info

Publication number
WO2020049089A1
WO2020049089A1 (PCT/EP2019/073677)
Authority
WO
WIPO (PCT)
Prior art keywords
image
data
vehicle
data indicative
scene
Prior art date
Application number
PCT/EP2019/073677
Other languages
French (fr)
Inventor
Jeroen TRUM
Original Assignee
Tomtom Global Content B.V.
Priority date
Filing date
Publication date
Application filed by Tomtom Global Content B.V.
Publication of WO2020049089A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S19/485 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an optical system or imaging system
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • the present invention relates, in certain aspects at least, to methods and systems for determining the position of a vehicle using images obtained from a camera associated with the vehicle.
  • the present invention is of particular utility in the context of determining the position of an autonomous vehicle, although it is not limited thereto.
  • the invention relates to a database for use in identifying objects in an image, and methods of using such a database.
  • while a digital map may be relatively accurate in terms of the relative distances between points, it may not so accurately reflect absolute positions in the real world.
  • Techniques are known for determining the position of a vehicle using images captured by a camera associated with the vehicle. For example, objects such as traffic signs may be detected in captured images, and used to determine the position of the vehicle based upon a known position of the detected traffic sign. Localization of a vehicle in this manner based on a detected object is typically used in conjunction with other localization techniques. For example, position data obtained based on the analysis of captured images may be used to correct a rough position of a vehicle obtained using position data, such as GPS data. This may help to more accurately determine the absolute position of the vehicle, for example accurately enough to obtain a lane-level degree of accuracy.
  • Known techniques for determining the position of a vehicle using images obtained from a camera associated with the vehicle utilise the same method to detect and analyse all objects used in position determination.
  • a particular algorithm with the same operating parameters is used in each instance. For example, this may implement a neural network trained in a particular manner.
  • the Applicant has realised that there is a need for an improved method of determining the position of a vehicle.
  • a method of determining a position of a vehicle when traversing a road network comprising;
  • the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the object in the image of the scene obtained from the camera;
  • the algorithm data used is obtained from a database storing specific algorithm data in respect of that object.
  • the algorithm used may be tailored to a particular instance of a particular object.
  • the same algorithm data may be applicable to more than one particular object, or to a particular object in more than one context i.e. to more than one instance of a particular object.
  • the same algorithm data may be appropriate for each instance of the sign. It is thus not necessary that the algorithm data in respect of each instance of a particular object in the database is different.
  • the same algorithm data may be applicable to each instance of a particular object e.g. a specific traffic sign, with different algorithm data being specified for other specific traffic signs.
  • there will still typically be a relatively limited set of objects, or instances of a particular object.
  • a particular object may appear at various points in a road network.
  • a rectangular 50km/hr speed limit sign may appear multiple times.
  • the Applicant has realised that when analysing an image of a scene obtained by a camera associated with a vehicle to determine the presence of the sign, or to perform any other steps required in relation to at least a portion of the image expected to contain the sign which may be necessary in determining the position of the vehicle, using the same algorithm in respect of each instance of the sign throughout the network may not provide optimal results. Accordingly, using the same algorithm data in respect of each instance of the sign may not lead to the best possible position determination. Instead, in accordance with the invention, the algorithm used for a particular instance of the sign may be customised to that instance.
  • the algorithm may be selected to provide the optimal results in respect of that particular instance of the sign.
  • the sign may appear superimposed on a grey background e.g. where the sign appears in front of a concrete bridge.
  • the sign may have a green background, corresponding to vegetation at the side of the road. The best algorithm to be used in analysing an image thought to contain the sign will differ in these two situations.
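The per-instance association described above can be sketched as a simple record structure. This is an illustrative sketch only, not taken from the patent: the instance identifiers, the field names and the parameter values (e.g. `contrast_boost`) are all hypothetical stand-ins for whatever algorithm data a real database would hold.

```python
# Two instances of the same 50 km/h sign share appearance data but carry
# different, hypothetical algorithm configurations, tuned to the background
# each instance is seen against (grey concrete bridge vs. green vegetation).
OBJECT_DB = {
    "sign_50_instance_001": {  # sign in front of a concrete bridge
        "position": (52.37, 4.90),
        "appearance": "circular_50_kmh",
        "algorithm": {"detector": "correlation", "background": "grey",
                      "contrast_boost": 1.8},
    },
    "sign_50_instance_002": {  # sign against roadside vegetation
        "position": (52.41, 4.95),
        "appearance": "circular_50_kmh",
        "algorithm": {"detector": "correlation", "background": "green",
                      "contrast_boost": 1.1},
    },
}

def algorithm_for(instance_id):
    """Look up the algorithm data stored for a specific object instance."""
    return OBJECT_DB[instance_id]["algorithm"]
```

The point of the structure is that a lookup keyed on the instance, not the object type, can return different algorithm data for two signs that look identical.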
  • any reference to an object being found, expected to be found, or detected in an image, or similar, should be understood as referring to an image of the object being found, expected to be found, or detected in the image etc.
  • the present invention extends to a system for carrying out a method in accordance with any of the embodiments of the invention described herein.
  • a system for determining a position of a vehicle when traversing a road network comprising;
  • the present invention in these further aspects may include any or all of the features described in relation to method aspects of the invention, and vice versa, to the extent that they are not mutually inconsistent.
  • the system of the present invention in any of its aspects may comprise means for carrying out any of the steps of the method described in relation to any of the aspects of the invention.
  • the means for carrying out any of the steps of the method of the invention in any of its aspects may comprise a set of one or more processors configured, e.g. programmed, for doing so.
  • a given step may be carried out using the same or a different set of processors to any other step.
  • Any given step may be carried out using a combination of sets of processors.
  • the methods of the present invention are computer implemented methods.
  • at least one camera is associated with the vehicle.
  • One or more cameras may be associated with the vehicle.
  • the or each camera is mounted to the vehicle.
  • the image obtained from a camera associated with the vehicle may be a frame of a video captured by a camera.
  • the image obtained from a camera is an image of a scene encountered by the vehicle when traversing the road network.
  • the method comprises obtaining, from the database, data indicative of one or more object which is expected to be encountered by the vehicle in the future.
  • data in respect of a plurality of objects is obtained from the database.
  • the one or more object may be one or more object expected to be encountered in a future time period.
  • data in respect of each object included in the database and expected to be encountered in the future time period is obtained from the database.
  • the future period may be a predetermined period.
  • the future time period may be defined as desired. In general, the longer the time period used, the more objects may be expected to be encountered in that period, and hence the larger the amount of data which will need to be obtained from the database.
  • the length of the future time period may therefore be defined having regard to the amount of data which it is desired to be stored locally.
  • the step of obtaining the object data in respect of future object(s) to be encountered may be performed continually or at intervals.
  • the one or more object expected to be encountered may be identified in any suitable manner. It will be appreciated that, as each object in the database is associated with data indicative of a position of the object, the position data may be used to identify the object(s) expected to be encountered e.g. over a future time period. In some embodiments, the identification of the object(s) may be performed using at least the position data associated with the object(s) in the database and position data indicative of the approximate position of the vehicle e.g. a current position. The position data indicative of the approximate position of the vehicle may be used to determine the expected position of the vehicle at one or more future times e.g. over the future time period. This may be e.g. based upon the speed of travel of the vehicle.
  • the position data in respect of the vehicle may be data obtained from a positioning module of the vehicle e.g. GPS data. Such position data may be regarded as assumed approximate position data, in contrast to the more accurate position data which may be determined using the results of the analysis of the image(s) of an object.
  • the position of the vehicle determined using the results of the analysis may be used to refine an assumed approximate position of the vehicle.
  • the one or more objects may be identified based on data indicative of a route expected to be traversed by the vehicle e.g. over the future time period.
  • the expected route may be a predetermined route being navigated or an inferred route e.g. based upon knowledge of the road network and e.g. previous routes travelled by the vehicle etc.
  • other data may alternatively or additionally be used in identifying an object expected to be encountered e.g. data indicative of a road stretch associated with the object where the object data comprises such data.
  • the one or more objects are objects associated with positions such that they may be expected to be encountered by the vehicle based upon one or more expected future positions of the vehicle e.g. over the future time period.
  • the method may comprise identifying the or each one of said one or more objects in respect of which data is obtained from the database based upon at least the data indicative of the position of the object and data indicative of one or more expected future positions of the vehicle.
  • the one or more expected positions of the vehicle may be determined based upon data indicative of an approximate position of the vehicle or data indicative of a route expected to be traversed by the vehicle.
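The identification of expected objects from projected vehicle positions, as described above, can be sketched as follows. The straight-line motion model, the 10 s look-ahead horizon and the 50 m radius are illustrative assumptions; the text only says that expected future positions may be determined e.g. from the approximate position and speed of travel.

```python
import math

def expected_objects(objects, vehicle_pos, heading_deg, speed_mps,
                     horizon_s=10.0, radius_m=50.0):
    """Return the ids of objects lying near any position the vehicle is
    expected to occupy over the look-ahead horizon.

    `objects` maps object id -> (x, y) in metres in a local planar frame;
    the motion model is a simple straight line along the current heading.
    """
    theta = math.radians(heading_deg)
    hits = []
    for t in range(0, int(horizon_s) + 1):   # sample once per second
        px = vehicle_pos[0] + speed_mps * t * math.cos(theta)
        py = vehicle_pos[1] + speed_mps * t * math.sin(theta)
        for oid, (ox, oy) in objects.items():
            if math.hypot(ox - px, oy - py) <= radius_m and oid not in hits:
                hits.append(oid)
    return hits
```

A route-based variant would replace the straight-line samples with points along the expected route.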
  • the method of the present invention may be implemented in any desired manner.
  • the steps may be performed in the same or different locations.
  • the method may be performed by any combination of one or more servers and one or more vehicle mounted systems, or may be performed solely by one or more server, or solely by one or more vehicle mounted system.
  • the method is performed by a vehicle mounted system, such as a navigation system.
  • the navigation system may be a navigation system of an ADAS (Advanced Driver Assistance System), or system for autonomously driving a vehicle.
  • the present invention is of utility in any context where it is desired to obtain a more accurate determination of the position of a vehicle, not merely in the context of assisted or autonomous driving.
  • the database from which the object data is obtained is a remote database, and the method may comprise locally storing the obtained data in respect of one or more objects expected to be encountered by the vehicle. Where the obtained data is stored locally, the data may be stored for a limited period, such as until a time period to which the data relates has passed.
  • the stored data may be continually updated, as travel proceeds, such that it always relates to a future time period, i.e. to objects which are expected to be encountered in a future time period. This may help to reduce the amount of data which needs to be stored.
  • the method may be performed by a server.
  • the database may then be stored by the server or may be stored remote from the server, or combinations thereof.
  • the server may be arranged to receive from the vehicle e.g. from a vehicle mounted system, the image of a scene from a camera associated with the vehicle.
  • the image data may be transmitted from the vehicle to the server for use in the method of the present invention.
  • the server may then perform the step of using the obtained data to analyse the image of the object in the image obtained from the camera, and use the results of the analysis in determining a position of the vehicle.
  • the server may then be arranged to generate data indicative of the position of the vehicle, and, in embodiments, transmit such data to the vehicle, e.g. to a vehicle mounted system.
  • a vehicle may transmit camera image data to a remote server, which then performs analysis of the image data using data obtained from the database to determine position data for the vehicle.
  • the position determination may be performed off-board.
  • the server may obtain the data indicative of object(s) expected to be encountered from the database in accordance with any of the embodiments described above e.g. based on the expected future position of the vehicle.
  • Such arrangements might be envisaged where available bandwidth enabled the communication of data between the vehicle and server as necessary to determine the position of the vehicle approximately in real time, reducing the need for data to be stored local to the vehicle.
  • the steps of the method may be performed in any combination of positions, and need not be performed solely by a server, or solely by a vehicle mounted system.
  • a server might perform the analysis based on image data transmitted to the server from a vehicle, and then transmit the results of the analysis to the vehicle for position determination, and so on. Numerous possibilities are envisaged, depending upon the storage capacities of the various components, and/or bandwidth availability etc.
  • the one or more objects in respect of which data is obtained from the database are objects whose images are expected to be encountered by the vehicle in the future, i.e. such that their image(s) may be expected to be found in an image captured by a camera associated with the vehicle.
  • the method comprises analysing an image obtained from the camera associated with the vehicle using the obtained data in respect of an object to attempt to detect an image of the object in the image obtained from the camera.
  • the object data used in analysing the image obtained from the camera is in respect of one or more object whose image is expected to be found in the image obtained from the camera. It will be appreciated that it is possible to know which object(s) is expected to appear in a given image obtained from the camera.
  • the object data may be selected based upon a position associated with the image and the position of the object as indicated by the position data for the object.
  • the one or more object whose data is used in respect of a particular camera image may be a subset of the one or more objects in respect of which data is obtained from the database.
  • data in respect of a plurality of objects may be obtained from the database, with the data in respect of the applicable object(s) from the plurality of objects whose image(s) may be expected to be present in a particular image being used in analysing that image.
  • the image will be associated with a position which may correspond to a particular one of a number of positions traversed by the vehicle in the time period for which object data was obtained from the database.
  • the data may be obtained from the database in advance of, or simultaneously with the obtaining of the image of a scene from the camera associated with the vehicle.
  • Obtaining the data in advance in relation to object(s) expected to be found in an image of a scene obtained from the camera may be advantageous in enabling the method to be performed more rapidly, allowing the determining of accurate position data for the vehicle approximately in real-time.
  • the subsequent process of analysing the obtained image only requires analysis of the image based on a limited set of possible object data, rather than searching an entire database to extract possible relevant object data.
  • potentially relevant object data may be continually or periodically updated based on the expected future position of the vehicle, so that a limited set of object data for use in analysing images obtained is always available. This has particular benefits in the context of ADAS or autonomous vehicle navigation.
  • the object data is obtained from a database comprising data indicative of a plurality of objects expected to be encountered by vehicles traversing the road network, wherein the database comprises, for each object, data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image obtained by a camera associated with a vehicle traversing the road network to attempt to detect the object in the image in use.
  • a database storing data indicative of objects which may be expected to be encountered by vehicles traversing a road network, in which the data indicative of each object is associated with algorithm data for use in analysing an image obtained from a camera associated with a vehicle to attempt to detect the object in the image is advantageous in its own right.
  • the database comprising data indicative of a plurality of objects expected to be encountered by vehicles traversing a road network, wherein the database comprises, for each object, at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained by a camera associated with a vehicle traversing the road network to attempt to detect the object in the image in use.
  • any of the features or steps described in relation to one aspect of the invention are equally applicable to any other aspect of the invention, unless the context demands otherwise.
  • the further features relating to a database described herein are equally applicable to the database of this further aspect, and the database from which the object data is obtained in accordance with the earlier aspects of the invention.
  • the database from which the object data is obtained may incorporate any of the features described in relation to a database.
  • a method of determining a position of a vehicle comprising;
  • a database comprising data indicative of a plurality of objects expected to be encountered by vehicles traversing a road network, wherein the database comprises, for each object, at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained by a camera associated with a vehicle traversing the road network to attempt to detect the object in the image in use;
  • the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the object in the image of a scene obtained from the camera;
  • a system for determining a position of a vehicle comprising;
  • a database comprising data indicative of a plurality of objects expected to be encountered by vehicles traversing a road network, wherein the database comprises, for each object, at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained by a camera associated with a vehicle traversing the road network to attempt to detect the object in the image in use;
  • the present invention in accordance with these further aspects may include any or all of the features described in relation to the other aspects of the invention, to the extent that they are not mutually exclusive.
  • the method of the present invention in any of its aspects or embodiments may be performed in relation to one or more further image of a scene obtained from a camera associated with the vehicle, and expected to contain an image of the same object or objects.
  • the image may be obtained from the same or a different camera where multiple cameras are associated with the vehicle.
  • the images of a scene may correspond to images obtained from a given camera when the vehicle is located at different positions and/or to images obtained from cameras mounted to different positions on the vehicle.
  • the method may comprise obtaining one or more further image of a scene from a camera associated with the vehicle and expected to contain an image of the same object, using the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the or each further image of a scene to attempt to detect an image of the object in the image; and, for each image of a scene in which an image of the object is deemed to be found, using the results of the analysis in determining a position of the vehicle.
  • the results of the analysis of more than one image obtained from a camera and containing an image of the object may be used in determining the position of the vehicle.
  • the or each image used in the analysis includes an image of the object from a different viewpoint. Using the results of analysing images of the same object from different viewpoints in this way may provide more precise position data.
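Using the results from several images of the same object, as described above, implies some rule for combining the per-image results. The text does not specify one, so the simple averaging below is purely an assumed combination rule for illustration.

```python
def fuse_position_estimates(estimates):
    """Combine per-image vehicle position estimates (x, y), each obtained
    from a detection of the same object in a different view, by simple
    averaging. (The combination rule is an assumption; a real system might
    weight estimates by viewpoint geometry or detection confidence.)"""
    n = len(estimates)
    x = sum(e[0] for e in estimates) / n
    y = sum(e[1] for e in estimates) / n
    return (x, y)
```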
  • the data indicative of any one or ones or all objects stored in the database in the invention in accordance with any of its aspects or embodiments may include any of the features described below.
  • the data indicative of each object may include any of these features.
  • the data indicative of any one or ones of the objects which is obtained from the database, and/or which is used in attempting to detect an image of the object in an image of a scene captured by the camera may be in accordance with any of the embodiments described.
  • the algorithm data associated with the data indicative of an object may be indicative of a set of one or more algorithms for use in analysing at least a portion of an image of a scene expected to contain an image of the object to attempt to detect the image of the object, and optionally a set of one or more parameters for configuring the or each algorithm for performing the analysis.
  • the algorithm data may comprise a set of one or more parameters for configuring the or each algorithm.
  • the method may comprise configuring the or each algorithm using the respective set of one or more parameters, and may then comprise using the resulting set of one or more configured algorithms to analyse the at least a portion of the image.
  • the step of configuring the or each algorithm in respect of an object may be performed in relation to the or each object for which data is obtained from the database. This may be performed in advance of obtaining an image of a scene expected to contain an image of the object.
  • the method may comprise storing data indicative of the or each configured algorithm once determined for at least a given time period.
  • the time period may correspond to a time period in which the object to which the algorithm data relates is expected to be observed by a vehicle traversing the road network. As described above, this may correspond to a time period for which object data is stored locally. This then provides an appropriately configured algorithm ready to operate on an image of a scene expected to contain an image of the object as soon as it is obtained.
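Configuring the or each algorithm in advance, and holding the configured result ready for when the image arrives, might be sketched as below. The factory function and the threshold parameter are hypothetical; a configured detector here is just a closure standing in for, e.g., a tuned matcher or network.

```python
def make_detector(params):
    """Hypothetical factory: turn stored parameter data into a configured
    detector. Here the detector simply thresholds a match score."""
    threshold = params["threshold"]
    return lambda score: score >= threshold

def preconfigure(algorithm_data):
    """Configure a detector per object in advance of image capture, so an
    appropriately configured algorithm is ready to operate on an image
    expected to contain the object as soon as it is obtained."""
    return {oid: make_detector(p) for oid, p in algorithm_data.items()}
```

The resulting dictionary would be retained for the time period over which the corresponding objects are expected to be observed, mirroring the local object-data store.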
  • the invention extends to a method of providing the database provided or used in accordance with the invention in any of its aspects and embodiments, the method comprising determining, for each object, algorithm data comprising a set of one or more algorithms, and optionally a set of one or more parameters for configuring the or each algorithm, wherein the algorithm data for an object is optimised in relation to one or more criteria for detecting the object in an image of a scene obtained from a camera associated with a vehicle, and storing the data in association with data indicative of the object.
  • the criteria may be selected as desired, depending upon the particular requirements for a given application or user.
  • the criteria may be detection probability, computation time, or a particular combination thereof.
  • the criteria may provide a tradeoff between detection probability and computation time or latency.
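Optimising the algorithm data for an object against such criteria could be sketched as a weighted trade-off between detection probability and computation time. The candidate fields and the weights are illustrative assumptions, not values from the text.

```python
def choose_algorithm(candidates, weight_prob=1.0, weight_time=0.1):
    """Pick the candidate algorithm maximising a weighted trade-off between
    detection probability (higher is better) and computation time in
    milliseconds (lower is better). Weights are illustrative."""
    def score(c):
        return weight_prob * c["p_detect"] - weight_time * c["time_ms"]
    return max(candidates, key=score)
```

With the weights above, a slightly less reliable but much faster detector can win, reflecting a latency-sensitive application; a safety-critical application would raise `weight_prob`.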
  • the algorithm data may be indicative of any suitable algorithm(s).
  • the data may be indicative of a set of one or more algorithms.
  • suitable algorithms may include feature point detector algorithms, such as SIFT, SURF, ORB, FAST and BRIEF.
  • the algorithm data may be indicative of a neural network, or a correlation detector.
  • the algorithm data may be indicative of multiple types of algorithm.
  • the algorithms may be used at different stages in the analysis process.
  • the algorithm data may be indicative of one or more of a feature point detector, neural network or correlation detector.
  • the or each algorithm is preferably scale invariant, and alternatively or additionally may be rotation invariant.
  • the algorithm data is indicative of a set of one or more algorithms for use in analysing at least a portion of an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the object in the image of a scene encountered by the vehicle when traversing the road network.
  • the algorithm data may be indicative in any manner of the set of one or more algorithms.
  • the data indicative of an algorithm may be indicative of an already configured algorithm, or alternatively the algorithm data may comprise data indicative of one or more algorithm, and data for configuring the or each algorithm.
  • the algorithm data is indicative of multiple algorithms, there may be a mixture of pre-configured algorithms, and algorithms associated with configuring data to be used in configuring the algorithms.
  • the database may comprise data indicative of multiple instances of a particular object associated with different positions, wherein different instances of the object are associated with different algorithm data.
  • the stored data may be a database indicative of objects which may be encountered when traversing at least a portion of any one or ones of the road elements of the road network.
  • the database may relate to the entire road network or any subset thereof.
  • the algorithm data in a database may comprise one or more algorithms stored in the database, or a pointer to where such data is stored.
  • an identifier for the algorithm may be stored, enabling the algorithm data to be looked up and retrieved from a separate store. This may reduce the number of instances of storing the same data. Similar techniques may be used in relation to parameters for configuring algorithms.
  • any other data described as being stored in the database may be stored in another location, provided that a suitable pointer is provided thereto in the database.
  • the database may be a distributed database.
  • the database may be made up of multiple (sub)databases, which databases may be stored in the same or differing locations, and any particular (sub)database may be stored distributed across multiple locations. Any suitable techniques may be used to reduce the quantity of data stored. For example, a dictionary type arrangement may be used. Certain more generic data e.g. general dimension data, in relation to a particular object may be stored in one (sub)database, with instance specific data e.g. location data for a particular instance of an object, being stored in another (sub)database, with a pointer to the applicable generic data in the other database for the object. Some examples are given below.
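The generic/instance split described above might look like the following sketch. The field names, keys and values are illustrative assumptions; the point is that each instance record stores only its own location and orientation plus a pointer (key) into the generic sub-database.

```python
# Generic per-object data, stored once.
generic_db = {
    "sign:50kmh": {
        "dimensions_m": (0.6, 0.6),          # general dimension data
        "appearance": "ref:appearance/50kmh",  # pointer to appearance data
    },
}

# Instance-specific data: one record per physical occurrence of the sign.
instance_db = [
    {"generic": "sign:50kmh", "position": (52.37, 4.90, 2.5), "heading_deg": 180.0},
    {"generic": "sign:50kmh", "position": (52.38, 4.91, 2.4), "heading_deg": 95.0},
]

def lookup(instance):
    """Merge instance-specific data with the referenced generic data."""
    merged = dict(generic_db[instance["generic"]])
    merged.update(instance)
    return merged

record = lookup(instance_db[0])
```

With many instances of the same sign, the generic record (dimensions, appearance reference) is stored once rather than duplicated per instance.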
  • Some algorithm and appearance data may also be generic to multiple instances of an object, and may be stored using a similar structure, with specific instance data referencing generic data, to avoid storing multiple instances of the generic data.
  • the data indicative of the appearance of an object may be a pointer to such data, which may be stored in a separate list. This may be particularly applicable where there are multiple instances of the same object, and each instance uses the same appearance data.
  • the data indicative of an object includes position data.
  • the position data is preferably three dimensional position data.
  • the data indicative of an object further comprises data indicative of the shape and/or orientation of the object.
  • the object data includes position, shape and orientation data.
  • the position, and, where applicable shape and orientation data are used in analysing at least a portion of an image of a scene to attempt to detect an image of the object in the image. It will be appreciated that the shape and orientation data may enable detection to occur regardless of the particular position of the object relative to the vehicle when captured in an image obtained by a camera associated with the vehicle. Such data also enables the position of the vehicle relative to the object to be accurately determined using an image of the object in an image captured by the camera associated with the vehicle.
  • the data indicative of an object may further comprise data indicative of a road stretch with which the object is associated.
  • the road stretch is the stretch from which, when traversed by a vehicle, the object may be expected to appear in an image obtained from a camera associated with the vehicle.
  • a road stretch herein refers to at least a portion of one or more road elements of a road network.
  • the road stretch may be defined in any suitable manner. For example, this may be by reference to a digital map comprising data indicative of a plurality of road segments representing road elements of the road network.
  • the road stretch may be defined by reference to at least a portion of one or more road segments of the digital map.
  • any suitable position reference system may be used.
  • the position reference system may be a map agnostic reference system, such as the OpenLR™ system.
  • the data indicative of a road stretch may be indicative of a geographic stretch. Such data may be defined using a polyline.
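A polyline-based road stretch, as mentioned above, might be represented as below. The field names and coordinate values are illustrative assumptions; the length helper is a crude planar calculation included only to show the structure (a real system would use a geodesic distance).

```python
# A road stretch defined map-agnostically as a polyline of WGS-84
# (latitude, longitude) vertices.
road_stretch = {
    "polyline_wgs84": [
        (52.3702, 4.8952),
        (52.3708, 4.8961),
        (52.3715, 4.8970),
    ],
}

def polyline_length_deg(polyline):
    """Crude planar polyline length in degrees, for illustration only."""
    total = 0.0
    for (lat1, lon1), (lat2, lon2) in zip(polyline, polyline[1:]):
        total += ((lat2 - lat1) ** 2 + (lon2 - lon1) ** 2) ** 0.5
    return total

length = polyline_length_deg(road_stretch["polyline_wgs84"])
```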
  • the data indicative of a road stretch may optionally be associated with data enabling the road stretch to be determined in respect of a given digital map.
  • the data indicative of the appearance of an object is data which may be (and in embodiments is) used in attempting to detect the presence of the object in an obtained image i.e. by the algorithm(s) associated with the object.
  • the data may be a stored image of the object or any other data describing the appearance of at least a part of the object.
  • the data may be indicative of one or more feature points and/or descriptors for the object.
  • the most appropriate form of the data may depend upon the algorithm data associated with the object, and vice versa.
  • the selection of the appearance data to be included in the database may be made by reference to the algorithm data for an object, or vice versa, when providing the database. It will be appreciated that, for different instances of the same object, different appearance data may be stored. For example, where the algorithm data for an object is indicative of a neural network, it may be most appropriate to store appearance data in the form of a complete image of the object.
  • the obtained appearance data may be used directly in performing the analysis of the at least a portion of the image of the scene, or some processing of the appearance data may be required to provide data that may be used in the analysis.
  • the appearance data comprises an image of the object.
  • the step of analysing the image obtained from the camera may comprise analysing at least a portion, and preferably only a portion of the image obtained from the camera to attempt to detect the image of the object.
  • references to e.g. the algorithm data being for use or used in analysing an image of a scene may involve the data being used to analyse at least a portion of the image of the scene. Analysing only a portion of the image may enable the process to be achieved more rapidly, and reduces the processing power required.
  • the method comprises identifying an area of interest in the image obtained from the camera, the area of interest being expected to contain an image of the object, and analysing the area of interest using the obtained data indicative of at least the appearance of the object and the algorithm data associated with the obtained data.
  • the area of interest corresponds to only a portion of the obtained image.
  • the step of analysing the area of interest may comprise providing the area of interest as input into one or more algorithms defined by the algorithm data associated with the object.
  • the system may know which object is expected to be found in a particular image of a scene, and, in some cases, where in the scene the object is expected to be found. Defining the area of interest in which the image of an object is expected to appear may be carried out using the data obtained from the database in relation to the object, and/or may involve performing a pre-processing step to initially detect the object. This initial detection may be a coarse detection step pending detailed analysis of the area of interest to confirm the detection of the object with an appropriate level of certainty. This pre-processing might be performed using a neural network e.g. for all objects.
  • Such a two-step approach may only be performed where improved performance is to be expected. For example, where a detection algorithm defined for a particular sign is relatively expensive in terms of processing power when performed over a large area, and a faster algorithm exists to detect red sign boundaries, and the sign is known to have such a boundary, then the faster algorithm may be used to identify a smaller area of interest for the detection algorithm to operate on.
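The two-step approach in the example above (a fast red-boundary scan bounding the area of interest, followed by the expensive detector on that region only) can be sketched as follows. The image representation and the redness threshold are illustrative assumptions.

```python
def is_red(pixel):
    """Cheap per-pixel test for a red sign boundary (assumed threshold)."""
    r, g, b = pixel
    return r > 150 and g < 80 and b < 80

def red_bounding_box(image):
    """Return (min_row, min_col, max_row, max_col) of red pixels, or None.

    This is the fast first step; the expensive detection algorithm would
    then be run only inside the returned box.
    """
    rows = [i for i, row in enumerate(image) if any(is_red(p) for p in row)]
    cols = [j for row in image for j, p in enumerate(row) if is_red(p)]
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))

# Synthetic 6x6 "image": grey background with a red region in the
# lower-right corner standing in for a red-bordered sign.
GREY, RED = (120, 120, 120), (200, 30, 30)
image = [[GREY] * 6 for _ in range(6)]
for i in range(3, 6):
    for j in range(3, 6):
        image[i][j] = RED

roi = red_bounding_box(image)
```

A real pipeline would do this with vectorised image operations, but the control flow — coarse filter first, detailed detector on the reduced region — is the same.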
  • the data indicative of an object comprises data indicative of where, in a scene encountered by a vehicle when traversing the road network, the object may be expected to appear. For example, the object may be expected to appear on the right, or the top left of the scene.
  • the data indicative of the object may further comprise data indicative of a lane of the road element with which the position of the object is associated. This might be applicable e.g. for a road sign located above a particular lane of the road.
  • the step of using the algorithm data to analyse the at least a portion of an image of a scene to attempt to detect an image of an object may involve causing a set of one or more algorithms defined by the algorithm data to operate on the at least a portion of the image of the scene in any suitable manner to determine whether there is a sufficient correspondence between the appearance of the object as determined using the appearance data in respect of the object, and the at least a portion of the image of the scene that is analysed, to result in a determination that an image of the object has been detected.
  • the criteria for determining whether the object has been detected based on the results of the analysis may be set as desired e.g. depending upon the confidence in the result required, computation time constraints etc.
  • the method may comprise determining a likelihood that an image of the object is present in the image obtained from the camera using the results of the analysis. Such data may be used in determining whether or not to use the results of the analysis in respect of a given object in determining the position of the vehicle.
  • the method may comprise inputting the at least a portion of the image to the or each algorithm defined by the algorithm data.
  • the set of one or more algorithms may be arranged to attempt to determine a match between each of one or more features identified in the at least a portion of the image of the scene that is analysed and a feature of the object as determined using the appearance data for the object.
  • the features may be feature points.
  • the appearance data for the object may be indicative of one or more feature points, or the method may further comprise using the appearance data to identify one or more feature points of the object.
  • the method may comprise analysing the image to identify the one or more feature points.
  • the one or more features may be at least some of the at least a portion of the image of the scene that is analysed and at least a portion of an image of the object as defined by the appearance data for the object. This may be appropriate where the algorithm is a neural network which attempts to recognise an image of the object according to the obtained data in the obtained image of a scene.
  • the step of analysing the at least a portion of the image may simply attempt to detect an image of the object i.e. to determine whether it is present or not with a desired level of certainty (which may be set by reference to one or more criteria). This may provide sufficient data to enable the results of the analysis to be used in determining a position of the vehicle.
  • the position of the object according to the object data may be used together with other data as desired e.g. an assumed approximate position of the vehicle, data indicative of the position of the camera etc. to determine a position of the vehicle.
  • the analysis to attempt to detect the presence of an image may include other steps to provide additional data for use in determining a position of the vehicle.
  • the method may comprise determining a projection between one or more feature points of the object determined using the appearance data for the object and one or more feature points detected in the at least a portion of the image of the scene that is analysed.
  • the method may comprise determining a homography between the one or more feature points determined using the appearance data and the one or more feature points detected in the analysed at least a portion of the image of the scene.
  • a homography computation involves obtaining a projection between points in one plane and points in another plane.
  • a homography projection may identify subsets of features from those identified using the appearance data, and those identified through analysis of the at least a portion of the image of the scene, which can be considered to have a suitable correspondence.
  • the method may comprise determining a projection of the object, or one or more parts thereof, into the obtained image.
  • an object may be deemed to be found in the obtained image when a homography may be found between a predetermined number of features in the obtained image and a corresponding feature determined using the appearance data for the object.
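The homography step in the bullets above can be sketched as a minimal four-point direct linear transform in pure Python. This is a sketch under simplifying assumptions — exact correspondences and exactly four points; a production system would use more matches, a robust estimator such as RANSAC, and a library solver rather than hand-rolled elimination.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / M[r][r]
    return x

def homography_from_4(src, dst):
    """Estimate the 3x3 homography (with h33 = 1) mapping src -> dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_h(H, pt):
    """Project a point through the homography."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Feature points from the stored appearance data (unit square, assumed)
# matched to feature points detected in the camera image.
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(10.0, 20.0), (12.0, 20.0), (12.0, 22.0), (10.0, 22.0)]
H = homography_from_4(src, dst)
```

Once `H` is found with sufficiently many inlier correspondences, the object is deemed detected, and the same projection feeds the subsequent position computation.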
  • the step of attempting to detect an image of the object in the image obtained from the camera may or may not be a distinct step from the step of using the results of the analysis in determining a position of the vehicle.
  • matching between feature points of the portion of the image of the scene and one or more feature points determined using the appearance data may be performed to determine the presence of the image of the object, and, at the same time, determine a position of the object relative to the vehicle.
  • the matching process may involve determining a projection between the feature points of the image obtained from the camera and described by the appearance data, which projection may be used in determining the position of the vehicle.
  • the method may comprise determining the position of the object relative to the vehicle using the results of the analysis. For example, a distance e.g. a three- dimensional distance of the object from the vehicle may be determined. This may then be used in determining an absolute position of the vehicle, or a position relative to another reference system e.g. the road network.
  • the results of the analysis may be used in any suitable manner to determine the position of the vehicle.
  • the results of the analysis may be used together with other data in the determination. This may enable greater confidence in the result to be achieved.
  • the other data may include camera calibration data and/or data in respect of the object obtained from the database, such as position data for the object, and optionally orientation, shape, and/or dimension data.
  • the other data may alternatively or additionally include assumed approximate position data obtained in respect of the vehicle e.g. GPS or other position data. Data obtained in relation to the detection of an image of the object in other images e.g. camera frames may also be used.
  • the method may comprise using the results of the analysis to determine a distance e.g. a three-dimensional distance of the object from the vehicle. This may be carried out using the results of a projection e.g. homography obtained between feature points in the obtained image and feature points determined based on the stored appearance data for an object, and camera calibration data.
  • the step of using the results of the analysis to determine the distance of the object from the vehicle may comprise using camera calibration data.
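In the simplest case, calibration data enters the distance computation through the pinhole model: with a focal length in pixels and the object's real-world size from the database, the apparent pixel size of the detected image gives an approximate range. The numbers below are illustrative assumptions, and the formula assumes a fronto-parallel object.

```python
def distance_to_object(focal_px, real_height_m, pixel_height):
    """Pinhole-model range estimate: distance = f * H / h."""
    return focal_px * real_height_m / pixel_height

# A 0.6 m tall sign (database dimension data) imaged 48 px tall by a
# camera with an 800 px focal length (calibration data).
d = distance_to_object(800.0, 0.6, 48.0)
```

The full method in the bullets above generalises this: the homography-based projection recovers the object's pose relative to the camera rather than only a scalar distance.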
  • the determined position of the vehicle may be used as desired. In some embodiments the position is used in determining a position of the vehicle relative to a digital map. The position determined may be in relation to a segment of the digital map. Alternatively or additionally, the determined position of the vehicle is used in navigation of or by the vehicle. The vehicle may or may not be an autonomous vehicle. The determined position may, alternatively or additionally, be used in determining a position of the vehicle in relation to a given lane of a multiple lane road element.
  • the method may comprise refining an assumed approximate position of the vehicle using the results of the analysis (and optionally other data) e.g. an approximate position based on GPS or other position data.
  • the results of the analysis may be used as an input for a statistical localization method e.g. based upon a particle filter.
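How a detection result might feed a particle-filter localisation step can be sketched as below. The one-dimensional setup, Gaussian noise model and all numbers are illustrative assumptions: each particle is a candidate vehicle position, re-weighted by how well the distance it predicts to the known object position agrees with the distance derived from the image analysis.

```python
import math
import random

def measurement_update(particles, object_pos, measured_dist, sigma=1.0):
    """Weight particles by a Gaussian likelihood of the measured distance,
    then normalise the weights to sum to one."""
    weights = []
    for p in particles:
        predicted = abs(object_pos - p)
        err = predicted - measured_dist
        weights.append(math.exp(-0.5 * (err / sigma) ** 2))
    total = sum(weights)
    return [w / total for w in weights]

random.seed(0)
object_pos = 100.0        # object position from the database
true_vehicle_pos = 90.0
measured_dist = 10.2      # distance derived from the image analysis
# Particles spread around an assumed approximate (e.g. GPS) position.
particles = [true_vehicle_pos + random.uniform(-5, 5) for _ in range(200)]
weights = measurement_update(particles, object_pos, measured_dist)
estimate = sum(p * w for p, w in zip(particles, weights))
```

A full localiser would also include a motion update and resampling; the sketch shows only the measurement step that the detection result supplies.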
  • where an image of the object has not been found in the obtained image from the camera based on analysis of the at least a portion of the image, this may be because the object is no longer present, or is obscured for some reason. For example, the object may not appear as expected in an obtained image of a scene if it is obscured from view by a tree in the captured image, which did not obscure the object when the object was added to the database e.g. if this was at a different time of year. Of course, an object may also not be found in the area of interest if it is no longer present.
  • the objects are objects which may be visible, under at least some conditions, when traversing element(s) of the road network. It will be appreciated that an object may not be visible to the naked eye under all conditions. For example, the object may be visible only in certain light conditions, or at certain times of year. An object might be obscured by vegetation at certain times of year. An object may be visible only when imaged by a camera apparatus and not by the naked eye e.g. the object may be visible in an image obtained by an infrared camera but not using the naked eye.
  • Each object is an object which may be expected to form part of an image of a scene captured by a camera associated with a vehicle when traversing a road element of the network.
  • An object for which data is stored may be any static object.
  • an object may be selected from a sign, lamp post, bridge front, road marking, building, or skyline.
  • a sign may be a traffic sign, or may include a brand name or logo.
  • the sign might be an advertising sign.
  • the plurality of objects are or include traffic signs.
  • the database may include data indicative of more than one type of object.
  • the plurality of objects may include any one or ones of the types of objects mentioned above.
  • the invention may be more broadly applicable to determining a position of a vehicle through analysis of a representation of a scene obtained from a set of one or more sensors associated with the vehicle.
  • the database will then include algorithm data to be used in analysing a representation of a scene obtained using a set of one or more sensors associated with a vehicle to attempt to detect a representation of the object.
  • the representation may be a 2-D representation e.g. an image, or may be a three-dimensional representation.
  • the representation of a scene or object may be an image of the scene or object.
  • the set of one or more sensors associated with a vehicle may be a set of one or more image sensors.
  • the image sensors may comprise one or more cameras as in the earlier aspects, but may be any form of image sensor.
  • the representation may be obtained through three-dimensional scanning.
  • the representation may be a radar or LIDAR (laser scanning) representation.
  • the set of one or more sensors may comprise one or more laser sensors, or one or more radar sensors.
  • the database used would include appropriate algorithm data for use in conjunction with the type of sensor(s) and/or representation involved. For example, suitable algorithms used for detecting features within a LIDAR representation may be identified for use.
  • the present invention provides a method of determining a position of a vehicle when traversing a road network, the method comprising;
  • the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing a representation of a scene obtained using a set of one or more sensors associated with a vehicle to attempt to detect a representation of the object in the representation of the scene obtained from the set of one or more sensors; obtaining a representation of a scene from a set of one or more sensors associated with the vehicle while traversing the road network;
  • a system for determining a position of a vehicle when traversing a road network comprising;
  • the methods of the invention in its various aspects described herein may be performed on the fly e.g. while the vehicle is traversing road elements of the road network.
  • the technology described herein comprises computer software specifically adapted to carry out the methods herein described when installed on a data processor, a computer program element comprising computer software code portions for performing the methods herein described when the program element is run on a data processor, and a computer program comprising code adapted to perform all the steps of a method or of the methods herein described when the program is run on a data processor.
  • the data processor may be a microprocessor system, a programmable FPGA (field programmable gate array), etc.
  • the technology extends to a computer program product comprising computer readable instructions adapted to carry out any or all of the methods described herein when executed on suitable data processing means.
  • the technology described herein also extends to a computer software carrier comprising such software which when used to operate a data processing apparatus or system comprising a data processor causes in conjunction with said data processor said apparatus or system to carry out the steps of the methods of the technology described herein.
  • a computer software carrier could be a physical storage medium such as a ROM chip, CD ROM, RAM, flash memory, or disk, or could be a signal such as an electronic signal over wires, an optical signal or a radio signal such as to a satellite or the like.
  • implementation may comprise a series of computer readable instructions either fixed on a tangible, non- transitory medium, such as a computer readable medium, for example, diskette, CD, DVD, ROM, RAM, flash memory, or hard disk. It could also comprise a series of computer readable instructions transmittable to a computer system, via a modem or other interface device, either over a tangible medium, including but not limited to optical or analogue communications lines, or intangibly using wireless techniques, including but not limited to microwave, infrared or other transmission techniques.
  • the series of computer readable instructions embodies all or part of the functionality previously described herein.
  • Such computer readable instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Further, such instructions may be stored using any memory technology, present or future, including but not limited to, semiconductor, magnetic, or optical, or transmitted using any communications technology, present or future, including but not limited to optical, infrared, or microwave. It is contemplated that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation, for example, shrink wrapped software, pre- loaded with a computer system, for example, on a system ROM or fixed disk, or distributed from a server or electronic bulletin board over a network, for example, the Internet or World Wide Web.
  • Figure 1 illustrates examples of objects which may be included in a database used in embodiments of the invention
  • Figure 2A illustrates one way in which data relating to signs may be stored in a database
  • Figure 2B illustrates one way in which the algorithm data in respect of the signs included in the database in Figure 2A may be stored;
  • Figure 3 is a flow chart illustrating one embodiment of the invention.
  • any static objects which may be expected to appear in an image captured by a camera associated with a vehicle traversing a road network may be included in a database as described below, and subsequently used in position determination.
  • Examples of such objects include lamp posts, bridge fronts, specific road markings, clearly visible buildings e.g. building fronts, brand names, skylines, etc.
  • Figure 1 illustrates images of certain static objects which may be used. Appearance data included in the database indicative of the object may correspond to such an image, or be derived therefrom e.g. a set of feature points etc.
  • a database is constructed storing instances of traffic signs. For each sign, data indicative of a 3D location, shape and orientation of the sign is stored.
  • the sign is stored in association with information identifying a particular road stretch with which the sign is associated. This may be a road stretch along which the sign is expected to be encountered i.e. be detected in an image obtained from a camera associated with a vehicle traversing the stretch.
  • the road stretch may be defined in relation to digital map data e.g. being at least a portion of one or more segments of a digital map.
  • the digital map comprises segments representing elements of a road network.
  • the road stretch may be defined in a map agnostic manner e.g. as a geographic stretch. This might be as a polyline e.g. of WGS-84 coordinates.
  • further information may be associated with the geographical stretch to enable the stretch to be determined in relation to a digital map. This may be carried out using a map agnostic location referencing system.
  • Such systems define a location or stretch in a manner which enables the location or stretch to be determined in relation to any digital map.
  • Such systems may enable a location defined in relation to one digital map to be determined in relation to another, different digital map. Examples of such map agnostic systems include the OpenLR™ system.
  • Additional information may be associated with the sign in the database.
  • This information may include information about where in a scene i.e. a scene captured by a camera associated with a vehicle traversing a road network, the sign may be expected to appear. This might be e.g. on the right, top left etc. This information may be used when identifying an area of interest of an image obtained from a camera associated with a vehicle for analysis to try to detect an image of the sign. If it is known where the sign is likely to appear, other parts of the image may be excluded from analysis, saving computation time. Optionally, for example for signs above the road, this can be extended with lane information, so that even a more accurate portion of the image can be selected for analysis when the current lane is known.
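Turning the stored "where in the scene" hint into a pixel region of interest, as described above, might look like the following. The region labels and the half-frame split are assumptions; the lane refinement mentioned above would simply narrow the horizontal band further.

```python
def area_of_interest(region, width, height):
    """Map a coarse region label to an (x, y, w, h) crop rectangle."""
    half_w, half_h = width // 2, height // 2
    regions = {
        "left":      (0, 0, half_w, height),
        "right":     (half_w, 0, width - half_w, height),
        "top_left":  (0, 0, half_w, half_h),
        "top_right": (half_w, 0, width - half_w, half_h),
    }
    return regions[region]

# A sign stored as expected in the top left of a 1920x1080 frame: only
# that quadrant needs to be passed to the detection algorithm.
roi = area_of_interest("top_left", 1920, 1080)
```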
  • the data relating to each given sign i.e. each particular instance of a sign, is associated with algorithm data to be used in attempting to detect an image of the sign in an image captured by a camera associated with a vehicle when traversing a road network.
  • the algorithm data may be indicative of an identifier for a particular algorithm and a set of parameters that can be used to configure the algorithm for detecting this specific instance of a sign.
  • the algorithm and/or parameters for use in detecting one instance of a sign e.g. a 50 km/h sign, may differ from those for use in detecting another instance of the same sign, depending upon the context of the sign. For example, in one location the sign may appear on a grey background corresponding to a concrete road bridge, while in another location it appears against the sky.
  • Different algorithms and/or configuration of algorithms may be more appropriate e.g. result in more reliable detection of an image of the sign in these different contexts.
  • Examples of detection algorithms are Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), Features from Accelerated Segment Test (FAST), and Binary Robust Independent Elementary Features (BRIEF).
  • neural networks or correlation detectors may be used.
  • Detection algorithms used are preferably scale-invariant to enable them to provide matching for multiple distances of the object from a camera. However, if the algorithm is not scale invariant, a pre-scaling search may be applied.
  • the detection algorithm is at least to some extent rotationally invariant.
  • the optimal algorithm for detecting a particular instance of a sign will depend upon the context of the sign e.g. background, and also the requirements of a particular user of the database. For example, some users may wish to prioritise reliability of detection, while others will be more concerned with processing time or memory required. This may depend upon the implementation of a position determining or other system using the database e.g. whether the algorithm is to be run by a server or local device, the way in which the output of the detection algorithm is to be used e.g. whether it is to be used by an autonomous vehicle, in which case accuracy is paramount etc.
  • Neural networks may be expensive in terms of memory and processing time, and may be less suitable for a lightweight embedded system. Feature detector/matcher algorithms will give a precise location, but are specific to a particular instance of a sign.
  • the database includes data relating to the appearance of the sign for use in attempting to detect the sign in an image.
  • This may be an image of the sign, or may be e.g. feature points and descriptors for that sign.
  • the type of appearance data may be selected to be the most appropriate data for use by the particular algorithm specified for the sign, e.g. feature points and descriptors which are expected to enable that algorithm to reliably detect the sign in an image.
  • the algorithm specified for a given instance of a sign may be selected based upon the available appearance data.
  • a complete image of the sign may be more applicable e.g. where the algorithm implements a neural network.
  • the optimal detection algorithm in respect of a given instance of a sign may vary between users.
  • the requirements of a particular user may be taken into account when selecting the algorithm data and appearance data for each instance of a sign.
  • Different algorithms and parameters for configuring the algorithms may be trialled in relation to an instance of a sign, with the parameters being tuned to obtain the optimal algorithm, parameters and appearance data e.g. feature set, for a particular user's requirements e.g. balancing criteria such as processing time, memory required, reliability etc.
  • the optimal algorithm may be the best algorithm in relation to providing a desired balance of performance and detection quality e.g. a tradeoff between detection probability and computation time or latency.
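The tradeoff above could be made concrete by scoring candidate algorithm configurations with a weighted combination of detection probability and computation time. The candidate figures and weights below are illustrative assumptions; a user prioritising reliability would raise `w_prob`, while an embedded user would raise the time penalty.

```python
# Candidate (algorithm, parameter) configurations for one sign instance,
# with measured detection probability and per-frame computation time.
candidates = [
    {"name": "ORB/fast",  "detect_prob": 0.82, "time_ms": 4.0},
    {"name": "SIFT/full", "detect_prob": 0.95, "time_ms": 22.0},
    {"name": "NN/coarse", "detect_prob": 0.90, "time_ms": 60.0},
]

def score(c, w_prob=1.0, w_time=0.01):
    """Higher is better: reward detection probability, penalise latency."""
    return w_prob * c["detect_prob"] - w_time * c["time_ms"]

best = max(candidates, key=score)
```

The winning configuration (and its tuned parameters) is what would then be stored as the algorithm data for that sign instance.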
  • a balance may be found between the likelihood of detection success based on a single camera image and processing time, which may affect the number of frames in which the sign can be detected.
  • Storing a complete image of a sign may enable greater choice in selecting algorithms e.g. where the algorithm information stored in respect of a sign may be modified or selected by a user.
  • the database may be modified or completed by a user to enable it to be customised to the user's particular requirements, for example, by providing the user with a database including the data indicative of the instances of signs e.g. sign location, shape etc. and the appearance data for the sign, with the user then adding desired algorithm data for each instance of a sign.
  • system performance and matching success rates may be improved by predetermining per sign what kind of detection/matching algorithm works best, and with which parameters. This may be user specific.
  • the size of the database can be reduced by using dictionary-type features, or by using other techniques known in the art.
  • a dictionary of algorithm and parameter tuples may be used. If certain algorithms with specific parameters are used many times in the database i.e. in respect of multiple instances of signs, then the algorithm and parameter tuples can be replaced by a reference in a separate list of algorithms with parameters. This avoids storing the algorithm data in full in association with the sign data. If there are many instances of a certain sign, and all instances use the same sign appearance data and same algorithm and parameters, then the sign appearance data and algorithm and parameters can be replaced by a reference into a list of signs, that stores the sign appearance data, and either a reference to the algorithm/parameter list or the algorithm/parameters themselves.
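The reference-based compression described above can be sketched as follows. This is an illustrative Python fragment only, not the patent's implementation; all names (`intern`, `algo_table`, the example algorithm/parameter tuples) are assumptions:

```python
# Sketch of the "dictionary" compression described above: repeated
# (algorithm, parameters) tuples are stored once in a shared table and
# each sign instance keeps only an integer reference into that table.

def intern(entry, table, index):
    """Return the index of `entry` in `table`, adding it if not seen before."""
    key = repr(entry)
    if key not in index:
        index[key] = len(table)
        table.append(entry)
    return index[key]

algo_table, algo_index = [], {}

raw_instances = [
    {"pos": (52.01, 4.36), "algo": ("SURF", {"hessian": 400})},
    {"pos": (52.02, 4.37), "algo": ("SURF", {"hessian": 400})},
    {"pos": (52.03, 4.38), "algo": ("SIFT", {"octaves": 3})},
]

# Replace each full algorithm/parameter tuple by a reference into the table.
compact = [
    {"pos": inst["pos"],
     "algo_ref": intern(inst["algo"], algo_table, algo_index)}
    for inst in raw_instances
]

print(len(algo_table))          # 2 distinct algorithm/parameter tuples
print(compact[0]["algo_ref"])   # 0 -> ("SURF", {"hessian": 400})
```

The same interning approach would apply one level up, replacing repeated (appearance data, algorithm) pairs by references into a shared sign list.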
  • the database may be structured as desired.
  • the database includes a first part providing a sign database.
  • the sign database provides generic sign data, which is applicable to each instance of a particular sign.
  • the sign database includes, for each sign, the following data:
  • a reference to a set of one or more algorithms to be used in detection of the sign, e.g. detection, matching, feature extraction etc., and a reference to a set of one or more parameters for configuring the algorithms
  • appearance data for the sign to be input to the algorithm e.g. full image, features for detection using the applicable algorithm(s) e.g. SIFT features, or a sign classification identifier etc.
  • a second part of the database provides an instance database. This includes the data specific to each instance of a sign. For each instance of a sign, the following data may be stored in the instance database:
  • data may be stored giving an indication of where in an image the instance of a sign is expected to appear e.g. left, right or top. (alternatively, the expected location may be determined on the fly).
  • the sign database might include references to multiple algorithms for each sign i.e. each generic type of sign.
  • the instance database may then include data referring to a particular one of the multiple algorithms that is to be used for that particular instance of the sign.
  • the data in respect of a given sign is therefore provided by the combination of the data in the instance database for a particular instance of a sign, and the data in the sign database which is referenced by the instance data, and which provides further information which is generic to multiple instances of the sign. It will be appreciated that where the algorithm data and/or appearance data is specific to an instance of a sign, then it may be included in the instance database rather than the generic sign database.
  • one example of the data which may be stored is given in Figure 2A.
  • the sign database includes data for each one of a plurality of generic signs, identified by "signId: 1, 2, ...".
  • the data is as exemplified above, including data referencing the applicable algorithm for detection, the width and height of the sign, and the appearance data for the sign.
  • the instance database includes a reference to the applicable data in the sign database for a particular instance of a sign, e.g. "signId: 1" etc., the area of an image in which the sign is expected to appear, e.g. right, top etc., an indication of the three-dimensional location of the sign, and the height above ground, angle and pitch data for that instance of the sign.
  • the complete data relating to the instance of the sign is given by the referenced generic sign data (as referenced by the signId in the instance database), and the remainder of the data in the instance database.
  • the instance database may be stored in a tiled form.
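The two-part structure described above, and exemplified in Figure 2A, might be represented as follows. The field names here are illustrative assumptions chosen to mirror the description, not a schema prescribed by the patent:

```python
# Illustrative sketch of the two-part database of Figure 2A: a sign
# database holding generic per-sign data, and an instance database whose
# entries reference it via signId and add instance-specific data.

sign_db = {
    1: {"algorithm": "SURF1",            # reference into algorithm list
        "width_m": 0.6, "height_m": 0.6,  # known sign dimensions
        "appearance": "<feature points / descriptors or full image>"},
}

instance_db = [
    {"signId": 1,                        # reference into sign_db
     "image_area": "right",              # where in the frame it should appear
     "location": (52.01, 4.36, 5.2),     # three-dimensional location
     "height_above_ground_m": 2.1,
     "angle_deg": 12.0, "pitch_deg": 0.0},
]

def full_record(instance):
    """Combine instance-specific data with the referenced generic sign data."""
    return {**sign_db[instance["signId"]], **instance}

rec = full_record(instance_db[0])
print(rec["algorithm"], rec["image_area"])  # SURF1 right
```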
  • the above database construction may be appropriate where multiple instances of a given sign, e.g. a 50km/h sign, share the same algorithm and appearance data. It will be appreciated that not every instance of a particular sign will necessarily share such data, or only certain subsets of instances of a particular sign may share such data.
  • the sign database may then be modified appropriately to include additional entries in respect of each different set of data, i.e. corresponding to further generic signs, each associated with its own signId. If different instances of signs tend to have different appearance and algorithm data, then a "dictionary" style structure to the database may not be appropriate.
  • the present invention provides great flexibility to include as much data as required to ensure that sufficiently specific algorithm and appearance data is provided for each instance of a sign. It will be appreciated that even if some generic signs share the same algorithm data e.g. 50km/h signs, the data in respect of different generic traffic signs e.g. give way, no entry, lane guide, other speed limit signs, end of speed limit etc. may, and typically does, have different algorithm data associated therewith. This is in contrast to prior art techniques in which the same algorithm is used to detect all types of traffic sign. It will be appreciated that the present invention may allow simpler algorithms to be used, with resulting benefits in processing times, as an algorithm specific to a particular type, or even instance of sign, may be simpler than one which must detect all types of sign. In some prior art techniques using a neural network, the complexity of the network, and the time and complexity in training the network in order to detect all types of sign, and/or other objects, may be significant.
  • Figure 2B illustrates the data which may be stored in respect of a particular algorithm, and which is referenced by the sign database.
  • signId 1 is associated with SURF1.
  • the algorithm database provides details of the algorithm and parameters corresponding to this identifier (which identifies the SURF algorithm configured using a particular set of parameters).
  • database size can be reduced by only storing the appearance data relevant for the most suitable algorithm e.g. SIFT feature points and descriptors.
  • the method may be performed by a vehicle mounted system using object data obtained from a remote server.
  • the invention is not limited to this implementation, and steps may be performed on any one or ones of a server and vehicle mounted system or elsewhere.
  • a camera image might be transmitted to a server for use in performing the method using object data obtained from a database at the server using data relating to the position of a vehicle, received by the server from the vehicle.
  • the techniques of the present invention enable the position of a vehicle to be determined accurately e.g. in relation to a digital map. This may provide location determination to a level of accuracy to enable a determination as to which lane the vehicle is travelling in, for a multi-lane road.
  • the position determination may be appropriate for use by an autonomous vehicle, which requires particularly accurate location determination, although the invention is not limited to use in this context.
  • the determined position may be used to refine an estimated position of the vehicle based on position data, such as GPS data.
  • Such position data gives a rough approximation of the position of the vehicle, but may not be sufficiently accurate e.g. to determine the lane of travel.
  • Localization based on traffic signs or other static objects which may be performed in accordance with the invention would typically, although not necessarily, be used in combination with another localization method.
  • One example of such another localization method would be one which requires regular error correction or synchronization.
  • the precise positioning data obtained in accordance with the invention may be used to refine or synchronise data obtained from a positioning unit providing position data based on GPS and an Inertial Measurement Unit (IMU), which may exhibit a slowly accumulating error.
  • the data obtained in accordance with the invention may be used in combination with position data obtained using vehicle mounted camera(s) based on detected lane markings.
  • the data obtained in accordance with the invention may be used as an input for statistical localization, such as localization based on a particle filter. It will be appreciated that it is advantageous to use localization based on sign (or other object) detection in combination with other positioning techniques, as there will be times at which sign based localization may fail, e.g. if a sign is not visible as expected. This may be for temporary or permanent reasons e.g. due to weather conditions, or having been removed or somehow changed in appearance since the database was created.
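As a toy illustration of combining a precise sign-based fix with a drifting GPS/IMU estimate, an inverse-variance weighted blend could look like the following. The patent does not prescribe this particular fusion method; in practice a Kalman or particle filter would typically be used:

```python
# Minimal sketch of fusing a drifting GPS/IMU position estimate with an
# occasional, much more certain sign-based observation, using
# inverse-variance weighting. Illustrative only.

def fuse(estimate, est_var, fix, fix_var):
    """Inverse-variance weighted blend of two 1D position estimates."""
    w = (1.0 / est_var) / (1.0 / est_var + 1.0 / fix_var)
    fused = w * estimate + (1.0 - w) * fix
    fused_var = 1.0 / (1.0 / est_var + 1.0 / fix_var)
    return fused, fused_var

# Drifted longitudinal position along the road (metres) with 9 m^2
# variance, corrected by a sign-based fix with 1 m^2 variance.
pos, var = fuse(estimate=103.0, est_var=9.0, fix=100.0, fix_var=1.0)
print(round(pos, 2))  # 100.3 -> pulled strongly towards the precise fix
```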
  • the method will now be described in relation to determining an accurate position of a vehicle having one or more cameras associated therewith, i.e. mounted thereon, as the vehicle traverses a road element of the road network.
  • in step 1, continuously or in respect of a certain time window, a search is performed in the sign database to identify a set of one or more instances of signs which are expected to be observed in the near future, i.e. to appear in an image captured by a camera associated with the vehicle.
  • the signs which are expected to appear are identified based upon the known approximate position of the vehicle e.g. according to GPS or other positioning data.
  • the signs may be identified based upon a route which is expected to be traversed by the vehicle in the applicable time period, whether a predetermined or inferred route. This process may be performed continually, i.e. with the search being performed continually as the position of the vehicle advances, or may be performed at intervals, with all the signs expected to be encountered in the next predetermined interval being obtained.
  • An area of interest is an area in which one of the expected signs is expected to be found. It will be appreciated that, based on the information from the sign database, which includes the location of the signs expected to be encountered in the near future, and knowledge of the location at which a camera frame was taken, it is known which sign(s) should appear in a frame, and, if the sign has data indicative of where in a scene it is to be expected to appear, it will also be known approximately where in the frame the sign should appear. It is therefore possible to identify an area of interest based upon the knowledge of the expected signs, and, where applicable, the data indicative of where in a scene those signs are expected to appear.
  • a pre-processing step of coarse detection e.g. using a neural network may be performed to identify the area of interest.
  • the area of interest may then correspond to an area of the frame in which an expected sign can be seen e.g. with a suitable resolution.
  • the area of interest identifies the area upon which the algorithm defined for the instance of a sign is to operate. Thus, it may be an area in which the instance of the sign is expected to appear, or in which it is believed the instance of the sign can be seen, which is to be subject to the further analysis.
  • the area of interest may be selected taking into account data identifying a lane with which a sign is associated, where such data is stored.
  • the algorithm defined for the given instance of a sign in the database, where applicable, configured using the parameters associated therewith, is used to analyse the area of interest, to detect the sign.
  • the configured algorithm may be kept available in the system during at least a time window in which the sign is expected to be detected.
  • the detection process may include any suitable steps, depending upon the algorithm(s) defined for the instance of a sign, and the appearance data stored for the sign.
  • the algorithm typically analyses the area of interest to detect features therein, and compares the features to features stored in the appearance data in relation to the sign, or determined based upon a stored image of the sign, where the complete image is stored.
  • the algorithm may be such that it can filter out outliers e.g. RANSAC-based. However, depending upon the algorithm used, the comparison may be based on comparing the entire image of the sign in the camera frame to a stored image of the sign e.g. where the algorithm is a neural network.
  • a homography (1-to-1) projection may be determined between the features in the camera image and the stored image of a sign (or stored feature data for the sign).
  • if a homography can be found for a certain number of feature points, e.g. at least 4, then it may be assumed that there is a very high probability that the image in the camera frame is indeed of the sign.
  • the homography projection may be used to project the sign according to the stored data e.g. the corners thereof, into the image found in the camera frame.
  • the homography computation, together with the camera calibration and known dimensions of the sign according to the sign database, e.g. known distances within the sign, may be used to determine a 3D distance of the sign to the vehicle (step 7). This may be performed by computing a translation of pixel distances in the camera image to distances in the 3D world.
  • Camera calibration information enables 2D camera pixels to be translated into depth-vectors in a 3D vehicle-relative world. The 2D distances between points in the sign which correspond to points used in the homography projections to the camera image will be known.
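The translation from pixel distances to metric distances can be illustrated with a simplified pinhole-camera model. This sketch assumes a fronto-parallel sign; the homography-based computation described above handles oblique viewing angles:

```python
# Simplified pinhole-model sketch of step 7: converting a pixel distance
# in the image into a metric distance to the sign, using the camera's
# focal length (from calibration) and the sign's known width (from the
# sign database). Assumes the sign faces the camera head-on.

def distance_to_sign(focal_px, sign_width_m, sign_width_px):
    """Depth Z from similar triangles: w_px = f * W / Z  =>  Z = f * W / w_px."""
    return focal_px * sign_width_m / sign_width_px

# Calibration gives f = 1000 px; the database says the sign is 0.6 m
# wide; its projected corners span 60 px in the camera frame.
z = distance_to_sign(focal_px=1000.0, sign_width_m=0.6, sign_width_px=60.0)
print(z)  # 10.0 metres
```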
  • the determined distance of the sign to the vehicle may be used in obtaining an accurate position of the vehicle relative to a digital map (step 9). This may then be used alone, or to refine a known approximate position of the vehicle. Multiple observations of the sign from different viewpoints, i.e. based on the analysis of different camera frames (from the same or different cameras), may be used to increase the precision of the determined position of the vehicle relative to the sign.
  • the present invention may allow a single observation of a sign to be used to determine the position of the vehicle e.g. relative to a digital map.
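By way of illustration, the selection of an area of interest from a stored location hint (as described in the steps above) might be sketched as follows. The hint names and the one-third split are assumptions for illustration, not values specified by the patent:

```python
# Sketch of deriving an area of interest in a camera frame from the
# stored hint of where an instance of a sign is expected to appear
# (e.g. "left", "right", "top"). The one-third split is an arbitrary
# illustrative choice.

def area_of_interest(hint, frame_w, frame_h):
    """Return (x0, y0, x1, y1) pixel bounds of the region to analyse."""
    if hint == "left":
        return (0, 0, frame_w // 3, frame_h)
    if hint == "right":
        return (2 * frame_w // 3, 0, frame_w, frame_h)
    if hint == "top":
        return (0, 0, frame_w, frame_h // 3)
    return (0, 0, frame_w, frame_h)  # no hint: analyse the whole frame

roi = area_of_interest("right", frame_w=1920, frame_h=1080)
print(roi)  # (1280, 0, 1920, 1080)
```

Restricting the configured detection algorithm to such a region is what allows it to be simpler and faster than a whole-frame, all-sign detector.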

Abstract

A method of determining a position of a vehicle when traversing a road network involves obtaining, from a database, data indicative of one or more traffic signs expected to be encountered by the vehicle in the future while traversing the road network. The data indicative of each sign includes position and appearance data, and is associated with algorithm data to be used in analysing an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the sign in the image of the scene obtained from the camera. An image is obtained from a camera associated with the vehicle while traversing the road network, and the appearance and algorithm data associated with the obtained data indicative of a traffic sign is used to analyse at least a portion of the image of a scene obtained from the camera associated with the vehicle to attempt to detect an image of the sign in the image of the scene. The results of the analysis are used in determining a position of the vehicle.

Description

METHODS AND SYSTEMS FOR DETERMINING THE POSITION OF A VEHICLE
FIELD
The present invention relates, in certain aspects at least, to methods and systems for determining the position of a vehicle using images obtained from a camera associated with the vehicle. The present invention is of particular utility in the context of determining the position of an autonomous vehicle, although is not limited thereto. In accordance with other aspects, the invention relates to a database for use in identifying objects in an image, and methods of using such a database.
BACKGROUND
It is often desirable to be able to accurately determine the position of a vehicle e.g. relative to a digital map. This is of critical importance in the context of autonomous vehicles, although accurate position determination is of course of wider application e.g. to determine the position of a vehicle to a lane-level degree of accuracy, not necessarily in the context of a self-driving vehicle. Knowledge of the precise position of a vehicle may allow improved guidance to be provided to a driver, in terms of route navigation, the provision of relevant traffic information etc.
Conventionally the position of a vehicle has been determined using position data, such as GPS data. However, such data will only provide a rough
approximation of the position of the vehicle e.g. relative to a digital map. This is typically not sufficiently accurate to determine a lane-level position of a vehicle. Furthermore, while a digital map may be relatively accurate in terms of the relative distances between points, it may not so accurately reflect absolute positions in the real world.
Techniques are known for determining the position of a vehicle using images captured by a camera associated with the vehicle. For example, objects such as traffic signs may be detected in captured images, and used to determine the position of the vehicle based upon a known position of the detected traffic sign. Localization of a vehicle in this manner based on a detected object is typically used in conjunction with other localization techniques. For example, position data obtained based on the analysis of captured images may be used to correct a rough position of a vehicle obtained using position data, such as GPS data. This may help to more accurately determine the absolute position of the vehicle, for example accurately enough to obtain a lane-level degree of accuracy.
Known techniques for determining the position of a vehicle using images obtained from a camera associated with the vehicle utilise the same method to detect and analyse all objects used in position determination. Thus, a particular algorithm with the same operating parameters is used in each instance. For example, this may implement a neural network trained in a particular manner.
The Applicant has realised that there is a need for an improved method of determining the position of a vehicle.
SUMMARY
In accordance with a first aspect of the invention there is provided a method of determining a position of a vehicle when traversing a road network, the method comprising:
obtaining, from a database, data indicative of one or more objects expected to be encountered by the vehicle in the future while traversing the road network, wherein the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the object in the image of the scene obtained from the camera;
obtaining an image of a scene from a camera associated with the vehicle while traversing the road network;
and, for one or more objects in respect of which data has been obtained from the database, using at least the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the image of a scene obtained from the camera associated with the vehicle to attempt to detect an image of the object in the image of the scene;
and, where an image of the object is deemed to be found in the image obtained from the camera, using the results of the analysis in determining a position of the vehicle.

In accordance with the invention, when at least a portion of an image of a scene captured by a camera associated with a vehicle during travel is analysed to attempt to detect an image of an object expected to be found in the image, the algorithm data used is obtained from a database storing specific algorithm data in respect of that object. Thus, rather than using a generic algorithm in respect of the image analysis carried out in respect of any object which may be encountered by the vehicle during travel, and which may hence appear in any image of a scene obtained from the camera, the algorithm used may be tailored to a particular instance of a particular object. It will be appreciated that the same algorithm data may be applicable to more than one particular object, or to a particular object in more than one context, i.e. to more than one instance of a particular object. For example, where the same road sign appears multiple times in a similar context, i.e. with a concrete background, the same algorithm data may be appropriate for each instance of the sign. It is thus not necessary that the algorithm data in respect of each instance of a particular object in the database is different. For example, in some arrangements, the same algorithm data may be applicable to each instance of a particular object, e.g. a specific traffic sign, with different algorithm data being specified for other specific traffic signs. However, where the same algorithm data is applicable to multiple objects, these will still typically be a relatively limited set of objects, or instances of a particular object.
By way of example, a particular object may appear at various points in a road network. For example, a rectangular 50km/hr speed limit sign may appear multiple times. The Applicant has realised that when analysing an image of a scene obtained by a camera associated with a vehicle to determine the presence of the sign, or to perform any other steps required in relation to at least a portion of the image expected to contain the sign which may be necessary in determining the position of the vehicle, using the same algorithm in respect of each instance of the sign throughout the network may not provide optimal results. Accordingly, using the same algorithm data in respect of each instance of the sign may not lead to the best possible position determination. Instead, in accordance with the invention, the algorithm used for a particular instance of the sign may be customised to that instance. The algorithm may be selected to provide the optimal results in respect of that particular instance of the sign. For example, in one instance, the sign may appear superimposed on a grey background e.g. where the sign appears in front of a concrete bridge. In another instance, the sign may have a green background, corresponding to vegetation at the side of the road. The best algorithm to be used in analysing an image thought to contain the sign will differ in these two situations.
It has also been found that by providing algorithm data specific to each instance of an object, it is possible to use simpler algorithms than might be the case if a generic algorithm were used to detect all objects. This is because, in order to be able to detect multiple different objects, in different scenes, with a reasonable level of reliability, a relatively complex algorithm may be required. In the context of a neural network, significant amounts of training may be needed. In contrast, the present invention only requires an algorithm to be able to detect a limited number, or even a single object, in a limited number of contexts, or even a single context.
It will be appreciated that, if not explicitly stated, any reference to an object being found, expected to be found, or detected in an image, or similar, should be understood as referring to an image of the object being found, expected to be found, or detected in the image etc.
The present invention extends to a system for carrying out a method in accordance with any of the embodiments of the invention described herein.
In accordance with a further aspect of the invention there is provided a system for determining a position of a vehicle when traversing a road network, the system comprising:
means for obtaining, from a database, data indicative of one or more objects expected to be encountered by the vehicle in the future while traversing the road network, wherein the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the object in the image of the scene obtained from the camera;
means for obtaining an image of a scene from a camera associated with the vehicle while traversing the road network;
and means for, for one or more objects in respect of which data has been obtained from the database, using at least the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the image of a scene obtained from the camera associated with the vehicle to attempt to detect an image of the object in the image of the scene;

and means for, where an image of the object is deemed to be found in the image obtained from the camera, using the results of the analysis in determining a position of the vehicle.
The present invention in these further aspects may include any or all of the features described in relation to method aspects of the invention, and vice versa, to the extent that they are not mutually inconsistent. Thus, if not explicitly stated herein, the system of the present invention in any of its aspects may comprise means for carrying out any of the steps of the method described in relation to any of the aspects of the invention.
The means for carrying out any of the steps of the method of the invention in any of its aspects may comprise a set of one or more processors configured, e.g. programmed, for doing so. A given step may be carried out using the same or a different set of processors to any other step. Any given step may be carried out using a combination of sets of processors.
The methods of the present invention are computer implemented methods. In accordance with the invention in those aspects or embodiments relating to determining the position of a vehicle, at least one camera is associated with the vehicle. One or more cameras may be associated with the vehicle. The or each camera is mounted to the vehicle.
The image obtained from a camera associated with the vehicle (regardless of how many cameras are present), may be a frame of a video captured by a camera. The image obtained from a camera is an image of a scene encountered by the vehicle when traversing the road network.
In certain aspects and embodiments, the method comprises obtaining, from the database, data indicative of one or more objects which are expected to be encountered by the vehicle in the future. Preferably data in respect of a plurality of objects is obtained from the database. The one or more objects may be objects expected to be encountered in a future time period. In embodiments, data in respect of each object included in the database and expected to be encountered in the future time period is obtained from the database. The future period may be a predetermined period. In these embodiments, the future time period may be defined as desired. In general, the longer the time period used, the more objects may be expected to be encountered in that period, and hence the larger the amount of data which will need to be obtained from the database. In some embodiments in which data is obtained from the database and stored locally, the length of the future time period may therefore be defined having regard to the amount of data which it is desired to store locally. The step of obtaining the object data in respect of future object(s) to be encountered may be performed continually or at predetermined intervals.
The one or more objects expected to be encountered may be identified in any suitable manner. It will be appreciated that, as each object in the database is associated with data indicative of a position of the object, the position data may be used to identify the object(s) expected to be encountered e.g. over a future time period. In some embodiments, the identification of the object(s) may be performed using at least the position data associated with the object(s) in the database and position data indicative of the approximate position of the vehicle e.g. a current position. The position data indicative of the approximate position of the vehicle may be used to determine the expected position of the vehicle at one or more future times e.g. over the future time period. This may be based, e.g., upon the speed of travel of the vehicle. The position data in respect of the vehicle may be data obtained from a positioning module of the vehicle e.g. GPS data. Such position data may be regarded as assumed approximate position data, in contrast to the more accurate position data which may be determined using the results of the analysis of the image(s) of an object. The position of the vehicle determined using the results of the analysis may be used to refine an assumed approximate position of the vehicle. Alternatively or additionally, it is envisaged that the one or more objects may be identified based on data indicative of a route expected to be traversed by the vehicle e.g. over the future time period. The expected route may be a predetermined route being navigated or an inferred route e.g. based upon knowledge of the road network and e.g. previous routes travelled by the vehicle etc. It will be understood that other data may alternatively or additionally be used in identifying an object expected to be encountered e.g. data indicative of a road stretch associated with the object where the object data comprises such data.
In embodiments, the one or more objects are objects associated with positions such that they may be expected to be encountered by the vehicle based upon one or more expected future positions of the vehicle e.g. over the future time period. The method may comprise identifying the or each one of said one or more objects in respect of which data is obtained from the database based upon at least the data indicative of the position of the object and data indicative of one or more expected future positions of the vehicle. The one or more expected positions of the vehicle may be determined based upon data indicative of an approximate position of the vehicle or data indicative of a route expected to be traversed by the vehicle.
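A minimal sketch of identifying the objects expected to be encountered over a future time period, from the vehicle's approximate position and speed, follows. A straight-road, one-dimensional model is assumed purely for illustration; a real system would match expected positions against the digital map or an expected route:

```python
# Sketch of selecting, from the database, the sign instances expected to
# be encountered within a future time window, based on the vehicle's
# approximate position and speed. Positions are metres along the road
# in this simplified 1D model.

def expected_signs(instances, vehicle_pos_m, speed_mps, window_s):
    """Return instances whose position lies within the lookahead window."""
    lookahead_m = speed_mps * window_s
    return [s for s in instances
            if vehicle_pos_m <= s["pos_m"] <= vehicle_pos_m + lookahead_m]

instances = [{"signId": 1, "pos_m": 120.0},
             {"signId": 2, "pos_m": 480.0},
             {"signId": 3, "pos_m": 900.0}]

# At 25 m/s (90 km/h) with a 20 s window, signs up to 500 m ahead qualify.
ahead = expected_signs(instances, vehicle_pos_m=100.0, speed_mps=25.0, window_s=20.0)
print([s["signId"] for s in ahead])  # [1, 2]
```

Re-running this query continually (or at intervals) as the vehicle advances keeps the locally stored object data limited to the upcoming window.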
The method of the present invention may be implemented in any desired manner. The steps may be performed in the same or different locations. For example, the method may be performed by any combination of one or more servers and one or more vehicle mounted systems, or may be performed solely by one or more servers, or solely by one or more vehicle mounted systems.
In some embodiments the method is performed by a vehicle mounted system, such as a navigation system. The navigation system may be a navigation system of an ADAS (Advanced Driver Assistance System), or system for autonomously driving a vehicle. However, this is not necessarily the case. The present invention is of utility in any context where it is desired to obtain a more accurate determination of the position of a vehicle, not merely in the context of assisted or autonomous driving. In some embodiments the database from which the object data is obtained is a remote database, and the method may comprise locally storing the obtained data in respect of one or more objects expected to be encountered by the vehicle. Where the obtained data is stored locally, the data may be stored for a limited period, such as until a time period to which the data relates has passed. In these embodiments, the stored data may be continually updated, as travel proceeds, such that it always relates to a future time period, i.e. to objects which are expected to be encountered in a future time period. This may help to reduce the amount of data which needs to be stored.
In other embodiments it is envisaged that the method may be performed by a server. The database may then be stored by the server or may be stored remote from the server, or combinations thereof. The server may be arranged to receive, from the vehicle, e.g. from a vehicle mounted system, the image of a scene from a camera associated with the vehicle. In these embodiments, it is envisaged that the image data may be transmitted from the vehicle to the server for use in the method of the present invention. The server may then perform the step of using the obtained data to analyse the image of the object in the image obtained from the camera, and use the results of the analysis in determining a position of the vehicle. The server may then be arranged to generate data indicative of the position of the vehicle, and, in embodiments, transmit such data to the vehicle, e.g. for use in navigation by the vehicle. Thus, in some embodiments, a vehicle may transmit camera image data to a remote server, which then performs analysis of the image data using data obtained from the database to determine position data for the vehicle. Thus, the position determination may be performed off-board. The server may obtain the data indicative of object(s) expected to be encountered from the database in accordance with any of the embodiments described above, e.g. based on the expected future position of the vehicle. Such arrangements might be envisaged where the available bandwidth enables the communication of data between the vehicle and server as necessary to determine the position of the vehicle approximately in real time, reducing the need for data to be stored local to the vehicle.
Of course, the steps of the method may be performed in any combination of positions, and need not be performed solely by a server, or solely by a vehicle mounted system. For example, a server might perform the analysis based on image data transmitted to the server from a vehicle, and then transmit the results of the analysis to the vehicle for position determination, and so on. Numerous possibilities are envisaged, depending upon the storage capacities of the various components, and/or bandwidth availability etc.
In accordance with the invention in any of its aspects or embodiments, the one or more objects in respect of which data is obtained from the database are objects which are expected to be encountered by the vehicle in the future, i.e. such that their image(s) may be expected to be found in an image captured by a camera associated with the vehicle.
The method comprises analysing an image obtained from the camera associated with the vehicle using the obtained data in respect of an object to attempt to detect an image of the object in the image obtained from the camera.
The object data used in analysing the image obtained from the camera is in respect of one or more object whose image is expected to be found in the image obtained from the camera. It will be appreciated that it is possible to know which object(s) may be expected to appear in a given image obtained from the camera. The object data may be selected based upon a position associated with the image and the position of the object as indicated by the position data for the object. The one or more object whose data is used in respect of a particular camera image may be a subset of the one or more objects in respect of which data is obtained from the database. Thus, data in respect of a plurality of objects expected to be encountered in the future, e.g. in a future time period, may be obtained from the database, with the data in respect of the applicable object(s) from the plurality of objects whose image(s) may be expected to be present in a particular image being used in analysing that image. The image will be associated with a position which may correspond to a particular one of a number of positions traversed by the vehicle in the time period for which object data was obtained from the database.
In these embodiments of the invention, the data may be obtained from the database in advance of, or simultaneously with, the obtaining of the image of a scene from the camera associated with the vehicle. Obtaining the data in advance in relation to object(s) expected to be found in an image of a scene obtained from the camera may be advantageous in enabling the method to be performed more rapidly, allowing the determination of accurate position data for the vehicle approximately in real-time. The subsequent process of analysing the obtained image only requires analysis of the image based on a limited set of possible object data, rather than searching an entire database to extract possibly relevant object data. As described above, potentially relevant object data may be continually or periodically updated based on the expected future position of the vehicle, so that a limited set of object data for use in analysing images obtained is always available. This has particular benefits in the context of ADAS or autonomous vehicle navigation.
In accordance with the invention in any of its aspects or embodiments, the object data is obtained from a database comprising data indicative of a plurality of objects expected to be encountered by vehicles traversing the road network, wherein the database comprises, for each object, data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image obtained by a camera associated with a vehicle traversing the road network to attempt to detect the object in the image in use.
It is believed that a database storing data indicative of objects which may be expected to be encountered by vehicles traversing a road network, in which the data indicative of each object is associated with algorithm data for use in analysing an image obtained from a camera associated with a vehicle to attempt to detect the object in the image is advantageous in its own right.
From a further aspect of the present invention there is provided
a database, the database comprising data indicative of a plurality of objects expected to be encountered by vehicles traversing a road network, wherein the database comprises, for each object, at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained by a camera associated with a vehicle traversing the road network to attempt to detect the object in the image in use.
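One way such a database record might be structured is sketched below. The field names, the pairing of each object with an `AlgorithmData` entry, and all example values are illustrative assumptions made for this sketch, not part of the specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlgorithmData:
    name: str      # e.g. "ORB", "SIFT", a neural network, a correlation detector
    params: dict   # parameters for configuring the algorithm

@dataclass
class MapObject:
    object_id: str
    position: tuple                # 3D position of the object
    orientation: Optional[tuple]   # e.g. unit normal of a sign face
    shape: Optional[str]           # e.g. "rectangle", "circle"
    appearance_ref: str            # pointer to appearance data (may be a side file)
    algorithm: AlgorithmData       # detector associated with this object instance

# one illustrative record keyed by object identifier
db = {
    "sign-17": MapObject(
        object_id="sign-17",
        position=(1203.4, 884.1, 5.2),
        orientation=(0.0, -1.0, 0.0),
        shape="rectangle",
        appearance_ref="appearance/speed-limit-80.png",
        algorithm=AlgorithmData("ORB", {"nfeatures": 500, "scaleFactor": 1.2}),
    ),
}
```

The essential point is that each object carries position, appearance and associated algorithm data in one retrievable record.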
It will be appreciated that any of the features or steps described in relation to one aspect of the invention are equally applicable to any other aspect of the invention, unless the context demands otherwise. For example, the further features relating to a database described herein are equally applicable to the database of this further aspect, and the database from which the object data is obtained in accordance with the earlier aspects of the invention. The database from which the object data is obtained may incorporate any of the features described in relation to a database.
In accordance with a further aspect of the invention there is provided a method of determining a position of a vehicle, the method comprising:
providing a database comprising data indicative of a plurality of objects expected to be encountered by vehicles traversing a road network, wherein the database comprises, for each object, at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained by a camera associated with a vehicle traversing the road network to attempt to detect the object in the image in use;
obtaining, from the database, data indicative of one or more object which is expected to be encountered by the vehicle in the future, wherein the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the object in the image of a scene obtained from the camera;
obtaining an image of a scene from a camera associated with the vehicle during travel of the vehicle;
and, for one or more object in respect of which data has been obtained from the database, using at least the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the image of a scene obtained from the camera associated with the vehicle to attempt to detect an image of the object in the image of a scene obtained from the camera;
and, where an image of the object is deemed to be found in the image of a scene obtained from the camera, using the results of the analysis in determining a position of the vehicle.
In accordance with a further aspect of the invention there is provided a system for determining a position of a vehicle, the system comprising:
a database comprising data indicative of a plurality of objects expected to be encountered by vehicles traversing a road network, wherein the database comprises, for each object, at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained by a camera associated with a vehicle traversing the road network to attempt to detect the object in the image in use;
means for obtaining, from the database, data indicative of one or more object which is expected to be encountered by the vehicle in the future, wherein the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the object in the image of a scene obtained from the camera; means for obtaining an image of a scene from a camera associated with the vehicle during travel of the vehicle;
and, for one or more object in respect of which data has been obtained from the database, using at least the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the image of a scene obtained from the camera associated with the vehicle to attempt to detect an image of the object in the image of a scene obtained from the camera;
and, where an image of the object is deemed to be found in the image of a scene obtained from the camera, using the results of the analysis in determining a position of the vehicle.
The present invention in accordance with these further aspects may include any or all of the features described in relation to the other aspects of the invention, to the extent that they are not mutually exclusive. In some embodiments, the method of the present invention in any of its aspects or embodiments may be performed in relation to one or more further image of a scene obtained from a camera associated with the vehicle, and expected to contain an image of the same object or objects. The image may be obtained from the same or a different camera where multiple cameras are associated with the vehicle. The images of a scene may correspond to images obtained from a given camera when the vehicle is located at different positions and/or to images obtained from cameras mounted to different positions on the vehicle. Thus, where an image of an object is deemed to be found in the image obtained from the camera, the method may comprise obtaining one or more further image of a scene from a camera associated with the vehicle and expected to contain an image of the same object, using the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the or each further image of a scene to attempt to detect an image of the object in the image; and, for each image of a scene in which an image of the object is deemed to be found, using the results of the analysis in determining a position of the vehicle. Thus, the results of the analysis of more than one image obtained from a camera and containing an image of the object may be used in determining the position of the vehicle. In embodiments in which the results of the analysis of at least a portion of at least one further image of a scene obtained from a camera are used in determining the position of the vehicle, the or each image used in the analysis includes an image of the object from a different viewpoint.
Using the results of analysing images of the same object from different viewpoints in this way may provide more precise position data.
Certain features relating to the data in respect of an object that is stored in the database in accordance with the invention in its various aspects and embodiments, and which data may be obtained from the database and used in accordance with the methods and systems of the present invention in its various aspects and embodiments, will now be described. It will be appreciated that the data indicative of any one or ones or all objects stored in the database in the invention in accordance with any of its aspects or embodiments may include any of the features described below. The data indicative of each object may include any of these features. Likewise, the data indicative of any one or ones of the objects which is obtained from the database, and/or which is used in attempting to detect an image of the object in an image of a scene captured by the camera may be in accordance with any of the embodiments described.
In accordance with the invention in any of its aspects or embodiments, the algorithm data associated with the data indicative of an object may be indicative of a set of one or more algorithms for use in analysing at least a portion of an image of a scene expected to contain an image of the object to attempt to detect the image of the object, and optionally a set of one or more parameters for configuring the or each algorithm for performing the analysis. Where the algorithm data comprises a set of one or more parameters for configuring the or each algorithm, the method may comprise configuring the or each algorithm using the respective set of one or more parameters, and may then comprise using the resulting set of one or more configured algorithms to analyse the at least a portion of the image. The step of configuring the or each algorithm in respect of an object may be performed in relation to the or each object for which data is obtained from the database. This may be performed in advance of obtaining an image of a scene expected to contain an image of the object. The method may comprise storing data indicative of the or each configured algorithm once determined for at least a given time period. The time period may correspond to a time period in which the object to which the algorithm data relates is expected to be observed by a vehicle traversing the road network. As described above, this may correspond to a time period for which object data is stored locally. This then provides an appropriately configured algorithm ready to operate on an image of a scene expected to contain an image of the object as soon as it is obtained.
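The configure-once-and-store behaviour described above, in which an algorithm is configured from its stored parameters in advance and kept for the period in which its object may be observed, might look like the following sketch. The `DetectorCache` name and the factory functions are hypothetical.

```python
class DetectorCache:
    """Configure the detection algorithm for each object once, ahead of
    the first image in which the object is expected to appear, and keep
    the configured detector while the object may still be observed."""

    def __init__(self, factories):
        self._factories = factories      # algorithm name -> constructor
        self._configured = {}            # object id -> configured detector

    def get(self, object_id, algorithm_name, params):
        # configure lazily, but only once per object
        if object_id not in self._configured:
            self._configured[object_id] = self._factories[algorithm_name](**params)
        return self._configured[object_id]

    def evict(self, object_id):
        # drop the detector once the object's encounter window has passed
        self._configured.pop(object_id, None)
```

A pre-configured detector is then immediately available when an image expected to contain the object arrives.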
The invention extends to a method of providing the database provided or used in accordance with the invention in any of its aspects and embodiments, the method comprising determining, for each object, algorithm data comprising a set of one or more algorithms, and optionally a set of one or more parameters for configuring the or each algorithm, wherein the algorithm data for an object is optimised in relation to one or more criteria for detecting the object in an image of a scene obtained from a camera associated with a vehicle, and storing the data in association with data indicative of the object. The criteria may be selected as desired, depending upon the particular requirements for a given application or user. For example, the criteria may be detection probability, computation time, or a particular combination thereof. For example, the criteria may provide a tradeoff between detection probability and computation time or latency. In accordance with the invention in any of its embodiments, the algorithm data may be indicative of any suitable algorithm(s). The data may be indicative of a set of one or more algorithms. For example, suitable algorithms may include feature point detector algorithms, such as SIFT, SURF, ORB, FAST and BRIEF. Alternatively the algorithm data may be indicative of a neural network, or a correlation detector. The algorithm data may be indicative of multiple types of algorithm. The algorithms may be used at different stages in the analysis process. Thus the algorithm data may be indicative of one or more of a feature point detector, neural network or correlation detector. The or each algorithm is preferably scale invariant, and alternatively or additionally may be rotation invariant.
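The optimisation criteria just described, for example a trade-off between detection probability and computation time, could be applied as in the sketch below when choosing the algorithm data to store for an object. The benchmark figures and the latency penalty weight are invented purely for illustration.

```python
# hypothetical per-object benchmark results for candidate detectors
candidates = [
    {"name": "SIFT",       "p_detect": 0.97, "ms": 42.0},
    {"name": "ORB",        "p_detect": 0.91, "ms": 6.5},
    {"name": "FAST+BRIEF", "p_detect": 0.88, "ms": 3.1},
]

def select_algorithm(candidates, ms_budget, weight=0.005):
    """Pick the candidate maximising detection probability minus a
    latency penalty, subject to a hard computation-time budget."""
    feasible = [c for c in candidates if c["ms"] <= ms_budget]
    if not feasible:
        raise ValueError("no algorithm meets the time budget")
    return max(feasible, key=lambda c: c["p_detect"] - weight * c["ms"])
```

Different instances of the same object (e.g. a sign seen close-up versus far away) could be benchmarked separately and end up with different algorithm data.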
The algorithm data is indicative of a set of one or more algorithms for use in analysing at least a portion of an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the object in the image of a scene encountered by the vehicle when traversing the road network.
The algorithm data may be indicative in any manner of the set of one or more algorithms. The data indicative of an algorithm may be indicative of an already configured algorithm, or alternatively the algorithm data may comprise data indicative of one or more algorithm, and data for configuring the or each algorithm. Of course, where the algorithm data is indicative of multiple algorithms, there may be a mixture of pre-configured algorithms, and algorithms associated with configuring data to be used in configuring the algorithms.
The database may comprise data indicative of multiple instances of a particular object associated with different positions, wherein different ones of the instances of the object are associated with different algorithm data.
The stored data may be a database indicative of objects which may be encountered when traversing at least a portion of any one or ones of the road elements of the road network. The database may relate to the entire road network or any subset thereof.
It will be appreciated that in any of the aspects or embodiments of the invention, where data indicative of something, e.g. an object or algorithm, is stored in a database, the actual data may be stored in the database, or a pointer to such data may be stored. The pointer may be to a side file, which may be located in a remote server. Thus, the term should not be interpreted to require any particular restriction on data storage positions. It should be noted that the phrase “associated therewith” in relation to one or more segments or elements should not be interpreted to require any particular restriction on data storage positions. The phrase only requires that the features are identifiably related to an element. Therefore association may for example be achieved by means of a reference to a side file, potentially located in a remote server.
For example, the algorithm data in a database may comprise one or more algorithms stored in the database, or a pointer to where such data is stored. For example, if the same algorithm is used in respect of multiple objects, an identifier for the algorithm may be stored, enabling the algorithm data to be looked up and retrieved from a separate store. This may reduce the number of instances of storing the same data. Similar techniques may be used in relation to parameters for configuring algorithms. Likewise, any other data described as being stored in the database may be stored in another location, provided that a suitable pointer is provided thereto in the database. The database may be a distributed database.
Thus, the database may be made up of multiple (sub) databases, which databases may be stored in the same or differing locations, and any particular (sub)database may be stored distributed across multiple locations. Any suitable techniques may be used to reduce the quantity of data stored. For example, a dictionary type arrangement may be used. Certain more generic data e.g. general dimension data, in relation to a particular object may be stored in one (sub) database, with instance specific data e.g. location data for a particular instance of an object, being stored in another (sub)database, with a pointer to the applicable generic data in the other database for the object. Some examples are given below. Some algorithm and appearance data may also be generic to multiple instances of an object, and may be stored using a similar structure, with specific instance data referencing generic data, to avoid storing multiple instances of the generic data. Thus, the data indicative of the appearance of an object may be a pointer to such data, which may be stored in a separate list. This may be particularly applicable where there are multiple instances of the same object, and each instance uses the same appearance data.
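The dictionary-type arrangement described above, in which instance-specific records hold only a pointer to shared generic data, might be sketched as follows. All identifiers and values here are hypothetical.

```python
# generic data shared by every instance of the same sign type
generic = {
    "speed-limit-80": {
        "shape": "circle",
        "diameter_m": 0.9,
        "appearance_ref": "appearance/speed-limit-80.png",
    },
}

# instance-specific records keep only position plus a pointer
instances = {
    "sign-17": {"generic_id": "speed-limit-80", "position": (1203.4, 884.1, 5.2)},
    "sign-42": {"generic_id": "speed-limit-80", "position": (2551.0, 130.7, 5.0)},
}

def resolve(instance_id):
    """Join an instance record with the generic record it points to."""
    inst = instances[instance_id]
    return {**generic[inst["generic_id"]], **inst}
```

The generic appearance data is stored once, however many instances of the sign exist on the road network.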
The data indicative of an object includes position data. The position data is preferably three dimensional position data. In preferred embodiments the data indicative of an object further comprises data indicative of the shape and/or orientation of the object. Preferably the object data includes position, shape and orientation data. The position, and, where applicable shape and orientation data, are used in analysing at least a portion of an image of a scene to attempt to detect an image of the object in the image. It will be appreciated that the shape and orientation data may enable detection to occur regardless of the particular position of the object relative to the vehicle when captured in an image obtained by a camera associated with the vehicle. Such data also enables the position of the vehicle relative to the object to be accurately determined using an image of the object in an image captured by the camera associated with the vehicle.
In some embodiments, the data indicative of an object may further comprise data indicative of a road stretch with which the object is associated. The road stretch is the stretch of road from which, when traversed by a vehicle, the object may be expected to appear in an image obtained from a camera associated with the vehicle. A road stretch herein refers to at least a portion of one or more road elements of a road network. The road stretch may be defined in any suitable manner. For example, this may be by reference to a digital map comprising data indicative of a plurality of road segments representing road elements of the road network. The road stretch may be defined by reference to at least a portion of one or more road segments of the digital map. However, any suitable position reference system may be used. The position reference system may be a map-agnostic reference system, such as the OpenLR™ system. The data indicative of a road stretch may be indicative of a geographic stretch. Such data may be defined using a polyline. The data indicative of a road stretch may optionally be associated with data enabling the road stretch to be determined in respect of a given digital map.
The data indicative of the appearance of an object is data which may be (and in embodiments is) used in attempting to detect the presence of the object in an obtained image i.e. by the algorithm(s) associated with the object. The data may be a stored image of the object or any other data describing the appearance of at least a part of the object. The data may be indicative of one or more feature points and/or descriptors for the object. The most appropriate form of the data may depend upon the algorithm data associated with the object, and vice versa. Thus, the selection of the appearance data to be included in the database may be made by reference to the algorithm data for an object, or vice versa, when providing the database. It will be appreciated that, for different instances of the same object, different appearance data may be stored. For example, where the algorithm data for an object is indicative of a neural network, it may be most appropriate to store appearance data in the form of a complete image of the object.
The obtained appearance data may be used directly in performing the analysis of the at least a portion of the image of the scene, or some processing of the appearance data may be required to provide data that may be used in the analysis. For example, where the appearance data comprises an image of the object, it may be necessary to process the image to extract one or more feature points from the image of the object, which may then be attempted to be matched to feature points in the at least a portion of the image obtained from the camera.
The step of analysing the image obtained from the camera may comprise analysing at least a portion, and preferably only a portion of the image obtained from the camera to attempt to detect the image of the object. Where not explicitly mentioned, references to e.g. the algorithm data being for use or used in analysing an image of a scene may involve the data being used to analyse at least a portion of the image of the scene. Analysing only a portion of the image may enable the process to be achieved more rapidly, and reduces the processing power required.
In embodiments, the method comprises identifying an area of interest in the image obtained from the camera, the area of interest being expected to contain an image of the object, and analysing the area of interest using the obtained data indicative of at least the appearance of the object and the algorithm data associated with the obtained data. The area of interest corresponds to only a portion of the obtained image. The step of analysing the area of interest may comprise providing the area of interest as input into one or more algorithm defined by the algorithm data associated with the object. It will be appreciated that, since the location of objects in respect of which data is stored is known, and the location of the camera when obtaining a given image of a scene is known, the system may know which object is expected to be found in a particular image of a scene, and, in some cases, where in the scene the object is expected to be found. Defining the area of interest in which the image of an object is expected to appear may be carried out using the data obtained from the database in relation to the object, and/or may involve performing a pre-processing step to initially detect the object. This initial detection may be a coarse detection step pending detailed analysis of the area of interest to confirm the detection of the object with an appropriate level of certainty. This pre-processing might be performed using a neural network e.g. for all objects. Such a two-step approach may only be performed where improved performance is to be expected. For example, where a detection algorithm defined for a particular sign is relatively expensive in terms of processing power when performed over a large area, and a faster algorithm exists to detect red sign boundaries, and the sign is known to have such a boundary, then the faster algorithm may be used to identify a smaller area of interest for the detection algorithm to operate on.
In preferred embodiments the data indicative of an object comprises data indicative of where, in a scene encountered by a vehicle when traversing the road network, the object may be expected to appear. For example, the object may be expected to appear on the right, or the top left of the scene. Where an object has a position associated with a road element comprising a plurality of lanes, the data indicative of the object may further comprise data indicative of a lane of the road element with which the position of the object is associated. This might be applicable e.g. for a road sign located above a particular lane of the road. These steps may enable certain parts of a captured image of a scene to be disregarded when defining an area of interest to be analysed, providing benefits in reduced computation time. The method may comprise obtaining the data indicative of where in a scene encountered by a vehicle the object is expected to appear from the database, and using the data in identifying the area of interest in the image obtained from the camera.
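As a toy illustration of how stored "expected region of the scene" data could cut down the area to be analysed, the sketch below maps a region label to a crop box. Reducing the label to image quadrants is a simplifying assumption made only for this sketch.

```python
def area_of_interest(img_w, img_h, horiz, vert):
    """Map a stored 'expected region of the scene' label (simplified here
    to image quadrants) to a crop box (x0, y0, x1, y1), so that only that
    portion of the camera image needs to be analysed."""
    x0, x1 = (0, img_w // 2) if horiz == "left" else (img_w // 2, img_w)
    y0, y1 = (0, img_h // 2) if vert == "top" else (img_h // 2, img_h)
    return (x0, y0, x1, y1)
```

A sign expected at the top right of the scene then requires analysing only a quarter of the pixels.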
The step of using the algorithm data to analyse the at least a portion of an image of a scene to attempt to detect an image of an object may involve causing a set of one or more algorithms defined by the algorithm data to operate on the at least a portion of the image of the scene in any suitable manner to determine whether there is a sufficient correspondence between the appearance of the object as determined using the appearance data in respect of the object, and the at least a portion of the image of the scene that is analysed, to result in a determination that an image of the object has been detected. The criteria for determining whether the object has been detected based on the results of the analysis may be set as desired e.g. depending upon the confidence in the result required, computation time constraints etc. The method may comprise determining a likelihood that an image of the object is present in the image obtained from the camera using the results of the analysis. Such data may be used in determining whether or not to use the results of the analysis in respect of a given object in determining the position of the vehicle. The method may comprise inputting the at least a portion of the image to the or each algorithm defined by the algorithm data. The set of one or more algorithms may be arranged to attempt to determine a match between each of one or more features identified in the at least a portion of the image of the scene that is analysed and a feature of the object as determined using the appearance data for the object. The features may be feature points. The appearance data for the object may be indicative of one or more feature points, or the method may further comprise using the appearance data to identify one or more feature points of the object. For example, where the appearance data comprises a stored image of the object, the method may comprise analysing the image to identify the one or more feature points. 
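A minimal sketch of feature-point matching of this kind is given below, using Euclidean descriptor distance and a ratio test to accept only unambiguous correspondences. The two-dimensional descriptors are toy values; real detectors such as ORB or SIFT produce high-dimensional descriptors, and the match threshold would be tuned to the required confidence.

```python
def match_features(stored_desc, image_desc, ratio=0.75):
    """Nearest-neighbour matching with a ratio test: a stored feature is
    matched to an image feature only if its best match is clearly better
    than its second-best candidate."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for i, d in enumerate(stored_desc):
        ranked = sorted(range(len(image_desc)), key=lambda j: dist(d, image_desc[j]))
        best, second = ranked[0], ranked[1]
        if dist(d, image_desc[best]) < ratio * dist(d, image_desc[second]):
            matches.append((i, best))
    return matches

def object_detected(matches, min_matches=4):
    """Deem the object found when enough features correspond; at least
    four correspondences are needed to fit a homography later."""
    return len(matches) >= min_matches
```

The match count (or a score derived from it) can serve as the likelihood that the object's image is present.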
Alternatively, the one or more features may be at least some of the at least a portion of the image of the scene that is analysed and at least a portion of an image of the object as defined by the appearance data for the object. This may be appropriate where the algorithm is a neural network which attempts to recognise an image of the object according to the obtained data in the obtained image of a scene.
The step of analysing the at least a portion of the image may simply attempt to detect an image of the object i.e. to determine whether it is present or not with a desired level of certainty (which may be set by reference to one or more criteria). This may provide sufficient data to enable the results of the analysis to be used in determining a position of the vehicle. The position of the object according to the object data may be used together with other data as desired e.g. an assumed approximate position of the vehicle, data indicative of the position of the camera etc. to determine a position of the vehicle. However, the analysis to attempt to detect the presence of an image may include other steps to provide additional data for use in determining a position of the vehicle. In some embodiments the method may comprise determining a projection between one or more feature points of the object determined using the appearance data for the object and one or more feature points detected in the at least a portion of the image of the scene that is analysed. The method may comprise determining a homography between the one or more feature points determined using the appearance data and the one or more feature points detected in the analysed at least a portion of the image of the scene. A homography computation involves obtaining a projection between points in one plane and points in another plane. A homography projection may identify subsets of features from those identified using the appearance data, and those identified through analysis of the at least a portion of the image of the scene, which can be considered to have a suitable correspondence. It will be appreciated that the features identified using the appearance data and those identified through analysis of the at least a portion of the image of the scene, e.g. in different planes, may not be the same. The method may comprise determining a projection of the object, or one or more parts thereof, into the obtained image.
In some embodiments, an object may be deemed to be found in the obtained image when a homography may be found between a predetermined number of features in the obtained image and corresponding features determined using the appearance data for the object.
It will be appreciated that the step of attempting to detect an image of the object in the image obtained from the camera may or may not be a distinct step from the step of using the results of the analysis in determining a position of the vehicle. For example, matching between feature points of the portion of the image of the scene and one or more feature points determined using the appearance data may be performed to determine the presence of the image of the object, and, at the same time, determine a position of the object relative to the vehicle. The matching process may involve determining a projection between the feature points of the image obtained from the camera and described by the appearance data, which projection may be used in determining the position of the vehicle.
The method may comprise determining the position of the object relative to the vehicle using the results of the analysis. For example, a distance e.g. a three-dimensional distance of the object from the vehicle may be determined. This may then be used in determining an absolute position of the vehicle, or a position relative to another reference system e.g. the road network.
Where an image of the object has been deemed to have been found in the obtained image, the results of the analysis may be used in any suitable manner to determine the position of the vehicle. The results of the analysis may be used together with other data in the determination. This may enable greater confidence in the result to be achieved. The other data may include camera calibration data and/or data in respect of the object obtained from the database, such as position data for the object, and optionally orientation, shape, and/or dimension data. The other data may alternatively or additionally include assumed approximate position data obtained in respect of the vehicle e.g. GPS or other position data. Data obtained in relation to the detection of an image of the object in other images e.g. camera frames may also be used. The method may comprise using the results of the analysis to determine a distance e.g. a three-dimensional distance of the object from the vehicle. This may be carried out using the results of a projection e.g. homography obtained between feature points in the obtained image and determined based on the stored appearance data for an object, and camera calibration data. Thus, the step of using the results of the analysis to determine the distance of the object from the vehicle may comprise using camera calibration data.
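The role of camera calibration data in the distance determination can be illustrated with a deliberately minimal pinhole-camera sketch; in practice the full homography and calibration matrix would be used, and the function name and values here are assumptions for illustration only.

```python
def object_distance(real_width_m, pixel_width, focal_length_px):
    """Minimal pinhole-model estimate of the distance (in metres) of an
    object of known physical width from the camera, given its apparent
    width in pixels and the calibrated focal length in pixels."""
    return real_width_m * focal_length_px / pixel_width
```

For example, a 0.6 m wide sign appearing 60 pixels wide with a 1200-pixel focal length would be estimated at 12 m from the camera.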
The determined position of the vehicle may be used as desired. In some embodiments the position is used in determining a position of the vehicle relative to a digital map. The position determined may be in relation to a segment of the digital map. Alternatively or additionally, the determined position of the vehicle is used in navigation of or by the vehicle. The vehicle may or may not be an autonomous vehicle. The determined position may, alternatively or additionally, be used in determining a position of the vehicle in relation to a given lane of a multiple lane road element. The method may comprise refining an assumed approximate position of the vehicle using the results of the analysis (and optionally other data) e.g. an approximate position based on GPS or other position data. The results of the analysis may be used as an input for a statistical localization method e.g. based upon a particle filter.
In some instances, it may be that it is determined that an image of the object has not been found in the obtained image from the camera based on analysis of the at least a portion of the image obtained from the camera. This may be because the object is no longer present, or is obscured for some reason. For example, the object may not appear as expected in the obtained image of a scene if it is obscured from view by a tree in the captured image, which did not obscure the object when the object was added to the database e.g. if this was at a different time of year. Of course, an object may also not be found in the area of interest if it is no longer present.
In accordance with any of the aspects or embodiments of the invention, the objects are objects which may be visible, under at least some conditions, when traversing element(s) of the road network. It will be appreciated that an object may not be visible to the naked eye under all conditions. For example, the object may be visible only in certain light conditions, or at certain times of year. An object might be obscured by vegetation at certain times of year. An object may be visible only when imaged by a camera apparatus and not by the naked eye e.g. the object may be visible in an image obtained by an infrared camera but not using the naked eye. Each object is an object which may be expected to form part of an image of a scene captured by a camera associated with a vehicle when traversing a road element of the network.
An object for which data is stored may be any static object. By way of example, and not by limitation, an object may be selected from a sign, lamp post, bridge front, road marking, building, skyline. A sign may be a traffic sign, or may include a brand name or logo. For example, the sign might be an advertising sign.
In preferred embodiments, the plurality of objects are or include traffic signs. The database may include data indicative of more than one type of object. Thus the plurality of objects may include any one or ones of the types of objects mentioned above.
While the invention has been described in relation to the use of images of objects and/or scenes obtained from a camera, it is envisaged that the invention may be more broadly applicable to determining a position of a vehicle through analysis of a representation of a scene obtained from a set of one or more sensors associated with the vehicle. The database will then include algorithm data to be used in analysing a representation of a scene obtained using a set of one or more sensors associated with a vehicle to attempt to detect a representation of the object. The representation may be a 2-D representation e.g. an image, or may be a three-dimensional representation. The representation of a scene or object may be an image of the scene or object. The set of one or more sensors associated with a vehicle may be a set of one or more image sensors. The image sensors may comprise one or more cameras as in the earlier aspects, but may be any form of image sensor. In other embodiments, the representation may be obtained through three-dimensional scanning. The representation may be a radar or LIDAR (laser scanning) representation. Thus, the set of one or more sensors may comprise one or more laser sensors, or one or more radar sensors. The database used would include appropriate algorithm data for use in conjunction with the type of sensors and/or representation involved. For example, suitable algorithms used for detecting features within a LIDAR representation may be identified for use.
In accordance with a further aspect the present invention provides a method of determining a position of a vehicle when traversing a road network, the method comprising;
obtaining, from a database, data indicative of one or more objects expected to be encountered by the vehicle in the future while traversing the road network, wherein the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing a representation of a scene obtained using a set of one or more sensors associated with a vehicle to attempt to detect a representation of the object in the representation of the scene obtained from the set of one or more sensors; obtaining a representation of a scene from a set of one or more sensors associated with the vehicle while traversing the road network;
and, for one or more objects in respect of which data has been obtained from the database, using at least the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the representation of a scene obtained from the set of one or more sensors associated with the vehicle to attempt to detect a representation of the object in the representation of the scene;
and, where a representation of the object is deemed to be found in the representation of a scene obtained from the set of one or more sensors, using the results of the analysis in determining a position of the vehicle.
In accordance with a further aspect of the invention there is provided a system for determining a position of a vehicle when traversing a road network, the system comprising;
means for obtaining, from a database, data indicative of one or more objects expected to be encountered by the vehicle in the future while traversing the road network, wherein the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing a representation of a scene obtained from a set of one or more sensors associated with a vehicle to attempt to detect a representation of the object in the representation of the scene obtained from the set of one or more sensors;
means for obtaining a representation of a scene from a set of one or more sensors associated with the vehicle while traversing the road network;
and means for, for one or more objects in respect of which data has been obtained from the database, using at least the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the representation of a scene obtained from the set of one or more sensors associated with the vehicle to attempt to detect a representation of the object in the representation of the scene;
and means for, where a representation of the object is deemed to be found in the representation obtained from the set of one or more sensors, using the results of the analysis in determining a position of the vehicle.
It will be appreciated that the present invention, in any of the aspects and embodiments described herein, may include any of the features described in relation to any of the other aspects and embodiments of the invention, to the extent they are not mutually exclusive.
The methods of the invention in its various aspects described herein may be performed on the fly e.g. while the vehicle is traversing road elements of the road network.
The methods described herein may be implemented at least partially using software e.g. computer programs. Thus, in further embodiments the technology described herein comprises computer software specifically adapted to carry out the methods herein described when installed on a data processor, a computer program element comprising computer software code portions for performing the methods herein described when the program element is run on a data processor, and a computer program comprising code adapted to perform all the steps of a method or of the methods herein described when the program is run on a data processor. The data processor may be a microprocessor system, a programmable FPGA (field programmable gate array), etc. The technology extends to a computer program product comprising computer readable instructions adapted to carry out any or all of the methods described herein when executed on suitable data processing means.
The technology described herein also extends to a computer software carrier comprising such software which when used to operate a data processing apparatus or system comprising a data processor causes in conjunction with said data processor said apparatus or system to carry out the steps of the methods of the technology described herein. Such a computer software carrier could be a physical storage medium such as a ROM chip, CD ROM, RAM, flash memory, or disk, or could be a signal such as an electronic signal over wires, an optical signal or a radio signal such as to a satellite or the like.
It will further be appreciated that not all steps of the methods of the technology described herein need be carried out by computer software, and thus further embodiments comprise computer software, and such software installed on a computer software carrier, for carrying out at least one of the steps of the methods set out herein.
The technology described herein may accordingly suitably be embodied as a computer program product for use with a computer system. Such an implementation may comprise a series of computer readable instructions either fixed on a tangible, non-transitory medium, such as a computer readable medium, for example, diskette, CD, DVD, ROM, RAM, flash memory, or hard disk. It could also comprise a series of computer readable instructions transmittable to a computer system, via a modem or other interface device, either over a tangible medium, including but not limited to optical or analogue communications lines, or intangibly using wireless techniques, including but not limited to microwave, infrared or other transmission techniques. The series of computer readable instructions embodies all or part of the functionality previously described herein.
Those skilled in the art will appreciate that such computer readable instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Further, such instructions may be stored using any memory technology, present or future, including but not limited to, semiconductor, magnetic, or optical, or transmitted using any communications technology, present or future, including but not limited to optical, infrared, or microwave. It is contemplated that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation, for example, shrink wrapped software, pre- loaded with a computer system, for example, on a system ROM or fixed disk, or distributed from a server or electronic bulletin board over a network, for example, the Internet or World Wide Web.
BRIEF DESCRIPTION OF THE DRAWINGS
Various embodiments will now be described, by way of example only, and with reference to the accompanying drawings in which:
Figure 1 illustrates examples of objects which may be included in a database used in embodiments of the invention;
Figure 2A illustrates one way in which data relating to signs may be stored in a database; Figure 2B illustrates one way in which the algorithm data in respect of the signs included in the database in Figure 2A may be stored;
Figure 3 is a flow chart illustrating one embodiment of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS
Some embodiments of the invention will now be described using an example in which the objects in respect of which data is stored, and which are used in determining the position of a vehicle, are traffic signs. It will be appreciated that the invention is not limited to such objects. Any static objects which may be expected to appear in an image captured by a camera associated with a vehicle traversing a road network may be included in a database as described below, and subsequently used in position determination. Examples of such objects include lamp posts, bridge fronts, specific road markings, clearly visible buildings e.g. building fronts, brand names, skylines, etc. Figure 1 illustrates images of certain static objects which may be used. Appearance data included in the database indicative of the object may correspond to such an image, or be derived therefrom e.g. a set of feature points etc.
In embodiments of the invention, a database is constructed storing instances of traffic signs. For each sign, data indicative of a 3D location, shape and orientation of the sign is stored. Optionally the sign is stored in association with information identifying a particular road stretch with which the sign is associated. This may be a road stretch along which the sign is expected to be encountered i.e. be detected in an image obtained from a camera associated with a vehicle traversing the stretch. The road stretch may be defined in relation to digital map data e.g. being at least a portion of one or more segments of a digital map. The digital map comprises segments representing elements of a road network.
Alternatively, the road stretch may be defined in a map agnostic manner e.g. as a geographic stretch. This might be as a polyline e.g. of WGS-84 coordinates. In some embodiments, further information may be associated with the geographical stretch to enable the stretch to be determined in relation to a digital map. This may be carried out using a map agnostic location referencing system. Such systems define a location or stretch in a manner which enables the location or stretch to be determined in relation to any digital map. Such systems may enable a location defined in relation to one digital map to be determined in relation to another, different digital map. Examples of such map agnostic systems include the Applicant's OpenLR® system and the Agora-C system.
Additional information may be associated with the sign in the database.
This information may include information about where in a scene i.e. a scene captured by a camera associated with a vehicle traversing a road network, the sign may be expected to appear. This might be e.g. on the right, top left etc. This information may be used when identifying an area of interest of an image obtained from a camera associated with a vehicle for analysis to try to detect an image of the sign. If it is known where the sign is likely to appear, other parts of the image may be excluded from analysis, saving computation time. Optionally, for example for signs above the road, this can be extended with lane information, so that an even more accurate portion of the image can be selected for analysis when the current lane is known.
In the database, the data relating to each given sign i.e. each particular instance of a sign, is associated with algorithm data to be used in attempting to detect an image of the sign in an image captured by a camera associated with a vehicle when traversing a road network. The algorithm data may be indicative of an identifier for a particular algorithm and a set of parameters that can be used to configure the algorithm for detecting this specific instance of a sign. Thus, the algorithm and/or parameters for use in detecting one instance of a sign e.g. a 50km/h sign, may be different to the algorithm and/or parameters for use in detecting another instance of the same 50km/h sign. This may be due to differences in the background to the sign. For example, in one location the sign may appear on a grey background corresponding to a concrete road bridge, while in another location it appears against the sky. Different algorithms and/or configuration of algorithms may be more appropriate e.g. result in more reliable detection of an image of the sign in these different contexts.
Many different detection algorithms may be used. Well-known feature point detector algorithms are Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), ORB, Features from Accelerated Segment Test (FAST), and Binary Robust Independent Elementary Features (BRIEF). Alternatively neural networks or correlation detectors may be used. These examples of detection algorithm are not limiting, and are merely exemplary of the wide range of algorithms which may be used. Detection algorithms used are preferably scale-invariant to enable them to provide matching for multiple distances of the object from a camera. However, if the algorithm is not scale invariant, a pre-scaling search may be applied. Preferably the detection algorithm is at least to some extent rotationally invariant.
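For binary descriptors such as those produced by ORB or BRIEF, matching is typically performed by brute-force Hamming distance. The following is a minimal, illustrative sketch; the function names and the distance threshold are assumptions, not part of the described system.

```python
import numpy as np

def hamming_distance(a, b):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_descriptors(query, train, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors, as used
    with ORB/BRIEF-style features. Returns (query_idx, train_idx, distance)
    tuples for the matches that fall within max_dist."""
    matches = []
    for i, q in enumerate(query):
        dists = [hamming_distance(q, t) for t in train]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j, dists[j]))
    return matches
```

Floating-point descriptors such as SIFT or SURF would instead be matched by Euclidean distance, but the structure of the matcher is the same.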
The optimal algorithm for detecting a particular instance of a sign will depend upon the context of the sign e.g. background, and also the requirements of a particular user of the database. For example, some users may wish to prioritise reliability of detection, while others will be more concerned with processing time or memory required. This may depend upon the implementation of a position determining or other system using the database e.g. whether the algorithm is to be run by a server or local device, the way in which the output of the detection algorithm is to be used e.g. whether it is to be used by an autonomous vehicle, in which case accuracy is paramount etc. Neural networks may be expensive in terms of memory and processing time, and may be less suitable for a lightweight embedded system. Feature detector/matcher algorithms will give a precise location, but are specific to a particular instance of a sign.
In addition, the database includes data relating to the appearance of the sign for use in attempting to detect the sign in an image. This may be an image of the sign, or may be e.g. feature points and descriptors for that sign. The type of appearance data may be selected to be the most appropriate data for use by the particular algorithm specified for the sign, e.g. feature points and descriptors which are expected to enable that algorithm to reliably detect the sign in an image.
Conversely, the algorithm specified for a given instance of a sign may be selected based upon the available appearance data. A complete image of the sign may be more applicable e.g. where the algorithm implements a neural network.
As mentioned above, it is envisaged that different users of the database may have different requirements, such that the optimal detection algorithm in respect of a given instance of a sign may vary between users. Thus, when setting up the database, the requirements of a particular user may be taken into account when selecting the algorithm data and appearance data for each instance of a sign. Different algorithms and parameters for configuring the algorithms may be trialled in relation to an instance of a sign, with the parameters being tuned to obtain the optimal algorithm, parameters and appearance data e.g. feature set, for a particular user's requirements e.g. balancing criteria such as processing time, memory required, reliability etc. The optimal algorithm may be the best algorithm in relation to providing a desired balance of performance and detection quality e.g. a tradeoff between detection probability and computation time or latency. A balance may be found between the likelihood of detection success based on a single camera image and processing time, which may affect the number of frames in which the sign can be detected. Storing a complete image of a sign may enable greater choice in selecting algorithms e.g. where the algorithm information stored in respect of a sign may be modified or selected by a user. It is envisaged that the database may be modified or completed by a user to enable it to be customised to the user's particular requirements, for example, by providing the user with a database including the data indicative of the instances of signs e.g. sign location, shape etc. and the appearance data for the sign, with the user then adding desired algorithm data for each instance of a sign. Thus, system performance and matching success rates may be improved by predetermining per sign what kind of detection/matching algorithm works best, and with which parameters. This may be user specific.
The size of the database can be reduced by using dictionary type features, or using other techniques in the art. For example, a dictionary of algorithm and parameter tuples may be used. If certain algorithms with specific parameters are used many times in the database i.e. in respect of multiple instances of signs, then the algorithm and parameter tuples can be replaced by a reference in a separate list of algorithms with parameters. This avoids storing the algorithm data in full in association with the sign data. If there are many instances of a certain sign, and all instances use the same sign appearance data and same algorithm and parameters, then the sign appearance data and algorithm and parameters can be replaced by a reference into a list of signs, that stores the sign appearance data, and either a reference to the algorithm/parameter list or the algorithm/parameters themselves.
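The dictionary technique above amounts to interning shared algorithm/parameter tuples so that each unique tuple is stored once and referenced by index. A minimal sketch (class and method names are assumptions):

```python
class AlgorithmDictionary:
    """Deduplicates (algorithm, parameters) tuples that are shared by many
    sign entries: each unique tuple is stored once in a separate list, and
    sign entries hold only a compact reference into that list."""

    def __init__(self):
        self._entries = []  # unique algorithm/parameter records
        self._index = {}    # hashable key -> index into _entries

    def intern(self, algorithm, params):
        """Return a reference for the tuple, storing it on first use only."""
        key = (algorithm, tuple(sorted(params.items())))
        if key not in self._index:
            self._index[key] = len(self._entries)
            self._entries.append({"algorithm": algorithm, "params": dict(params)})
        return self._index[key]

    def lookup(self, ref):
        """Resolve a stored reference back to the full algorithm record."""
        return self._entries[ref]
```

A sign record then stores only the small integer returned by `intern`, rather than the algorithm data in full.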
The database may be structured as desired. In one example the database includes a first part providing a sign database. The sign database provides generic sign data, which is applicable to each instance of a particular sign.
In one exemplary embodiment the sign database includes, for each sign, the following data;
width, height;
one or more of (depending whether the data is generic);
a reference to a set of one or more algorithms to be used in detection of the sign e.g. detection, matching feature extraction etc., and a reference to a set of one or more parameters for configuring the algorithms; and appearance data for the sign to be input to the algorithm e.g. full image, features for detection using the applicable algorithm(s) e.g. SIFT features, or a sign classification identifier etc.
In this example, a second part of the database provides an instance database. This includes the data specific to each instance of a sign. For each instance of a sign the following data may be stored in the instance database;
reference to the applicable sign data in the sign database;
location, height above ground, normal vector or viewing angle, pitch;
optionally data may be stored giving an indication of where in an image the instance of a sign is expected to appear e.g. left, right or top (alternatively, the expected location may be determined on the fly).
It is envisaged that the sign database might include references to multiple algorithms for each sign i.e. each generic type of sign. The instance database may then include data referring to a particular one of the multiple algorithms that is to be used for that particular instance of the sign.
The data in respect of a given sign is therefore provided by the combination of the data in the instance database for a particular instance of a sign, and the data in the sign database which is referenced by the instance data, and which provides further information which is generic to multiple instances of the sign. It will be appreciated that where the algorithm data and/or appearance data is specific to an instance of a sign, then it may be included in the instance database rather than the generic sign database.
One example of the data which may be stored is given in Figure 2A. Here the sign database includes data for each one of a plurality of generic signs, identified by "signId 1, 2...". The data is as exemplified above, including data referencing the applicable algorithm for detection, the width and height of the sign, and the appearance data for the sign.
The instance database includes a reference to the applicable data in the sign database for a particular instance of a sign e.g. "signId: 1" etc., the area of an image in which the sign is expected to appear e.g. right, top etc., an indication of the three dimensional location of the sign, and the height above ground, angle and pitch data for that instance of the sign. Thus the complete data relating to the instance of the sign is given by the referenced generic sign data (as referenced by the signId in the instance database), and the remainder of the data in the instance database. The instance database may be stored in a tiled form.
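The two-part structure illustrated by Figure 2A can be sketched with simple record types. The field names below are assumptions based on the exemplary data, not a definitive schema:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SignRecord:
    """Generic data shared by all instances of a sign (the sign database)."""
    sign_id: int
    width_m: float
    height_m: float
    algorithm_ref: str   # e.g. "SURF1": a reference into the algorithm database
    appearance: bytes    # full image, or serialized feature points and descriptors

@dataclass
class SignInstance:
    """Data specific to one placed instance of a sign (the instance database)."""
    sign_id: int                          # reference into the sign database
    location: Tuple[float, float, float]  # three-dimensional location
    height_above_ground_m: float
    normal: Tuple[float, float, float]    # normal vector / viewing angle
    pitch: float
    expected_area: Optional[str] = None   # e.g. "right", "top"

def resolve(instance, sign_db):
    """Combine instance-specific data with the referenced generic sign data."""
    return sign_db[instance.sign_id], instance
```

The complete data for an instance is thus the pair returned by `resolve`: the generic record plus the instance-specific fields.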
The above database construction may be appropriate where multiple instances of a given sign e.g. a 50km/h sign share the same algorithm and appearance data. It will be appreciated that not every instance of a particular sign will necessarily share such data, or only certain subsets of a particular sign may share such data. The sign database may then be modified appropriately to include additional entries in respect of each different set of data i.e. corresponding to further generic signs, each associated with its own signId. If different instances of signs tend to have different appearance and algorithm data, then a "dictionary" style structure to the database may not be appropriate.
The present invention provides great flexibility to include as much data as required to ensure that sufficiently specific algorithm and appearance data is provided for each instance of a sign. It will be appreciated that even if some generic signs share the same algorithm data e.g. 50km/h signs, the data in respect of different generic traffic signs e.g. give way, no entry, lane guide, other speed limit signs, end of speed limit etc. may, and typically does, have different algorithm data associated therewith. This is in contrast to prior art techniques in which the same algorithm is used to detect all types of traffic sign. It will be appreciated that the present invention may allow simpler algorithms to be used, with resulting benefits in processing times, as an algorithm specific to a particular type, or even instance of sign, may be simpler than one which must detect all types of sign. In some prior art techniques using a neural network, the complexity of the network, and the time and complexity in training the network in order to detect all types of sign, and/or other objects, may be significant.
Figure 2B illustrates the data which may be stored in respect of a particular algorithm, and which is referenced by the sign database. For example, signId 1 is associated with SURF1. The algorithm database provides details of the algorithm and parameters corresponding to this identifier (which identifies the SURF algorithm configured using a particular set of parameters).
It will be appreciated that database size can be reduced by only storing the appearance data relevant for the most suitable algorithm e.g. SIFT feature points and descriptors.
One way in which the database may be used in determining a precise location of a vehicle traversing a road element in a road network will now be described. The method may be performed by a vehicle mounted system using object data obtained from a remote server. However, the invention is not limited to this implementation, and steps may be performed on any one or ones of a server and vehicle mounted system or elsewhere. For example, a camera image might be transmitted to a server for use in performing the method using object data obtained from a database at the server using data relating to the position of a vehicle, received by the server from the vehicle.
The techniques of the present invention enable the position of a vehicle to be determined accurately e.g. in relation to a digital map. This may provide location determination to a level of accuracy to enable a determination as to which lane the vehicle is travelling in, for a multi-lane road. The position determination may be appropriate for use by an autonomous vehicle, which requires particularly accurate location determination, although the invention is not limited to use in this context.
The determined position may be used to refine an estimated position of the vehicle based on position data, such as GPS data. Such position data gives a rough approximation of the position of the vehicle, but may not be sufficiently accurate e.g. to determine the lane of travel. Localization based on traffic signs or other static objects which may be performed in accordance with the invention would typically, although not necessarily, be used in combination with another localization method. One example of such another localization method would be one which requires regular error correction or synchronization. For example, the precise positioning data obtained in accordance with the invention may be used to refine or synchronise data obtained from a positioning unit providing position data based on GPS and Inertial Measurement Unit (IMU), which may exhibit a slowly aggregating error. Alternatively or additionally the data obtained in accordance with the invention may be used in combination with position data obtained using vehicle mounted camera(s) based on detected lane markings. Alternatively or additionally the data obtained in accordance with the invention may be used as an input for statistical localization, such as localization based on a particle filter. It will be appreciated that it is advantageous to use localization based on sign (or other object) detection in combination with other positioning techniques, as there will be times at which sign based localization may fail, e.g. if a sign is not visible as expected. This may be for temporary or permanent reasons e.g. due to weather conditions, or having been removed or somehow changed in appearance since the database was created. An example will be given in relation to determining an accurate position of a vehicle having one or more cameras associated with i.e. mounted thereon, as the vehicle traverses a road element of the road network. 
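The use of a sign detection as an input to particle-filter localization, as mentioned above, can be sketched as a measurement update step. This is an illustrative fragment assuming a Gaussian measurement model and 2-D particle positions; the function name and parameters are assumptions.

```python
import numpy as np

def sign_measurement_update(particles, weights, sign_position,
                            measured_distance, sigma=1.0):
    """Reweight localization particles (N x 2 positions) by how well each
    particle's predicted distance to the known sign position (from the
    database) agrees with the distance measured from the camera image.
    Returns normalised weights."""
    predicted = np.linalg.norm(particles - sign_position, axis=1)
    likelihood = np.exp(-0.5 * ((predicted - measured_distance) / sigma) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()
```

Particles whose position is consistent with the observed sign distance gain weight; over successive detections the particle cloud concentrates on the true vehicle position, refining the approximate GPS/IMU estimate.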
Referring to Figure 3, in step 1, continuously or in respect of a certain time window, a search is performed in the sign database to identify a set of one or more instances of signs which are expected to be observed in the near future, i.e. to appear in an image captured by a camera associated with the vehicle. The signs which are expected to appear are identified based upon the known approximate position of the vehicle, e.g. according to GPS or other positioning data. Alternatively or additionally, the signs may be identified based upon a route which is expected to be traversed by the vehicle in the applicable time period, whether a predetermined or inferred route. This process may be performed continually, i.e. with the search being performed continually as the position of the vehicle advances, or may be performed at intervals, with all the signs expected to be encountered in the next predetermined interval being obtained.
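By way of a non-limiting illustration, the database query of step 1 may be sketched as follows; the record fields, the look-ahead horizon and the flat-Earth distance approximation are illustrative assumptions rather than part of the invention:

```python
import math

# Hypothetical sign records: each has a position (latitude/longitude in
# degrees) alongside appearance and algorithm data, as described for the
# sign database. The field names are illustrative.
SIGN_DB = [
    {"id": "sign-1", "lat": 52.0901, "lon": 5.1210, "algorithm": "feature_points"},
    {"id": "sign-2", "lat": 52.2000, "lon": 5.5000, "algorithm": "neural_net"},
]

def expected_signs(approx_lat, approx_lon, horizon_m=500.0):
    """Return the signs within a look-ahead horizon of the vehicle's
    approximate (e.g. GPS-derived) position.

    Uses an equirectangular distance approximation, which is adequate for
    the short ranges involved here."""
    result = []
    for sign in SIGN_DB:
        dlat = math.radians(sign["lat"] - approx_lat)
        dlon = math.radians(sign["lon"] - approx_lon)
        mean_lat = math.radians((sign["lat"] + approx_lat) / 2.0)
        # Mean Earth radius of ~6371 km converts angles to metres.
        dist_m = 6371000.0 * math.hypot(dlat, dlon * math.cos(mean_lat))
        if dist_m <= horizon_m:
            result.append(sign)
    return result
```

A route-based variant, as also described above, would instead test each sign against a corridor around the expected route rather than a radius around the current position.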
For each camera frame, i.e. image, one or more areas of interest are identified (step 3). An area of interest is an area in which one of the expected signs is expected to be found. It will be appreciated that, based on the information from the sign database, which includes the locations of the signs expected to be encountered in the near future, and knowledge of the location at which a camera frame was taken, it is known which sign(s) should appear in a frame; and, if the sign has data indicative of where in a scene it is expected to appear, it will also be known approximately where in the frame the sign should appear. It is therefore possible to identify an area of interest based upon the knowledge of the expected signs and, where applicable, the data indicative of where in a scene those signs are expected to appear. In some cases a pre-processing step of coarse detection, e.g. using a neural network, may be performed to identify the area of interest. The area of interest may then correspond to an area of the frame in which an expected sign can be seen, e.g. with a suitable resolution. The area of interest identifies the area upon which the algorithm defined for the instance of a sign is to operate. Thus, it may be an area in which the instance of the sign is expected to appear, or in which it is believed the instance of the sign can be seen, which is to be subject to the further analysis. The area of interest may be selected taking into account data identifying a lane with which a sign is associated, where such data is stored.

In step 5, the algorithm defined for the given instance of a sign in the database, where applicable configured using the parameters associated therewith, is used to analyse the area of interest to detect the sign. Where the algorithm has been configured using stored parameters, the configured algorithm may be kept available in the system during at least a time window in which the sign is expected to be detected.
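The identification of an area of interest (step 3) may, purely by way of illustration, be sketched as a fixed-margin crop around the pixel location at which a sign is expected to appear; in practice the window would be derived from the stored position data and its uncertainty, and the fixed margin below is an assumption:

```python
def area_of_interest(expected_x, expected_y, frame_w, frame_h, margin=64):
    """Return a crop rectangle (x0, y0, x1, y1), in pixels, around the
    location where a sign is expected to appear, clamped to the frame.

    The fixed margin is an illustrative stand-in for a search window
    derived from the positional uncertainty of the vehicle and the sign."""
    x0 = max(0, expected_x - margin)
    y0 = max(0, expected_y - margin)
    x1 = min(frame_w, expected_x + margin)
    y1 = min(frame_h, expected_y + margin)
    return (x0, y0, x1, y1)
```

The algorithm defined for the sign instance then operates only on this rectangle rather than on the full frame.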
The detection process may include any suitable steps, depending upon the algorithm(s) defined for the instance of a sign and the appearance data stored for the sign. The algorithm typically analyses the area of interest to detect features therein, and compares the features to features stored in the appearance data for the sign, or determined based upon a stored image of the sign, where the complete image is stored. The algorithm may be one which can filter out outliers, e.g. a RANSAC-based algorithm. However, depending upon the algorithm used, the comparison may be based on comparing the entire image of the sign in the camera frame to a stored image of the sign, e.g. where the algorithm is a neural network.
Where a sufficient degree of matching is determined between features of the image of a sign detected in the camera frame and features of the stored sign, e.g. based on the stored appearance data for the sign, a homography (one-to-one) projection may be determined between the features in the camera image and the stored image of the sign (or stored feature data for the sign). Where such a homography can be found for a certain number of feature points, e.g. at least 4, it may be assumed that there is a very high probability that the image in the camera frame is indeed of the sign. The homography projection may be used to project the sign according to the stored data, e.g. the corners thereof, into the image found in the camera frame.
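The acceptance test described above may be illustrated with a deliberately simplified transform fit. A real implementation would estimate a full homography, e.g. with a RANSAC-based solver, but the structure, fitting a transform to point correspondences and requiring at least 4 points to agree, is the same; the scale-plus-translation model and the pixel tolerance below are illustrative assumptions:

```python
def fit_scale_translation(src, dst):
    """Least-squares fit of dst ~= s * src + t over 2D point lists.

    A simplified stand-in for full homography estimation: the scale s and
    translation t are found in closed form by centring both point sets."""
    n = len(src)
    msx = sum(x for x, _ in src) / n
    msy = sum(y for _, y in src) / n
    mdx = sum(x for x, _ in dst) / n
    mdy = sum(y for _, y in dst) / n
    num = sum((sx - msx) * (dx - mdx) + (sy - msy) * (dy - mdy)
              for (sx, sy), (dx, dy) in zip(src, dst))
    den = sum((sx - msx) ** 2 + (sy - msy) ** 2 for sx, sy in src)
    s = num / den
    return s, (mdx - s * msx, mdy - s * msy)

def match_accepted(src, dst, tol=2.0):
    """Accept the detection when at least 4 correspondences between the
    stored sign features (src) and camera-image features (dst) fit the
    estimated transform to within tol pixels."""
    if len(src) < 4:
        return False
    s, (tx, ty) = fit_scale_translation(src, dst)
    inliers = sum(1 for (sx, sy), (dx, dy) in zip(src, dst)
                  if abs(s * sx + tx - dx) <= tol
                  and abs(s * sy + ty - dy) <= tol)
    return inliers >= 4
```

Once accepted, the fitted transform plays the role of the homography projection, mapping stored sign geometry (e.g. its corners) into the camera frame.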
The homography computation, together with the camera calibration and known dimensions of the sign according to the sign database, e.g. known distances within the sign, may be used to determine a 3D distance of the sign to the vehicle (step 7). This may be performed by computing a translation of pixel distances in the camera image to distances in the 3D world. Camera calibration information enables 2D camera pixels to be translated into depth vectors in a 3D vehicle-relative world. The 2D distances between points in the sign which correspond to points used in the homography projections to the camera image will be known.
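For a sign viewed roughly fronto-parallel, the translation of pixel distances to 3D distances in step 7 reduces to the pinhole relation distance = focal length (in pixels) x physical size / pixel size. The following is an illustrative sketch only, under that fronto-parallel assumption; the full method described above uses the homography together with the complete camera calibration:

```python
def sign_distance_m(focal_px, sign_width_m, sign_width_px):
    """Distance from camera to sign under a pinhole camera model.

    focal_px: focal length in pixels, from the camera calibration.
    sign_width_m: physical width of the sign, from the sign database.
    sign_width_px: width of the sign as measured in the camera frame,
    e.g. between corner points located via the homography projection."""
    return focal_px * sign_width_m / sign_width_px
```

For example, a 0.6 m wide sign imaged 30 pixels wide by a camera with a 1000-pixel focal length lies roughly 20 m away.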
Also known are the location of the sign with respect to the road (longitudinal, lateral and height above the road), and the orientation of the sign (mounting angle). The determined distance of the sign to the vehicle may be used in obtaining an accurate position of the vehicle relative to a digital map (step 9). This may then be used alone, or to refine a known approximate position of the vehicle. Multiple observations of the sign from different viewpoints, i.e. based on the analysis of different camera frames (from the same or different cameras), may be used to increase the precision of the determined position of the vehicle relative to the sign.
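Step 9 may be illustrated, in simplified planar map coordinates, by subtracting the vehicle-relative offset of the sign from the sign's stored map position, averaging over multiple observations to reduce measurement noise; the planar coordinate convention and the function itself are illustrative assumptions:

```python
def vehicle_position(sign_map_xy, relative_offsets):
    """Estimate the vehicle's map position from observations of one sign.

    sign_map_xy: the sign's position in planar map coordinates, from the
    sign database.
    relative_offsets: the sign's position relative to the vehicle, one
    per analysed camera frame (step 7); averaging estimates from multiple
    viewpoints increases the precision of the result."""
    n = len(relative_offsets)
    estimates = [(sign_map_xy[0] - dx, sign_map_xy[1] - dy)
                 for dx, dy in relative_offsets]
    x = sum(e[0] for e in estimates) / n
    y = sum(e[1] for e in estimates) / n
    return (x, y)
```

In a complete system this estimate would typically feed a statistical localiser, such as the particle filter mentioned above, rather than being used raw.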
The present invention may allow a single observation of a sign to be used to determine the position of the vehicle e.g. relative to a digital map.


CLAIMS:
1. A method of determining a position of a vehicle when traversing a road network, the method comprising:
obtaining, from a database, data indicative of one or more objects expected to be encountered by the vehicle in the future while traversing the road network, wherein the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the object in the image of the scene obtained from the camera;
obtaining an image of a scene from a camera associated with the vehicle while traversing the road network;
and, for one or more object in respect of which data has been obtained from the database, using at least the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the image of a scene obtained from the camera associated with the vehicle to attempt to detect an image of the object in the image of the scene;
and, where an image of the object is deemed to be found in the image of a scene obtained from the camera, using the results of the analysis in determining a position of the vehicle.
2. The method of claim 1 comprising identifying the or each one of said one or more objects expected to be encountered in the future, and in respect of which data is obtained from the database, based upon the data indicative of the position of the object and data indicative of one or more expected future positions of the vehicle.
3. The method of claim 2 wherein the one or more expected future positions of the vehicle are determined based upon one or more of; data indicative of an assumed approximate position of the vehicle and data indicative of a route expected to be traversed by the vehicle.
4. The method of any preceding claim wherein the or each of the one or more object in respect of which data has been obtained from the database whose data is used in analysing the obtained image is an object whose image is expected to be present in the image of a scene obtained from the camera, the or each object being selected based upon at least a position associated with the image of a scene obtained from the camera and the position of the object as indicated by the position data for the object.
5. The method of any preceding claim comprising, where an image of an object is deemed to be found in the image of a scene obtained from the camera, obtaining one or more further image of a scene from a camera associated with the vehicle and expected to contain an image of the same object, using at least the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the or each further image of a scene to attempt to detect an image of the object in the image; and, for each image in which an image of the object is deemed to be found, using the results of the analysis in determining the position of the vehicle.
6. The method of any preceding claim wherein the database comprises data indicative of a plurality of objects expected to be encountered by vehicles traversing the road network, wherein the database comprises, for each object, at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image obtained by a camera associated with a vehicle traversing the road network to attempt to detect the object in the image in use.
7. A method of determining a position of a vehicle when traversing a road network, the method comprising:
providing a database comprising data indicative of a plurality of objects expected to be encountered by vehicles traversing the road network, wherein the database comprises, for each object, at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained by a camera associated with a vehicle traversing the road network to attempt to detect the object in the image in use;
obtaining, from the database, data indicative of one or more objects expected to be encountered by the vehicle in the future while traversing the road network, wherein the obtained data indicative of each object comprises at least the data indicative of the position of the object and the data indicative of the appearance of the object, and the algorithm data associated with the object;
obtaining an image of a scene from a camera associated with the vehicle while traversing the road network;
and, for one or more object in respect of which data has been obtained from the database, using at least the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the image of a scene obtained from the camera associated with the vehicle to attempt to detect an image of the object in the image obtained from the camera;
and, where an image of the object is deemed to be found in the image of a scene obtained from the camera, using the results of the analysis in determining a position of the vehicle.
8. The method of any preceding claim comprising identifying an area of interest in the image obtained from the camera, wherein the area of interest is expected to contain an image of the object to be detected, and analysing the identified area of interest using the obtained data indicative of at least the appearance of the object and the algorithm data associated with the obtained data indicative of the object, wherein the area of interest corresponds to only a portion of the obtained image.
9. The method of any preceding claim wherein the data indicative of an object further comprises data indicative of where, in a scene encountered by a vehicle when traversing the road network, the object may be expected to appear.
10. The method of claim 9 wherein, when an object has a position associated with a road element comprising a plurality of lanes, the data indicative of the object further comprises data indicative of a lane of the road element with which the position of the object is associated.
11. The method of claim 9 or 10 as dependent upon claim 8 comprising using the data indicative of where, in a scene encountered by a vehicle, the object may be expected to appear in identifying the area of interest in the image obtained from the camera.
12. The method of any preceding claim wherein the stored data indicative of an object further comprises data indicative of the shape and orientation of the object.
13. The method of any preceding claim wherein the algorithm data associated with an object is indicative of a set of one or more algorithms for use in analysing an image expected to contain an image of the object to attempt to detect the image of the object, and optionally a set of one or more parameters for configuring the or each algorithm for performing the analysis.
14. The method of claim 13 wherein the set of one or more algorithms are arranged to attempt to determine a match between each of one or more features identified in the at least a portion of the image of the scene that is analysed and a feature of the object as determined using the appearance data for the object.
15. The method of any preceding claim wherein the step of analysing the at least a portion of the image of a scene comprises determining a projection between one or more feature points of the object determined using the appearance data for the object and one or more feature points detected in the at least a portion of the image of the scene that is analysed.
16. The method of any preceding claim wherein the algorithm data is indicative of one or more of a feature point detector, neural network or correlation detector.
17. The method of any preceding claim wherein the determined position of the vehicle is a position relative to a digital map.
18. The method of any preceding claim comprising using the determined position of the vehicle in navigation of or by the vehicle to determine a position of the vehicle in relation to a given lane of a multiple lane road element, or to refine an assumed approximate position of the vehicle.
19. A system for determining a position of a vehicle when traversing a road network, the system comprising:
means for obtaining, from a database, data indicative of one or more objects expected to be encountered by the vehicle in the future while traversing the road network, wherein the data indicative of each object comprises at least data indicative of the position of the object and data indicative of the appearance of the object, and wherein the data indicative of each object is associated with algorithm data to be used in analysing an image of a scene obtained from a camera associated with a vehicle to attempt to detect an image of the object in the image of the scene obtained from the camera;
means for obtaining an image of a scene from a camera associated with the vehicle while traversing the road network;
and means for, for one or more object in respect of which data has been obtained from the database, using at least the obtained data indicative of the appearance of the object and the algorithm data associated with the obtained data indicative of the object to analyse at least a portion of the image of a scene obtained from the camera associated with the vehicle to attempt to detect an image of the object in the image of the scene;
and means for, where an image of the object is deemed to be found in the image of a scene obtained from the camera, using the results of the analysis in determining a position of the vehicle.
20. The method or system of any preceding claim wherein the object is a traffic sign.
21. A computer program product comprising computer readable instructions adapted to carry out a method as claimed in any of claims 1 to 18 or 20 when executed on suitable data processing means.
PCT/EP2019/073677 2018-09-07 2019-09-05 Methods and systems for determining the position of a vehicle WO2020049089A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1814566.4A GB201814566D0 (en) 2018-09-07 2018-09-07 Methods and systems for determining the position of a vehicle
GB1814566.4 2018-09-07

Publications (1)

Publication Number Publication Date
WO2020049089A1 true WO2020049089A1 (en) 2020-03-12

Family

ID=63921367

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/073677 WO2020049089A1 (en) 2018-09-07 2019-09-05 Methods and systems for determining the position of a vehicle

Country Status (2)

Country Link
GB (1) GB201814566D0 (en)
WO (1) WO2020049089A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009098154A1 (en) * 2008-02-04 2009-08-13 Tele Atlas North America Inc. Method for map matching with sensor detected objects
US20140172290A1 (en) * 2012-12-19 2014-06-19 Toyota Motor Engineering & Manufacturing North America, Inc. Navigation of on-road vehicle based on vertical elements
US20150081211A1 (en) * 2013-09-17 2015-03-19 GM Global Technologies Operations LLC Sensor-aided vehicle positioning system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210199814A1 (en) * 2017-11-02 2021-07-01 Zte Corporation Positioning method and device, and server and system
CN113280822A (en) * 2021-04-30 2021-08-20 北京觉非科技有限公司 Vehicle positioning method and positioning device
CN113280822B (en) * 2021-04-30 2023-08-22 北京觉非科技有限公司 Vehicle positioning method and positioning device
WO2023215418A1 (en) * 2022-05-04 2023-11-09 Qualcomm Incorporated Estimating and transmitting objects captured by a camera

Also Published As

Publication number Publication date
GB201814566D0 (en) 2018-10-24

Similar Documents

Publication Publication Date Title
US11085775B2 (en) Methods and systems for generating and using localisation reference data
US20220214174A1 (en) Methods and Systems for Generating and Using Localization Reference Data
KR102618443B1 (en) Method and system for video-based positioning and mapping
EP2450667B1 (en) Vision system and method of analyzing an image
US8259994B1 (en) Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases
EP3147884B1 (en) Traffic-light recognition device and traffic-light recognition method
WO2020043081A1 (en) Positioning technique
US11200432B2 (en) Method and apparatus for determining driving information
CN113034566B (en) High-precision map construction method and device, electronic equipment and storage medium
WO2020049089A1 (en) Methods and systems for determining the position of a vehicle
US20200279395A1 (en) Method and system for enhanced sensing capabilities for vehicles
CN114662587A (en) Three-dimensional target sensing method, device and system based on laser radar
US11513211B2 (en) Environment model using cross-sensor feature point referencing
JP5435294B2 (en) Image processing apparatus and image processing program
CN115457084A (en) Multi-camera target detection tracking method and device
CN113469045B (en) Visual positioning method and system for unmanned integrated card, electronic equipment and storage medium
CN114898314A (en) Target detection method, device and equipment for driving scene and storage medium
CN116917936A (en) External parameter calibration method and device for binocular camera
CN117541465A (en) Feature point-based ground library positioning method, system, vehicle and storage medium
CN116630430A (en) Camera online calibration method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19765686; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19765686; Country of ref document: EP; Kind code of ref document: A1)