WO2014091043A1 - Simultaneous localization and mapping method for robotic devices - Google Patents

Simultaneous localization and mapping method for robotic devices

Info

Publication number
WO2014091043A1
WO2014091043A1 PCT/ES2013/070846 ES2013070846W WO2014091043A1 WO 2014091043 A1 WO2014091043 A1 WO 2014091043A1 ES 2013070846 W ES2013070846 W ES 2013070846W WO 2014091043 A1 WO2014091043 A1 WO 2014091043A1
Authority
WO
WIPO (PCT)
Prior art keywords
map
points
objects
location
characteristic
Prior art date
Application number
PCT/ES2013/070846
Other languages
Spanish (es)
French (fr)
Inventor
Tomás MARTÍNEZ MARÍN
Eduardo LOPEZ REDONDO
Original Assignee
Universidad De Alicante
Priority date
Filing date
Publication date
Application filed by Universidad De Alicante filed Critical Universidad De Alicante
Publication of WO2014091043A1 publication Critical patent/WO2014091043A1/en

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268: Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274: Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device

Definitions

  • the object of the invention is a simultaneous location and mapping system for robotic devices, known in the field of robotics by its acronym SLAM.
  • the basic idea is to create a model of the environment through which the vehicle moves at the same time that it is located.
  • the way to locate from the observations of the sensor, as well as the way in which the information of the environment is modeled generate the various techniques of the SLAM, technical field where the present invention is integrated.
  • the purpose of the SLAM methods and systems is to use the sensor data of a robotic vehicle (which will be referred to herein as a robot or vehicle) to update its position, as well as the information of the environment stored in a structure called Map.
  • a robotic vehicle which will be referred to herein as a robot or vehicle
  • Map the information of the environment stored in a structure called Map.
  • a laser device is usually used as a sensor that provides the information of the environment. With it, the robot must be able to correct its position and that of the elements contained in the map. This is achieved by extracting different characteristics of the sensor data.
  • characteristic is understood as any consecutive region of points, in the sequence of data provided by the sensor, that belong to the same real object.
  • Each of these characteristics has a position uncertainty, as does the vehicle, which has this uncertainty through the information provided by its odometry system (consisting of the angle of rotation of the wheels and the distance traveled).
  • the update of the position of the vehicle and that of the map elements is carried out taking into account the uncertainty of the observations coming from the sensor and that of the odometry.
  • In the localization process, initially, the vehicle detects obstacles through its sensor. Subsequently, the vehicle moves, while its odometry system provides the distance traveled and the angle of rotation of the wheels. With this information a first location hypothesis is established. After that, the robot measures again the location of the objects of the environment and, based on that location, establishes a second location hypothesis that does not have to coincide with that calculated by odometry. Finally, the robot merges the location hypotheses provided by the odometry and by the sensors to, depending on the uncertainty of both, establish a definitive location hypothesis.
  • Our SLAM proposal consists of three inputs and two outputs.
  • the inputs consist of the last known position of the vehicle (X_t), the data acquired by external sensors, such as a laser and an image sensor, and finally, the state of the map at the current time (M_t).
  • one of the most used external sensors is the laser. It collects the metric information of the environment by performing an angular sweep of its laser beam so that, for each angle, the distance or range to the nearest obstacle is known. All the range information acquired for each position of the beam in the same sweep is defined herein as a "scan".
  • odometry information has to do with the distance traveled by the vehicle in a given period of time, as well as with the turning position of its wheels. Normally, this information is usually collected using encoders associated with the drive and rotation motors of the vehicle's wheels.
  • the map of any SLAM process has the task of saving the information provided by the external sensor.
  • the way in which this information is stored is one of the keys that gives any SLAM strategy its decisive advantage over the rest. Therefore, one of the technical problems to be solved by the present invention is the optimization of the information management provided by the sensor, that is, the establishment of an environment map.
  • One of the determining factors in the localization of any SLAM process is the association of the information received at an instant by a sensor with that already existing in the map. By identifying this information captured by the sensor at one instant with other information already stored, and assuming the invariability of the map, the location of the vehicle can be resolved. This form of localization is based on observation, since it requires information from an external sensor for its calculation. So that the association between the sensor information and the information contained in the map is possible, all SLAM processes perform a segmentation of the scan. This segmentation consists in the extraction of significant units of information (characteristics) from the set of ranges provided by the sensor. The characteristics are an aggregation of consecutive sensor data that facilitates the association with the information stored in the map at an abstraction level higher than the minimum possible one based on simple points.
  • odometry provides the information necessary to calculate a vehicle location based on its dynamics. This location will be referred to below as location by prediction.
  • Once the location has been calculated using all the vehicle's sensory information: (a) laser (location by observation) and (b) encoders (location by prediction), any SLAM strategy merges both location hypotheses to obtain a single location (X_t+1); the data flow of one such iteration is sketched below. From this, the map is updated using the information of the laser sensor, obtaining a new state (M_t+1).
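By way of illustration only (not part of the patent text), the following Python sketch shows this input/output flow for one SLAM iteration; all names are assumptions and the prediction, observation and fusion steps are reduced to trivial placeholders.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float  # heading in radians

def slam_step(x_t: Pose, scan, odometry, map_t):
    """One SLAM iteration: inputs X_t, scan, odometry and M_t; outputs X_t+1 and M_t+1."""
    # (a) location by prediction, from the odometry (distance traveled, wheel angle)
    dist, steer = odometry
    x_pred = Pose(x_t.x + dist * math.cos(x_t.theta),
                  x_t.y + dist * math.sin(x_t.theta),
                  x_t.theta + steer)
    # (b) location by observation: a real system would associate the scan with the map here
    x_obs = x_pred  # placeholder
    # fusion of both hypotheses into the definitive location X_t+1
    x_next = Pose(0.5 * (x_pred.x + x_obs.x),
                  0.5 * (x_pred.y + x_obs.y),
                  0.5 * (x_pred.theta + x_obs.theta))
    # map update M_t+1 using the laser information and the fused pose (placeholder)
    map_next = map_t
    return x_next, map_next
```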
  • SLAM techniques can be grouped into two classes: topological SLAM and metric SLAM.
  • The first of these (topological SLAM) tries to maintain a map of the environment based on relative positions of the most significant places in the environment in which the vehicle moves (rooms, corridors, corridor crossings, etc.). Maps in this category [Adrien Angeli, Stéphane Doncieux, Jean-Arcady Meyer, David Filliat. Visual topological SLAM and global localization. In ICRA'09, Proceedings of the 2009 IEEE International Conference on Robotics and Automation, pages 2029 to 2034] are modeled using graphs of connected nodes, so that each of these represents a specific environment scenario. When the robot arrives at one place on the map and wishes to go to another, it checks its topological map and finds out the distance and direction of the new destination.
  • the vehicle maintains a map of the most relevant characteristics of the environment, which serve to locate it.
  • the characteristics are stored on the map keeping their most probable position within an absolute metric coordinate system;
  • the probability of the position is modified as the vehicle moves around the environment, using the new observations of the characteristics to reinforce its certainty of position.
  • the metric SLAM localization techniques are mainly grouped into three basic categories: EKF (extended Kalman filter), RBPF (particle filter) and Scan-Matching.
  • mapping techniques are classified into further categories: marks, grid, patterns and scan alignment.
  • SLAM techniques can be considered taking into account all possible relationships between location and mapping methods:
  • the object of the present invention is a simultaneous location and mapping system for robotic devices that provides improvements in SLAM processes with respect to the association, location and process of updating the map.
  • Since the first two (association and location) are a consequence of the update process, the different advantages and contributions of the present invention are shown below.
  • mapping stage The contribution in the mapping stage that is proposed is related to the modeling of the map.
  • all the map models used in the existing SLAM techniques are based on: (a) marks; (b) a grid of probabilistic cells; or (c) patterns, that is, a set of consecutive ranges out of all those received by the sensor (the scan) that represent a fragment of a real object.
  • the first (the marks) are the simplest, but also the ones with the most limitations when it comes to capturing environments with arbitrary shapes. They are designed to represent structured environments based on points and lines. Each mark is represented by a position and an associated uncertainty probability.
  • the second (the grid) models the environment based on cells with an associated occupancy probability.
  • The third (the patterns) can be considered an extension of the first, in the sense that they try to expand the restricted set of points and lines with which those work, in order to take advantage of the shape information of any obstacle to establish more robust localizations in not-too-structured environments.
  • the present invention proposes to replace the two-dimensional grid and the patterns by entities called objects, which consist of a sequence of moving points dynamically adjustable in size in order to represent the shape of the contours of the real obstacles being detected.
  • Each point in the sequence that represents the objects has an associated position and weight that indicates its degree of mobility.
  • In the case of mark/pattern and grid maps, there are location probabilities associated with the position of each mark/pattern and cell, which allow the map to be updated to achieve a global consistency.
  • each weight associated with a point of the object represents the probability that its position is more or less close to what it actually represents.
  • map objects can be adapted to local deformations of the contours of obstacles that may appear in real time. In contrast, patterns are not averaged to reduce or eliminate uncertainty.
  • the grid-based method does not provide information on the connectivity of the cells that are occupied (FIG. 17).
  • the present invention stores the information of the objects as independent entities from each other, which allows establishing association strategies based on the relative positions of the objects.
  • These objects, therefore, are defined as an ordered sequence of points with information on their position (X, Y) and a weight (P) proportional to the degree of uncertainty of that position.
  • the ordering of the points makes it easy to introduce concepts of computational geometry, such as occlusion and the analysis of the shape of objects.
  • the maps based on marks only model points, lines or patterns (defined as the set of measurements of the sensor).
  • the overall consistency of the map is achieved by maintaining a covariance matrix that reflects the uncertainties in the positions of the state vector used, that is, the state of the vehicle (position (X, Y) and orientation) and position of the rest of the marks.
  • the grid-based map achieves the consistency of the map by maintaining a probability map for the cell.
  • the map can be modified as the probability associated with its cells varies.
  • object-based maps use the weights associated with each point of them as a measure of their overall consistency.
  • When a point is first introduced into an object, its associated weight is the smallest possible (for example, 1).
  • As its position is updated over time based on the measurements received from the sensor, its weight increases, so that it is proportional to the number of updates of its position using the sensor information (FIG. 17).
  • the weighting of the points of the objects allows, as in the case of maps based on marks / patterns and grid, to ensure the overall consistency of the map as the system evolves over time.
  • the occupancy map does not exploit the potentially useful knowledge that all cells with an occupancy probability equal to 0.6 form a single connected grouping, just as cells that have a probability of 0.7 may also be representing the same physical contour.
  • This way of internally representing the physical reality provided by the sensor makes it easy to introduce concepts of computational geometry, such as occlusion between objects and object shape analysis, to know whether they are closed on themselves or to associate, based on shape, sensor characteristics with existing objects.
  • the way of storing sensor information in objects allows us to associate the sensor segmentation (related groupings of sensor points that refer to the same physical object and that we will call "features") with the map information.
  • the association process consists in the use of the object concept to robustly associate map entities using the geometric criteria of shape and occlusion.
  • the present invention solves the problem by using a transformation to achieve relative displacement and rotation between the points of the feature and the object as described below.
  • information on the shape of the collections of points to be fitted is used.
  • This information is summarized in another sequence of data called a signature that is formed by calculating the relative angle between two consecutive points in the collection of original points.
  • the advantage of working with the signature is that it is invariant to translations and rotations of the original sequence.
  • the proposal presented is to establish these relationships using exclusively the information contained in the signature, where the relevant information, such as corners and, in general, angled points, is clearly reflected, as illustrated in the sketch below.
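As an illustration (a sketch under the assumption that the signature is the sequence of relative angles between consecutive points, i.e. the turning angle from one segment of the point sequence to the next; the function name is not from the patent):

```python
import math
from typing import List, Tuple

def signature(points: List[Tuple[float, float]]) -> List[float]:
    """Relative angle between consecutive points of an ordered point sequence."""
    headings = [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(points, points[1:])]
    # difference of consecutive headings, wrapped to (-pi, pi]:
    # invariant to translations and rotations of the original sequence
    return [math.atan2(math.sin(b - a), math.cos(b - a))
            for a, b in zip(headings, headings[1:])]
```

A sequence of N points thus yields N-2 relative angles; corners and angled points show up as large entries, which is why they are clearly reflected in the signature.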
  • FIG. 1 shows a graph with the object and observation signatures, including the relative displacement between them.
  • FIG. 2 shows a scheme with the association of objects for a particular case in which three observation signatures (k, k+1, k+2) and N object signatures are used.
  • FIG. 3 shows a graph with the relative displacement between objects and observations, where the maximum is produced for a displacement of eleven points.
  • FIG. 4 shows a graph with the construction of the signature in the practical case of FIG. 3, where the homologous reference points are calculated.
  • FIG. 5 shows the scheme of the location process that is part of the object of the invention.
  • FIG. 6 shows an object-based map
  • FIG. 7 shows the object-based map of Figure 6 during the update process.
  • FIG. 8 shows the update of the object map for the case where there is an overlap between the observation points and those of the associated object.
  • FIG. 9 shows the update of the object map for the case where there are points of an observation that do not overlap with points of the associated object.
  • FIG. 10 shows the update of the object map for the case where there exist points of an object that do not overlap with points of any observation taken from the sensor.
  • FIG. 11 shows a diagram of the hardware solution for the simultaneous localization and mapping method for robotic devices object of the present invention.
  • FIG. 12 schematically shows the realization of segmentation in the hardware solution of the invention.
  • FIG. 13 schematically shows the hardware implementation of the cross correlation between the signature of an object and that of a characteristic.
  • FIG. 14 shows the general scheme of association implemented in an FPGA according to the method object of the present invention.
  • FIG. 15 shows the outputs of the cross correlators as a function of the viewing angle of the sensor (a).
  • FIG. 16 shows the final location scheme that is obtained by merging the location assumptions based on the observation and the only prediction one.
  • FIG. 17 shows a comparison of the mark-based and grid-based mapping techniques together with the proposed mapping using weights on the objects.
  • As indicated, the main contributions of the invention are detailed below for the mapping process, the association process and the localization process.
  • the association process based on the signature of the characteristics (one-dimensional) and of the objects (sequences based on the form) is described.
  • the localization process, based on association hypotheses between notable points of the signatures of the characteristics and their associated object to achieve a transformation in displacement and rotation that gives a good alignment of the points of the characteristics and those of the objects, is explained.
  • points are moved in, deleted from, or added to map objects.
  • Objects are defined as an ordered sequence of points. Each of them contains position information in an absolute reference system, as well as a probability measure, associated with the position information of the point, of being in a near neighborhood of the actual position it represents. So, an object is given as an ordered set CO of N ordered pairs, CO = {(x_i, y_i) ∀ 1 ≤ i ≤ N, with x_i, y_i ∈ ℝ}.
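A minimal data-structure sketch of such an object (the names and the use of a dynamically sized list are illustrative assumptions, not identifiers from the patent):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObjectPoint:
    x: float      # absolute map coordinate X
    y: float      # absolute map coordinate Y
    p: int = 1    # weight: number of times the position has been confirmed by the sensor

@dataclass
class MapObject:
    """Ordered sequence of points describing the contour of one real obstacle."""
    points: List[ObjectPoint] = field(default_factory=list)

    def append(self, x: float, y: float) -> None:
        # new points enter the object with the smallest possible weight (1)
        self.points.append(ObjectPoint(x, y, 1))
```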
  • the minimum of the mean square error function indicates the relative displacement between the signatures for which there is a better fit between the two.
  • Figure 1 shows both signatures and the relative displacement for which the best fit is produced.
  • the association between the regions of the segmented scan (the characteristics) and those of the map objects is carried out in parallel by calculating the mean square error of each of the observation signatures (SOBV) with all those signatures of the map objects (SOBJ) that are compatible with its position.
  • SOBV: the observation signatures.
  • SOBJ: the signatures of the map objects.
  • Figure 2 illustrates the process described for a particular case in which we start from three observation signatures (k, k+1, k+2) and N object signatures.
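A sketch of this displacement search (identifiers are illustrative; it assumes the observation signature is the shorter sequence slid along the object signature, with the mean square error as figure of merit):

```python
from typing import List, Tuple

def best_displacement(s_obv: List[float], s_obj: List[float]) -> Tuple[int, float]:
    """Relative displacement of the observation signature over the object
    signature with the minimum mean square error, and that error."""
    best_d, best_err = 0, float("inf")
    for d in range(len(s_obj) - len(s_obv) + 1):
        window = s_obj[d:d + len(s_obv)]
        err = sum((a - b) ** 2 for a, b in zip(s_obv, window)) / len(s_obv)
        if err < best_err:
            best_d, best_err = d, err
    return best_d, best_err
```

Evaluating this in parallel for each observation signature against all position-compatible object signatures yields the candidate characteristic-object pairs, each with a figure of merit.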
  • This yields association hypotheses, that is, characteristic-object pairs of the map, where each of them, in turn, establishes a location hypothesis that is resolved as indicated below.
  • a two-dimensional shift and rotation transformation is defined by at least two pairs of points.
  • One of the pairs is determined by the homologous points calculated above, and the other can be established by considering as associated those points of the original sequences that are equidistant from their respective reference points. Once the point associations have been established in a neighborhood of the reference points in both sequences, the shift and rotation transformation is determined, as in the sketch below.
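A sketch of how the shift-and-rotation transformation can be recovered once two associated point pairs are available (this closed-form construction is standard planar geometry, not text taken from the patent):

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def rigid_transform_from_two_pairs(a1: Point, a2: Point,
                                   b1: Point, b2: Point) -> Tuple[float, float, float]:
    """Rotation and translation mapping the pair (a1, a2) onto (b1, b2).

    Returns (theta, tx, ty) such that R(theta) * a + (tx, ty) matches b for both pairs.
    """
    # rotation: difference of the segment headings
    theta = (math.atan2(b2[1] - b1[1], b2[0] - b1[0])
             - math.atan2(a2[1] - a1[1], a2[0] - a1[0]))
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # translation: align the first pair after rotating it
    tx = b1[0] - (cos_t * a1[0] - sin_t * a1[1])
    ty = b1[1] - (sin_t * a1[0] + cos_t * a1[1])
    return theta, tx, ty
```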
  • Figure 5 shows the proposed location scheme based on the form.
  • the location hypothesis (51) is established in the following stages: i. Definition of the associated OBJ object (52) and the OBV characteristic (53).
  • a stage of relative displacement of signatures and establishment of homologous points in the signatures (56).
  • v. A stage of establishing homologous points in the characteristic and its associated object (57, 57').
  • vi. A stage of establishing associations between the points of the sequence of the characteristic and that of the object, in a neighborhood centered on the homologous reference points (58).
  • vii. A final stage of location resolution (59).
  • the map is updated based on its current status and information from the sensor.
  • the map update is done locally, taking into account that the map objects are defined by a one-dimensional sequence of points.
  • the points that define the objects are characterized by their absolute position on the map (X, Y) and by their weight (P), as seen in Figure 6.
  • Updating an object on the map requires the location of the vehicle calculated at the current time (X_t) as well as the state of the map at the previous time (M_t-1). In this way, the local information of the observations extracted from the sensor is converted to absolute map coordinates using the already estimated location of the vehicle (X_t), which allows the sensor data and the points of the map objects to be integrated in the same absolute reference system.
  • the updating of a region of an object is done by overlapping the updated points of the observation with those of the object.
  • the update of one point of the observation is carried out considering the local information of the weights of the two closest points of the object, as shown in Figure 8.
  • the observation point update is performed as follows:
  • X_obv,t are the updated absolute coordinates of the observation point
  • X_obv,t-1 are the absolute coordinates of the observation point before the update
  • X_obv,i are the absolute coordinates of the point of intersection between the observation ray that joins the sensor with the non-updated observation point (X_obv,t-1) and the segment defined by the two points of the map closest to the latter.
  • is the angle between the i-th point of the object and the next one, as seen from the sensor.
  • CASE 2 There are observation points that do not overlap with points of the associated object.
  • the update of the region of the object is done by adding the points of the observation that do not overlap the object.
  • the new points that are introduced in the object have weight 1, as shown in Figure 9.
  • the update of the area of the object that does not overlap with any observation is done by subtracting one unit from the weights of the points in that region.
  • if the weight of the points becomes negative, the object to which they belong is deleted.
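By way of a hedged illustration of these three cases (no identifiers are taken from the patent; the blend rule used for Case 1 and the deletion granularity at the end are assumptions, since the exact expressions are not reproduced here), a Python sketch could look as follows:

```python
from typing import Dict, List, Optional, Tuple

def update_object(points: List[Dict],                          # object points: {"x": .., "y": .., "p": ..}
                  overlapped: Dict[int, Tuple[float, float]],  # object index -> intersection point
                  new_points: List[Tuple[float, float]]        # observation points with no overlap
                  ) -> Optional[List[Dict]]:
    """One local update of a map object against the current observation."""
    n_orig = len(points)

    # CASE 1: overlapped object points are pulled toward the intersection of the
    # observation ray with the segment of the two closest object points; the
    # blend factor 1 / (p + 1) is an assumption, not the patent's expression.
    for i, (ix, iy) in overlapped.items():
        pt = points[i]
        alpha = 1.0 / (pt["p"] + 1)
        pt["x"] += alpha * (ix - pt["x"])
        pt["y"] += alpha * (iy - pt["y"])
        pt["p"] += 1                                   # weight grows with each confirmed update

    # CASE 3: pre-existing points not covered by any observation lose one unit of weight
    for i in range(n_orig):
        if i not in overlapped:
            points[i]["p"] -= 1

    # CASE 2: observation points with no overlap are appended with weight 1
    points.extend({"x": x, "y": y, "p": 1} for (x, y) in new_points)

    # If the weights become negative, the object is removed from the map
    # (interpreted here as: all of its points have negative weight).
    if points and all(pt["p"] < 0 for pt in points):
        return None
    return points
```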
  • FPGA field-programmable gate array
  • the sensor data acquired at an instant (S_t) is the data input to the system (100) configured in the FPGA, and results in the updated object map (M_t+1) and the updated position (X_t+1).
  • the system comprises a first segmentation module (101) that represents all those FPGA resources configured to execute the segmentation of the input sensor (signal St) for the purpose of generating characteristics.
  • the prediction module (102), the association and location module (103), the fusion module (104) and the object map generator module (105) are implemented in the system of the invention (100).
  • Each of the previous modules is implemented by logical blocks, which together allow defining the functionality of each module within the system programmed in the FPGA.
  • Configurable logic block of type real point (10), configured to represent the information associated with a real point of the sensor (range and angle).
  • Configurable logic block of type simulated point (20), configured to represent the information associated with a point simulated by the SLAM process (range and angle).
  • Configurable logic block of type particle (30): this block handles information related to the position (X, Y) of the particle, as well as its weight.
  • Configurable logic block of type vehicle location (40), configured to represent the position of the vehicle at a specific time in the SLAM process. Therefore, the information it handles is that of a position (X, Y) and an orientation.
  • the segmentation process implemented in the segmentation module (101) is configured to extract the characteristics from the sensor's scan.
  • the criterion by which it is decided that a region of points of the scan is promoted to the rank of characteristic is usually the local proximity that exists between two consecutive points of that region. If two of them are relatively close, it is assumed that they belong to the same characteristic. Otherwise, one characteristic ends and another begins.
  • the segmentation module (101) comprises as many configurable logic blocks of type real point (10) as measurements the scan contains; all configurable logic blocks are connected one after the other, so that, except for the logic blocks at the ends, each has two neighbors, as shown in Figure 12. From the initial information (range and angle of a point of the scan) that is loaded in each real point logic block (10), each of these performs in parallel a comparison of its range with that of its neighbor on the left (for example), storing the difference in the block itself. Next, a sequential traversal of the logic blocks from right to left is made, comparing the range difference of a block with that stored in the block on its left. A software counterpart of this criterion is sketched below.
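A sequential sketch of what the logic blocks do in parallel (the threshold value is an assumed parameter, not taken from the patent):

```python
import math
from typing import List, Tuple

def segment_scan(scan: List[Tuple[float, float]],
                 max_range_jump: float = 0.3) -> List[List[Tuple[float, float]]]:
    """Split a scan of (angle, range) readings into characteristics.

    Two consecutive readings stay in the same characteristic while the jump in
    range between them is small; otherwise one characteristic ends and another begins.
    """
    features: List[List[Tuple[float, float]]] = []
    current: List[Tuple[float, float]] = []
    prev_range = None
    for angle, rng in scan:
        if prev_range is not None and abs(rng - prev_range) > max_range_jump:
            features.append(current)
            current = []
        current.append((rng * math.cos(angle), rng * math.sin(angle)))  # Cartesian point
        prev_range = rng
    if current:
        features.append(current)
    return features
```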
  • the object map generator module (105) reserves the necessary logical blocks of particle type (configurable logic block particle type (30)) to be able to store the information of each of the objects on the map.
  • Each map object is composed of a set of these configurable logic blocks type particle (30) connected to each other.
  • memory is reserved for the corresponding logic block and is initialized with the appropriate position values (X, Y) and weight.
  • the removal of a point from an object results in the corresponding deletion of the block associated with the point.
  • the association and location module (103) is configured to establish the association between the features extracted from the scan and the map objects. As previously stated, it is based on the cross correlation of two signatures. The mean square error of two one-dimensional sequences, for a relative displacement d, is given by an expression of the form ECM(d) = (1/N) Σ_i (S_OBJ(i) - S_CAR(i+d))². This calculation requires a large number of multiplications, subtractions and sums. In a general-purpose microprocessor, the calculation of this function involves a high computational cost, since it cannot be executed in parallel. However, in an FPGA it can be done in hardware in an efficient way, using ad hoc hardware that performs sums and multiplications simultaneously. Most FPGAs contain logic blocks that perform these types of atomic multiply-and-accumulate operations. Normally, these types of operations are called MAC operations (multiply-accumulate). Specifically, the FPGA manufacturer Xilinx® names these dedicated MAC hardware units DSP Slices. In the present invention they are called MAC blocks (50).
  • the hardware of the mean square error between the signature of an object and that of a characteristic is shown in Figure 13.
  • the necessary hardware comprises a shift register (51) or delay line (this is a memory array that shifts the contents of each position one place to the right or left in each clock cycle), as well as as many MAC blocks (50) as there are elements in the signature of the object with which the mean square error is to be established. The operation is described below.
  • the signature data of the characteristic is loaded into a delay line.
  • the delay line shifts the contents of each position to the right.
  • the contents of lines A and B are subtracted register by register and the results obtained are entered in the MAC blocks to square the error.
  • the MAC blocks (50) are executed in order from left to right by multiplying the contents of their inputs (A and B) and then adding the result of the MAC block (50) on their left.
  • the final result of the mean square error for a specific offset of the delay line is returned at the rightmost MAC block output.
  • the described element configures an ECM logic block (60) that computes the mean square error of two signatures and comprises three inputs: the signatures of the characteristic (SCAR) and of the object (SOBJ), as well as the relative displacement between them.
  • the output of the block is the mean square error of the input data for a particular offset, as modeled in the sketch below.
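A software model of such an ECM block (a sketch only; in the FPGA the subtraction, squaring and accumulation are distributed over the chain of MAC blocks (50), and the handling of non-overlapping positions used here is an assumption):

```python
from typing import List

def ecm(s_car: List[float], s_obj: List[float], offset: int) -> float:
    """Mean square error between the object signature and the characteristic
    signature shifted by `offset` positions."""
    acc = 0.0
    for i, b in enumerate(s_obj):          # conceptually, one MAC block per element of s_obj
        a = s_car[i + offset] if 0 <= i + offset < len(s_car) else 0.0
        acc += (a - b) ** 2                # subtract, square and accumulate
    return acc / len(s_obj)
```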
  • the general association scheme is shown in Figure 14, where, first of all, those sections of objects that are compatible with the last known position of the vehicle are taken into account. Secondly, an initial association between the segmented characteristics and the object segments considered is established by viewing angle of the sensor. Thirdly, the signatures of the characteristics are calculated, as well as those of the object segments. Fourthly, the ECM logic block (60) is used to calculate the mean square error function between the associated signatures. Fifthly, the maximums of the outputs of the ECM blocks (60) that are compatible are selected, that is, those that maintain an angular relationship between them consistent with that between the viewing angles of the characteristics of the scan. Figure 15 shows the outputs of the ECM modules as a function of the viewing angle of the sensor (a).
  • one of the minimums found in any of the regions is selected to establish a pair of reference points in the corresponding signatures.
  • Each of the possible maximums establishes a location hypothesis.
  • ECM1 and ECM5 indicate the best location hypothesis because it is where the ECM function grows and decreases most rapidly.
  • the function is almost constant in a neighborhood. Therefore, the most promising or reliable location hypotheses will be chosen as those minima of the ECM function where the slope variation is as large as possible.
  • each location hypothesis based on the observation contemplated in the process described in the previous section has an associated Gaussian probability density function whose mean (μ) and covariance (Σ) are established based on multiple runs of the ICP algorithm from different initial points.
  • each Gaussian is weighted according to the figure of merit (mean square error) obtained by its location hypothesis as described above.
  • the location hypothesis from the prediction is also modeled as a Gaussian whose parameters (mean and covariance) are established according to the vehicle dynamics.
  • the final location is obtained by merging the location assumptions based on the observation and the only prediction, each with an associated weight that indicates the reliability of the hypothesis, as shown in Figure 16.
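A sketch of this fusion step, assuming each hypothesis is modeled as a Gaussian over the pose (x, y, orientation) and that the hypotheses are combined with the standard information-weighted product of Gaussians; folding the reliability weights into the covariances is an assumption, not the patent's exact rule:

```python
import numpy as np

def fuse_gaussian_hypotheses(means, covariances):
    """Fuse several Gaussian location hypotheses (prediction and observations)
    into a single pose estimate, information-filter style."""
    infos = [np.linalg.inv(c) for c in covariances]        # information matrices
    fused_cov = np.linalg.inv(sum(infos))
    fused_mean = fused_cov @ sum(i @ np.asarray(m, dtype=float)
                                 for i, m in zip(infos, means))
    return fused_mean, fused_cov

# e.g. one prediction hypothesis and two observation hypotheses (illustrative numbers)
mu, cov = fuse_gaussian_hypotheses(
    means=[[1.0, 2.0, 0.10], [1.2, 1.9, 0.12], [0.9, 2.1, 0.09]],
    covariances=[np.eye(3) * 0.5, np.eye(3) * 0.2, np.eye(3) * 0.3],
)
```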

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a simultaneous localization and mapping method for robotic devices, comprising at least steps of association, localization and map updating or mapping, characterised in that the modeling of the map in the mapping step is based on entities known as objects, defined as a sequence of moving points of a size that can be dynamically adjusted in order to represent the shape of the contours of real obstacles detected by at least one sensor of the robotic device. Moreover, each point in the sequence representing the objects is associated with a position and a weight indicating the degree of mobility thereof. The basic idea is to create a model of the environment through which a vehicle is moving while it is localized.

Description

SIMULTANEOUS LOCALIZATION AND MAPPING METHOD FOR ROBOTIC DEVICES

DESCRIPTION
The object of the invention is a simultaneous localization and mapping system for robotic devices, known in the field of robotics by its acronym SLAM. The basic idea is to create a model of the environment through which the vehicle moves while the vehicle itself is localized. The way the vehicle is localized from the sensor observations, as well as the way in which the information of the environment is modeled, give rise to the various SLAM techniques, the technical field where the present invention is integrated.
State of the art
The purpose of SLAM methods and systems is to use the sensor data of a robotic vehicle (which will be referred to herein interchangeably as robot or vehicle) to update its position, as well as the information of the environment stored in a structure called map. Normally, a laser device is used as the sensor that provides the information of the environment. With it, the robot must be able to correct its position and that of the elements contained in the map. This is achieved by extracting different characteristics from the sensor data.
In the case of a laser sensor, a characteristic is understood as any consecutive region of points, in the sequence of data provided by the sensor, that belong to the same real object. Each of these characteristics has a position uncertainty, as does the vehicle, whose uncertainty comes from the information provided by its odometry system (consisting of the angle of rotation of the wheels and the distance traveled). The update of the position of the vehicle and that of the map elements is carried out taking into account the uncertainty of the observations coming from the sensor and that of the odometry.
In the localization process, initially, the vehicle detects obstacles through its sensor. Subsequently, the vehicle moves, while its odometry system provides the distance traveled and the angle of rotation of the wheels. With this information a first location hypothesis is established. After that, the robot measures again the location of the objects of the environment and, based on that location, establishes a second location hypothesis that does not have to coincide with that calculated by odometry. Finally, the robot merges the location hypotheses provided by the odometry and by the sensors to, depending on the uncertainty of both, establish a definitive location hypothesis.
Our SLAM proposal consists of three inputs and two outputs. The inputs consist of the last known position of the vehicle (X_t), the data acquired by external sensors, such as a laser and an image sensor, and, finally, the state of the map at the current time (M_t).
In SLAM, one of the most used external sensors is the laser. It collects the metric information of the environment by performing an angular sweep of its laser beam so that, for each angle, the distance or range to the nearest obstacle is known. All the range information acquired for each position of the beam in the same sweep is defined herein as a "scan". On the other hand, the odometry information has to do with the distance traveled by the vehicle in a given period of time, as well as with the turning position of its wheels. Normally, this information is collected using encoders associated with the drive and steering motors of the vehicle's wheels.
The map of any SLAM process has the task of storing the information provided by the external sensor. The way in which this information is stored is one of the keys that gives any SLAM strategy its decisive advantage over the rest. Therefore, one of the technical problems to be solved by the present invention is the optimization of the management of the information provided by the sensor, that is, the establishment of a map of the environment.
One of the determining factors in the localization of any SLAM process is the association of the information received at an instant by a sensor with that already existing in the map. By identifying this information captured by the sensor at one instant with other information already stored, and assuming the invariability of the map, the location of the vehicle can be resolved. This form of localization is based on observation, since it requires information from an external sensor for its calculation. So that the association between the sensor information and the information contained in the map is possible, all SLAM processes perform a segmentation of the scan. This segmentation consists in the extraction of significant units of information (characteristics) from the set of ranges provided by the sensor. The characteristics are an aggregation of consecutive sensor data that facilitates the association with the information stored in the map at an abstraction level higher than the minimum possible one based on simple points.
On the other hand, the odometry provides the information necessary to calculate a vehicle location based on its dynamics. This location will be referred to below as location by prediction. Once the location has been calculated using all the vehicle's sensory information: (a) laser (location by observation) and (b) encoders (prediction), any SLAM strategy merges both pieces of location information to obtain a single location (X_t+1). From this, the map is updated using the information of the laser sensor, obtaining a new state (M_t+1).
SLAM techniques can be grouped into two classes: topological SLAM and metric SLAM.
The first of these (topological SLAM) tries to maintain a map of the environment based on relative positions of the most significant places in the environment in which the vehicle moves (rooms, corridors, corridor crossings, etc.). Maps in this category [Adrien Angeli, Stéphane Doncieux, Jean-Arcady Meyer, David Filliat. Visual topological SLAM and global localization. In ICRA'09, Proceedings of the 2009 IEEE International Conference on Robotics and Automation, pages 2029 to 2034] are modeled using graphs of connected nodes, so that each of these represents a specific environment scenario. When the robot arrives at one place on the map and wishes to go to another, it checks its topological map and finds out the distance and direction of the new destination.
In the second case (metric SLAM), the vehicle maintains a map of the most relevant characteristics of the environment, which serve to localize it. The characteristics are stored in the map keeping their most probable position within an absolute metric coordinate system; the probability of the position is modified as the vehicle moves around the environment, using the new observations of the characteristics to reinforce its certainty of position. Next, the different localization and mapping techniques used in the state-of-the-art metric SLAM techniques are set out, since this is where the technical problems solved by the present invention are found and, therefore, where the most novel approaches of the present invention are proposed.
The metric SLAM localization techniques are mainly grouped into three basic categories: EKF (extended Kalman filter), RBPF (particle filter) and Scan-Matching.
Mapping techniques are classified into further categories: marks, grid, patterns and scan alignment. According to this classification, the following SLAM techniques can be considered, taking into account all the possible relationships between localization and mapping methods:
[Table: SLAM techniques resulting from the possible combinations of localization and mapping methods (not reproduced in this text version).]
For each of the alternative techniques we have the following references:
[1] R. Smith, M. Self, and P. Cheeseman. Estimating uncertain spatial relationships in robotics. In Autonomous Robot Vehicles, I. Cox and G. Wilfong, Eds. Springer Verlag, New York, 1988, pp. 167-193.
[2] G. Grisetti, C. Stachniss, and W. Burgard. Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling. In IEEE International Conference on Robotics and Automation (ICRA, 2005).
[3] Juan I. Nieto, José E. Guivant and Eduardo M. Nebot. The Hybrid Metric Maps (HYMMs): A novel map representation for dense SLAM. In IEEE International Conference on Robotics and Automation (ICRA 2004, pp. 391-396).
[4] Chieh-Chih Wang, Charles Thorpe and Sebastian Thrun. Online Simultaneous Localization And Mapping with Detection And Tracking of Moving Objects: Theory and Results from a Ground Vehicle in Crowded Urban Areas. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2003).
[5] Tim Bailey and Juan Nieto. Scan-SLAM: Recursive Mapping and Localisation with Arbitrary-Shaped Landmarks. Workshop on Quantitative Performance Evaluation of Navigation Solutions for Mobile Robots, Robotics: Science and Systems Conference (RSS), 2008.
[6] P. Newman, D. Cole and K. Ho. Outdoor SLAM using visual appearance and laser ranging. In IEEE International Conference on Robotics and Automation, 2006.
Explanation of the invention

The object of the present invention is a simultaneous localization and mapping system for robotic devices that provides improvements in SLAM processes with respect to the association, the localization and the map update process. Since the first two (association and localization) are a consequence of the update process, the different advantages and contributions of the present invention are presented below.
La aportación en la etapa de mapeado que se propone está relacionada con la modelización del mapa. Tal y como se ha comentado en el estado del arte, todos los modelos de mapa empleados en las técnicas de SLAM existente son de tipo basado en: (a) marcas; (b) rejilla de celdas probabilísticas; o bien (c) basados en patrones, esto es un conjunto de rangos consecutivos de todos los recibidos por el sensor (sean) y que representan un fragmento de un objeto real. The contribution in the mapping stage that is proposed is related to the modeling of the map. As mentioned in the state of the art, all the map models used in the existing SLAM techniques are of the type based on: (a) marks; (b) grid of probabilistic cells; or (c) based on patterns, this is a set of consecutive ranges of all received by the sensor (be) and that represent a fragment of a real object.
Los primeros (las marcas) son los más sencillos, pero también los que presentan más limitaciones a la hora de capturar entornos con formas arbitrarias. Están pensados para representar entornos estructurados en base a puntos y líneas. Cada marca se representa por una posición y una probabilidad de incertidumbre asociada. Los segundos (la rejilla) modelan el entorno a base de celdas con una probabilidad de ocupación asociada. Los terceros (patrones) pueden considerarse una extensión de los primeros, en el sentido en que intentan ampliar el conjunto restringido de puntos y líneas con los que trabajan con el propósito de aprovechar la información de la forma de cualquier obstáculo para establecer localizaciones más robustas en entornos no demasiado estructurados. The first (brands) are the simplest, but also the ones that have the most limitations when it comes to capturing environments with arbitrary forms. They are designed to represent structured environments based on points and lines. Each brand is represented by a position and an associated uncertainty probability. The seconds (the grid) model the environment based on cells with an associated occupancy probability. Third parties (employers) can be considered an extension of the former, in the sense that they try to expand the restricted set of points and lines with which they work in order to take advantage of the information in the form of any obstacle to establish more robust locations in Not too structured environments.
Para evitar las limitaciones de las marcas y aprovechar las características de los otros métodos de una forma simplificada, la presente invención propone reemplazar la rejilla bidimensional y los patrones por entidades denominadas objetos, que consisten en una secuencia de puntos móviles ajustable dinámicamente en tamaño con el fin de representar la forma de los contornos de los obstáculos reales que se van detectando. Cada punto de la secuencia que representa los objetos tiene asociado una posición y un peso que indica el grado de movilidad del mismo. En el caso de los mapas de marcas/patrones y rejilla, existen probabilidades de localización asociadas a la posición de cada marca/patrón y celda, lo que permite al mapa actualizarse para lograr una consistencia global. En el caso de la invención, cada peso asociado a un punto del objeto representa la probabilidad de que su posición se encuentre más o menos cerca de la que realmente representa. Las ventajas de esta aproximación de modelo de mapa son varias. Así pues, en relación con la estrategia basada en marcas, permite modelar cualquier contorno de forma precisa y detallada sin merma sustancial de la memoria empleada. In order to avoid the limitations of the marks and take advantage of the characteristics of the other methods in a simplified manner, the present invention proposes to replace the two-dimensional grid and the patterns by entities called objects, which consist of a sequence of moving points dynamically adjustable in size in order to represent the shape of the contours of the real obstacles being detected. Each point in the sequence that represents the objects has an associated position and weight that indicates its degree of mobility. In the case of brand / pattern and grid maps, there are location probabilities associated with the position of each brand / pattern and cell, which allows the map to be updated to achieve a global consistency. In the case of the invention, each weight associated with a point of the object represents the probability that its position is more or less close to what it actually represents. The advantages of this map model approach are several. Thus, in relation to the brand-based strategy, it allows you to model any contour accurately and in detail without substantial loss of the memory used.
En relación con los modelos basados en rejilla, si bien ambas estrategias permiten capturar información de contornos de cualquier forma, en nuestro caso la memoria requerida para almacenar la información de los objetos crece linealmente con su número y su longitud y no cuadráticamente según la superficie de mapa explorado hasta el momento. In relation to grid-based models, although both strategies allow to capture contour information in any way, in our case the memory required to store the information of objects grows linearly with their number and length and not quadratically according to the surface of map explored so far.
With respect to pattern-based models, it allows the shape and size of the objects to be updated incrementally and, in addition, it allows a local consistency probability to be established for each point of the object, instead of a single probability associated with the whole pattern. Moreover, the map objects can adapt in real time to local deformations of the obstacle contours that may appear. Patterns, by contrast, are not averaged to reduce or eliminate uncertainty.
Another advantage worth noting is that the grid-based method provides no information about the connectivity of the occupied cells (FIG. 17). The present invention, however, stores the information of the objects as entities independent of one another, which makes it possible to establish association strategies based on the relative positions of the objects. These objects are therefore defined as an ordered sequence of points carrying their position (X, Y) and a weight (P) proportional to the degree of uncertainty of that position. The ordering of the points makes it easy to introduce concepts from computational geometry, such as occlusion and the shape analysis of objects. Thus, as stated in the prior art, mark-based maps only model points, straight lines or patterns (defined as the set of sensor measurements). The global consistency of the map is achieved by maintaining a covariance matrix that reflects the uncertainties in the positions of the state vector used, that is, the state of the vehicle (position (X, Y) and orientation) and the positions of the remaining marks.
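As an illustration of this representation, the following sketch (Python, with hypothetical names not taken from the patent) shows one possible in-memory layout of a map object as an ordered sequence of weighted points:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObjectPoint:
    x: float   # absolute X coordinate on the map
    y: float   # absolute Y coordinate on the map
    p: int     # weight, proportional to the consistency of this position

@dataclass
class MapObject:
    # ordered sequence of points describing the contour of one physical obstacle
    points: List[ObjectPoint] = field(default_factory=list)

    def append_point(self, x: float, y: float) -> None:
        # newly added points start with the minimum weight (for example, 1)
        self.points.append(ObjectPoint(x, y, 1))
```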
The grid-based map achieves map consistency by maintaining a probability map per cell. The map can be modified as the probability associated with its cells varies.
Object-based maps, however, use the weights associated with each of their points as a measure of their degree of global consistency. When a point is added to an object, its associated weight is the smallest possible (for example, 1). As its position is updated over time from the measurements received from the sensor, its weight increases, so that the weight is proportional to the number of updates of its position using the sensor information (FIG. 17). One of the fundamental differences between the object map and the grid map is that the former stores the contour information naturally classified by the physical objects detected by the sensor. In addition, weighting the points of the objects makes it possible, as in mark/pattern and grid maps, to ensure the global consistency of the map as the system evolves over time.
Comparing grids and objects, in the former there is no information about the connectivity of the cells, whereas in the latter it simply appears naturally. As illustrated in FIG. 17, the occupancy map does not exploit the potential advantage of knowing that all the cells with an occupancy probability equal to 0.6 form a single connected cluster, just as the cells with a probability of 0.7 may also be representing one and the same physical contour. This way of internally representing the physical reality provided by the sensor makes it easy to introduce computational-geometry concepts such as occlusion between objects and shape analysis of objects, in order to know whether they are closed on themselves or to associate, on the basis of shape, sensor features with already existing objects. The way the sensor information is stored in the objects allows the segmentation of the sensor data (connected clusters of sensor points that refer to the same physical object, which we will call "features") to be associated with the map information. In the present invention, the association process consists in using the object concept to robustly associate map entities using the geometric criteria of shape and occlusion.
Finally, regarding the localization process, the techniques known as Scan-Matching are generally based on the point-fitting algorithm called ICP [Lu and E. E. Milios. Robot pose estimation in unknown environments by matching 2D range scans. In Proc. IEEE Comp. Soc. Conf. on Computer Vision and Pattern Recognition, Seattle, WA, pp. 935-938, June 1994]. Basically, this procedure tries to find the relative displacement between two collections of points: one coming from a feature extracted from the sensor and the other from some entity stored in the map. The basic idea of ICP is to make both point collections coincide using a transformation based on a displacement and a rotation. To this end, a-priori associations are established between the points of both sequences and the resulting transformation is solved. The process is repeated until one point collection fits the other. Unfortunately, one of the main disadvantages of this algorithm is its convergence behaviour: the ICP technique does not converge to the desired result when the point collections are far apart (which happens when the accumulated error in the vehicle position is large and an area is revisited, i.e. loop closure) or when the relative rotation between them is large.
At this point, the present invention solves the problem by using a transformation that achieves a relative displacement and rotation between the points of the feature and those of the object, as described below. To this end, information about the shape of the point collections to be fitted is used. This information is summarized in another data sequence, called the signature, which is formed by computing the relative angle between two consecutive points of the original point collection. The advantage of working with the signature is that it is invariant to translations and rotations of the original sequence. Unlike the ICP algorithm, where associations must be established between the points of the original collections, the proposal presented here is to establish those relations using exclusively the shape information contained in the signature, in which the relevant information, such as corners and, in general, angular points, is clearly reflected. In this way, association hypotheses can be established between points of both signatures that correspond to translation and rotation transformations. Throughout the description and the claims, the word "comprises" and its variants are not intended to exclude other technical features, additives, components or steps. For those skilled in the art, other objects, advantages and features of the invention will emerge partly from the description and partly from the practice of the invention. The following examples and drawings are provided by way of illustration and are not intended to restrict the present invention. Furthermore, the present invention covers all possible combinations of the particular and preferred embodiments indicated herein.
Brief description of the figures
A series of drawings that help to better understand the invention, and that relate expressly to an embodiment of said invention presented as a non-limiting example thereof, is described very briefly below.
FIG. 1 shows a graph with the object and observation signatures, including the relative displacement between them.
FIG. 2 shows a diagram of the object association for a particular case starting from three observation signatures (k, k+1, k+2) and N object signatures.
FIG. 3 shows a graph with the relative displacement between objects and observations, where the best fit occurs for a displacement of eleven points.
FIG. 4 shows a graph with the construction of the signature for the practical case of FIG. 3, in which the homologous reference points are computed.
FIG. 5 shows the diagram of the localization process that forms part of the object of the invention.
FIG. 6 shows an object-based map.
FIG. 7 shows the object-based map of FIG. 6 during the update process.
FIG. 8 shows the update of the object map for the case where there is overlap between the observation points and those of the associated object.
FIG. 9 shows the update of the object map for the case where there are points of an observation that do not overlap with points of the associated object.
FIG. 10 shows the update of the object map for the case where there are points of an object that do not overlap with points of any observation extracted from the sensor.
FIG. 11 shows a diagram of the hardware solution for the simultaneous localization and mapping method for robotic devices that is the object of the present invention.
FIG. 12 schematically shows how the segmentation is carried out in the hardware solution of the invention.
FIG. 13 schematically shows the hardware implementation of the cross-correlation between the signature of an object and that of a feature.
FIG. 14 shows the general association scheme implemented in an FPGA according to the method that is the object of the present invention.
FIG. 15 shows the outputs of the cross-correlators as a function of the viewing angle of the sensor (α).
FIG. 16 shows the final localization scheme obtained by fusing the observation-based localization hypotheses and the single prediction hypothesis.
FIG. 17 shows a comparison of the mark-based and grid-based mapping techniques together with the proposed mapping using weights on the objects.
Detailed description of embodiments and an example
As indicated, the main contributions of the invention lie in the mapping process, the association process and the localization process. To explain the invention in detail, the association process, based on the signature of the features (one-dimensional) and of the objects (shape-based sequences), is described first. Next, the localization process is explained, based on association hypotheses between notable points of the signatures of the features and of their associated object, in order to obtain a displacement-and-rotation transformation that achieves a good alignment between the points of the features and those of the objects. Finally, it is described how points are moved, deleted from or added to the map objects.
Objects are defined as an ordered sequence of points. Each of them contains position information in an absolute reference system, as well as a probability measure associated with the point's position information lying in a close neighbourhood of the real position it represents. Thus, given any ordered set CO of N ordered pairs,

CO = { (x_i, y_i) ∀ 1 ≤ i ≤ N : x_i, y_i ∈ ℝ }
We define its signature as,
SCO = { arctan( (y_{i+1} − y_i) / (x_{i+1} − x_i) ) ∀ 1 ≤ i ≤ N−1 }
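A minimal sketch of this signature computation (Python/NumPy; the function name is illustrative only) for an ordered sequence of 2-D points:

```python
import numpy as np

def signature(points: np.ndarray) -> np.ndarray:
    """Signature of an ordered sequence of 2-D points (shape (N, 2)):
    the angle of each segment joining two consecutive points."""
    dx = np.diff(points[:, 0])
    dy = np.diff(points[:, 1])
    # arctan2 keeps quadrant information; arctan(dy/dx), as in the definition
    # above, would behave the same wherever dx != 0
    return np.arctan2(dy, dx)
```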
The association between the observations (features) coming from the sensor and the map objects is carried out by trying to fit the observation signature (SOBV) into some region of the object signature (SOBJ). To this end, we compute the mean squared error of both signatures, so that for each displacement of the observation signature over that of the object a measure of similarity between the two is obtained:
ECM(SOBJ, SOBV)(i) = (1/M) · Σ_{j=1..M} [ SOBJ(i + j) − SOBV(j) ]²

where M is the length of the observation signature SOBV and i is the relative displacement of the observation signature over that of the object.
The minimum of the mean squared error function indicates the relative displacement between the signatures for which the best fit between the two exists. Figure 1 shows both signatures and the relative displacement for which the best fit occurs.
The association between the segmented regions of the scan (features) and those of the map objects is performed in parallel by computing the mean squared error of each of the observation signatures (SOBV) against all the object signatures (SOBJ) of the map that are compatible with its position. The point of the mean squared error function at which a minimum occurs generates an association hypothesis between a feature and a map object.
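A possible software sketch of this matching step, assuming NumPy arrays for the signatures (function names are illustrative, not part of the patent):

```python
import numpy as np

def ecm_profile(sobj: np.ndarray, sobv: np.ndarray) -> np.ndarray:
    """Mean squared error of the observation signature against the object
    signature, for every admissible displacement of the former over the latter."""
    m = len(sobv)
    shifts = len(sobj) - m + 1
    return np.array([np.mean((sobj[d:d + m] - sobv) ** 2) for d in range(shifts)])

def best_association(sobv: np.ndarray, object_signatures: list) -> tuple:
    """Return (object index, displacement, error) of the best fit of one
    observation signature over all compatible object signatures."""
    best = (None, None, np.inf)
    for k, sobj in enumerate(object_signatures):
        if len(sobj) < len(sobv):
            continue  # the observation signature must fit inside the object one
        profile = ecm_profile(sobj, sobv)
        d = int(np.argmin(profile))
        if profile[d] < best[2]:
            best = (k, d, float(profile[d]))
    return best
```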
Figure 2 illustrates the process described for a particular case in which we start from three observation signatures (k, k+1, k+2) and N object signatures.
The general association scheme described above generates association hypotheses, that is, pairs consisting of a feature and a map object, where each of them in turn establishes a localization hypothesis that is solved as indicated below.
The relative displacement for which a minimum of the mean squared error function (ECM(SOBJ, SOBV)(i)) occurs makes it possible to establish an association between the points of the observation signature and those of its associated object signature; Figure 3, identical to Figure 1, shows that the relative displacement for which the minimum occurs is eleven points. Figure 3 also marks a point of the signature of an observation feature (3), more specifically the one with the maximum value. The homologous point (3') in the object signature is shown taking the relative displacement into account.
By construction of the signature, from the pair of points established in Figure 4 another pair of homologous reference points is computed in the original feature and object sequences (31, 31', 32, 32', 33, 33', 34, 34', 35, 35').
A displacement-and-rotation transformation in two dimensions is defined by at least two pairs of points. One of the pairs is determined by the homologous points computed above, and the other can be established by considering as associated those points of the original sequences that are equidistant from their respective reference points. Once the point associations have been established in a neighbourhood of the reference points in both sequences, the displacement-and-rotation transformation is determined. Figure 5 shows the proposed shape-based localization scheme.
As can be seen in Figure 5, the localization hypothesis (51) is established in the following stages: i. Definition of the associated object OBJ (52) and of the feature OBV (53).
ii. Definition of the signature of the object, SOBJ (54), and of the feature, SOBV (55).
iii. Establishment of the correlation between the object and feature signatures (54, 55) according to:

ECM(SOBJ, SOBV)(i) = (1/M) · Σ_{j=1..M} [ SOBJ(i + j) − SOBV(j) ]²
iv. A stage of relative displacement of the signatures and establishment of homologous points in the signatures (56). v. A stage of establishing homologous points in the feature and in its associated object (57, 57'). vi. A stage of establishing associations between the points of the feature sequence and those of the object in a neighbourhood centred on the homologous reference points (58). vii. A final stage of resolving the localization (59).
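By way of illustration of the final resolution stage (59), the sketch below (Python/NumPy) computes a translation and rotation from the point associations established around the homologous reference points. It uses a standard least-squares rigid alignment, which is one possible way to solve this stage and is not claimed to be the exact procedure of the patent:

```python
import numpy as np

def resolve_rigid_transform(obv_pts: np.ndarray, obj_pts: np.ndarray):
    """Given associated points of the feature (obv_pts) and of the map object
    (obj_pts), both of shape (K, 2) with K >= 2, return the rotation matrix R
    and translation t that best map obv_pts onto obj_pts in the least-squares sense."""
    c_obv = obv_pts.mean(axis=0)
    c_obj = obj_pts.mean(axis=0)
    h = (obv_pts - c_obv).T @ (obj_pts - c_obj)   # 2x2 cross-covariance
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                      # guard against a reflection
        vt[-1, :] *= -1
        r = vt.T @ u.T
    t = c_obj - r @ c_obv
    return r, t
```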
Once the vehicle has been localized, the map is updated from its current state and the information coming from the sensor. The map update is performed locally, taking into account that the map objects are defined by a one-dimensional sequence of points. The points that define the objects are characterized by their absolute position on the map (X, Y) and by their weight (P), as seen in Figure 6.
Updating an object of the map requires the vehicle localization computed at the current instant (Xt) as well as the state of the map at the previous instant (Mt-1). In this way, the local information of the observations extracted from the sensor is converted to absolute map coordinates using the already estimated vehicle localization (Xt), which makes it possible to integrate the sensor data and the points of the map objects in the same absolute reference system.
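A minimal sketch of this change of reference frame (Python/NumPy), converting a feature given in the sensor/vehicle frame into absolute map coordinates using an estimated pose (x, y, ψ); names are illustrative:

```python
import numpy as np

def to_map_frame(local_points: np.ndarray, pose) -> np.ndarray:
    """local_points: (K, 2) points in the sensor/vehicle frame.
    pose: (x, y, psi), estimated position and heading of the vehicle."""
    x, y, psi = pose
    c, s = np.cos(psi), np.sin(psi)
    rot = np.array([[c, -s],
                    [s,  c]])
    # rotate into the map orientation, then translate by the vehicle position
    return local_points @ rot.T + np.array([x, y])
```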
Figure 7 illustrates an example of a map object (71) together with the sensor observations (72) converted to the absolute reference system according to the sensor position (73), including the out-of-range sensor points (74). When updating a region of an object using the sensory information, several cases can be distinguished depending on the degree of overlap between the sensor points and the object points as seen from the sensor position:
CASE 1) There is overlap between the points of the observation and those of the associated object.
In this case, the update of a region of an object is performed by interleaving the updated points of the observation with those of the object. The update of an observation point is performed considering the local information of the weights of the two closest points of the object, as shown in Figure 8. The observation point is updated as follows:
Xobv_t = Xobv_i · ( Pobv_i / (Pobv_i + 1) ) + Xobv_{t−1} · ( 1 / (Pobv_i + 1) )
where Xobv_t are the updated absolute coordinates of the observation point; Xobv_{t−1} are the non-updated absolute coordinates of the observation point; and Xobv_i are the absolute coordinates of the intersection point between the observation ray joining the sensor to the non-updated observation point (Xobv_{t−1}) and the segment defined by the two map points closest to the latter. Finally, Pobv_i is the weight of the point Xobv_i, defined as Pobv_i = P_i · (α − α_i)/α + P_j · (α_i/α), where α_i is the angle between the point Xobv_i and the i-th point of the object as seen from the sensor, and α is the angle between the i-th point of the object and the next one as seen from the sensor. CASE 2) There are observation points that do not overlap with points of the associated object.
In this case, the update of the region of the object is performed by adding the observation points that do not overlap the object. The new points introduced into the object have weight 1, as shown in Figure 9.
CASE 3) There are points of an object that do not overlap with points of any observation extracted from the sensor.
As can be seen in Figure 10, the update of the area of the object that does not overlap with any observation is performed by subtracting one unit from the weights of the points in that region. When the weight of the points becomes negative, the object to which they belong is removed.
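A compact sketch of the three update cases (Python), reusing the hypothetical MapObject structure introduced earlier; the Case 1 expression is the weighted average given above, and all names are illustrative:

```python
def update_overlapping_point(x_obv_prev, x_i, p_i):
    """Case 1: update an observation point that overlaps the object.
    x_obv_prev: non-updated coordinates of the observation point (array of shape (2,)).
    x_i, p_i:   intersection point on the object segment and its interpolated weight."""
    return x_i * (p_i / (p_i + 1.0)) + x_obv_prev * (1.0 / (p_i + 1.0))

def add_new_points(map_object, new_points):
    """Case 2: observation points with no overlap are appended with weight 1."""
    for (x, y) in new_points:
        map_object.append_point(x, y)

def decay_unseen_region(points_in_region):
    """Case 3: object points not covered by any observation lose one unit of
    weight; points whose weight becomes negative are dropped (and the entity
    they belong to is removed, per the description above)."""
    for pt in points_in_region:
        pt.p -= 1
    return [pt for pt in points_in_region if pt.p >= 0]
```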
Example of a practical embodiment of the invention. In order to take advantage of all the benefits of the innovations described, a possible practical (hardware) embodiment of them is presented using an FPGA. An FPGA is characterized by being a general-purpose programmable logic device. FPGAs are composed of configurable logic blocks (CLBs) that communicate with one another through programmable connections. Thus, any FPGA consists of a two-dimensional matrix of these blocks, surrounded by modifiable connections between them. In addition, for the purpose of communicating the FPGA with the outside world, it has a set of user-configurable input and output ports.
Next, and in accordance with what is advocated by the present invention, a hardware embodiment implemented in an FPGA is described. Since the most remarkable feature of an FPGA is that it allows different tasks to be executed in parallel, Figure 11 shows the different blocks needed to execute the invention correctly. Each of the blocks thus represents the reservation of a part of the configurable logic resources of the overall FPGA to achieve a specific purpose.
Thus, for example, the sensor data-acquisition signal at an instant, St, is the data input to the overall system (100) configured in the FPGA, which produces as results the updated object map Mt+1 and position Xt+1. The system comprises a first segmentation module (101) that represents all the FPGA resources configured to execute the segmentation of the input sensor scan (signal St) for the purpose of generating features. Likewise, the prediction module (102), the association and localization module (103), the fusion module (104) and the object-map generator module (105) are implemented in the system of the invention (100).
Each of the above modules is implemented by logic blocks which, as a whole, define the functionality of each module within the system programmed in the FPGA. The following basic configurable logic blocks are thus defined: a) Real scan point configurable logic block (10), configured to represent the information associated with a point of the sensor scan (range and angle). b) Simulated scan point configurable logic block (20), configured to represent the information associated with a point of a scan simulated by the SLAM process (range and angle); from a vehicle location, the propagation of the rays over the map objects is simulated, in the same way as the sensor rays propagate over the real objects of the environment. c) Particle configurable logic block (30), configured to represent the information of a particle or point associated with an object; this block therefore handles information related to the position (X, Y) of the particle, as well as its weight. d) Vehicle-location configurable logic block (40), configured to represent the position of the vehicle at a specific moment of the SLAM process; the information it handles is therefore that of a position (X, Y) and an orientation (Ψ).
The segmentation process, implemented in the segmentation module (101), is configured to extract the features from the sensor scan. The criterion by which a region of scan points is promoted to the rank of feature is usually the local proximity between two consecutive points of that region: if two of them are relatively close, they are assumed to belong to the same feature; otherwise, one feature ends and another begins.
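A software-level sketch of this criterion (Python), before considering the parallel hardware realization described next; the threshold value is application-dependent and the function name is illustrative:

```python
import numpy as np

def segment_scan(ranges: np.ndarray, threshold: float) -> list:
    """Split a laser scan (array of ranges, ordered by angle) into features:
    consecutive points whose range difference stays below the threshold
    belong to the same feature."""
    features = []
    current = [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) <= threshold:
            current.append(i)
        else:
            features.append(current)   # close the current feature
            current = [i]              # and start a new one
    features.append(current)
    return features                    # list of lists of scan indices
```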
Taking advantage of the parallel processing capability of the FPGA, the segmentation module (101) comprises as many real scan point configurable logic blocks (10) as there are measurements in the scan; all the configurable logic blocks are connected one after another, so that, except for the logic blocks at the ends, each one has two neighbours, as shown in Figure 12. From the initial information (range and angle of a scan point) loaded into each real scan point logic block (10), each of these performs in parallel a comparison of its range with that of its neighbour on the left (for example), storing the difference in the block itself. Next, a sequential pass over the logic blocks is made from right to left, comparing the range difference of a block with the one stored in the block on its left. If the comparison does not exceed a certain threshold (which depends on the application), both points belong to the same feature; otherwise, a counter of the number of detected features is incremented and the comparison between neighbours continues.

On the other hand, the object-map generator module (105) reserves the necessary particle-type logic blocks (particle configurable logic block (30)) in order to store the information of each of the map objects. Each map object is composed of a set of these particle configurable logic blocks (30) connected to one another. Every time a point is added to an object, memory is reserved for the corresponding logic block and it is initialized with the appropriate position (X, Y) and weight values. Likewise, the removal of a point from an object translates into the corresponding deletion of the block associated with that point.

The association and localization module (103) is configured to establish the association between the features extracted from the scan and the map objects. As established above, it is based on the cross-correlation of two signatures. The mean squared error of two one-dimensional sequences is given by the expression:

ECM(SOBJ, SOBV)(i) = (1/M) · Σ_{j=1..M} [ SOBJ(i + j) − SOBV(j) ]²

This equation requires a large number of multiplications, subtractions and additions. In a general-purpose microprocessor, the computation of this function entails a high computational cost, since it cannot be executed in parallel. In an FPGA, however, it can be performed in hardware in an efficient way, using ad hoc hardware that carries out additions and multiplications simultaneously. Most FPGAs contain logic blocks that perform these kinds of multiply-and-accumulate operations atomically. These operations are normally called MAC (multiply-accumulate) operations. Specifically, the FPGA manufacturer Xilinx® calls these dedicated MAC hardware units DSP Slices. In the present invention they are called MAC blocks (50).
The hardware for the mean squared error between the signature of an object and that of a feature is shown in Figure 13. As can be seen in this figure, the necessary hardware comprises a shift register (51), or delay line (that is, a memory array that shifts the content of each position one place to the right or to the left in each clock cycle), as well as as many MAC blocks (50) as there are elements in the signature of the object with which we want to establish the mean squared error. Its operation is described below.
i. First, the object signature data are loaded into a memory array; in the example shown in the figure, data line B. ii. Next, the feature signature data are loaded into a delay line. iii. In each clock cycle, the delay line shifts the content of each position one memory place to the right. iv. The contents of lines A and B are subtracted register by register and the results obtained are fed into the MAC blocks to square the error. v. The MAC blocks (50) are then executed in order from left to right, multiplying the contents of their inputs and adding the result of the MAC block (50) on their left. vi. The final result of the mean squared error for a specific displacement of the delay line is returned at the output of the rightmost MAC block.
The element described configures an ECM logic block that computes the mean squared error of two signatures (60) and has three inputs: the signatures of the feature (SCAR) and of the object (SOBJ), as well as the relative displacement between them. The output of the block is the mean squared error of the input data for a specific displacement.
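To clarify the data flow of FIG. 13, a behavioural software model of the ECM block is sketched below (Python). It only mimics the register-by-register subtraction and the left-to-right MAC chain for a given displacement; it is an illustration of the behaviour, not an HDL description, and the handling of delay-line positions not covered by the feature is an assumption of this sketch:

```python
def ecm_block(sobj, scar, displacement):
    """Behavioural model of the ECM logic block (60): for a given relative
    displacement of the delay line, subtract the two signatures register by
    register, square the differences in the MAC blocks and chain the partial
    sums from left to right."""
    acc = 0.0
    count = 0
    for j, b in enumerate(sobj):       # line B: one register per object element
        k = j - displacement           # aligned position in the delay line (line A)
        if 0 <= k < len(scar):
            diff = scar[k] - b         # register-to-register subtraction (A - B)
            acc += diff * diff         # multiply-and-accumulate
            count += 1
    return acc / count if count else float("inf")
```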
The advantage of working with an FPGA over a general-purpose processor is that it is possible to reserve the hardware needed to replicate as many logic blocks as desired and thereby speed up the execution of the processes. Thus, the number of ECM logic blocks (60) needed to accelerate the association as much as possible is equal to the number of object segments with which we want to establish the association of the features extracted in the segmentation stage (101).
The general association scheme is shown in Figure 14. First, the object sections compatible with the last known position of the vehicle are taken into account. Second, an initial association by sensor viewing angle is established between the segmented features and the object segments considered. Third, the signatures of the features and those of the object segments are computed. Fourth, the ECM logic block (60) is used to compute the mean squared error function between the associated signatures. Fifth, the minima of the outputs of the ECM blocks (60) that are compatible are selected, that is, those that maintain an angular relation between them consistent with the one existing between the viewing angles of the features of the scan. Figure 15 shows the outputs of the ECM modules as a function of the viewing angle of the sensor (α). In that figure, the different minima of the computed mean squared error (ECM) functions are represented. The minima identified by a dot are mutually compatible because they maintain the same angular relation that exists between the features extracted from the scan. However, the minimum identified by a cross is not a compatible minimum, since it is not located in any of the compatibility windows α_i, and it will therefore not be taken into account when generating a localization hypothesis from it.
Finalmente, se selecciona uno de los mínimos que se encuentren en alguna de las regiones a¡ para establecer una pareja de puntos de referencia en las signaturas correspondientes. Cada uno de los posibles máximos establece una hipótesis de localización. Continuando con el ejemplo de la figura 15, de todos los mínimos de las funciones de error cuadrático medio ECM, sólo las identificadas como ECM1 y ECM5 indican las mejores hipótesis de localización porque en ellos es donde la función ECM crece y decrece con mayor rapidez. Sin embargo, en los puntos identificados en ECM4 en las ventanas 2, 3 y 4, la función es casi constante en un entorno. Por lo tanto, se escogerán como hipótesis de localización más prometedoras o fiables aquellos mínimos de la función ECM donde su variación de pendiente sea lo mayor posible. Resuelta la localización, se calcula una figura de mérito del encaje entre el sean y el mapaFinally, one of the minimums found in any of the regions is selected to establish a pair of reference points in the corresponding signatures. Each of the possible maximums establishes a location hypothesis. Continuing with the example in Figure 15, of all the minimum of the ECM mean square error functions, only those identified as ECM1 and ECM5 indicate the best location hypothesis because it is where the ECM function grows and decreases most rapidly. However, at the points identified in ECM4 in windows 2, 3 and 4, the function is almost constant in an environment. Therefore, the most promising or reliable location hypothesis will be chosen as those minimum of the ECM function where its slope variation is as large as possible. Once the location is resolved, a figure of merit of the fit between the sea and the map is calculated
Once the location has been resolved, a figure of merit of the fit between the scan and the map is computed (for example, its mean square error). If this figure of merit is not as good as expected, an attempt is made to improve it by returning to the beginning of the process, that is, by re-associating the features with the map segments until a better fit, as measured by the established figure of merit, is achieved.
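The retry behaviour described above, re-associating features with map segments whenever the figure of merit of the fit is poor, could be organized along the following lines; the callables and the threshold are placeholders for whatever association, localization and fit-quality routines are actually used, not APIs defined by the patent.

```python
def localize_with_retries(features, map_segments, associate, solve_location,
                          fit_quality, threshold, max_attempts=5):
    """Repeat the association -> localization cycle until the figure of merit of
    the scan-to-map fit (e.g. its mean square error) is below a threshold, or
    the allowed number of attempts is exhausted."""
    best_pose, best_error = None, float("inf")
    for attempt in range(max_attempts):
        pairs = associate(features, map_segments, attempt)  # try an alternative association each time
        pose = solve_location(pairs)
        error = fit_quality(pose, features, map_segments)
        if error < best_error:
            best_pose, best_error = pose, error
        if error <= threshold:
            break
    return best_pose, best_error
```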
Finally, once the location has been resolved from the fit of the scan with the map (observation-based location), it remains to determine how this location is fused with the one derived from the information provided by the vehicle odometry. Each observation-based location hypothesis considered in the process described in the previous section has an associated Gaussian probability density function whose mean (μ) and covariance (Σ) are established from multiple runs of the ICP algorithm started from different initial points. In turn, each Gaussian is weighted according to the figure of merit (mean square error) obtained by its location hypothesis, as described above. Likewise, the location hypothesis coming from the prediction is also modelled as a Gaussian with parameters (μp, Σp) established according to the vehicle dynamics. The final location is thus obtained by fusing the observation-based location hypotheses and the single prediction hypothesis, each with an associated weight that indicates the reliability of the hypothesis, as shown in Figure 16.
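As a rough illustration of how one observation-based hypothesis could be summarized, the mean and covariance may be estimated empirically from the poses returned by several ICP runs started from different initial points, together with a weight derived from the figure of merit; the specific weighting formula below is an assumption, not taken from the patent.

```python
import numpy as np

def hypothesis_gaussian(icp_poses, fit_mse):
    """Gaussian (mu, Sigma) of one observation-based location hypothesis,
    estimated from the poses (x, y, theta) returned by several ICP runs,
    together with a reliability weight derived from the fit error."""
    poses = np.asarray(icp_poses, dtype=float)     # shape (n_runs, 3)
    mu = poses.mean(axis=0)
    sigma = np.cov(poses, rowvar=False)
    weight = 1.0 / (1.0 + fit_mse)                 # illustrative: smaller error -> larger weight
    return mu, sigma, weight
```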
The weights of the location hypotheses obtained from the observation are normalized so that their sum equals 1. In this way, the probability density function of the final location of the vehicle is the product of all the density functions mentioned above: N probability density functions associated with the N observation hypotheses considered, plus one density function associated with the prediction-based location.
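A minimal sketch of the fusion step follows, carried out in information (inverse-covariance) form. Scaling each observation component's information matrix by its normalized weight is an assumption on our part (a log-linear pooling rule); the text only states that the final density is the product of the N observation densities and the prediction density.

```python
import numpy as np

def fuse_localization(obs_mus, obs_sigmas, obs_weights, mu_p, sigma_p):
    """Final vehicle location from the product of N observation Gaussians
    (weights normalized to sum to 1) and the prediction Gaussian (mu_p, Sigma_p)."""
    w = np.asarray(obs_weights, dtype=float)
    w = w / w.sum()                                # normalization described in the text
    info = np.linalg.inv(sigma_p)                  # start from the prediction component
    info_mean = info @ np.asarray(mu_p, dtype=float)
    for wi, mu, sigma in zip(w, obs_mus, obs_sigmas):
        prec = wi * np.linalg.inv(sigma)           # weighted information matrix (assumed pooling rule)
        info += prec
        info_mean += prec @ np.asarray(mu, dtype=float)
    sigma_fused = np.linalg.inv(info)
    mu_fused = sigma_fused @ info_mean
    return mu_fused, sigma_fused
```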

Claims

1. Simultaneous localization and mapping method for robotic devices, comprising at least the stages of association, localization and map updating (mapping), characterized in that the modelling of the map in the mapping stage is based on entities called objects, defined as a sequence of mobile points whose size is dynamically adjustable in order to represent the shape of the contours of the real obstacles detected by at least one sensor of the robotic device; wherein each point of the sequence representing the objects has an associated position and a weight indicating its degree of mobility; wherein the information provided by the sensor is stored on the objects in such a way that it can be associated with a connected grouping of sensor points referring to the same physical object, an association stage being established between said objects and the connected grouping of points, or feature, using the geometric criteria of shape and occlusion; and wherein the localization stage comprises a transformation that uses the information of two associated initial points, employing the signature information of the scan and the map, to obtain a relative displacement and rotation between the points of the object and of the feature.
2. Method according to claim 1, wherein each object contains position information in an absolute reference system, as well as a probability measure associated with the likelihood that the position information of each point lies in a close neighbourhood of the real position it represents.
3. Method according to claims 1 and 2, wherein the association between the feature, that is, the observations coming from the sensor, and the map objects is carried out by fitting the observation signature (SOBV) into some region of the object signature (SOBJ), computing the mean square error of both signatures, so that for each displacement of the observation signature over that of the object a measure of similarity between the two is obtained, and wherein the minimum of the ECM function indicates the relative displacement between the signatures for which the best fit between them exists.
4. Method according to any of claims 1 to 3, wherein the association between the feature regions and the regions of the map objects is carried out in parallel by computing the mean square error function of each of the observation signatures (SOBV) against all those signatures of the map objects (SOBJ) that are compatible with its position; and wherein the ECM function that produces the smallest minimum generates an association hypothesis between a feature and a map object.
5. Method according to any of claims 1 to 4, wherein the association generates association hypotheses, that is, feature-map object pairs, each of which establishes a location hypothesis (51) comprising the stages of: defining the associated object OBJ (52) and the feature OBV (53); defining the signature of the object SOBJ (54) and of the feature SOBV (55); establishing the mean square error function between the object and feature signatures (54, 55); a stage of relative displacements between the signatures to determine the optimal one according to a mean square error criterion and to establish homologous points in the signatures (56); a stage of establishing homologous points in the feature and in its associated object (57, 57'); a stage of establishing associations between the points of the feature sequence and those of the object in a neighbourhood centred on the homologous reference points (58); and a final stage of resolving the location (59).
6. Method according to any of the preceding claims, wherein, once the vehicle location has been obtained, the map is updated from its current state and from the information coming from the sensor; wherein said map update is performed locally, taking into account that the map objects are defined by a one-dimensional sequence of points; and wherein said update of a map object requires the vehicle location computed at the current instant (Xt) as well as the state of the map at the previous instant (Mt-1).
7. Simultaneous localization and mapping device for robotic devices, comprising means for executing the method according to any of claims 1 to 6.
8. Mobile robot comprising means for executing the method according to any of claims 1 to 6.
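To make the object representation of claim 1 and the local update of claim 6 easier to picture, here is a hypothetical container in which an object is a one-dimensional sequence of points, each carrying an absolute position and a mobility weight, and in which individual points are updated locally from new sensor data; the class and method names and the blending rule are illustrative assumptions rather than the patent's own definitions.

```python
import numpy as np

class MapObject:
    """Illustrative container for one map 'object': a one-dimensional sequence
    of points, each with an absolute (x, y) position and a weight expressing
    its degree of mobility / confidence."""

    def __init__(self):
        self.points = []            # list of np.array([x, y]) in the absolute frame
        self.weights = []           # one mobility weight per point

    def append_point(self, xy, weight=0.5):
        """Grow the sequence so the object adjusts its size dynamically."""
        self.points.append(np.asarray(xy, dtype=float))
        self.weights.append(weight)

    def update_point(self, index, observed_xy, alpha=0.5):
        """Local update of a single point from a new sensor observation,
        blending old and new positions (the blending rule is an assumption)."""
        p = self.points[index]
        self.points[index] = (1 - alpha) * p + alpha * np.asarray(observed_xy, dtype=float)
        self.weights[index] = min(1.0, self.weights[index] + 0.1)
```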
PCT/ES2013/070846 2012-12-14 2013-12-05 Simultaneous localization and mapping method for robotic devices WO2014091043A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ES201201234A ES2476565B2 (en) 2012-12-14 2012-12-14 Simultaneous location and mapping method for robotic devices
ESP201201234 2012-12-14

Publications (1)

Publication Number Publication Date
WO2014091043A1 true WO2014091043A1 (en) 2014-06-19

Family

ID=50933787

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/ES2013/070846 WO2014091043A1 (en) 2012-12-14 2013-12-05 Simultaneous localization and mapping method for robotic devices

Country Status (2)

Country Link
ES (1) ES2476565B2 (en)
WO (1) WO2014091043A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024009080A1 (en) * 2022-07-04 2024-01-11 Opteran Technologies Limited Method and system for determining the structure, connectivity and identity of a physical or logical space or attribute thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070027612A1 (en) * 2005-07-26 2007-02-01 Barfoot Timothy D Traffic management system for a passageway environment
US20070293985A1 (en) * 2006-06-20 2007-12-20 Samsung Electronics Co., Ltd. Method, apparatus, and medium for building grid map in mobile robot and method, apparatus, and medium for cell decomposition that uses grid map
US20080065267A1 (en) * 2006-09-13 2008-03-13 Samsung Electronics Co., Ltd. Method, medium, and system estimating pose of mobile robots
US20120230550A1 (en) * 2011-03-10 2012-09-13 Jacob Kraut Method and Apparatus for Generating a Map from Landmarks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LU, FENG ET AL.: "Robot pose estimation in unknown environments by matching 2D range scans", JOURNAL OF INTELLIGENT AND ROBOTIC SYSTEMS, vol. 18, no. 3, 1997, pages 249 - 275 *
SHUAI GUO ET AL.: "VorSLAM: A new solution to simultaneous localization and mapping", IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION (ICIA), 20 June 2010 (2010-06-20), PISCATAWAY, NJ, USA, pages 1896 - 1901 *
TOMAS MARTINEZ-MARIN ET AL.: "An unified framework for active SLAM and online optimal motion planning", INTELLIGENT VEHICLES SYMPOSIUM (IV), 2011, pages 1092 - 1097 *

Also Published As

Publication number Publication date
ES2476565A1 (en) 2014-07-14
ES2476565B2 (en) 2015-01-30

Similar Documents

Publication Publication Date Title
Yousif et al. An overview to visual odometry and visual SLAM: Applications to mobile robotics
Burgard et al. World modeling
Tardós et al. Robust mapping and localization in indoor environments using sonar data
Miettinen et al. Simultaneous localization and mapping for forest harvesters
Tipaldi et al. FLIRT: Interest regions for 2D range data with applications to robot navigation
Jaspers et al. Multi-modal local terrain maps from vision and lidar
Skrzypczynski Simultaneous localization and mapping: A feature-based probabilistic approach
US20140142891A1 (en) Generaton of map data
Lee et al. Vision-based kidnap recovery with SLAM for home cleaning robots
Nehme et al. Lidar-based structure tracking for agricultural robots: Application to autonomous navigation in vineyards
Agarwal Robust graph-based localization and mapping
Falomir et al. Qualitative distances and qualitative image descriptions for representing indoor scenes in robotics
Ghosh et al. Multi sensor data fusion for 6D pose estimation and 3D underground mine mapping using autonomous mobile robot
Souza et al. 3D probabilistic occupancy grid to robotic mapping with stereo vision
Dhiman et al. Where am I? Creating spatial awareness in unmanned ground robots using SLAM: A survey
Skrzypczyński Mobile robot localization: Where we are and what are the challenges?
Shufelt Geometric constraints for object detection and delineation
Zhang et al. An efficient data association approach to simultaneous localization and map building
KR102097722B1 (en) Apparatus and method for posture estimation of robot using big cell grid map and recording medium storing program for executing the same and computer program stored in recording medium for executing the same
Angladon et al. Room floor plan generation on a project tango device
WO2014091043A1 (en) Simultaneous localization and mapping method for robotic devices
Zhang et al. An improvement algorithm for OctoMap based on RGB-D SLAM
Xu et al. Humanoid robot localization based on hybrid map
Norouzi et al. Recursive line extraction algorithm from 2d laser scanner applied to navigation a mobile robot
Cheong et al. Indoor global localization using depth-guided photometric edge descriptor for mobile robot navigation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13862730

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13862730

Country of ref document: EP

Kind code of ref document: A1