WO2021207999A1 - Vehicle positioning method and apparatus, and positioning-layer generation method and apparatus - Google Patents

Vehicle positioning method and apparatus, and positioning-layer generation method and apparatus

Info

Publication number
WO2021207999A1
WO2021207999A1 (PCT/CN2020/085060, CN2020085060W)
Authority
WO
WIPO (PCT)
Prior art keywords
voxel
weight value
voxels
posture
positioning layer
Prior art date
Application number
PCT/CN2020/085060
Other languages
English (en)
French (fr)
Inventor
杨磊
陈成
史昕亮
周帅
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2020/085060 (WO2021207999A1)
Priority to CN202080004104.3A (CN112703368B)
Publication of WO2021207999A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3407Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Definitions

  • This application relates to the field of vehicle positioning, and in particular to methods and devices for vehicle positioning, methods and devices for generating positioning layers, and methods and devices for updating positioning layers.
  • Autonomous vehicles need to be equipped with a positioning system that obtains, in real time, the vehicle's position relative to the surrounding environment and the reference information contained in that environment, so that complex driving strategies can be formulated.
  • A basic onboard positioning system combines a global navigation satellite system (GNSS), an inertial navigation system (INS), and vehicle chassis wheel speed to perform highly dynamic real-time positioning while the vehicle is moving.
  • Abbreviations: GNSS — global navigation satellite system; INS — inertial navigation system; RTK — real-time kinematic.
  • The positioning accuracy of the basic vehicle positioning system is easily affected by factors such as weather and the number of satellites acquired.
  • As a result, the absolute accuracy of the GNSS positioning that the system obtains at the same location at different times is uncertain, and the positioning results vary.
  • When the basic vehicle positioning system can only rely on INS and/or vehicle chassis wheel speed for dead reckoning, errors accumulate and can ultimately cause the vehicle to deviate from its predetermined lane, which degrades the system's positioning stability.
  • the relative map positioning scheme is mainly used to improve positioning stability and positioning accuracy.
  • the basic principle of the relative map positioning scheme is to use sensors such as lidar to obtain environmental data around the vehicle, and to achieve vehicle positioning by matching the obtained environmental data with a pre-built positioning layer.
  • The relative map positioning scheme uses map points as the positioning reference. Since every point in the map has unique map coordinates, the scheme eliminates the uncertainty of GNSS positioning, thereby improving positioning accuracy.
  • the relative map positioning solution matches the real-time environmental data collected by the sensor with the positioning layer, which can eliminate accumulated errors and improve positioning stability.
  • However, existing relative map positioning solutions use a large amount of data in their positioning layers, which requires large storage space, and their positioning stability is not high enough.
  • The embodiments of the present application provide a method and device for vehicle positioning, a method and device for generating positioning layers, and a method and device for updating positioning layers, which can reduce the storage space required for storing positioning layers and improve positioning stability.
  • A first aspect of the present application provides a vehicle positioning method, including: using at least data from a satellite navigation system and an inertial measurement unit to predict a first position and a first attitude of the vehicle at a first point in time; using sensor data from the vehicle's lidar to obtain first laser point cloud data of a first local geographic area containing the first position; and using the first laser point cloud data and a pre-built first positioning layer of the first local geographic area to correct the first position and the first attitude, so as to obtain the corrected position and corrected attitude of the vehicle at the first point in time. The first positioning layer is structured to store the identifiers and weight values of a plurality of voxels, the plurality of voxels being at least a part of the voxels obtained by voxelizing second laser point cloud data of the first local geographic area acquired when the first positioning layer was constructed, where the weight value of a voxel indicates the probability that the voxel is occupied by a geographic environment object.
  • The positioning layer used in the method stores weight values that indicate how likely each voxel is to be occupied by a geographic environment object, which reflects the geospatial structure of the geographic area; it stores neither the specific semantics (i.e., specific types) of geographic environment objects nor their laser reflection intensity, which is susceptible to environmental changes. The method therefore positions the vehicle based on the geospatial structure of the geographic area rather than on object semantics or laser reflection intensity. Because that structure is not easily affected by object semantics or environmental changes (for example, weather, or road wear over time), the method achieves essentially the same positioning effect whether the geographic area is rich or scarce in types of geographic environment objects, and regardless of environmental changes, thereby improving positioning stability and positioning performance.
  • The plurality of voxels includes the voxels whose weight value is greater than zero among the voxels obtained by voxelizing the second laser point cloud data, and the weight value of each of these voxels is represented by the number of laser points contained in the voxel.
  • The first positioning layer is structured to store only the identifiers and weight values of voxels whose weight value is greater than zero; it does not store those of voxels whose weight value equals zero, i.e., information that does not help positioning. This reduces and compresses the data volume of the positioning layer, thereby reducing the storage space required to store it.
  • the weight value can reliably and simply indicate the possibility of the voxel being occupied by the geographical environment object.
  • The first positioning layer is configured to store the identifiers and weight values of the plurality of voxels in pairs in the form of a hash table, where each voxel's identifier is represented by a hash of the voxel's position in the spatial coordinate system used by the first positioning layer.
  • Constructing the first positioning layer as a hash table increases the layer's indexing/matching speed, thereby improving positioning efficiency.
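The voxel-counting and hash-table storage described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the voxel size, the function name, and the use of plain integer-coordinate tuples as hash keys (a Python `dict` is itself a hash table) are all assumptions.

```python
import numpy as np

VOXEL_SIZE = 0.5  # metres; an assumed resolution

def build_positioning_layer(points, voxel_size=VOXEL_SIZE):
    """points: (N, 3) array of x, y, z laser points in the map frame.
    Returns a dict mapping a voxel identifier to its weight value,
    i.e. the number of laser points the voxel contains. Voxels with
    weight zero are simply absent, which keeps the layer compact."""
    layer = {}
    indices = np.floor(points / voxel_size).astype(np.int64)
    for row in indices:
        idx = tuple(int(v) for v in row)  # plain-int tuple as the hash key
        layer[idx] = layer.get(idx, 0) + 1
    return layer

cloud = np.array([[0.1, 0.2, 0.3],
                  [0.2, 0.1, 0.4],   # falls in the same voxel as the first point
                  [3.0, 3.0, 0.0]])
layer = build_positioning_layer(cloud)
print(layer)  # {(0, 0, 0): 2, (6, 6, 0): 1}
```

Only occupied voxels appear in the table, mirroring the storage-saving rule stated above.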
  • Correcting the first position and the first attitude includes: performing multiple spatial samplings of position and attitude in the space surrounding the first position and the first attitude to obtain multiple position-and-attitude groups, where each group includes a sampled position and a sampled attitude obtained by one sampling; calculating similarity scores for the multiple position-and-attitude groups, where the similarity score of any group represents the degree of similarity between the first positioning layer and a second positioning layer associated with that group, the second positioning layer being generated from third laser point cloud data in the same way the first positioning layer was generated, and the third laser point cloud data being obtained by using a three-dimensional spatial transform generated from the group's sampled position and sampled attitude to transform the first laser point cloud data from a first spatial coordinate system associated with the vehicle to a second spatial coordinate system associated with the first positioning layer; and determining the corrected position and the corrected attitude at least based on the similarity scores of the multiple position-and-attitude groups.
  • In this way, the first position and first attitude of the vehicle can be corrected with the help of the multiple sampled positions and attitudes drawn from the surrounding space, and the corrected position and corrected attitude can be determined relatively reliably.
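The sampling step can be illustrated as below. This is a hypothetical planar (x, y, yaw) sketch: the method itself samples full 3D positions and attitudes, and the function names, grid shape, and step sizes are assumptions.

```python
import numpy as np

def sample_pose_groups(x0, y0, yaw0, step=0.2, yaw_step=0.01, n=2):
    """Draw a grid of candidate poses around the predicted pose.
    Each candidate is one position-and-attitude group."""
    groups = []
    for dx in np.arange(-n, n + 1) * step:
        for dy in np.arange(-n, n + 1) * step:
            for dyaw in np.arange(-n, n + 1) * yaw_step:
                groups.append((x0 + dx, y0 + dy, yaw0 + dyaw))
    return groups

def transform_cloud(points_xy, pose):
    """Apply the rigid transform implied by a sampled pose, mapping the
    vehicle-frame point cloud into the layer's map frame (2D here as a
    simplification of the 3D transform in the method)."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    return points_xy @ rot.T + np.array([x, y])

groups = sample_pose_groups(10.0, 5.0, 0.0)
print(len(groups))  # 5 * 5 * 5 = 125 candidate pose groups
```

Each transformed cloud would then be voxelized into a "second positioning layer" and scored against the pre-built layer.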
  • Determining the corrected position and the corrected attitude includes: selecting, from the multiple position-and-attitude groups, the first position-and-attitude groups whose similarity score is greater than a first threshold; and, using the similarity scores of those groups as weights, performing a weighted fitting of the sampled positions and sampled attitudes they contain to obtain the corrected position and the corrected attitude.
  • Selecting only the sampled positions and attitudes of groups whose similarity score exceeds the first threshold to correct the first position and first attitude effectively removes the influence of noisy sampled positions and attitudes on vehicle positioning, yielding a high-precision positioning result.
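The threshold-and-weighted-fitting step might look like the following sketch. The planar (x, y, yaw) pose and the direct averaging of yaw are simplifying assumptions (full 3D attitudes would need, e.g., quaternion averaging), and all names are illustrative.

```python
import numpy as np

def fuse_poses(pose_groups, scores, threshold):
    """Keep only pose groups whose similarity score exceeds the threshold,
    then return the score-weighted average of their sampled poses."""
    poses, weights = [], []
    for pose, score in zip(pose_groups, scores):
        if score > threshold:
            poses.append(pose)
            weights.append(score)
    poses = np.asarray(poses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return tuple(np.average(poses, axis=0, weights=weights))

groups = [(10.0, 5.0, 0.00), (10.2, 5.0, 0.02), (9.0, 4.0, 0.30)]
scores = [3.0, 1.0, 0.1]          # the third group falls below the threshold
x, y, yaw = fuse_poses(groups, scores, threshold=0.5)
print(round(x, 3), round(y, 3), round(yaw, 3))  # 10.05 5.0 0.005
```

The low-scoring outlier pose contributes nothing, which is exactly the noise-rejection effect described above.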
  • Calculating the similarity scores of the multiple position-and-attitude groups includes: for each group, finding voxels of a first type and voxels of a second type, where a first-type voxel has the same weight value stored in the second positioning layer associated with the group as in the first positioning layer, and a second-type voxel has different weight values in the two layers; calculating a first total weight value and a second total weight value for each group, where the first total weight value equals the sum of the weight values, stored in the first positioning layer, of the first-type voxels, and the second total weight value equals the sum of the weight values, stored in the first positioning layer, of the second-type voxels; and taking the difference between the first and second total weight values of each group as that group's similarity score, to obtain the similarity scores of the multiple groups.
  • When a sampled position and attitude are close to the vehicle's true position and attitude, the number of first-type voxels (same weight value in both layers) is usually larger and the number of second-type voxels (different weight values) is usually smaller, so the difference between the first total weight value and the second total weight value is usually larger. Conversely, when the sample is far from the truth, that difference is usually smaller. Using the difference between the two total weight values as the similarity score of a position-and-attitude group therefore accurately indicates how similar the real-time layer built from the vehicle's laser point cloud data is to the pre-built positioning layer of the corresponding local geographic area.
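The scoring rule above can be sketched as follows. The helper names are hypothetical, and treating voxels absent from the pre-built layer as contributing nothing is an assumption made for this sketch.

```python
def similarity_score(prebuilt_layer, realtime_layer):
    """Score = (sum of weights of voxels stored identically in both layers)
             - (sum of weights of voxels stored with a different weight),
    both sums taken over the weight values stored in the pre-built layer."""
    first_total = 0   # first-type voxels: weights agree
    second_total = 0  # second-type voxels: weights differ
    for voxel_id, weight in realtime_layer.items():
        stored = prebuilt_layer.get(voxel_id)
        if stored is None:
            continue  # voxel absent from the pre-built layer (assumption)
        if stored == weight:
            first_total += stored
        else:
            second_total += stored
    return first_total - second_total

prebuilt = {(0, 0, 0): 4, (1, 0, 0): 2, (2, 0, 0): 3}
realtime = {(0, 0, 0): 4, (1, 0, 0): 5}
print(similarity_score(prebuilt, realtime))  # 4 - 2 = 2
```

A pose whose transformed cloud reproduces the stored weights scores high; disagreements pull the score down.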
  • The method may further include: acquiring sensor data from the vehicle's lidar at a second position; and generating map update data that includes the second position, a second attitude, and the acquired sensor data, where the second position and second attitude are the corrected position and corrected attitude of the vehicle at the current point in time, determined from the corrected position and corrected attitude at the first point in time together with the vehicle's motion from the first point in time to the current point in time.
  • Generating the map update data after the vehicle's corrected position and attitude have been determined, so as to update the occupancy weights of the voxels stored in the already-constructed positioning layer of the corresponding local geographic area, reduces the accuracy requirements placed on the lidar sensor data used to construct the layer, and can reduce or eliminate the adverse effects of moving objects (for example, moving vehicles or pedestrians) on the layer during its construction, thereby improving the accuracy of positioning results obtained relative to the layer.
  • A second aspect of the present application provides a method for generating a positioning layer, including: acquiring laser point cloud data of a second geographic area; voxelizing the laser point cloud data to obtain multiple voxels; calculating weight values for the multiple voxels, where the weight value of each voxel indicates the probability that the voxel is occupied by a geographic environment object; and storing the identifiers and weight values of at least a part of the multiple voxels to obtain the positioning layer of the second geographic area.
  • The positioning layer generated by this method stores weight values that indicate how likely each voxel is to be occupied by a geographic environment object, reflecting the geospatial structure of the geographic area; it stores neither the specific semantics (i.e., specific types) of geographic environment objects nor their laser reflection intensity, which is susceptible to environmental changes. Positioning with such a layer therefore depends on the geospatial structure of the geographic area rather than on object semantics or reflection intensity, so the layer yields essentially the same positioning effect regardless of how rich or scarce the types of geographic environment objects in the area are and regardless of environmental changes, thereby improving positioning stability and positioning performance.
  • Calculating the weight values of the multiple voxels includes calculating the number of laser points contained in each voxel as that voxel's weight value, and the at least a part of the voxels consists of those voxels whose weight value is greater than zero.
  • The more likely a voxel is to be occupied by a geographic environment object, the more laser points it usually contains; the number of laser points in a voxel therefore serves as a weight value that reliably and simply indicates the likelihood that the voxel is occupied by a geographic environment object.
  • The positioning layer is structured to store only the identifiers and weight values of voxels whose weight value is greater than zero; it does not store those of voxels whose weight value equals zero, i.e., information unhelpful for positioning. This reduces and compresses the data volume of the layer, thereby reducing the storage space required to store it.
  • The positioning layer is configured to store the identifiers and weight values of the at least a part of the voxels in pairs in the form of a hash table, where each voxel's identifier is represented by a hash of the voxel's position in the spatial coordinate system associated with the layer.
  • Constructing the positioning layer as a hash table increases the layer's indexing/matching speed, thereby improving positioning efficiency.
  • A third aspect of the present application provides a method for updating a positioning layer, including: using map update data to generate fourth laser point cloud data of a third local geographic area, where the map update data includes a third position and third attitude of the vehicle and first sensor data from the vehicle's lidar at the third position, the third local geographic area is a local geographic area containing the third position, and the fourth laser point cloud data is formed from the first sensor data; voxelizing fifth laser point cloud data to obtain multiple third voxels, where the fifth laser point cloud data is obtained by using a three-dimensional spatial transform generated from the third position and third attitude to transform the fourth laser point cloud data from the spatial coordinate system associated with the vehicle to the spatial coordinate system associated with a third positioning layer, and where the third positioning layer is a previously constructed positioning layer of the third local geographic area that stores the identifiers and weight values of at least a part of the multiple third voxels, the weight value of each voxel indicating the probability that the voxel is occupied by a geographic environment object; calculating the weight values of the multiple third voxels; and updating the weight values of the voxels stored in the third positioning layer.
  • The third positioning layer stores the identifiers and weight values of those third voxels whose weight value is greater than zero, and updating the weight values stored in the third positioning layer includes: selecting, from the multiple third voxels, those whose calculated weight value is greater than zero; for each selected voxel, if the third positioning layer stores a weight value for the voxel and the stored value differs from the calculated value, replacing the stored weight value with the calculated one; for each selected voxel, if the third positioning layer does not store a weight value for the voxel, storing the voxel's identifier and calculated weight value in the third positioning layer; and, if the third positioning layer stores the identifier and weight value of a fourth voxel that does not appear among the selected voxels, deleting that identifier and weight value from the third positioning layer.
  • This update rule retains the weight values of static targets (for example, buildings) and deletes the weight values of moving targets (for example, moving vehicles or pedestrians).
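The update rule of the third aspect can be sketched as follows. Function and variable names are illustrative, and the sketch assumes the recomputed weights cover the whole updated local area.

```python
def update_layer(stored_layer, recomputed_weights):
    """Apply the update rule: replace changed weights, insert newly
    occupied voxels, and delete voxels that no longer appear, which
    tends to keep static structures and drop moving objects."""
    new_voxels = {v: w for v, w in recomputed_weights.items() if w > 0}
    for voxel_id, weight in new_voxels.items():
        # covers both "replace a stale weight" and "insert a new voxel"
        stored_layer[voxel_id] = weight
    for voxel_id in list(stored_layer):
        if voxel_id not in new_voxels:
            del stored_layer[voxel_id]  # e.g. a vehicle that has driven away
    return stored_layer

stored = {(0, 0, 0): 5, (9, 9, 0): 7}        # (9, 9, 0): a parked car, say
fresh = {(0, 0, 0): 6, (1, 0, 0): 2, (9, 9, 0): 0}
print(update_layer(stored, fresh))  # {(0, 0, 0): 6, (1, 0, 0): 2}
```

The building-like voxel keeps an updated weight, the new voxel is added, and the voxel vacated by the moving target is deleted.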
  • A fourth aspect of the present application provides a vehicle positioning device, including: a prediction module configured to use at least data from a satellite navigation system and an inertial measurement unit to predict a first position and a first attitude of the vehicle at a first point in time; an acquisition module configured to use sensor data from the vehicle's lidar to obtain first laser point cloud data of a first local geographic area containing the first position; and a correction module configured to use the first laser point cloud data and a pre-built first positioning layer of the first local geographic area to correct the first position and the first attitude, so as to obtain the corrected position and corrected attitude of the vehicle at the first point in time. The first positioning layer is configured to store the identifiers and weight values of a plurality of voxels, which include at least a part of the voxels obtained by voxelizing second laser point cloud data of the first local geographic area, and the weight value of a voxel indicates the probability that the voxel is occupied by a geographic environment object.
  • The positioning layer used by the device stores weight values that indicate how likely each voxel is to be occupied by a geographic environment object, reflecting the geospatial structure of the geographic area; it stores neither the specific semantics (i.e., specific types) of geographic environment objects nor their laser reflection intensity, which is susceptible to environmental changes. The device therefore positions the vehicle based on the geospatial structure of the geographic area rather than on object semantics or reflection intensity. Because that structure is not easily affected by object semantics or environmental changes (for example, weather, or road wear over time), the device achieves essentially the same positioning effect whether the types of geographic environment objects in the area are rich or scarce, and regardless of environmental changes, thereby improving positioning stability and positioning performance.
  • The plurality of voxels includes the voxels whose weight value is greater than zero among the voxels obtained by voxelizing the second laser point cloud data, and the weight value of each such voxel is represented by the number of laser points contained in the voxel.
  • The first positioning layer is configured to store the identifiers and weight values of the plurality of voxels in pairs in the form of a hash table, where each voxel's identifier is represented by a hash of the voxel's position in the spatial coordinate system used by the first positioning layer.
  • The correction module includes: a sampling module for performing multiple spatial samplings of position and attitude in the space surrounding the first position and first attitude to obtain multiple position-and-attitude groups, each group including a sampled position and sampled attitude obtained by one sampling; a first calculation module for calculating similarity scores for the multiple groups, where the similarity score of any group indicates the degree of similarity between the first positioning layer and a second positioning layer associated with that group, the second positioning layer being generated from third laser point cloud data in the same way the first positioning layer was generated, and the third laser point cloud data being obtained by using a three-dimensional spatial transform generated from the group's sampled position and sampled attitude to transform the first laser point cloud data from a first spatial coordinate system associated with the vehicle to a second spatial coordinate system associated with the first positioning layer; and a determining module for determining the corrected position and corrected attitude at least based on the similarity scores of the multiple groups.
  • The determining module includes: a selection module for selecting, from the multiple position-and-attitude groups, the first position-and-attitude groups whose similarity score is greater than a first threshold; and a second calculation module for using the similarity scores of those groups as weights to perform a weighted fitting of the sampled positions and sampled attitudes they contain, obtaining the corrected position and the corrected attitude.
  • The first calculation module includes: a search module for finding, in each position-and-attitude group, voxels of a first type and voxels of a second type, where a first-type voxel has the same weight value stored in the second positioning layer associated with the group as in the first positioning layer, and a second-type voxel has different weight values in the two layers; a fourth calculation module for calculating a first total weight value and a second total weight value for each group, where the first total weight value equals the sum of the weight values, stored in the first positioning layer, of the first-type voxels, and the second total weight value equals the sum of the weight values, stored in the first positioning layer, of the second-type voxels; and a fifth calculation module for calculating, for each group, the difference between its first and second total weight values as the group's similarity score, to obtain the similarity scores of the multiple groups.
  • The device may further include: an acquisition module for acquiring sensor data from the vehicle's lidar at a second position; and a generation module for generating map update data that includes the second position, a second attitude, and the acquired sensor data, where the second position and second attitude are the corrected position and corrected attitude of the vehicle at the current point in time, determined from the corrected position and corrected attitude at the first point in time together with the vehicle's motion from the first point in time to the current point in time.
• a fifth aspect of the present application provides an apparatus for generating a positioning layer, including: an acquisition module for acquiring laser point cloud data of a second geographic area; a voxelization module for voxelizing the laser point cloud data to obtain a plurality of voxels; a calculation module for calculating the weight values of the plurality of voxels, where the weight value of each voxel indicates the probability that the voxel is occupied by a geographic environment object; and a storage module for storing the identification and weight value of at least a part of the plurality of voxels to obtain the positioning layer of the second geographic area.
• the positioning layer generated by the device stores weight values indicating the degree to which voxels may be occupied by geographic environment objects, which reflects the geospatial structure of the geographic area; it stores neither the specific semantics (i.e., specific types) of the geographic environment objects nor their laser reflection intensity, which is susceptible to environmental changes. Therefore, positioning with this layer depends on the geospatial structure of the geographic area rather than on the specific semantics of geographic environment objects or their laser reflection intensity. Because the geospatial structure of the geographic area is not easily affected by object semantics or environmental changes (for example, weather, or road wear over time), the positioning layer generated by the device yields basically the same positioning effect no matter how the environment changes, thereby improving positioning stability and positioning performance.
• the calculation module is further configured to calculate the number of laser points contained in each of the plurality of voxels as the weight value of that voxel, and the at least a part of the voxels consists of each voxel of the plurality of voxels whose weight value is greater than zero.
• the positioning layer is configured to store the identifications and the weight values of the at least a part of the voxels in pairs in the form of a hash table, and the identification of each voxel is represented by a hash map value of the location information of the voxel in the spatial coordinate system associated with the positioning layer.
• a sixth aspect of the present application provides an apparatus for updating positioning layers, including: a generating module configured to use map update data to generate fourth laser point cloud data of a third local geographic area, wherein the map update data includes the third position and the third attitude of the vehicle and the first sensor data of the lidar of the vehicle at the third position, the third local geographic area is a local geographic area including the third position, and the fourth laser point cloud data is formed using the first sensor data; a voxelization module configured to voxelize fifth laser point cloud data to obtain a plurality of third voxels, wherein the fifth laser point cloud data is obtained by using a three-dimensional space transformation generated based on the third position and the third attitude to transform the fourth laser point cloud data from the spatial coordinate system associated with the vehicle to the spatial coordinate system associated with the third positioning layer, and wherein the third positioning layer is a previously constructed positioning layer of the third local geographic area, which stores the plurality of third voxels
• the third positioning layer stores the identifications and the weight values of those voxels of the plurality of third voxels whose weight values are greater than zero, and the update module includes: a selection module for selecting, from the plurality of third voxels, those voxels whose calculated weight value is greater than zero; a replacement module for, for each selected voxel, if the third positioning layer stores a weight value for the voxel and the stored weight value differs from the calculated weight value, replacing the weight value stored in the third positioning layer with the calculated weight value; a storage module for, for each selected voxel, if the third positioning layer does not store a weight value for the voxel, storing the identification of the voxel and the calculated weight value in the third positioning layer; and a deletion module for a fourth voxel whose identification and weight value are stored in the third positioning layer
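The select/replace/store rules above can be condensed into a short sketch. This is illustrative only and not the claimed implementation; it assumes the third positioning layer is a mapping from voxel identification to weight value, that `new_weights` holds the recalculated weights, and that voxels whose recalculated weight is zero are the ones the deletion module removes:

```python
def update_layer(layer, new_weights):
    # layer: third positioning layer, voxel id -> stored weight value.
    # new_weights: voxel id -> recalculated weight value.
    for voxel_id, w in new_weights.items():
        if w > 0:
            # Covers both the replacement module (stored value differs)
            # and the storage module (voxel not yet stored).
            layer[voxel_id] = w
        elif voxel_id in layer:
            # Deletion: the geography changed and the voxel is now empty.
            del layer[voxel_id]
    return layer
```

Because only voxels with positive weight are ever kept, the layer stays compressed after every update.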
• a seventh aspect of the present application provides a computer device, including: a bus; a communication interface connected to the bus; at least one processor connected to the bus; and at least one memory connected to the bus and storing program instructions which, when executed by the at least one processor, cause the at least one processor to perform the method described in the foregoing first aspect.
• An eighth aspect of the present application provides a map generating device, including: a bus; an input and output interface connected to the bus; at least one processor connected to the bus; and at least one memory connected to the bus and storing program instructions which, when executed by the at least one processor, cause the at least one processor to execute the method described in the foregoing second aspect.
• a ninth aspect of the present application provides a map update device, including: a bus; an input and output interface connected to the bus; at least one processor connected to the bus; and at least one memory connected to the bus and storing program instructions which, when executed by the at least one processor, cause the at least one processor to execute the method described in the foregoing third aspect.
• the tenth aspect of the present application provides a computer-readable storage medium on which program instructions are stored; when executed by a computer, the program instructions cause the computer to execute the method described in the aforementioned first, second, or third aspect.
  • An eleventh aspect of the present application provides a computer program, which includes program instructions that, when executed by a computer, cause the computer to execute the method described in the aforementioned first, second, or third aspect.
• a twelfth aspect of the present application provides a vehicle, including: a sensor system, which includes at least a global navigation positioning system receiver, an inertial measurement unit, and a lidar; a communication system for the vehicle to communicate with the outside; and the aforementioned computer device.
  • FIG. 1 shows a schematic diagram of the implementation environment involved in vehicle positioning, positioning layer generation, and positioning layer update according to an embodiment of the present application.
  • Fig. 2A shows a schematic flowchart of a method for vehicle positioning according to an embodiment of the present application.
  • Fig. 2B shows a schematic flowchart of a method for correcting a position and a posture according to an embodiment of the present application.
  • Fig. 2C shows a schematic flowchart of a method for determining a corrected position and a corrected posture according to an embodiment of the present application.
  • Fig. 2D shows a schematic flowchart of a method for calculating a similarity score according to an embodiment of the present application.
  • Fig. 2E shows a schematic flowchart of a method for generating map update data according to an embodiment of the present application.
  • Fig. 3A shows a flowchart of a method for generating a positioning layer according to an embodiment of the present application.
  • Fig. 3B shows a flowchart of a method for updating a positioning layer according to an embodiment of the present application.
  • Fig. 3C shows a flowchart of a method for updating the weight value of a voxel according to an embodiment of the present application.
  • Fig. 4A shows a schematic diagram of a vehicle positioning device according to an embodiment of the present application.
  • Fig. 4B shows a schematic diagram of a correction module according to an embodiment of the present application.
  • Fig. 4C shows a schematic diagram of a determining module according to an embodiment of the present application.
  • Fig. 4D shows a schematic diagram of a first calculation module according to an embodiment of the present application.
  • Fig. 4E shows a schematic diagram of a map update data generating module according to an embodiment of the present application.
  • Fig. 4F shows a schematic diagram of an apparatus for generating a positioning layer according to an embodiment of the present application.
  • Fig. 4G shows a schematic diagram of an apparatus for updating a positioning layer according to an embodiment of the present application.
  • Fig. 4H shows a schematic diagram of an update module according to an embodiment of the present application.
  • Fig. 5A shows an exemplary specific implementation of a system for positioning layer generation, positioning layer update, and vehicle positioning according to an embodiment of the present application.
  • FIG. 5B shows an exemplary specific implementation of the method for generating a positioning layer according to an embodiment of the present application.
  • FIG. 5C shows an exemplary specific implementation of the method for predicting the position and posture of a vehicle according to an embodiment of the present application.
  • Fig. 5D shows an exemplary specific implementation of the vehicle position and posture correction method according to an embodiment of the present application.
  • Fig. 5E shows an exemplary specific implementation of the positioning layer update method according to the embodiment of the present application.
  • Fig. 6 shows a schematic structural diagram of a computer device according to an embodiment of the present application.
  • Fig. 7 shows a schematic structural diagram of a map generating device according to an embodiment of the present application.
  • Fig. 8 shows a schematic structural diagram of a map updating device according to an embodiment of the present application.
• Document 1 discloses a positioning system and method which uses lidar, a vehicle-mounted camera, a global positioning system, an inertial measurement unit, vehicle controller area network data, and pre-established positioning layers to position an autonomous driving vehicle.
• the positioning layer (HD map) 510 disclosed in Document 1 uses different methods to store the data of geographic environment objects with different semantics.
• the data of lanes is stored using a landmark map 520, and the three-dimensional (3D) geographic environment object data surrounding the road is stored, using an occupancy map 530, in the form of a surface mesh, a 3D point cloud, or a voxel grid (volumetric grid). When the occupancy map 530 is a voxel grid, the voxel data saves both the data of occupied cells and of blank cells that cannot provide semantic information; the data of an occupied cell also stores the normal vector of the local surface that exists in that cell, while a blank cell does not contain a normal vector.
• the amount of data of the occupancy map 530 is relatively large, reaching 1 GB/mile.
  • the technical solution disclosed in Document 1 has at least the following two drawbacks.
• the positioning process is highly dependent on the semantic geographic environment objects provided by the sensors. Therefore, high positioning accuracy can be achieved in scenes rich in geographic environment objects, such as urban roads, while in scenes lacking such objects, such as underground garages and tunnels, positioning accuracy is usually reduced, which degrades positioning stability or positioning capability.
• because the positioning process is highly dependent on geographic environment objects, mismatches between different geographic environment objects caused by misdetection of those objects will also reduce positioning stability or positioning capability.
• the positioning layer stores the data of geographic environment objects in the form of a surface mesh, in the form of a 3D point cloud, or in the form of a voxel grid containing data of both occupied cells and blank cells; the data volume of the positioning layer is therefore large, which leads to a large storage space being required to store it.
  • Document 2 discloses a positioning system and method that uses lidar, global positioning system, inertial measurement unit, vehicle chassis wheel speed, and pre-established positioning layers to locate autonomous vehicles.
  • the technical solution disclosed in Document 2 has at least the following two defects.
• the laser reflection intensity is used to characterize the geographic environment objects, which reduces positioning stability: the reflection intensity of the road surface to the laser is affected by the degree of road wear, the weather, the lidar data quality, and the lidar installation location. The laser reflection intensity observed by the same vehicle on the same road surface therefore differs at different times or under different weather conditions, so when the same vehicle drives on the same unmarked road, its positioning results at different times or under different weather conditions will differ greatly.
  • the positioning layer is stored in the form of an image, so the data volume of the positioning layer is relatively large, which causes the storage of the positioning layer to require a large storage space.
  • this application proposes various embodiments of vehicle positioning, positioning layer generation, and positioning layer update that will be described in detail below.
  • the term "geographic environment object” refers to various objects that can reflect laser light in the geographic area on which the vehicle is driving, such as but not limited to, buildings, road signs, road signs, roads, trees, bushes , Tunnel ceilings, tunnel walls, pedestrians, vehicles, animals, telephone poles, etc.
• voxel is the abbreviation of volume pixel; it is the smallest unit of digital data into which three-dimensional space is divided, and is conceptually similar to the pixel, the smallest unit of two-dimensional space.
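As an informal illustration of voxelization (not part of the claimed embodiments, and assuming an axis-aligned grid with a hypothetical 0.5 m voxel size), a laser point can be mapped to the grid indices of the voxel that contains it as follows:

```python
import math

def point_to_voxel(x, y, z, voxel_size=0.5):
    # Map a 3-D point (in metres) to integer voxel grid indices along
    # the longitude, latitude, and height axes of the grid.
    return (math.floor(x / voxel_size),
            math.floor(y / voxel_size),
            math.floor(z / voxel_size))

# Example: the point (1.2, -0.3, 2.0) falls in voxel (2, -1, 4).
```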
  • FIG. 1 shows a schematic diagram of the implementation environment involved in vehicle positioning, positioning layer generation, and positioning layer update according to an embodiment of the present application.
  • the implementation environment includes a vehicle 10, a map generating device 30 and a map updating device 40.
  • the vehicle 10 may be a conventional vehicle or an autonomous driving vehicle.
  • Autonomous driving vehicles may also be called unmanned vehicles or intelligent driving vehicles, etc., which can drive in manual mode, fully autonomous mode, or partially autonomous mode.
  • an autonomous vehicle can drive autonomously over a geographic area with little or no control input from the driver.
  • the vehicle 10 In addition to common components such as an engine or electric motor, wheels, steering wheel, and transmission, the vehicle 10 also includes a sensor system 102, a communication system 104, and a computer device 108.
  • the sensor system 102 includes at least a global navigation satellite system (global navigation satellite system: GNSS) receiver 110, an inertial measurement unit (IMU) 112, and a laser radar (light detection and ranging: LiDAR) 114.
  • the GNSS receiver 110 is used to receive satellite signals to locate the vehicle.
  • the GNSS receiver may be a global positioning system (GPS) receiver, a Beidou system receiver, or other types of positioning system receivers.
  • the IMU 112 can sense the position and orientation changes of the vehicle based on the inertial acceleration.
  • the IMU 112 may be a combination of an accelerometer and a gyroscope, and is used to measure the angular velocity and acceleration of the vehicle.
  • the lidar 114 uses laser light to sense objects in the geographic environment where the vehicle 10 is located. Using the sensor data of the lidar 114, laser point cloud data (also referred to as a laser point cloud map) of a geographic area can be formed.
  • the lidar 114 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
  • the sensor system 102 may also include a chassis wheel speed sensor, which can sense the chassis wheel speed of the vehicle 10.
  • the communication system 104 is used for the vehicle 10 to communicate with the outside, and it can communicate wirelessly with one or more external devices directly or via a communication network.
• the communication system 104 may use third-generation (3G) cellular communication (e.g., code division multiple access (CDMA), etc.), fourth-generation (4G) cellular communication (e.g., long term evolution (LTE), etc.), or fifth-generation (5G) cellular communication to communicate with external devices.
• the communication system 104 may use WiFi to communicate with external devices over a wireless local area network (WLAN).
  • the communication system 104 may directly communicate with an external device using an infrared link, Bluetooth technology, or ZigBee.
  • the computer device 108 is connected to the sensor system 102 and the communication system 104.
  • the computer device 108 may use the sensor data received from the sensor system 102 and the positioning layer of the local geographic area where the vehicle 10 is located to locate the vehicle 10.
  • the positioning layer used by the computer device 108 may be pre-stored in the computer device 108, or may be obtained from an external device such as a server through the communication system 104.
  • the map generating device 30 may be an electronic device with computing capabilities such as a server, a workstation, a desktop computer or a notebook computer, which is used to generate a positioning layer for positioning a vehicle by using laser point cloud data of a geographic area.
• the generated positioning layer can be stored in a network device or the cloud for the vehicle to download locally before use or in real time during use, or it can be provided to users, vehicle manufacturers, vehicle sellers, or vehicle service personnel to be stored in the vehicle.
  • the laser point cloud data is formed by using sensor data obtained by a laser radar in the geographic area by a detection device such as a vehicle or an unmanned aerial vehicle.
  • the map update device 40 may be an electronic device with computing capabilities, such as a server, a workstation, a desktop computer, or a notebook computer, which is used to update the constructed positioning layer.
  • Fig. 2A shows a schematic flowchart of a method for vehicle positioning according to an embodiment of the present application.
  • the method 200 shown in FIG. 2A may be executed by, for example, the computer device 108 of the vehicle 10 or any other suitable device to determine the position and posture of the vehicle 10 relative to a map.
  • the method 200 includes step S204-step S212.
• step S204: at least data from the satellite navigation system and the inertial measurement unit are used to obtain the first position and the first attitude of the vehicle at the first time point.
  • the data from both the GNSS receiver 110 and the IMU 112 of the vehicle 10 may be used to obtain the first position and the first attitude of the vehicle 10.
  • the data from the GNSS receiver 110, the IMU 112 and the chassis wheel speed sensor of the vehicle 10 may be used to obtain the first position and the first attitude of the vehicle 10.
• step S208: use the sensor data of the lidar of the vehicle to obtain the first laser point cloud data of the first local geographic area including the first position.
  • the first local geographic area may be a local area centered on the first position of the vehicle 10.
  • the first local geographic area may be a local area that contains the first position of the vehicle 10 but is not centered thereon.
  • the first laser point cloud data may be constructed using only the sensor data of the laser radar 114 obtained at the first point in time.
• the first laser point cloud data may be formed using the sensor data of the lidar 114 acquired at the first time point and one or more times before the first time point, as shown in steps S544-S546 of the method 540 in FIG. 5C.
• step S212: use the first laser point cloud data and the pre-built first positioning layer of the first local geographic area to correct the first position and the first posture to obtain the corrected position and corrected posture of the vehicle at the first time point, wherein the first positioning layer is configured to store the identifications and weight values of a plurality of voxels, the plurality of voxels being at least a part of the voxels obtained by voxelizing the second laser point cloud data of the first local geographic area acquired when the first positioning layer was built, and the weight value of a voxel indicates the degree to which the voxel may be occupied by geographic environment objects.
  • the plurality of voxels may be all voxels or part of voxels in each voxel obtained by voxelizing the second laser point cloud data (for example, those voxels with the weight value greater than zero).
• the weight value of each voxel can be represented by, for example, the number of laser points included in the voxel.
  • the weight value of each voxel can be expressed in other suitable ways.
  • the weight value of each voxel can be expressed by the ratio of the number of laser points contained in the voxel to the specified number.
  • the specified number is the number of laser points contained in the voxel containing the largest number of laser points among all the voxels of the plurality of voxels.
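The point-count weighting described above can be sketched as follows. This is an illustrative example only, assuming a hypothetical 0.5 m voxel grid; it counts laser points per voxel as the raw weight and includes the optional normalisation by the most occupied voxel (the "specified number"):

```python
from collections import Counter

def voxel_weights(points, voxel_size=0.5):
    # Count the laser points falling in each voxel; the count itself
    # serves as the raw weight value of that voxel.
    counts = Counter(
        (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        for x, y, z in points
    )
    # Normalise by the count of the most occupied voxel, giving
    # weight values in (0, 1]; voxels with zero points never appear,
    # so only voxels with weight greater than zero are stored.
    max_count = max(counts.values())
    return {voxel: c / max_count for voxel, c in counts.items()}
```

Note that empty voxels are simply absent from the result, which mirrors the compression property of the positioning layer.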
  • the identifier of the voxel may be, for example, but not limited to, expressed by using the position information of the voxel or calculated based on the position information of the voxel.
  • the position information of the voxel may be, for example, the longitude coordinate value, the latitude coordinate value, and the height coordinate value of the voxel in the spatial coordinate system (for example, the global spatial coordinate system) applied by the first positioning layer.
  • the position information of the voxel may be the serial number in the longitude direction, the serial number in the latitude direction, and the serial number in the height direction of the voxel in the spatial coordinate system applied by the first positioning layer.
  • the position information of a certain voxel may be [100, 130, 180], which means that the voxel belongs to the 100th voxel in the longitude direction in the spatial coordinate system applied by the first positioning layer, and The 130th voxel in the latitude direction and the 180th voxel in the height direction.
  • the identifier of the voxel may be calculated, for example, but not limited to, based on the position information of the voxel.
  • the identifier of a voxel is, for example, a hash map value of the location information of the voxel.
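One possible hash map value for a voxel's position information is sketched below; the multiplier constants come from a common spatial-hashing scheme and are an assumption for illustration, not taken from this application:

```python
def voxel_hash(ix, iy, iz):
    # Combine the voxel's serial numbers in the longitude, latitude,
    # and height directions into a single hash map value that can act
    # as the key of the positioning layer's hash table.
    return (ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791)

# The layer can then pair the hashed identification with the weight:
layer = {voxel_hash(100, 130, 180): 25}
```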
  • the first positioning layer may be, for example, but not limited to, obtained from the vehicle or other devices such as a server.
• the correction of the first position and the first posture of the vehicle in step S212 can be implemented in the manner described in steps S552-S568 of the correction method 550 shown in FIG. 5D.
• in that manner, the first position and the first posture of the vehicle are not themselves sampled; therefore, the sampled positions and sampled postures obtained by sampling do not include the first position and the first posture of the vehicle, and the plurality of position and posture groups accordingly does not include a position and posture group having the first position and the first posture of the vehicle.
• the correction of the first position and the first posture of the vehicle in step S212 may also be implemented in a first optional manner that differs from the manner described in steps S552-S568 of the correction method 550 shown in FIG. 5D in that: one of the sampled positions and sampled postures obtained in step S552 is the first position and the first posture of the vehicle, so the plurality of position and posture groups includes a position and posture group having the first position and the first posture of the vehicle; then, in step S568, the position and posture contained in the position and posture group with the largest similarity score are used as the corrected position and posture of the vehicle 10 at the first time point T1.
• the positioning layer used in the method of this embodiment stores weight values indicating the degree to which voxels may be occupied by geographic environment objects, which reflects the geospatial structure of the geographic area; it stores neither the specific semantics (i.e., specific types) of the geographic environment objects nor their laser reflection intensity, which is susceptible to environmental changes. Therefore, the method of this embodiment depends on the geospatial structure of the geographic area rather than on object semantics or laser reflection intensity, and can achieve basically the same positioning effect no matter how the environment changes, thereby improving positioning stability and positioning performance.
  • the plurality of voxels includes voxels with the weight value greater than zero in each voxel obtained by voxelizing the second laser point cloud data.
• the first positioning layer is structured to store only the identifications and weight values of voxels whose weight value is greater than zero, and does not store those of voxels whose weight value is zero, i.e., information that does not help positioning; this reduces and compresses the data volume of the positioning layer, thereby reducing the storage space required for storing it.
• the weight value of each voxel in the plurality of voxels may be represented by the number of laser points included in the voxel.
  • the weight value can reliably and simply indicate the possibility of the voxel being occupied by the geographical environment object.
• the first positioning layer may be configured to store the identifications and the weight values of the plurality of voxels in pairs in the form of a hash table, and the identifier of each voxel is represented by a hash map value of the position information of the voxel in the spatial coordinate system applied by the first positioning layer.
  • the hash map value is, for example, calculated using a known hash function for the location information.
• a hash table (also called a hash map) is a data structure that stores keys and values in pairs so that a value can be directly accessed or indexed by its key, allowing stored values to be found quickly. Therefore, constructing the first positioning layer in the form of a hash table can increase the indexing/matching speed of the positioning layer, thereby improving positioning efficiency.
  • the correction of the first position and the first posture in step S212 may include operations in step S216, step S220, and step S224.
• step S216: in the surrounding space of the first position and the first posture, perform multiple spatial samplings of position and posture to obtain a plurality of position and posture groups, wherein each position and posture group includes the sampled position and sampled posture obtained by one of the spatial samplings.
• step S220: calculate the similarity scores of the multiple position and posture groups, wherein the similarity score of any position and posture group represents the degree of similarity between the second positioning layer associated with that position and posture group and the first positioning layer; the second positioning layer is made from the third laser point cloud data in the same manner as the first positioning layer, and the third laser point cloud data is obtained by using the three-dimensional space transformation generated from the sampled position and sampled posture included in that position and posture group to transform the first laser point cloud data from the first spatial coordinate system associated with the vehicle to the second spatial coordinate system associated with the first positioning layer.
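The three-dimensional space transformation mentioned above can be illustrated as a rigid transform built from a sampled position and posture. This is a sketch, not the claimed implementation: for brevity the rotation uses yaw only, though a full 3x3 rotation matrix built from the complete sampled posture works the same way:

```python
import numpy as np

def transform_points(points, yaw, position):
    # Build the rigid transform for one sampled position/posture and
    # apply it to an (N, 3) point cloud, moving the points from the
    # vehicle-associated frame to the layer-associated frame.
    c, s = np.cos(yaw), np.sin(yaw)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])
    return points @ rotation.T + position
```

Each sampled position and posture group yields one such transform, and hence one candidate second positioning layer to score.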
• the calculation of the similarity scores of the multiple position and posture groups in step S220 can be implemented using the manner described in steps S554-S564 of the correction method 550 shown in FIG. 5D.
• the calculation of the similarity scores of the multiple position and posture groups in step S220 may also be implemented in a first optional manner that differs from the manner described in steps S554-S564 of the correction method 550 shown in FIG. 5D in that: in step S564, for each position and posture group, its first total weight value, the ratio of its first total weight value to its second total weight value, or the ratio of its first total weight value to the sum of its first and second total weight values is used as the similarity score of that position and posture group.
  • the first spatial coordinate system may be, but not limited to, a local spatial coordinate system of the vehicle
  • the second spatial coordinate system may be, but not limited to, a global spatial coordinate system.
• step S224: determine the corrected position and the corrected posture based at least on the similarity scores of the multiple position and posture groups.
  • the determination of the correction position and the correction posture in step S224 may be implemented by using the manner described in steps S566-S568 in the correction method 550 shown in FIG. 5D.
  • the determination of the correction position and the correction posture in step S224 may be implemented in a first optional manner that is different from the manner described in steps S566-S568 in the correction method 550 shown in FIG. 5D.
• the first optional manner differs from the manner described in steps S566-S568 in that, in step S568, a weighted fitting algorithm other than least-squares weighted fitting is used to perform weighted fitting on the sampling positions and the sampling postures included in the respective position and posture groups, so as to obtain the corrected position and the corrected posture.
• in this way, the first position and the first posture of the vehicle are corrected with the help of multiple groups of sampling positions and sampling postures sampled from the space surrounding the first position and the first posture, so that the corrected position and the corrected posture of the vehicle can be determined relatively reliably.
  • determining the corrected position and the corrected posture in step S224 may include step S228 and step S232.
• step S228 from the plurality of position and posture groups, each first position and posture group whose similarity score is greater than a first threshold is selected.
• step S232 the similarity scores of the respective first position and posture groups are used as weights, and the sampling positions and the sampling postures included in the respective first position and posture groups are weighted and fitted to obtain the corrected position and the corrected posture.
• by selecting only the sampling positions and sampling postures of the position and posture groups whose similarity scores are greater than the first threshold to correct the first position and the first posture of the vehicle, the influence of noise sampling positions and noise sampling postures on vehicle positioning can be effectively removed, so as to obtain a high-precision positioning result.
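The selection and weighted fitting of steps S228 and S232 can be sketched as follows, assuming each pose is a 6-vector [x, y, z, roll, pitch, yaw] and using a plain weighted mean as the fit. Averaging Euler angles is only reasonable when the sampled postures are tightly clustered; least-squares weighted fitting, as in step S568, is one concrete alternative.

```python
import numpy as np

def fit_corrected_pose(pose_groups, scores, threshold):
    """Steps S228/S232 sketch: keep position and posture groups whose
    similarity score exceeds the first threshold, then use the scores as
    weights to fit the corrected position and posture (weighted mean here)."""
    poses = np.asarray(pose_groups, dtype=float)
    scores = np.asarray(scores, dtype=float)
    keep = scores > threshold                                # step S228
    w = scores[keep]
    return (w[:, None] * poses[keep]).sum(axis=0) / w.sum()  # step S232
```

Groups whose score falls at or below the threshold, i.e. the likely noise samples, contribute nothing to the fit.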
  • calculating the similarity scores of the multiple position and posture groups in step S220 may include step S240, step S244, and step S248.
• step S240 the first-type voxels and the second-type voxels of each position and posture group are searched for, wherein a first-type voxel is a voxel whose weight value stored in the second positioning layer associated with the position and posture group is the same as its weight value stored in the first positioning layer, and a second-type voxel is a voxel whose weight value stored in the second positioning layer associated with the position and posture group is different from its weight value stored in the first positioning layer.
• step S244 the first total weight value and the second total weight value of each position and posture group are calculated, wherein the first total weight value is equal to the sum of the weight values, stored in the first positioning layer, of the respective first-type voxels, and the second total weight value is equal to the sum of the weight values, stored in the first positioning layer, of the respective second-type voxels.
• step S248 the difference between the first total weight value and the second total weight value of each position and posture group is calculated as the similarity score of that position and posture group, so as to obtain the similarity scores of the multiple position and posture groups.
• when the second positioning layer associated with a position and posture group is similar to the first positioning layer, the number of first-type voxels, whose weight values stored in the two positioning layers are the same, is usually larger, and the number of second-type voxels, whose weight values differ, is usually smaller, so the difference between the first total weight value (the sum of the first-type voxel weights) and the second total weight value (the sum of the second-type voxel weights) is usually larger.
• conversely, when the two positioning layers are dissimilar, the difference between the first total weight value and the second total weight value is usually smaller.
• therefore, using the difference between the first total weight value and the second total weight value as the similarity score of a position and posture group can accurately indicate the similarity between the real-time positioning layer built from the laser point cloud data obtained by the vehicle and the pre-built positioning layer of the corresponding local geographic area associated with that position and posture group.
  • the method 200 may further include step S252 and step S256 to generate map update data for updating the positioning layer.
  • step S252 obtain sensor data of the lidar of the vehicle at the second position.
• step S256 generate map update data, which includes the second position, the second posture, and the acquired sensor data, wherein the second position and the second posture are the position and posture of the vehicle at the current point in time, and are determined based on the corrected position and the corrected posture of the vehicle at the first point in time and the movement of the vehicle from the first point in time to the current point in time.
• generating the map update data after the corrected position and the corrected posture of the vehicle are determined, so as to update the occupancy weight of each voxel stored in the already-constructed positioning layer of the corresponding local geographic area, can relax the accuracy requirements on the lidar sensor data used when constructing the positioning layer, and can reduce or eliminate the adverse effects of moving objects (for example, moving vehicles or pedestrians) on the positioning layer during its construction, thereby improving the accuracy of positioning results obtained with the positioning layer.
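Determining the second position and second posture from the corrected pose plus the vehicle's subsequent motion amounts to pose composition; the planar SE(2) sketch below is an illustrative simplification of the full three-dimensional case, with the motion expressed in the vehicle frame.

```python
import math

def propagate_pose(corrected, motion):
    """Compose the corrected pose (x, y, yaw) at the first time point with
    the vehicle motion (dx, dy, dyaw), expressed in the vehicle frame, to
    obtain the pose at the current time point."""
    x, y, yaw = corrected
    dx, dy, dyaw = motion
    return (x + dx * math.cos(yaw) - dy * math.sin(yaw),
            y + dx * math.sin(yaw) + dy * math.cos(yaw),
            yaw + dyaw)
```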
  • Fig. 3A shows a flowchart of a method for generating a positioning layer according to an embodiment of the present application.
  • the method 300 shown in FIG. 3A may be executed by, for example, the map generating device 30 of FIG. 1 or any other suitable device.
  • the method 300 may include step S302-step S312.
  • step S302 the laser point cloud data of the second geographic area is acquired.
  • the second geographic area may be any suitable area, for example, but not limited to, an area of one or more cities, an area of one or more provinces, an area of one or more countries, and so on.
• detection personnel can drive a vehicle equipped with lidar through the second geographic area, or control a drone equipped with lidar to fly over the second geographic area, to collect lidar sensor data; the laser point cloud data of the second geographic area can then be constructed from the collected sensor data.
  • step S306 the laser point cloud data is voxelized to obtain multiple voxels.
  • step S310 the weight value of the plurality of voxels is calculated, and the weight value of each voxel indicates the probability of the voxel being occupied by the geographical environment object.
  • the weight value of each voxel can be represented by the number of laser points included in the voxel, for example.
  • the weight value of each voxel may be expressed in any other suitable manner.
• the weight value of each voxel may be expressed by the ratio of the number of laser points contained in the voxel to a specified number, where the specified number is the number of laser points contained in the voxel that contains the most laser points among the plurality of voxels.
  • step S312 the identification and weight value of at least a part of the plurality of voxels are stored to obtain the positioning layer of the second geographic area.
  • the identification of each voxel may be represented by, for example, the position information of the voxel in the spatial coordinate system associated with the positioning layer, or by the hash map value of the position information.
  • the hash map value is, for example, calculated using a known hash function for the location information.
• the positioning layer of this embodiment stores the weight value indicating the possibility of each voxel being occupied by a geographic environment object, which reflects the geospatial structure of the geographic area; it stores neither the specific semantics (i.e., specific types) of the geographic environment objects nor the reflection intensity of the objects to the laser. Because the geospatial structure of a geographic area is not easily affected by the specific semantics of geographic environment objects or by environmental changes, using the positioning layer produced in this embodiment for positioning can achieve basically the same positioning effect whether in a geographic area rich in types of geographic environment objects or in one lacking such types, and regardless of environmental changes, thereby improving positioning stability and positioning performance.
  • calculating the weight value of the plurality of voxels in step S310 may further include: calculating the number of laser points contained in each voxel in the plurality of voxels as the weight value of the voxel .
  • the weight value can reliably and simply indicate the possibility of the voxel being occupied by the geographical environment object.
  • the at least a part of the voxels is each voxel of the plurality of voxels whose weight value is greater than zero.
• that is, the positioning layer is structured to store only the identification and weight value of voxels whose weight value is greater than zero, and not those of voxels whose weight value is equal to zero, i.e., information that is not helpful for positioning; this reduces and compresses the data volume of the positioning layer, thereby reducing the storage space required for storing it.
  • the positioning layer is configured to store the identification and the weight value of the at least a part of the voxels in pairs in the form of a hash table, and the identification of each voxel is determined by the The location information of the voxel in the spatial coordinate system associated with the positioning layer is represented by a hash map value.
• a hash table is a data structure that stores keys and values in pairs so that a value can be directly accessed or indexed by its key, allowing stored values to be found quickly. Therefore, constructing the positioning layer in the form of a hash table can increase the indexing/matching speed of the positioning layer, thereby improving positioning efficiency.
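The generation steps S306-S312 can be sketched as follows, with a Python dict standing in for the hash table and the raw integer grid-index triple standing in for the hash map value; the voxel size is an assumed parameter.

```python
import numpy as np

def build_positioning_layer(points, voxel_size=0.5):
    """Voxelize an (N, 3) laser point cloud (step S306) and store
    identification/weight pairs for every voxel containing at least one
    laser point; the weight is the number of laser points in the voxel
    (step S310), and zero-weight voxels are never stored (step S312)."""
    layer = {}
    indices = np.floor(np.asarray(points, dtype=float) / voxel_size).astype(int)
    for idx in map(tuple, indices):
        layer[idx] = layer.get(idx, 0) + 1
    return layer
```

Because only occupied voxels ever receive an entry, the dict naturally realizes the "store only weight values greater than zero" compression described above.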
  • Fig. 3B shows a flowchart of a method for updating a positioning layer according to an embodiment of the present application.
  • the method 350 shown in FIG. 3B may be executed by, for example, the map update device 40 of FIG. 1 or any other suitable device.
  • the method 350 may include step S352-step S358.
• step S352 the map update data is used to generate the fourth laser point cloud data of the third local geographic area, wherein the map update data includes the third position and the third attitude of the vehicle and the first sensor data collected by the lidar of the vehicle, the third local geographic area is a local geographic area including the third position, and the fourth laser point cloud data is formed using the first sensor data.
• the fifth laser point cloud data is voxelized to obtain a plurality of third voxels, wherein the fifth laser point cloud data is obtained by transforming the fourth laser point cloud data from the spatial coordinate system associated with the vehicle to the spatial coordinate system associated with the third positioning layer, using a three-dimensional space transformation generated based on the third position and the third attitude, and wherein the third positioning layer is the previously constructed positioning layer of the third local geographic area, which stores the identification and weight value of at least a part of the plurality of third voxels, the weight value of each voxel indicating the degree of possibility that the voxel is occupied by a geographic environment object.
  • the at least a part of the voxels may be all the voxels in the plurality of third voxels.
  • the at least a part of the voxels may be those voxels whose weight values are greater than zero among the plurality of third voxels.
  • the weight value of a voxel can be represented by the number of laser points included in the voxel, for example.
  • the weight value of the voxel may be expressed in any other suitable manner.
• the weight value of the voxel may be expressed by the ratio of the number of laser points contained in the voxel to a specified number, where the specified number is the number of laser points contained in the voxel that contains the most laser points among the plurality of third voxels.
  • the identifier of the voxel may be represented by the position information of the voxel in the spatial coordinate system associated with the third positioning layer, or may be represented by the hash map value of the position information.
  • the hash map value is, for example, calculated using a known hash function for the location information.
  • step S354 the weight values of the plurality of third voxels are calculated.
  • step S356 the weight value of each voxel stored in the third positioning layer is updated by using the weight value of the plurality of third voxels.
• for example, step S592 in FIG. 5E may be used to update the weight value of each voxel stored in the third positioning layer.
• in some embodiments, the third positioning layer stores the identification and weight value of those voxels, among the plurality of third voxels, whose weight value is greater than zero, and, as shown in FIG. 3C, the updating of the weight value of each voxel stored in the third positioning layer in step S356 may include step S358, step S360, step S362, and step S364.
  • step S358 from the plurality of third voxels, those voxels whose calculated weight value is greater than zero are selected.
• step S360 for each selected voxel, if the third positioning layer stores the weight value of the voxel and the stored weight value is not the same as the calculated weight value, the calculated weight value of the voxel replaces the weight value of the voxel stored in the third positioning layer.
• step S362 for each selected voxel, if the weight value of the voxel is not stored in the third positioning layer, the identification of the voxel and the calculated weight value are stored in the third positioning layer.
• step S364 if the third positioning layer stores the identification and weight value of a fourth voxel that does not appear among the selected voxels, the identification and weight value of the fourth voxel are deleted from the third positioning layer.
• in this way, the weight values of static targets (for example, buildings) are retained or updated, while the weight values of moving targets (for example, moving vehicles or pedestrians) are deleted.
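Steps S358-S364 reduce to one pass over a dict-based layer; in the sketch below, new_weights maps each third voxel's identification to its freshly calculated weight and is an assumed input shape.

```python
def update_layer(layer, new_weights):
    """Steps S358-S364 sketch: keep only voxels whose calculated weight is
    greater than zero, replace or insert their stored weights, and delete
    stored voxels that no longer appear among the selected voxels."""
    selected = {vid: w for vid, w in new_weights.items() if w > 0}  # S358
    for vid, w in selected.items():
        layer[vid] = w                     # S360 (replace) / S362 (insert)
    for vid in list(layer):
        if vid not in selected:
            del layer[vid]                 # S364 (delete stale voxels)
```

The deletion step is what removes moving targets: a voxel once occupied by a passing vehicle receives no laser points in the new data, so its stale entry is purged.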
• the vehicle positioning, positioning layer generation, and positioning layer update method embodiments of the present application are described in detail above in conjunction with FIGS. 2A-2E and 3A-3C.
• the following describes in detail embodiments of apparatuses for vehicle positioning, positioning layer generation, and positioning layer update. It should be understood that the descriptions of the method embodiments and the descriptions of the apparatus embodiments correspond to each other; therefore, for parts not described in detail in the apparatus embodiments, reference may be made to the preceding method embodiments.
  • Fig. 4A shows a schematic diagram of an apparatus for vehicle positioning according to an embodiment of the present application.
  • the apparatus 400 shown in FIG. 4A may be implemented by the computer device 108 of FIG. 1 or any other suitable apparatus.
  • the device 400 includes a prediction module 402, an acquisition module 404, and a correction module 406.
  • the prediction module 402 is configured to use at least the data from the satellite navigation system and the inertial measurement unit to obtain the first position and the first attitude of the vehicle at the first point in time.
  • the acquiring module 404 is configured to use the sensor data of the lidar of the vehicle to acquire the first laser point cloud data of the first local geographic area including the first location.
• the correction module 406 is configured to use the first laser point cloud data and the pre-built first positioning layer of the first local geographic area to correct the first position and the first posture, so as to obtain the corrected position and the corrected posture of the vehicle at the first point in time, wherein the first positioning layer is configured to store the identification and weight value of a plurality of voxels, the plurality of voxels includes at least a part of the voxels obtained by voxelizing the second laser point cloud data of the first local geographic area acquired when constructing the first positioning layer, and the weight value of each voxel indicates the degree of possibility that the voxel is occupied by a geographic environment object.
  • the plurality of voxels includes voxels with the weight value greater than zero in each voxel obtained by voxelizing the second laser point cloud data.
• the weight value of each voxel in the plurality of voxels is represented by the number of laser points included in the voxel.
• the first positioning layer is configured to store the identification and the weight value of the plurality of voxels in pairs in the form of a hash table, and the identification of each voxel is represented by the hash map value of the position information of the voxel in the spatial coordinate system associated with the first positioning layer.
  • the correction module 406 may include a sampling module 408, a first calculation module 410, and a determination module 412.
• the sampling module 408 is configured to perform multiple spatial samplings of positions and attitudes in the surrounding space of the first position and the first attitude to obtain a plurality of position and attitude groups, wherein each position and attitude group includes a sampling position and a sampling attitude obtained by one sampling.
• the first calculation module 410 is configured to calculate the similarity scores of the multiple position and posture groups, wherein the similarity score of any position and posture group represents the degree of similarity between the second positioning layer associated with that position and posture group and the first positioning layer, the second positioning layer is generated from the third laser point cloud data in the same manner as the first positioning layer, and the third laser point cloud data is obtained by transforming the first laser point cloud data from the first spatial coordinate system associated with the vehicle to the second spatial coordinate system associated with the first positioning layer, using a three-dimensional space transformation generated based on the sampling position and the sampling posture included in that position and posture group.
  • the determining module 412 is configured to determine the corrected position and the corrected posture based at least on the similarity scores of the multiple position and posture groups.
  • the determination module 412 may include a selection module 414 and a second calculation module 416.
  • the selecting module 414 is configured to select each first position and posture group whose similarity score is greater than a first threshold from the plurality of position and posture groups.
• the second calculation module 416 is configured to use the similarity scores of the respective first position and posture groups as weights, and perform weighted fitting on the sampling positions and the sampling postures included in the respective first position and posture groups, so as to obtain the corrected position and the corrected posture.
  • the first calculation module 410 may include a search module 420, a fourth calculation module 422, and a fifth calculation module 424.
• the searching module 420 is configured to search for the first-type voxels and the second-type voxels of each position and posture group, wherein a first-type voxel is a voxel whose weight value stored in the second positioning layer associated with the position and posture group is the same as its weight value stored in the first positioning layer, and a second-type voxel is a voxel whose weight value stored in the second positioning layer associated with the position and posture group is different from its weight value stored in the first positioning layer.
• the fourth calculation module 422 is configured to calculate the first total weight value and the second total weight value of each position and posture group, wherein the first total weight value is equal to the sum of the weight values, stored in the first positioning layer, of the respective first-type voxels, and the second total weight value is equal to the sum of the weight values, stored in the first positioning layer, of the respective second-type voxels.
• the fifth calculation module 424 is configured to calculate the difference between the first total weight value and the second total weight value of each position and posture group as the similarity score of that position and posture group, so as to obtain the similarity scores of the multiple position and posture groups.
  • the apparatus 400 may further include a map update data generating module 425, which includes an obtaining module 426 and a generating module 428.
  • the obtaining module 426 is configured to obtain sensor data of the lidar of the vehicle at the second position.
• the generating module 428 is configured to generate map update data, which includes the second position, the second posture, and the acquired sensor data, wherein the second position and the second posture are the position and posture of the vehicle at the current point in time, and are determined based on the corrected position and the corrected posture of the vehicle at the first point in time and the movement of the vehicle from the first point in time to the current point in time.
  • Fig. 4F shows a schematic diagram of an apparatus for generating a positioning layer according to an embodiment of the present application.
  • the apparatus 450 shown in FIG. 4F may be implemented by the map generating device 30 in FIG. 1 or any other suitable device.
  • the apparatus 450 may include an acquisition module 452, a voxelization module 454, a calculation module 456, and a storage module 458.
  • the obtaining module 452 is configured to obtain laser point cloud data of the second geographic area.
  • the voxelization module 454 is used to voxelize the laser point cloud data to obtain multiple voxels.
  • the calculation module 456 is used to calculate the weight value of the plurality of voxels, and the weight value of each voxel indicates the probability that the voxel is occupied by the geographical environment object.
  • the storage module 458 is configured to store the identification and weight value of at least a part of the plurality of voxels to obtain the positioning layer of the second geographic area.
  • the calculation module 456 is further configured to calculate the number of laser points included in each voxel in the plurality of voxels as the weight value of the voxel.
  • the at least a part of the voxels is each voxel of the plurality of voxels whose weight value is greater than zero.
  • the positioning layer is configured to store the identification and the weight value of the at least a part of the voxels in pairs in the form of a hash table, and the identification of each voxel is determined by the The location information of the voxel in the spatial coordinate system associated with the positioning layer is represented by a hash map value.
  • Fig. 4G shows a schematic diagram of an apparatus for updating positioning layers according to an embodiment of the present application.
  • the apparatus 480 shown in FIG. 4G may be implemented by the map update device 40 in FIG. 1 or any other suitable device.
  • the device 480 may include a generation module 482, a voxelization module 484, a calculation module 486, and an update module 488.
• the generating module 482 is configured to use the map update data to generate the fourth laser point cloud data of the third local geographic area, wherein the map update data includes the third position and the third attitude of the vehicle and the first sensor data collected by the lidar of the vehicle at the third position, the third local geographic area is a local geographic area including the third position, and the fourth laser point cloud data is formed using the first sensor data.
• the voxelization module 484 is configured to voxelize the fifth laser point cloud data to obtain a plurality of third voxels, wherein the fifth laser point cloud data is obtained by transforming the fourth laser point cloud data from the spatial coordinate system associated with the vehicle to the spatial coordinate system associated with the third positioning layer, using a three-dimensional space transformation generated based on the third position and the third attitude, and wherein the third positioning layer is the previously constructed positioning layer of the third local geographic area, which stores the identification and weight value of at least a part of the plurality of third voxels, the weight value of each voxel indicating how likely the voxel is to be occupied by geographic environment objects.
  • the calculation module 486 is used to calculate the weight values of the plurality of third voxels.
  • the update module 488 is configured to use the calculated weight values of the plurality of third voxels to update the weight values of each voxel stored in the third positioning layer.
• in some embodiments, the third positioning layer stores the identification and the weight value of those voxels, among the plurality of third voxels, whose weight value is greater than zero, and, as shown in FIG. 4H, the update module 488 may include a selection module 490, a replacement module 492, a storage module 494, and a deletion module 496.
  • the selection module 490 is configured to select those voxels whose calculated weight value is greater than zero from the plurality of third voxels.
• the replacement module 492 is configured to, for each selected voxel, if the weight value of the voxel is stored in the third positioning layer and the stored weight value is not the same as the calculated weight value, replace the weight value of the voxel stored in the third positioning layer with the calculated weight value of the voxel.
• the storage module 494 is configured to, for each selected voxel, if the weight value of the voxel is not stored in the third positioning layer, store the identification of the voxel and the calculated weight value in the third positioning layer.
• the deleting module 496 is configured to, if the third positioning layer stores the identification and weight value of a fourth voxel that does not appear among the selected voxels, delete the identification and weight value of the fourth voxel from the third positioning layer.
• the division of the above-mentioned functional modules in the apparatus provided by the above embodiments is merely an example; in practical applications, the above-mentioned functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
  • Fig. 5A shows an exemplary specific implementation of a system for positioning layer generation, positioning layer update, and vehicle positioning according to an embodiment of the present application.
• the system for positioning layer generation, positioning layer update, and vehicle positioning includes four modules: the offline generation module 502 of the positioning layer, the vehicle global position and posture prediction module 506, the vehicle global position and posture correction module 512, and the offline update module 514 of the positioning layer.
  • the offline generation module 502 takes the original laser point cloud data of a certain local geographic area as input, and outputs the positioning layer for positioning of the local geographic area.
• the prediction module 506 takes GNSS data, INS data, vehicle chassis wheel speed, and lidar sensor data as inputs, and outputs the predicted position and posture of the vehicle 10 in the global spatial coordinate system (i.e., the predicted value of the vehicle's position and posture), together with the laser point cloud data of the area surrounding the vehicle 10 in the local spatial coordinate system of the vehicle 10.
• the correction module 512 takes the loaded pre-built positioning layer, the predicted position and posture of the vehicle 10, and the laser point cloud data of the area surrounding the vehicle 10 in the local spatial coordinate system of the vehicle 10 as inputs, and outputs the corrected position and the corrected posture of the vehicle 10 in the global spatial coordinate system.
• when the offline update module 514 determines that the vehicle has entered an area for which a positioning layer has been constructed, it takes the corrected positioning result, the laser point cloud data of the area surrounding the vehicle 10 in the local spatial coordinate system of the vehicle 10, and the constructed positioning layer as inputs, and outputs the positioning layer with updated weight values.
• the offline generation module 502 may be implemented, for example, by the map generation device 30 in FIG. 1, the prediction module 506 and the correction module 512 may be implemented by the computer device 108 in FIG. 1, and the offline update module 514 may be implemented by the map update device 40 in FIG. 1.
  • FIG. 5B shows an exemplary specific implementation of the method for generating a positioning layer according to an embodiment of the present application.
  • the positioning layer generating method 531 shown in FIG. 5B is implemented by the offline generating module 502.
  • the positioning layer generating method 531 may include step S532 to step S540.
  • step S532 the laser point cloud data of the designated geographic area is voxelized to obtain multiple voxels.
  • the designated geographic area is, for example, but not limited to, an area of one or more cities, an area of one or more provinces, an area of one or more countries, and the like.
• detection personnel can drive a vehicle equipped with lidar through the designated geographic area to collect lidar sensor data, and the collected sensor data can then be used to form the laser point cloud data of the designated geographic area in the global spatial coordinate system.
• step S534 the hash map value and the weight value of each voxel in the plurality of voxels are calculated, wherein the weight value of each voxel represents the probability that the voxel is occupied by a geographic environment object and is equal to the number of laser points contained in the voxel, and the hash map value of each voxel, used as the identification of the voxel, is calculated by hash-mapping the position information of the voxel in the global spatial coordinate system with a hash mapping function.
  • the position information may be, for example but not limited to, represented by the longitude, latitude, and altitude coordinate values of the voxel in the global space coordinate system, or by the voxel's serial numbers in the longitude, latitude, and height directions in the global space coordinate system.
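Steps S532 to S534 above can be sketched in a few lines. The following is a minimal Python illustration, not the patent's implementation: the concrete hash mapping function is not specified in the text, so Python's built-in tuple hash stands in for it, and the function name and voxel size are hypothetical.

```python
from collections import Counter

def build_voxel_weights(points, voxel_size=0.5):
    """Voxelize a point cloud and count the laser points per voxel.

    `points` is an iterable of (x, y, z) coordinates in the global space
    coordinate system. The weight of a voxel is simply the number of laser
    points it contains, and the voxel's identifier is a hash of its integer
    grid indices in the longitude, latitude, and height directions.
    """
    weights = Counter()
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        weights[hash(key)] += 1
    return dict(weights)
```

For example, two points falling in the same 0.5 m voxel produce a single entry with weight 2, reflecting a higher occupancy likelihood for that voxel.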
  • In step S536, those voxels whose weight value is greater than zero are selected from the multiple voxels.
  • voxels with a weight value equal to zero are considered unhelpful for positioning; therefore, the information of these voxels is not stored, in order to reduce and compress the amount of data that needs to be stored.
  • In step S538, the hash map value serving as the voxel identifier and the weight value are used as the key and the value, respectively, and the hash map values and weight values of the selected voxels are stored in a hash table in the form of key-value pairs, to obtain the positioning layer of the designated geographic area.
  • In step S540, considering the size of the designated geographic area and the runtime memory and computing power of the positioning devices that will later use the positioning layer, the positioning layer of the designated geographic area is divided into multiple slices for storage.
  • a positioning layer in the form of a hash table can easily load the map data of a desired area, and delete the map data outside that area, by inserting and deleting key-value pairs.
  • the loading and deletion of a hash-table positioning layer can be completed dynamically. Therefore, a positioning layer in the form of a hash table is well suited to sliced storage, and there is no need to ensure an overlap area between the sliced positioning layers of adjacent geographic areas.
  • each sliced positioning layer can be stored on a conventional server or a cloud server for vehicles to download and use, or can be provided to the user, manufacturer, seller, or service personnel of a vehicle to be stored directly in the vehicle.
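The zero-weight filtering of steps S536 to S538 and the dynamic loading and deletion of sliced layers described above can be sketched as plain hash-table operations. This is an illustrative Python sketch; the helper names (`make_layer`, `load_tile`, `drop_tile`) are hypothetical and a dict stands in for the hash table.

```python
def make_layer(voxel_weights):
    """Keep only voxels whose weight is greater than zero (steps S536-S538).

    Zero-weight voxels carry no positioning information, so dropping them
    compresses the stored layer. The result is a hash table mapping
    voxel identifier -> weight value.
    """
    return {vid: w for vid, w in voxel_weights.items() if w > 0}

def load_tile(layer, tile):
    """Merge a sliced tile into the in-memory layer by inserting key-value pairs."""
    layer.update(tile)

def drop_tile(layer, tile):
    """Remove a tile that is no longer needed by deleting its key-value pairs."""
    for vid in tile:
        layer.pop(vid, None)
```

Because tiles are merged and removed purely by key, adjacent tiles need no overlap area, which matches the sliced-storage property noted above.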
  • FIG. 5C shows an exemplary specific implementation of the method for predicting the position and posture of a vehicle according to an embodiment of the present application.
  • the vehicle position and posture prediction method 540 shown in FIG. 5C is implemented by the prediction module 506.
  • the method 540 may include step S542-step S546.
  • In step S542, the position and posture of the vehicle 10 at the first time point T1 are predicted using the GNSS data, IMU data, and vehicle chassis wheel speed received from the GNSS receiver 112, the IMU 114, and the chassis wheel speed sensor of the vehicle 10. It should be understood that the position and posture of the vehicle 10 at the first time point T1 are the position and posture in the global space coordinate system, not in the local space coordinate system of the vehicle 10.
  • determining the position and posture of a vehicle from GNSS data, IMU data, and vehicle chassis wheel speed is a known technique, so a detailed description is omitted here.
  • In step S544, the relative motion between multiple frames of sensor data is calculated using the GNSS data, IMU data, and vehicle chassis wheel speed received by the sensor system 102 of the vehicle 10, where the multi-frame sensor data includes the sensor data at the first time point T1.
  • In step S546, according to the calculated relative motion, the multiple frames of sensor data are superimposed to form the laser point cloud data of the area around the vehicle 10 (that is, the local geographic area containing the predicted position of the vehicle 10) in the local spatial coordinate system of the vehicle 10 (for ease of description, referred to below as laser point cloud data P1).
  • Fig. 5D shows an exemplary specific implementation of the vehicle position and posture correction method according to an embodiment of the present application.
  • the vehicle position and posture correction method 550 shown in FIG. 5D is implemented by the correction module 512, which corrects the position and posture of the vehicle 10 at the first time point T1 predicted by the vehicle position and posture prediction method 540 of FIG. 5C to obtain The corrected position and corrected posture of the vehicle 10 at the first time point T1.
  • the correction method 550 may include step S552-step S576.
  • In step S552, in the space around the predicted position and posture of the vehicle 10 at the first time point T1, multiple spatial samples of position and posture are taken at a certain spatial sampling interval to obtain multiple position and posture groups, where each position and posture group includes the sampling position and sampling posture obtained in one spatial sample.
  • the range of the space around the position and posture of the vehicle 10 at the first time point T1 is not fixed but is estimated from the speed of the vehicle 10 at the first time point T1, so there is no need to determine and search for the range through multiple iterations.
  • In step S554, the three-dimensional spatial transformation from the local spatial coordinate system of the vehicle 10 to the global spatial coordinate system is generated for each position and posture group, using the sampling position and sampling posture included in that group, thereby obtaining the three-dimensional spatial transformations of the multiple position and posture groups.
  • In step S556, using the three-dimensional spatial transformation of each position and posture group, the laser point cloud data P1 of the area around the vehicle 10 in the local spatial coordinate system of the vehicle 10 is transformed into the global space coordinate system, to obtain the transformed laser point cloud data of that position and posture group.
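The per-sample transform of step S556 is an ordinary rigid-body transform of the point cloud. The sketch below is illustrative only and simplifies the pose: a full posture also has roll and pitch, but for a ground vehicle a yaw-only rotation plus translation conveys the idea; the function name is hypothetical.

```python
import math

def transform_points(points, x, y, z, yaw):
    """Apply the 3-D rigid transform of one sampled pose to a point cloud.

    (x, y, z) is the sampling position and yaw the sampling heading; each
    point is rotated about the vertical axis and then translated, moving it
    from the vehicle's local coordinate system into the global one.
    """
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * px - s * py + x, s * px + c * py + y, pz + z)
            for px, py, pz in points]
```

Each position and posture group yields its own transformed copy of the point cloud P1, from which its real-time positioning layer is then built.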
  • In step S558, using the same method as the positioning layer generation method shown in FIG. 5B, the transformed laser point cloud data of each position and posture group is used to generate the real-time positioning layer associated with that group, thereby obtaining the real-time positioning layers associated with the multiple position and posture groups.
  • In step S560, the voxels of the first type and the voxels of the second type are found for each position and posture group, where a voxel of the first type is one whose weight value stored in the real-time positioning layer associated with the position and posture group is the same as the weight value stored in the positioning layer M1.
  • a voxel of the second type is one whose weight value stored in the real-time positioning layer associated with the position and posture group differs from the weight value stored in the positioning layer M1.
  • if a voxel has a weight value stored in only one of the real-time positioning layer associated with the position and posture group and the positioning layer M1, it also belongs to the second type.
  • In step S562, the first total weight value and the second total weight value of each position and posture group are calculated, where the first total weight value equals the sum of the weight values stored in the positioning layer M1 for the group's voxels of the first type.
  • the second total weight value equals the sum of the weight values stored in the positioning layer M1 for the group's voxels of the second type.
  • step S564 the difference between the first total weight value and the second total weight value of each position and posture group is calculated as the similarity score of the position and posture group, thereby obtaining the similarity scores of the multiple position and posture groups.
  • the similarity score of each position and posture group indicates the degree of similarity between the real-time positioning layer and the positioning layer M1 associated with the position and posture group.
  • In step S566, those position and posture groups whose similarity scores are greater than a specified threshold are selected from the multiple position and posture groups.
  • each selected position and posture group is referred to below as a position and posture group C1.
  • In step S568, using the similarity score of each selected position and posture group C1 as a weight, least-squares weighted fitting is performed on the sampling positions and sampling postures included in the selected position and posture groups C1, to obtain the corrected position and corrected posture of the vehicle 10 at the first time point T1.
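For independent pose components, the score-weighted least-squares fit of step S568 reduces to a score-weighted mean. The sketch below is illustrative and simplified: each sample is (score, x, y, yaw), and a small yaw spread is assumed (a general solution would average angles circularly); the function name is hypothetical.

```python
def weighted_pose(samples):
    """Fuse the selected pose samples C1 into one corrected pose.

    Each sample is a tuple (score, x, y, yaw). Minimizing the
    score-weighted squared error for each component independently yields
    the score-weighted mean of that component.
    """
    total = sum(s[0] for s in samples)
    dims = len(samples[0]) - 1
    return tuple(sum(s[0] * s[i + 1] for s in samples) / total
                 for i in range(dims))
```

A sample with a similarity score three times larger pulls the corrected pose three times harder toward its position and heading.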
  • In step S572, based on the corrected position and corrected posture of the vehicle 10 at the first time point T1, and the motion of the vehicle 10 from the first time point T1 to the current time point determined from the sensor data received by the sensor system 102 of the vehicle 10, the corrected position and corrected posture of the vehicle 10 at the current time point are determined as the positioning result of the vehicle 10 at the current time point.
  • the corrected position and the corrected posture of the vehicle 10 at the current point in time will be referred to as the corrected position AP and the corrected posture AZ in the following.
  • In step S574, the sensor data of the lidar of the vehicle 10 at the corrected position AP is acquired.
  • In step S576, map update data is generated, which includes the corrected position AP and the corrected posture AZ of the vehicle 10 and the sensor data acquired in step S574.
  • the generated map update data can be provided, for example, to a map update device to update the built positioning layer.
  • the vehicle position and posture prediction method 540 and the vehicle position and posture correction method 550 may be executed periodically, for example, but not limited to, to continuously position the vehicle 10.
  • Fig. 5E shows an exemplary specific implementation of the positioning layer update method according to the embodiment of the present application.
  • the positioning layer update method 580 shown in FIG. 5E is implemented by the offline update module 514, which uses the map update data generated by the vehicle position and posture correction method 550 to update the constructed positioning layer.
  • the updating method 580 may include step S582-step S592.
  • In step S582, if the corrected position of the vehicle 10 included in the map update data (referred to below as the corrected position KK) lies within the geographic area covered by the pre-built positioning layer, the sensor data included in the map update data is used to form the laser point cloud data, in the local spatial coordinate system of the vehicle 10, of the local geographic area containing the corrected position KK (referred to below as laser point cloud data M2 and as the local geographic area A1, respectively).
  • In step S584, the laser point cloud data M2 is transformed into the global space coordinate system using the three-dimensional spatial transformation from the local spatial coordinate system of the vehicle 10 to the global space coordinate system, generated from the corrected position and corrected posture of the vehicle 10 included in the map update data.
  • the transformed laser point cloud data is referred to below as laser point cloud data M3.
  • In step S586, the laser point cloud data M3 is voxelized to obtain multiple voxels.
  • In step S588, the hash map value and the weight value of each of the multiple voxels are calculated, where the weight value of each voxel equals the number of laser points contained in the voxel, and the hash map value of each voxel, used as the identifier of the voxel, is obtained by hash-mapping the position information of the voxel in the global space coordinate system with a known hash mapping function.
  • In step S590, those voxels whose weight value is greater than zero are selected from the multiple voxels.
  • In step S592, the weight value of each voxel in the positioning layer of the local geographic area A1 is updated using the weight values of the selected voxels and their hash map values serving as identifiers.
  • for each selected voxel, its hash map value serving as the identifier is used to index the positioning layer of the local geographic area A1, to check whether that positioning layer stores a weight value for the voxel.
  • if a weight value is stored and it equals the calculated weight value, the weight value of the voxel stored in the positioning layer of the local geographic area A1 is not updated.
  • if a weight value is stored but differs from the calculated weight value, the calculated weight value of the voxel is used to replace the weight value of the voxel stored in the positioning layer of the local geographic area A1.
  • if no weight value of the voxel is stored in the positioning layer of the local geographic area A1, the hash map value serving as the identifier and the weight value of the voxel are added to the positioning layer of the local geographic area A1 in the form of a key-value pair. In most cases, this situation arises because static targets such as buildings and road signs were added in the local geographic area A1 after its positioning layer was previously generated. Therefore, by adding previously unstored voxel identifiers and weight values to the positioning layer of the local geographic area A1, the weight values of static targets can be added to the positioning layer, so that the positioning layer matches the real environment of the corresponding geographic area, improving the reliability of the positioning layer.
  • it is also checked whether the positioning layer of the local geographic area A1 stores the weight values of voxels that do not appear among the selected voxels. If the check shows that the weight value of such a voxel is stored, the identifier and weight value of that voxel are deleted from the positioning layer of the local geographic area A1.
  • the presence of such voxels in the positioning layer of the local geographic area A1 is usually caused by moving objects, such as vehicles and/or pedestrians, that were present in the local geographic area A1 when its positioning layer was previously generated.
  • by deleting them, the weight values of moving targets can be removed from the positioning layer, so that the positioning layer matches the real environment of the corresponding geographic area, improving the reliability of the positioning layer.
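The replace/add/delete logic of steps S586 to S592 can be sketched compactly. The following is an illustrative Python sketch; a dict stands in for the hash-table layer of the local geographic area A1 and the function name is hypothetical.

```python
def update_layer(layer, fresh_weights):
    """Update a stored positioning layer in place from freshly computed
    voxel weights.

    Only fresh voxels with weight greater than zero are kept. Voxels whose
    weight changed are replaced, voxels seen for the first time (e.g. newly
    built static targets) are inserted, and stored voxels absent from the
    fresh data (e.g. moving vehicles or pedestrians captured during the
    original survey) are deleted.
    """
    fresh = {vid: w for vid, w in fresh_weights.items() if w > 0}
    for vid in list(layer):
        if vid not in fresh:
            del layer[vid]      # stale voxel left by a moving object
    layer.update(fresh)         # replaces changed weights, adds new voxels
```

After the update, the layer contains exactly the positive-weight voxels observed at the corrected position, so it tracks the current environment of the area.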
  • Fig. 6 shows a schematic structural diagram of a computer device according to an embodiment of the present application.
  • the computer device 601 shown in FIG. 6 may be, but is not limited to, the computer device 108 in FIG. 1, for example.
  • the computer device 601 may include at least one processor 602, at least one memory 604, a communication interface 606, and a bus 608.
  • the processor 602, the memory 604, and the communication interface 606 are connected through a bus 608.
  • the communication interface 606 is used for communication between the computer device 601 and other devices.
  • the memory 604 is used to store program codes and data.
  • the processor 602 is configured to execute the program code in the memory 604 to execute the method described in FIGS. 2A-2E or 5C-5D.
  • the memory 604 may be a storage unit inside the processor 602, or an external storage unit independent of the processor 602, or a component including a storage unit inside the processor 602 and an external storage unit independent of the processor 602.
  • Fig. 7 shows a schematic structural diagram of a map generating device according to an embodiment of the present application.
  • the map generating device 701 shown in FIG. 7 may be, for example, but not limited to the map generating device 30 in FIG. 1.
  • the map generating device 701 may include at least one processor 702, at least one memory 704, an input/output interface 706, and a bus 708.
  • the processor 702, the memory 704, and the input/output interface 706 are connected through a bus 708.
  • the input and output interface 706 is used to receive data and information from the outside and output data and information to the outside.
  • the input and output interface 706 may include, for example, a mouse, a keyboard, a display, and the like.
  • the memory 704 is used to store program codes and data.
  • the processor 702 is configured to execute the program code in the memory 704 to execute the method described in FIG. 3A or FIG. 5B.
  • the memory 704 may be a storage unit inside the processor 702, an external storage unit independent of the processor 702, or a component including a storage unit inside the processor 702 and an external storage unit independent of the processor 702.
  • Fig. 8 shows a schematic structural diagram of a map updating device according to an embodiment of the present application.
  • the map updating device 801 shown in FIG. 8 may be, for example, but not limited to the map updating device 40 in FIG. 1.
  • the map updating device 801 may include at least one processor 802, at least one memory 804, an input/output interface 806, and a bus 808.
  • the processor 802, the memory 804, and the input/output interface 806 are connected through a bus 808.
  • the input and output interface 806 is used to receive data and information from the outside and output data and information to the outside.
  • the input and output interface 806 may include, for example, a mouse, a keyboard, a display, and the like.
  • the memory 804 is used to store program codes and data.
  • the processor 802 is configured to execute the program code in the memory 804 to execute the method described in FIG. 3B or FIG. 5E.
  • the memory 804 may be a storage unit inside the processor 802, an external storage unit independent of the processor 802, or a component including a storage unit inside the processor 802 and an external storage unit independent of the processor 802.
  • the processors 602, 702, and 802 may be, but are not limited to, general-purpose processors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memories 604, 704, and 804 may be, for example, but not limited to, random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, magnetic disks, or optical disks.
  • the size of the sequence numbers of the above-mentioned processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several program instructions used to make a computer device (which may be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as, but not limited to, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

Methods and apparatuses for positioning a vehicle (10), generating a positioning layer, and updating a positioning layer. The method for positioning the vehicle (10) includes: acquiring, using at least data from a satellite navigation system (110) and an inertial measurement unit (112), a first position and a first posture of the vehicle (10) at a first time point (S204); acquiring, using sensor data of a lidar (114) of the vehicle (10), first laser point cloud data of a first local geographic area containing the first position (S208); and correcting the first position and the first posture using the first laser point cloud data and a pre-built first positioning layer of the first local geographic area, to obtain a corrected position and a corrected posture of the vehicle (10) at the first time point (S212), where the first positioning layer is configured to store the identifiers and weight values of multiple voxels, the multiple voxels include at least a part of the voxels obtained by voxelizing second laser point cloud data of the first local geographic area acquired when the first positioning layer was built, and the weight value of a voxel represents the probability that the voxel is occupied by a geographic environment object. The methods and apparatuses can improve positioning stability and reduce the storage space required by the positioning layer.

Description

Method and apparatus for vehicle positioning, and method and apparatus for positioning layer generation

Technical Field

The present application relates to the field of vehicle positioning, and in particular to a vehicle positioning method and apparatus, a positioning layer generation method and apparatus, and a positioning layer updating method and apparatus.
Background

To achieve autonomous driving from point A to point B in complex scenarios, an autonomous vehicle needs a positioning system that acquires in real time the relative position of the vehicle with respect to its surroundings and the reference information contained in those surroundings, for formulating complex driving strategies.

A basic on-board positioning system combines a global navigation satellite system (GNSS), an inertial navigation system (INS), and vehicle chassis wheel speed to perform highly dynamic real-time positioning while the vehicle is moving. On this basis, if a real-time kinematic (RTK) relative positioning technique is used to correct the positioning data, lane-level accurate positioning can be achieved in large-scale scenarios.

However, the positioning accuracy of a basic on-board positioning system is easily affected by weather, the number of visible satellites, and other factors, so the absolute accuracy of the GNSS positioning it obtains at the same location at different times is uncertain and the positioning result varies. In addition, in GNSS-denied areas such as underground garages, urban canyons, and tunnels, the basic on-board positioning system can only perform dead reckoning based on the INS and/or the vehicle chassis wheel speed, which accumulates error and eventually causes the vehicle to deviate from its intended lane, degrading the positioning stability of the basic on-board positioning system.

At present, map-relative positioning schemes are mainly used to improve positioning stability and accuracy. The basic principle of a map-relative positioning scheme is to use sensors such as lidar to acquire environment data around the vehicle and to position the vehicle by matching the acquired environment data against a pre-built positioning layer. A map-relative scheme uses map points as references; since every point in the map has unique map coordinates, such a scheme can eliminate the uncertainty of GNSS positioning and thereby improve positioning accuracy. Moreover, when the vehicle enters a GNSS-denied area, a map-relative scheme matches environment data collected in real time by the sensors against the positioning layer, which eliminates accumulated error and improves positioning stability.

However, the positioning layers used by existing map-relative positioning schemes contain a large amount of data, so storing them requires considerable storage space, and their positioning stability is still not high enough.
Summary

In view of the above problems in the prior art, embodiments of the present application provide a vehicle positioning method and apparatus, a positioning layer generation method and apparatus, and a positioning layer updating method and apparatus, which can reduce the storage space required for the positioning layer and improve positioning stability.
A first aspect of the present application provides a vehicle positioning method, including: predicting, using at least data from a satellite navigation system and an inertial measurement unit, a first position and a first posture of a vehicle at a first time point; acquiring, using sensor data of the vehicle's lidar, first laser point cloud data of a first local geographic area containing the first position; and correcting the first position and the first posture using the first laser point cloud data and a pre-built first positioning layer of the first local geographic area, to obtain a corrected position and a corrected posture of the vehicle at the first time point, where the first positioning layer is configured to store the identifiers and weight values of multiple voxels, the multiple voxels include at least a part of the voxels obtained by voxelizing second laser point cloud data of the first local geographic area acquired when the first positioning layer was built, and the weight value of a voxel represents the probability that the voxel is occupied by a geographic environment object. Here, the positioning layer used by the method stores weight values representing how likely voxels are to be occupied by geographic environment objects, which reflects the geospatial structure of the geographic area; it stores neither the specific semantics (i.e., specific categories) of geographic environment objects nor their laser reflection intensity, which is easily affected by environmental changes. The method therefore relies on the geospatial structure of the geographic area, rather than on the specific semantics of geographic environment objects or their laser reflection intensity, to position the vehicle. Since the geospatial structure of a geographic area is insensitive to object semantics and to environmental changes (e.g., weather, road wear over time), the method achieves essentially the same positioning performance whether the area is rich or poor in types of geographic environment objects, and regardless of environmental changes, thereby improving positioning stability and positioning performance.
In one possible design, the multiple voxels include those voxels, among the voxels obtained by voxelizing the second laser point cloud data, whose weight value is greater than zero, and the weight value of each of the multiple voxels is represented by the number of laser points contained in that voxel. Here, the first positioning layer is configured to store only the identifiers and weight values of voxels whose weight value is greater than zero, and not those of voxels whose weight value equals zero, i.e., information that does not help positioning is not stored; this reduces and compresses the data volume of the positioning layer and thus lowers the storage space required to store it. Moreover, the more likely a voxel is to be occupied by a geographic environment object, the more laser points it usually contains; therefore, representing a voxel's weight value by the number of laser points it contains is a reliable and simple indicator of how likely the voxel is to be occupied by a geographic environment object.
In one possible design, the first positioning layer is configured to store the identifiers and weight values of the multiple voxels in pairs in the form of a hash table, and the identifier of each voxel is represented by the hash map value of the voxel's position information in the spatial coordinate system used by the first positioning layer. Here, a hash table stores keys and values in pairs so that values can be directly accessed or indexed by key, allowing stored values to be looked up quickly. Structuring the first positioning layer as a hash table therefore speeds up the indexing/matching of the positioning layer and improves positioning efficiency.
In one possible design, correcting the first position and the first posture includes: performing multiple spatial samplings of position and posture in the space around the first position and the first posture to obtain multiple position and posture groups, where each position and posture group includes the sampling position and sampling posture obtained in one sampling; computing similarity scores of the multiple position and posture groups, where the similarity score of any position and posture group indicates the degree of similarity between a second positioning layer associated with that group and the first positioning layer, the second positioning layer is generated from third laser point cloud data in the same way as the first positioning layer, and the third laser point cloud data is obtained by transforming the first laser point cloud data from a first spatial coordinate system associated with the vehicle to a second spatial coordinate system associated with the first positioning layer, using a three-dimensional spatial transformation generated from the sampling position and sampling posture included in that group; and determining the corrected position and the corrected posture based at least on the similarity scores of the multiple position and posture groups. Here, correcting the vehicle's first position and first posture with the help of multiple groups of sampling positions and sampling postures drawn from the surrounding space makes it possible to determine the corrected position and corrected posture of the vehicle quite reliably.
In one possible design, determining the corrected position and the corrected posture includes: selecting, from the multiple position and posture groups, the first position and posture groups whose similarity scores are greater than a first threshold; and performing weighted fitting of the sampling positions and sampling postures included in the selected first position and posture groups, using their similarity scores as weights, to obtain the corrected position and the corrected posture. Here, selecting only the sampling positions and postures of groups whose similarity scores exceed the first threshold effectively removes the influence of noisy sampling positions and postures on vehicle positioning, so higher-precision positioning results can be obtained.
In one possible design, computing the similarity scores of the multiple position and posture groups includes: finding the voxels of a first type and the voxels of a second type for each position and posture group, where a voxel of the first type has the same weight value stored in the second positioning layer associated with the group as in the first positioning layer, while a voxel of the second type has different weight values stored in the second positioning layer associated with the group and in the first positioning layer; computing a first total weight value and a second total weight value for each group, where the first total weight value equals the sum of the weight values, stored in the first positioning layer, of the group's voxels of the first type, and the second total weight value equals the sum of the weight values, stored in the first positioning layer, of the group's voxels of the second type; and computing, for each group, the difference between its first total weight value and its second total weight value as the group's similarity score, thereby obtaining the similarity scores of the multiple position and posture groups. Here, the more similar two positioning layers are, the more voxels of the first type (identical stored weights) and the fewer voxels of the second type (differing stored weights) they usually share, so the difference between the first total weight value and the second total weight value is usually larger; conversely, the less similar they are, the smaller this difference usually is. Using the difference between the first and second total weight values as the similarity score of a position and posture group therefore accurately reflects how similar the real-time positioning layer, built from the laser point cloud data acquired at the vehicle and associated with that group, is to the pre-built positioning layer of the corresponding local geographic area.
In one possible design, the method further includes: acquiring sensor data of the vehicle's lidar at a second position; and generating map update data including the second position, a second posture, and the acquired sensor data, where the second position and the second posture are the corrected position and corrected posture of the vehicle at the current time point, determined from the corrected position and corrected posture of the vehicle at the first time point and from the motion of the vehicle from the first time point to the current time point. Here, generating map update data after determining the vehicle's corrected position and posture, for use in updating the occupancy weights of the voxels stored in the pre-built positioning layer of the corresponding local geographic area, lowers the accuracy requirements on the lidar sensor data when building the positioning layer and reduces or eliminates the adverse effect of moving objects (e.g., moving vehicles or pedestrians) on the layer during its construction, thereby improving the accuracy of positioning results relative to the positioning layer.
A second aspect of the present application provides a positioning layer generation method, including: acquiring laser point cloud data of a second geographic area; voxelizing the laser point cloud data to obtain multiple voxels; computing weight values of the multiple voxels, the weight value of each voxel indicating the probability that the voxel is occupied by a geographic environment object; and storing the identifiers and weight values of at least some of the multiple voxels, to obtain a positioning layer of the second geographic area. Here, the positioning layer generated by the method stores weight values representing how likely voxels are to be occupied by geographic environment objects, which reflects the geospatial structure of the geographic area; it stores neither the specific semantics (i.e., specific categories) of geographic environment objects nor their laser reflection intensity, which is easily affected by environmental changes. Positioning with a layer generated by this method therefore relies on the geospatial structure of the geographic area rather than on object semantics or laser reflection intensity. Since geospatial structure is insensitive to object semantics and to environmental changes (e.g., weather, road wear over time), a layer generated by this method achieves essentially the same positioning performance whether the area is rich or poor in types of geographic environment objects, and regardless of environmental changes, thereby improving positioning stability and positioning performance.
In one possible design, computing the weight values of the multiple voxels includes computing the number of laser points contained in each of the multiple voxels as that voxel's weight value, and the at least some voxels are the voxels among the multiple voxels whose weight value is greater than zero. Here, the more likely a voxel is to be occupied by a geographic environment object, the more laser points it usually contains, so the number of laser points contained in a voxel is a reliable and simple measure of how likely the voxel is to be occupied. In addition, the positioning layer is configured to store only the identifiers and weight values of voxels whose weight value is greater than zero, omitting those of voxels with zero weight, i.e., information that does not help positioning; this reduces and compresses the data volume of the positioning layer and lowers the storage space it requires.
In one possible design, the positioning layer is configured to store the identifiers and weight values of the at least some voxels in pairs in the form of a hash table, and the identifier of each voxel is represented by the hash map value of the voxel's position information in the spatial coordinate system associated with the positioning layer. Here, a hash table stores keys and values in pairs so that values can be directly accessed or indexed by key, allowing stored values to be looked up quickly. Structuring the positioning layer as a hash table therefore speeds up its indexing/matching and improves positioning efficiency.
A third aspect of the present application provides a positioning layer updating method, including: generating fourth laser point cloud data of a third local geographic area using map update data, where the map update data includes a third position and a third posture of a vehicle and first sensor data of the vehicle's lidar at the third position, the third local geographic area is a local geographic area containing the third position, and the fourth laser point cloud data is formed from the first sensor data; voxelizing fifth laser point cloud data to obtain multiple third voxels, where the fifth laser point cloud data is obtained by transforming the fourth laser point cloud data from a spatial coordinate system associated with the vehicle to a spatial coordinate system associated with a third positioning layer, using a three-dimensional spatial transformation generated from the third position and the third posture, and where the third positioning layer is a previously built positioning layer of the third local geographic area that stores the identifiers and weight values of at least some of the multiple third voxels, the weight value of each voxel representing the probability that the voxel is occupied by a geographic environment object; computing weight values of the multiple third voxels; and updating the weight values of the voxels stored in the third positioning layer using the computed weight values of the multiple third voxels. Here, using sensor data collected by the vehicle's lidar to update the occupancy weights of the voxels stored in the pre-built positioning layer of the corresponding local geographic area lowers the accuracy requirements on the lidar sensor data when the positioning layer is built, thereby improving the accuracy of positioning results relative to the positioning layer.
In one possible design, the third positioning layer stores the identifiers and weight values of those of the multiple third voxels whose weight value is greater than zero, and updating the weight values of the voxels stored in the third positioning layer includes: selecting, from the multiple third voxels, those whose computed weight value is greater than zero; for each selected voxel, if the third positioning layer stores a weight value for the voxel and the stored weight value differs from the computed weight value, replacing the weight value stored in the third positioning layer with the computed weight value; for each selected voxel, if the third positioning layer stores no weight value for the voxel, storing the voxel's identifier and computed weight value in the third positioning layer; and, if the third positioning layer stores the identifier and weight value of a fourth voxel that does not appear among the selected voxels, deleting the identifier and weight value of the fourth voxel from the third positioning layer. Here, by adding the weight values of previously unstored voxels to the positioning layer and deleting the weight values of voxels that do not appear among the selected voxels, the weight values of static targets (e.g., buildings, road signs) can be added and the weight values of moving targets (e.g., moving vehicles or pedestrians) deleted, reducing or eliminating the adverse effect of moving targets on the positioning layer during its construction, so that the positioning layer matches the real environment of the corresponding geographic area and its reliability improves.
A fourth aspect of the present application provides a vehicle positioning apparatus, including: a prediction module configured to acquire, using at least data from a satellite navigation system and an inertial measurement unit, a first position and a first posture of a vehicle at a first time point; an acquisition module configured to acquire, using sensor data of the vehicle's lidar, first laser point cloud data of a first local geographic area containing the first position; and a correction module configured to correct the first position and the first posture using the first laser point cloud data and a pre-built first positioning layer of the first local geographic area, to obtain a corrected position and a corrected posture of the vehicle at the first time point, where the first positioning layer is configured to store the identifiers and weight values of multiple voxels, the multiple voxels include at least a part of the voxels obtained by voxelizing second laser point cloud data of the first local geographic area acquired when the first positioning layer was built, and the weight value of a voxel represents the probability that the voxel is occupied by a geographic environment object. As with the method of the first aspect, the apparatus relies on the geospatial structure of the geographic area, rather than on the specific semantics of geographic environment objects or their laser reflection intensity, so it achieves essentially the same positioning performance regardless of the variety of geographic environment objects or of environmental changes, thereby improving positioning stability and positioning performance.
In one possible design, the multiple voxels include those voxels, among the voxels obtained by voxelizing the second laser point cloud data, whose weight value is greater than zero, and the weight value of each of the multiple voxels is represented by the number of laser points contained in that voxel.
In one possible design, the first positioning layer is configured to store the identifiers and weight values of the multiple voxels in pairs in the form of a hash table, and the identifier of each voxel is represented by the hash map value of the voxel's position information in the spatial coordinate system used by the first positioning layer.
In one possible design, the correction module includes: a sampling module configured to perform multiple spatial samplings of position and posture in the space around the first position and the first posture, to obtain multiple position and posture groups, where each group includes the sampling position and sampling posture obtained in one sampling; a first computation module configured to compute the similarity scores of the multiple position and posture groups, where the similarity score of any group indicates the degree of similarity between the second positioning layer associated with that group and the first positioning layer, the second positioning layer is generated from third laser point cloud data in the same way as the first positioning layer, and the third laser point cloud data is obtained by transforming the first laser point cloud data from the first spatial coordinate system associated with the vehicle to the second spatial coordinate system associated with the first positioning layer, using a three-dimensional spatial transformation generated from the sampling position and sampling posture included in that group; and a determination module configured to determine the corrected position and the corrected posture based at least on the similarity scores of the multiple position and posture groups.
In one possible design, the determination module includes: a selection module configured to select, from the multiple position and posture groups, the first position and posture groups whose similarity scores are greater than a first threshold; and a second computation module configured to perform weighted fitting of the sampling positions and sampling postures included in the selected first position and posture groups, using their similarity scores as weights, to obtain the corrected position and the corrected posture.
In one possible design, the first computation module includes: a search module configured to find the voxels of the first type and the voxels of the second type for each position and posture group, where a voxel of the first type has the same weight value stored in the second positioning layer associated with the group as in the first positioning layer, while a voxel of the second type has different weight values stored in the second positioning layer associated with the group and in the first positioning layer; a fourth computation module configured to compute the first total weight value and the second total weight value of each group, where the first total weight value equals the sum of the weight values, stored in the first positioning layer, of the group's voxels of the first type, and the second total weight value equals the sum of the weight values, stored in the first positioning layer, of the group's voxels of the second type; and a fifth computation module configured to compute the difference between the first and second total weight values of each group as its similarity score, thereby obtaining the similarity scores of the multiple position and posture groups.
In one possible design, the apparatus further includes: an obtaining module configured to acquire sensor data of the vehicle's lidar at a second position; and a generation module configured to generate map update data including the second position, a second posture, and the acquired sensor data, where the second position and the second posture are the position and posture of the vehicle at the current time point, determined from the corrected position and corrected posture of the vehicle at the first time point and from the motion of the vehicle from the first time point to the current time point.
A fifth aspect of the present application provides a positioning layer generation apparatus, including: an acquisition module configured to acquire laser point cloud data of a second geographic area; a voxelization module configured to voxelize the laser point cloud data to obtain multiple voxels; a computation module configured to compute the weight values of the multiple voxels, the weight value of each voxel indicating the probability that the voxel is occupied by a geographic environment object; and a storage module configured to store the identifiers and weight values of at least some of the multiple voxels, to obtain a positioning layer of the second geographic area. As with the method of the second aspect, positioning with a layer generated by the apparatus relies on the geospatial structure of the geographic area, which is insensitive to the specific semantics of geographic environment objects and to environmental changes (e.g., weather, road wear over time), so essentially the same positioning performance is obtained regardless of the variety of geographic environment objects or of environmental changes, thereby improving positioning stability and positioning performance.
In one possible design, the computation module is further configured to compute the number of laser points contained in each of the multiple voxels as that voxel's weight value, and the at least some voxels are the voxels among the multiple voxels whose weight value is greater than zero.
In one possible design, the positioning layer is configured to store the identifiers and weight values of the at least some voxels in pairs in the form of a hash table, and the identifier of each voxel is represented by the hash map value of the voxel's position information in the spatial coordinate system associated with the positioning layer.
A sixth aspect of the present application provides a positioning layer updating apparatus, including: a generation module configured to generate fourth laser point cloud data of a third local geographic area using map update data, where the map update data includes a third position and a third posture of a vehicle and first sensor data of the vehicle's lidar at the third position, the third local geographic area is a local geographic area containing the third position, and the fourth laser point cloud data is formed from the first sensor data; a voxelization module configured to voxelize fifth laser point cloud data to obtain multiple third voxels, where the fifth laser point cloud data is obtained by transforming the fourth laser point cloud data from a spatial coordinate system associated with the vehicle to a spatial coordinate system associated with a third positioning layer, using a three-dimensional spatial transformation generated from the third position and the third posture, and where the third positioning layer is a previously built positioning layer of the third local geographic area that stores the identifiers and weight values of at least some of the multiple third voxels, the weight value of each voxel representing the probability that the voxel is occupied by a geographic environment object; a computation module configured to compute the weight values of the multiple third voxels; and an update module configured to update the weight values of the voxels stored in the third positioning layer using the computed weight values of the multiple third voxels.
In one possible design, the third positioning layer stores the identifiers and weight values of those of the multiple third voxels whose weight value is greater than zero, and the update module includes: a selection module configured to select, from the multiple third voxels, those whose computed weight value is greater than zero; a replacement module configured, for each selected voxel, to replace the weight value stored in the third positioning layer with the computed weight value if the layer stores a weight value for the voxel and the stored and computed values differ; a storage module configured, for each selected voxel, to store the voxel's identifier and computed weight value in the third positioning layer if the layer stores no weight value for the voxel; and a deletion module configured to delete, from the third positioning layer, the identifier and weight value of a fourth voxel if the layer stores such a voxel that does not appear among the selected voxels.
本申请的第七方面提供一种计算机设备,包括:总线;通信接口,与所述总线连接;至少一个处理器,其与所述总线连接;以及,至少一个存储器,其与所述总线连接并存储有程序指令,所述程序指令当被所述至少一个处理器执行时使得所述至少一个处理器执行前述第一方面所述的方法。
本申请的第八方面提供一种地图生成设备,包括:总线;输入输出接口,与所述总线连接;至少一个处理器,与所述总线连接;以及,至少一个存储器,其与所述总线连接并存储有程序指令,所述程序指令当被所述至少一个处理器执行时使得所述至少一个处理器执行前述第二方面所述的方法。
本申请的第九方面提供一种地图更新设备,包括:总线;输入输出接口,其与所述总线连接;至少一个处理器,其与所述总线连接;以及,至少一个存储器,其与所述总线连接并存储有程序指令,所述程序指令当被所述至少一个处理器执行时使得所述至少一个处理器执行前述第三方面所述的方法。
本申请的第十方面提供一种计算机可读存储介质,其上存储有程序指令,所述程序指令当被计算机执行时使得所述计算机执行前述第一方面、第二方面或第三方面所述的方法。
本申请的第十一方面提供一种计算机程序,其包括有程序指令,所述程序指令当被计算机执行时使得所述计算机执行前述第一方面、第二方面或第三方面所述的方法。
本申请的第十二方面提供一种车辆,包括:传感器系统,其至少包括全球导航定位系统接收机、惯性测量单元和激光雷达;通信系统,用于所述车辆与外部进行通信;以及,前述计算机设备。
附图说明
本申请的特征、特点、特性和优点通过以下结合附图的详细描述将变得更加显而易见。
图1示出了按照本申请的实施例的车辆定位、定位图层生成和定位图层更新所涉及的实施环境的示意图。
图2A示出了按照本申请的实施例的车辆定位的方法的示意流程图。
图2B示出了按照本申请的实施例的修正位置和姿态的方法的示意流程图。
图2C示出了按照本申请的实施例的确定修正位置和修正姿态的方法的示意流程图。
图2D示出了按照本申请的实施例的计算相似性得分的方法的示意流程图。
图2E示出了按照本申请的实施例的生成地图更新数据的方法的示意流程图。
图3A示出了按照本申请的实施例的定位图层生成的方法的流程图。
图3B示出了按照本申请的实施例的定位图层更新的方法的流程图。
图3C示出了按照本申请的实施例的更新体素的权重值的方法的流程图。
图4A示出了按照本申请的实施例的车辆定位的装置的示意图。
图4B示出了按照本申请的实施例的修正模块的示意图。
图4C示出了按照本申请的实施例的确定模块的示意图。
图4D示出了按照本申请的实施例的第一计算模块的示意图。
图4E示出了按照本申请的实施例的地图更新数据生成模块的示意图。
图4F示出了按照本申请的实施例的定位图层生成的装置的示意图。
图4G示出了按照本申请的实施例的定位图层更新的装置的示意图。
图4H示出了按照本申请的实施例的更新模块的示意图。
图5A示出了按照本申请的实施例的定位图层生成、定位图层更新和车辆定位的系统的一种示例性具体实现。
图5B示出了按照本申请的实施例的定位图层生成方法的一种示例性具体实现。
图5C示出了按照本申请的实施例的车辆位置姿态预测方法的一种示例性具体实现。
图5D示出了按照本申请的实施例的车辆位置姿态修正方法的一种示例性具体实现。
图5E示出了按照本申请的实施例的定位图层更新方法的一种示例性具体实现。
图6示出了按照本申请的实施例的计算机设备的结构示意图。
图7示出了按照本申请的实施例的地图生成设备的结构示意图。
图8示出了按照本申请的实施例的地图更新设备的结构示意图。
具体实施方式
以下将参考所讨论的细节来描述本申请的各种实施方案和方面,附图将示出所述各种实施方案。下列描述和附图是对本申请的说明,而不应当解释为限制本申请。许多特定细节被描述以便提供对本申请的各种实施方案的全面理解。然而,在某些情况下,并未描述众所周知的或常规的细节,以便提供对本申请的实施方案的简洁讨论。
本说明书中对“一个实施例”或“实施例”的提及意味着结合该实施例所描述的特定特征、结构或特性可以包括在本申请的至少一个实施例中。短语“在一些实施例中”在本说明书中各个地方的出现不意味着全部指相同的实施例。
此外,在本申请中,各种操作将以最有助于理解说明性实施例的方式被描述为多个彼此分离的操作,然而,所描述的顺序不应被解释为暗示这些操作必须依赖所描述的顺序。例如,一些操作也可以并行地执行或以与所描述的顺序相反的顺序执行。
本申请所涉及的术语“第一”、“第二”等仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
应当理解,在本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”用于描述关联对象的关联关系,表示可以存在三种关系,例如,“A和/或B”可以表示:只存在A,只存在B以及同时存在A和B三种情况,其中A,B可以是单数或者复数。
文献1(US20190204092A1)公开了一种定位系统和方法,其采用了激光雷达、车载相机、全球定位系统、惯性测量单元、车辆控制器域网数据和预先建立的定位图层来对自动驾驶车辆进行定位。
文献1所公开的定位图层(HD map)510,对于具有不同语义的地理环境对象的数据使用不同的方式进行存储,其中,车道的数据使用道标地图(landmark map)520来存储,而道路周围的三维(3D)地理环境对象的数据以曲面(mesh)、3D点云或体素格(volumetric grid)的方式使用占用地图530来存储,其中,当在占用地图530中以体素格的方式来存储3D地理环境对象时,体素格数据中同时保存占用单元格和无法提供语义信息的空白单元格两者的数据,占用单元格的数据还额外存储了在占用单元格中存在的局部曲面的法向量,而空白单元格不包含该法向量。占用地图530的数据量较大,可达到1GB/英里。
文献1所公开的技术方案至少存在以下两个缺陷。第一,定位过程高度依赖传感器提供的具有语义的地理环境对象,因此,在诸如城市道路等此类具有丰富地理环境对象的场景中能取得较高的定位精度,而在诸如地下车库、隧道等此类缺乏丰富地理环境对象的场景中定位精度通常会下降,这会降低定位稳定性或定位能力。另外,由于定位过程对地理环境对象的高度依赖,因此,对地理环境对象的误检所导致的不同的地理环境对象之间的误匹配,也会降低定位稳定性或定位能力。第二,定位图层以曲面的方式、以3D点云的方式或以包含占用单元格和空白单元格两者的数据的体素格的方式来存储地理环境对象的数据,因此定位图层的数据量较大,这导致定位图层的存储需要较大的存储空间。
文献2(US20180143647A1)公开了一种定位系统和方法,其采用了激光雷达、全球定位系统、惯性测量单元、车辆底盘轮速和预先建立的定位图层来对自动驾驶车辆进行定位。
文献2所公开的技术方案至少存在以下两个缺陷。第一,以激光反射强度来表征地理环境对象的特征,这会降低定位稳定性。例如,对于路面而言,路面对激光的反射强度大小受路面磨损程度、天气、激光雷达数据质量和激光雷达安装位置影响,因此,同一车辆在不同时间下或不同天气条件下针对同一路面的激光反射强度是不同的,从而当同一车辆在没有标记的相同路面上行驶时,车辆在不同时间下或在不同天气条件下的定位结果会有很大的不同。第二,定位图层以图像的形式存储,因此定位图层的数据量较大,这导致定位图层的存储需要较大的存储空间。
考虑到现有技术的以上问题,本申请提出以下将详细描述的车辆定位、定位图层生成和定位图层更新的各个实施例。
在本申请中,术语“地理环境对象”是指车辆在其上行驶的地理区域中存在的各种能够反射激光的对象,例如但不限于,建筑物、路标、路牌、路面、树木、灌木丛、隧道天花板、隧道墙面、行人、车辆、动物、电线杆等。
术语“体素(voxel)”是体积元素(volume pixel)的简称,其是数字数据于三维空间分割上的最小单位,在概念上类似于二维空间的最小单位,即像素。
图1示出了按照本申请实施例的车辆定位、定位图层生成和定位图层更新所涉及的实施环境的示意图。如图1所示,该实施环境包括车辆10、地图生成设备30和地图更新设备40。
车辆10可以是常规车辆或自动驾驶车辆。自动驾驶车辆也可以称为无人驾驶车辆或智能驾驶车辆等,其可以在手动模式、全自主模式或部分自主模式下行驶。当被配置成在全自主模式或部分自主模式下行驶时,自动驾驶车辆可以在极少或没有来自驾驶员的控制输入的情况下在地理区域上自主行驶。
除了诸如发动机或电动机、车轮、方向盘、变速器这样的常用部件之外,车辆10还包括传感器系统102、通信系统104和计算机设备108。
传感器系统102至少包括全球导航卫星系统(global navigation satellite system:GNSS)接收机110、惯性测量单元(inertial measurement unit:IMU)112和激光雷达(light detection and ranging:LiDAR)114。
GNSS接收机110用于接收卫星信号以对车辆进行定位。GNSS接收机可以是全球定位系统(global positioning system:GPS)接收机、北斗系统接收机或者其他类型定位系统接收机。
IMU 112可以基于惯性加速度来感测车辆的位置和朝向变化。可选的,IMU 112可以是加速度计和陀螺仪的组合,用于测量车辆的角速度、加速度。
激光雷达114利用激光来感测车辆10所位于的地理环境中的物体。利用激光雷达114的传感器数据,可以形成地理区域的激光点云数据(也称为激光点云地图)。例如,激光雷达114可以包括一个或多个激光源、激光扫描器以及一个或多个检测器,以及其他系统组件。
例如,传感器系统102还可以包括底盘轮速传感器,其可以感测车辆10的底盘轮速。
通信系统104用于车辆10与外部进行通信,其可以直接或者经由通信网络与外部的一个或多个装置无线通信。例如,通信系统104可以使用第三代(3G)蜂窝通信(例如,码分多址(code division multiple access:CDMA)等)、第四代(4G)蜂窝通信(例如,长期演进(long term evolution:LTE)等)或者第五代(5G)蜂窝通信与外部装置通信。又例如,通信系统104可以利用WiFi和无线局域网(wireless local area network:WLAN)与外部装置通信。又例如,通信系统104可以利用红外链路、蓝牙技术或ZigBee与外部设备直接通信。
计算机设备108与传感器系统102和通信系统104连接。计算机设备108可以利用从传感器系统102接收的传感器数据和车辆10所处的局部地理区域的定位图层,来对车辆10进行定位。计算机设备108所使用的定位图层可以是预先存储在计算机设备108中的,或者,可以是通过通信系统104从诸如服务器等这样的外部设备获取的。
地图生成设备30可以是诸如服务器、工作站、台式计算机或笔记本电脑等这样的具有计算能力的电子设备,其用于利用地理区域的激光点云数据来生成用于对车辆进行定位的定位图层。所生成的定位图层可以存储在网络设备或云端等以供车辆在使用之前下载到本地或在使用时实时下载,或者,可以提供给用户、车辆制造商、车辆销售商或者车辆服务人员以将其存储在车辆中。所述激光点云数据是利用诸如车辆或无人机之类的检测设备在所述地理区域中通过激光雷达获取的传感器数据而形成的。
地图更新设备40可以是诸如服务器、工作站、台式计算机或笔记本电脑等这样的具有计算能力的电子设备,其用于对已构建的定位图层进行更新。
图2A示出了按照本申请的实施例的车辆定位的方法的示意流程图。图2A所示的方法200可以例如由车辆10的计算机设备108或任何其它合适的设备来执行,以相对地图来确定车辆10的位置和姿态。方法200包括步骤S204-步骤S212。
在步骤S204,至少利用来自卫星导航系统和惯性测量单元的数据,获取在第一时间点处车辆的第一位置和第一姿态。
例如,可以仅利用来自车辆10的GNSS接收机110和IMU 112两者的数据来获取车辆10的第一位置和第一姿态。又例如,可以利用来自车辆10的GNSS接收机110、IMU 112和底盘轮速传感器三者的数据来获取车辆10的第一位置和第一姿态。又例如,利用来自车辆10的GNSS接收机110和IMU 112以及其它任何合适的传感器的数据,或者,利用来自车辆10的GNSS接收机110、IMU 112和底盘轮速传感器以及其它任何合适的传感器的数据,来获取车辆10的第一位置和第一姿态。
已经存在许多已知的仅基于GNSS接收机和IMU两者,或者,基于GNSS接收机和IMU以及其它合适传感器,来获取包括车辆在内的运动对象的位置和姿态的定位技术,这里,可以利用这些定位技术中的任意合适的定位技术来获取车辆10的第一位置和第一姿态,本申请对此不做任何限制。
在步骤S208,利用所述车辆的激光雷达的传感器数据,获取包含所述第一位置的第一局部地理区域的第一激光点云数据。
例如,所述第一局部地理区域可以是以车辆10的第一位置为中心的局部区域。又例如,所述第一局部地理区域可以是包含车辆10的第一位置但未以其为中心的局部区域。
例如,所述第一激光点云数据可以仅利用在第一时间点处获得的激光雷达114的传感器数据来构建。
又例如,所述第一激光点云数据可以利用在所述第一时间点处和在所述第一时间点之前的一次或多次所获取的激光雷达114的传感器数据来形成,如图5C的方法540中的步骤S544-S546所示的。
在步骤S212,利用所述第一激光点云数据和预先构建的所述第一局部地理区域的第一定位图层来修正所述第一位置和所述第一姿态,以得到在所述第一时间点处所述车辆的修正位置和修正姿态,其中,所述第一定位图层构造成存储多个体素的标识和权重值,所述多个体素包括对在构建所述第一定位图层时获得的所述第一局部地理区域的第二激光点云数据进行体素化得到的各个体素中的至少一部分,以及,所述体素的权重值表示该体素被地理环境对象占据的可能程度。
所述多个体素可以是对所述第二激光点云数据进行体素化得到的各个体素中的所有体素或部分体素(例如,所述权重值大于零的那些体素)。
每一个体素的权重值例如可以由表示该体素中包含的激光点的数量来表示。或者,每一个体素的权重值可以利用其它合适的方式表示,例如但不限于,每一个体素的权重值可以由该体素中包含的激光点的数量与指定数量的比值来表示,所述指定数量是所述多个体素的所有体素中其包含的激光点的数量最大的那个体素所包含的激光点的数量。
体素的标识例如可以但不限于利用体素的位置信息来表示或基于体素的位置信息计算得到。体素的位置信息例如可以是体素在所述第一定位图层所应用的空间坐标系(例如,全局空间坐标系)中的经度坐标值、纬度坐标值和高度坐标值。或者,体素的位置信息可以是体素在所述第一定位图层所应用的空间坐标系中的在经度方向上的序号、在纬度方向上的序号和在高度方向上的序号。例如,某个体素的位置信息可以是[100,130,180],其表示该体素是在所述第一定位图层所应用的空间坐标系中在经度方向上的第100个体素、在纬度方向上的第130个体素和在高度方向上的第180个体素。又或者,体素的标识可以基于体素的位置信息计算得到,例如是该体素的位置信息的散列映射值。
所述第一定位图层例如可以是但不限于从所述车辆或者诸如服务器之类的其它设备获取的。
例如,步骤S212中的对车辆的第一位置和第一姿态的修正可以利用图5D所示的修正方法550中的步骤S552-S568所描述的方式来实现,其中,由于在进行空间抽样时并不抽样车辆的第一位置和第一姿态本身,因此,抽样得到的各个抽样位置和抽样姿态未包含有车辆的第一位置和第一姿态,从而所述多个位置姿态组不包括具有车辆的第一位置和第一姿态的位置姿态组。
又例如,步骤S212中的对车辆的第一位置和第一姿态的修正可以利用与图5D所示的修正方法550的步骤S552-S568所描述的方式不同的第一可选方式来实现,所述第一可选方式与图5D所示的修正方法550的步骤S552-S568所描述的方式不同在于:在进行空间抽样时也抽样车辆的第一位置和第一姿态本身,因此,其中一次抽样得到的抽样位置和抽样姿态是车辆的第一位置和第一姿态,从而所述多个位置姿态组包括具有车辆的第一位置和第一姿态的位置姿态组,然后,在步骤S568,将其相似性得分最大的那个位置姿态组所包含的位置和姿态作为在第一时间点T1处车辆10的修正位置和修正姿态。
这里,本实施例的方法所使用的定位图层存储表示体素被地理环境对象占据的可能程度的权重值,其体现地理区域的地理空间结构,并不存储地理环境对象的具体语义(即具体种类)和易受环境变化影响的地理环境对象对激光的反射强度,因而,本实施例的方法依赖于地理区域的地理空间结构,而不是地理环境对象的具体语义和地理环境对象对激光的反射强度,来对车辆进行定位;由于地理区域的地理空间结构不易受地理环境对象的具体语义和环境变化(例如,天气、路面随时间的磨损等)的影响,因此,不管是在地理环境对象的种类丰富的地理区域还是在地理环境对象的种类匮乏的地理区域,也不管环境如何变化,本实施例的方法都能获得基本相同的定位效果,从而能够提升定位稳定性和定位性能。
在一些实施例中,所述多个体素包括对所述第二激光点云数据进行体素化得到的各个体素中的所述权重值大于零的体素。
这里,所述第一定位图层构造成只存储所述权重值大于零的体素的标识和权重值,不存储所述权重值等于零的体素的标识和权重值,即不存储对定位没有帮助的信息,这将减少和压缩定位图层的数据量,从而能够降低存储定位图层所需的存储空间。
在一些实施例中,所述多个体素中的每一个体素的所述权重值可以由该体素所包含的激光点的数量表示。
这里,一个体素被地理环境对象占据的可能程度越高,则该体素中包含的激光点的数量通常越多,因此,由该体素包含的激光点的数量来表示该体素的所述权重值能够可靠和简单地表示该体素被地理环境对象占据的可能程度。
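“以体素包含的激光点数量作为权重值”的做法可以用下面一段示意性的 Python 草图说明(其中体素边长 0.5 米以及示例点坐标均为假设的示例参数,并非本申请限定的实现):

```python
from collections import Counter

def voxelize_with_weights(points, voxel_size=0.5):
    """对激光点云体素化,以每个体素包含的激光点数量作为该体素的权重值。

    points: [(x, y, z), ...] 形式的激光点坐标列表(示例单位:米)。
    voxel_size: 体素边长,0.5 米仅为示意性假设。
    返回 {体素序号(i, j, k): 权重值} 的字典,权重值为零的体素不会出现。
    """
    weights = Counter()
    for x, y, z in points:
        # 按经度/纬度/高度方向上的序号标识体素(向下取整划分)
        idx = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        weights[idx] += 1
    return dict(weights)

# 示例:前三个点落入同一体素,第四个点落入另一体素
pts = [(0.1, 0.1, 0.1), (0.2, 0.3, 0.4), (0.4, 0.4, 0.4), (1.0, 0.0, 0.0)]
layer = voxelize_with_weights(pts, voxel_size=0.5)   # → {(0, 0, 0): 3, (2, 0, 0): 1}
```

可以看到,点数越多的体素权重值越大,而未被任何激光点命中的体素自然不出现在结果中,这与前述“只存储权重值大于零的体素”的设计一致。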
在一些实施例中,所述第一定位图层可以构造成以散列表的形式成对地存储所述多个体素的所述标识和所述权重值,以及,每一个体素的所述标识由该体素在所述第一定位图层所应用的空间坐标系下的位置信息的散列映射值表示。所述散列映射值例如是利用已知的散列函数针对所述位置信息计算得到的。
散列表(也称为哈希表)是成对地存储键(key)和值(value)以便能根据键来对值进行直接访问或索引的数据结构,其能快速查找所存储的值。因此,把所述第一定位图层构造成散列表的形式,能够提高定位图层的索引/匹配的速度,从而提升定位效率。
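以散列表成对存储“标识-权重值”并按标识直接索引的过程,可用如下草图说明(其中的散列函数只是常见的素数异或混合散列,属于示例假设,本申请并未限定具体的散列映射函数;示意起见也未处理散列冲突):

```python
def voxel_id(i, j, k):
    """示意性的散列映射:把体素在经度/纬度/高度方向上的序号映射为单一整数标识。
    具体散列函数为示例假设,实际实现可选用任何已知散列函数。"""
    return (i * 73856093) ^ (j * 19349663) ^ (k * 83492791)

class VoxelLayer:
    """以散列表(这里用 dict 模拟)成对存储体素标识与权重值的定位图层。"""

    def __init__(self):
        self._table = {}              # 键:体素标识(散列映射值),值:权重值

    def put(self, ijk, weight):
        if weight > 0:                # 权重值为零的体素不存储,以压缩数据量
            self._table[voxel_id(*ijk)] = weight

    def get(self, ijk):
        # O(1) 索引:根据标识直接访问权重值,未存储的体素视为权重值 0
        return self._table.get(voxel_id(*ijk), 0)

layer = VoxelLayer()
layer.put((100, 130, 180), 7)
layer.put((100, 130, 181), 0)         # 权重值为零,不会被存储
```

定位阶段对实时图层与预建图层做体素匹配时,正是依赖这种按标识的直接索引来加速查找。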
在一些实施例中,如图2B所示,步骤S212中的修正所述第一位置和所述第一姿态可以包括步骤S216、步骤S220和步骤S224中的操作。
其中,在步骤S216,在所述第一位置和所述第一姿态的周围空间上针对位置和姿态进行多次空间抽样,以得到多个位置姿态组,其中,每个位置姿态组包括其中一次空间抽样得到的抽样位置和抽样姿态。
在步骤S220,计算所述多个位置姿态组的相似性得分,其中,任一位置姿态组的相似性得分表示与所述任一位置姿态组关联的第二定位图层和所述第一定位图层的相似程度,所述第二定位图层是利用第三激光点云数据以制作所述第一定位图层的方式制作的,以及,所述第三激光点云数据是利用基于所述任一位置姿态组包括的抽样位置和抽样姿态生成的三维空间变换,将所述第一激光点云数据从与所述车辆关联的第一空间坐标系变换到与所述第一定位图层关联的第二空间坐标系而得到的。
例如,步骤S220中的计算所述多个位置姿态组的相似性得分可以利用图5D所示的修正方法550中的步骤S554-S564所描述的方式来实现。
又例如,步骤S220中的计算所述多个位置姿态组的相似性得分例如可以利用与图5D所示的修正方法550中的步骤S554-S564所描述的方式不同的第一可选方式来实现,所述第一可选方式与步骤S554-S564所描述的方式不同在于:在步骤S564,对于每个位置姿态组,将其第一总权重值,其第一总权重值与其第二总权重值的比值,或者,其第一总权重值与其第一总权重值跟其第二总权重值之和的比值,作为该位置姿态组的相似性得分。
例如,所述第一空间坐标系可以是但不限于所述车辆的局部空间坐标系,以及,所述第二空间坐标系可以是但不限于全局空间坐标系。
在步骤S224,至少基于所述多个位置姿态组的所述相似性得分,确定所述修正位置和所述修正姿态。
例如,步骤S224中的确定所述修正位置和所述修正姿态可以利用图5D所示的修正方法550中的步骤S566-S568所描述的方式来实现。
又例如,步骤S224中的确定所述修正位置和所述修正姿态可以利用与图5D所示的修正方法550中的步骤S566-S568所描述的方式不同的第一可选方式来实现,该第一可选方式与步骤S566-S568所描述的方式不同之处在于:在步骤S568,使用与最小二乘加权拟合不同的其它加权拟合算法来对各个位置姿态组C1所包括的抽样位置和抽样姿态进行加权拟合,以得到所述修正位置和所述修正姿态。
这里,借助于在车辆的第一位置和第一姿态的周围空间上抽样得到的多组抽样位置和抽样姿态来修正车辆的第一位置和第一姿态,能够比较可靠地确定车辆的修正位置和修正姿态。
在一些实施例中,如图2C所示,步骤S224中的确定所述修正位置和所述修正姿态可以包括步骤S228和步骤S232。
其中,在步骤S228,从所述多个位置姿态组中,选取其相似性得分大于第一阈值的各个第一位置姿态组。
在步骤S232,将所述各个第一位置姿态组的相似性得分作为权重,对所述各个第一位置姿态组所包括的抽样位置和抽样姿态进行加权拟合,以得到所述修正位置和所述修正姿态。
这里,仅选取相似性得分大于第一阈值的位置姿态组中的抽样位置和抽样姿态来修正车辆的第一位置和第一姿态以确定车辆的修正位置和修正姿态,可以有效地去除噪声抽样位置和噪声抽样姿态对车辆定位的影响,从而能够获得较高精度的定位结果。
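以相似性得分为权重的加权拟合,其最简单的一种最小二乘闭式解就是加权平均。下面的草图仅以平面位置 x、y 和航向角 yaw 为例(姿态角通过加权的 sin/cos 分量合成,以正确处理角度的周期性);实际实现中拟合的是完整的三维位置和姿态,此处仅为示意:

```python
import math

def weighted_pose_fit(samples):
    """samples: [(score, x, y, yaw), ...],score 为该抽样位姿的相似性得分(权重)。
    返回加权拟合得到的 (x, y, yaw)。对 x、y 而言,加权平均即最小二乘加权拟合的
    闭式解;yaw 通过加权 sin/cos 分量合成,避免角度绕回问题。"""
    total = sum(s for s, _, _, _ in samples)
    x = sum(s * px for s, px, _, _ in samples) / total
    y = sum(s * py for s, _, py, _ in samples) / total
    sin_sum = sum(s * math.sin(a) for s, _, _, a in samples)
    cos_sum = sum(s * math.cos(a) for s, _, _, a in samples)
    return x, y, math.atan2(sin_sum, cos_sum)

# 两个得分不同的抽样位姿:得分高者对拟合结果的影响更大
fit = weighted_pose_fit([(3.0, 0.0, 0.0, 0.0), (1.0, 4.0, 4.0, 0.0)])
```

在上例中,得分为 3 的抽样位姿把结果拉向自身,拟合位置为 (1.0, 1.0) 而非简单平均的 (2.0, 2.0)。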
在一些实施例中,如图2D所示,步骤S220中的计算所述多个位置姿态组的相似性得分可以包括步骤S240、步骤S244和步骤S248。
其中,在步骤S240,查找每个位置姿态组的第一类型的体素和第二类型的体素,其中,所述第一类型的体素在与该位置姿态组关联的所述第二定位图层中存储的所述权重值和在所述第一定位图层中存储的所述权重值相同,而所述第二类型的体素在与该位置姿态组关联的所述第二定位图层中存储的所述权重值和在所述第一定位图层中存储的所述权重值不同。
在步骤S244,计算每个位置姿态组的第一总权重值和第二总权重值,其中,所述第一总权重值等于在所述第一定位图层中存储的所述第一类型的体素的所述权重值之和,以及,所述第二总权重值等于在所述第一定位图层中存储的所述第二类型的体素的所述权重值之和。
在步骤S248,计算每个位置姿态组的所述第一总权重值与所述第二总权重值之差,作为该位置姿态组的相似性得分,以得到所述多个位置姿态组的相似性得分。
这里,如果两个定位图层越相似,则在这两个定位图层中所存储的权重值相同的第一类型的体素的数量通常越多,在这两个定位图层中所存储的权重值不同的第二类型的体素的数量通常越少,从而代表第一类型的体素的权重值之和的第一总权重值与代表第二类型的体素的权重值之和的第二总权重值的差值通常越大,反之,表示第一类型的体素的权重值的总和的第一总权重值与表示第二类型的体素的权重值的总和的第二总权重值的差值通常越小。因此,使用代表第一类型的体素的权重值之和的第一总权重值与代表第二类型的体素的权重值之和的第二总权重值的差值作为位置姿态组的相似性得分,能够准确地表示出与位置姿态组关联的基于在车辆处获取的激光点云数据做出的实时定位图层与预先构建的相应局部地理区域的定位图层的相似程度。
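若将两个定位图层都表示为 {体素标识: 权重值} 形式的字典,上述相似性得分的计算可以概括为如下草图(与前文一致,仅在其中一个图层中存储有权重值的体素也计入第二类型):

```python
def similarity_score(realtime_layer, map_layer):
    """realtime_layer: 与某位置姿态组关联的实时定位图层;
    map_layer: 预先构建的定位图层。两者均为 {体素标识: 权重值} 字典。
    第一类型体素:两图层中存储的权重值相同;其余为第二类型
    (包括仅在其中一个图层中存储有权重值的体素)。
    得分 = 第一类型体素在 map_layer 中的权重值之和
         - 第二类型体素在 map_layer 中的权重值之和。"""
    same_total, diff_total = 0, 0
    for vid in set(realtime_layer) | set(map_layer):
        w_map = map_layer.get(vid, 0)
        if vid in realtime_layer and vid in map_layer and realtime_layer[vid] == w_map:
            same_total += w_map
        else:
            diff_total += w_map
    return same_total - diff_total

m1 = {1: 5, 2: 3, 3: 2}                       # 预建定位图层
score = similarity_score({1: 5, 2: 4}, m1)    # 第一类型:{1};第二类型:{2, 3} → 5 - 5 = 0
```

当实时图层与预建图层完全一致时,得分达到其最大值(预建图层中全部权重值之和)。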
在一些实施例中,如图2E所示,方法200还可以包括步骤S252和步骤S256,以生成用于更新定位图层的地图更新数据。
在步骤S252,获取在第二位置处所述车辆的所述激光雷达的传感器数据。
在步骤S256,生成地图更新数据,其包括所述第二位置和第二姿态以及所获取的传感器数据,其中,所述第二位置和所述第二姿态是在当前时间点处所述车辆的位置和姿态,其是根据在所述第一时间点处所述车辆的所述修正位置和所述修正姿态以及从所述第一时间点到所述当前时间点期间所述车辆的运动而确定的。
这里,在确定车辆的修正位置和修正姿态之后生成地图更新数据以便用于更新已构建的相应局部地理区域的定位图层所存储的各个体素的占据权重,能够降低在构建定位图层时对激光雷达的传感器数据的精度要求,减少或消除在定位图层的构建过程中运动对象(例如,运动的车辆或行人等)对定位图层的不良影响,从而能够提升相对于定位图层的定位结果的精度。
图3A示出了按照本申请的实施例的定位图层生成的方法的流程图。图3A所示的方法300可以例如由图1的地图生成设备30或任何其它适当的设备来执行。方法300可以包括步骤S302-步骤S312。
在步骤S302,获取第二地理区域的激光点云数据。
这里,第二地理区域可以是任何适当的区域,例如但不限于,一个或多个城市的区域,一个或多个省的区域,一个或多个国家的区域等。例如,可以由检测人员驾驶配置有激光雷达的车辆在第二地理区域中行驶或控制配置有激光雷达的无人机在第二地理区域中飞翔以收集激光雷达的传感器数据,利用所收集的传感器数据就可以构造出第二地理区域的激光点云数据。
在步骤S306,对所述激光点云数据进行体素化,以得到多个体素。
在步骤S310,计算所述多个体素的权重值,每一个体素的权重值指示该体素被地理环境对象占据的可能程度。
每一个体素的权重值例如可以利用该体素所包括的激光点的数量表示。或者,每一个体素的权重值例如可以利用其它任何合适的方式表示,例如但不限于,每一个体素的权重值可以由该体素中包含的激光点的数量与指定数量的比值来表示,所述指定数量是所述多个体素的所有体素中其包含的激光点的数量最大的那个体素所包含的激光点的数量。
在步骤S312,存储所述多个体素中的至少一部分体素的标识和权重值,以得到所述第二地理区域的定位图层。
每个体素的所述标识例如可以由该体素在与所述定位图层关联的空间坐标系下的位置信息表示,或由该位置信息的散列映射值表示。所述散列映射值例如是利用已知的散列函数针对所述位置信息计算得到的。
这里,本实施例的定位图层存储表示体素被地理环境对象占据的可能程度的权重值,其体现地理区域的地理空间结构,并不存储地理环境对象的具体语义(即具体种类)和易受环境变化影响的地理环境对象对激光的反射强度,因而,在使用本实施例的定位图层进行定位时,依赖于地理区域的地理空间结构,而不是地理环境对象的具体语义和地理环境对象对激光的反射强度;由于地理区域的地理空间结构不易受地理环境对象的具体语义和环境变化的影响,因此,不管是在地理环境对象的种类丰富的地理区域还是在地理环境对象的种类匮乏的地理区域,也不管环境如何变化,利用本实施例所制作的定位图层进行定位都能获得基本相同的定位效果,从而能够提升定位稳定性和定位性能。
在一些实施例中,步骤S310中的计算所述多个体素的权重值可以进一步包括:计算所述多个体素中的每一个体素所包含的激光点的数量,作为该体素的权重值。
通常,一个体素被地理环境对象占据的可能程度越高,则该体素中包含的激光点的数量越多,因此,由该体素包含的激光点的数量来表示该体素的所述权重值能够可靠和简单地表示该体素被地理环境对象占据的可能程度。
在一些实施例中,所述至少一部分体素是所述多个体素中的所述权重值大于零的各个体素。
这里,所述定位图层构造成只存储所述权重值大于零的体素的标识和权重值,不存储所述权重值等于零的体素的标识和权重值,即不存储对定位没有帮助的信息,这将减少和压缩定位图层的数据量,从而能够降低存储定位图层所需的存储空间。
在一些实施例中,所述定位图层构造成以散列表的形式成对地存储所述至少一部分体素的所述标识和所述权重值,以及,每一个体素的所述标识由该体素在与所述定位图层关联的空间坐标系下的位置信息的散列映射值表示。
散列表是成对地存储键(key)和值(value)以便能根据键来对值进行直接访问或索引的数据结构,其能通过键快速查找所存储的值。因此,把所述定位图层构造成散列表的形式,能够提高定位图层的索引/匹配的速度,从而提升定位效率。
图3B示出了按照本申请的实施例的定位图层更新的方法的流程图。图3B所示的方法350可以例如由图1的地图更新设备40或任何其它适当的设备来执行。方法350可以包括步骤S352-步骤S358。
在步骤S352,利用地图更新数据来生成第三局部地理区域的第四激光点云数据,其中,所述地图更新数据包括车辆的第三位置和第三姿态以及在所述第三位置处所述车辆的激光雷达收集的第一传感器数据,所述第三局部地理区域是包含所述第三位置的局部地理区域,以及,所述第四激光点云数据是利用所述第一传感器数据形成的。
在步骤S354,对第五激光点云数据进行体素化,以得到多个第三体素,其中,所述第五激光点云数据是利用基于所述第三位置和所述第三姿态而生成的三维空间变换,将所述第四激光点云数据从与所述车辆关联的空间坐标系变换到与第三定位图层关联的空间坐标系而得到的,以及,其中所述第三定位图层是以前构造的所述第三局部地理区域的定位图层,其存储所述多个第三体素中的至少一部分体素的标识和权重值,每一个体素的权重值表示该体素被地理环境对象占据的可能程度。
例如,所述至少一部分体素可以是所述多个第三体素中的所有体素。又例如,所述至少一部分体素可以是所述多个第三体素中的其权重值大于零的那些体素。
体素的权重值例如可以利用该体素所包括的激光点的数量表示。或者,体素的权重值例如可以利用其它任何合适的方式表示,例如但不限于,体素的权重值可以由该体素中包含的激光点的数量与指定数量的比值来表示,所述指定数量是所述多个第三体素的所有体素中其包含的激光点的数量最大的那个体素所包含的激光点的数量。
例如,体素的标识可以由该体素在与所述第三定位图层关联的空间坐标系下的位置信息表示,或者可以由该位置信息的散列映射值表示。所述散列映射值例如是利用已知的散列函数针对所述位置信息计算得到的。
在步骤S356,计算所述多个第三体素的权重值。
在步骤S358,利用所述多个第三体素的权重值,更新在所述第三定位图层中存储的各体素的权重值。
可选地,例如但不限于可以利用图5E中的步骤S592所描述的方式来更新在所述第三定位图层中存储的各体素的权重值。
这里,利用车辆的激光雷达收集的传感器数据来更新已构建的相应局部地理区域的定位图层所存储的各个体素的占据权重,能够降低在构建定位图层时对激光雷达的传感器数据的精度要求,从而提升相对于定位图层的定位结果的精度。
在一些实施例中,所述第三定位图层存储所述多个第三体素中的所述权重值大于零的那些体素的所述标识和所述权重值,以及,如图3C所示,步骤S358中的所述更新在所述第三定位图层中存储的各体素的权重值可以包括步骤S360、步骤S362、步骤S364和步骤S366。
其中,在步骤S360,从所述多个第三体素中,选取所计算的权重值大于零的那些体素。在步骤S362,对于所选取的每一个体素,如果所述第三定位图层存储有该体素的权重值且所存储的权重值和所计算的权重值不相同,则利用该体素的所计算的权重值替换所述第三定位图层中存储的该体素的权重值。在步骤S364,对于所选取的每一个体素,如果所述第三定位图层未存储有该体素的权重值,则将该体素的所述标识和所计算的权重值存储在所述第三定位图层中。在步骤S366,如果所述第三定位图层存储有未在所选取的体素中出现的第四体素的标识和权重值,则从所述第三定位图层中删除所述第四体素的标识和权重值。
这里,通过在定位图层中加入先前未存储的体素的权重值和删除未在所选取的体素中出现的那些体素的权重值,能够在定位图层中增加静态目标(例如,建筑物、路牌等)的权重值和删除运动目标(例如,运动的车辆或行人等)的权重值,减少或消除在定位图层的构建过程中运动目标对定位图层的不良影响,从而使定位图层匹配相应地理区域的真实环境,提升定位图层的可靠性。
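上述替换、加入、删除三类更新操作,对散列表形式的图层可以概括为如下草图(图层以 {体素标识: 权重值} 字典表示;其中“替换不同的旧权重值”与“加入先前未存储的体素”在字典赋值下自然合并为同一操作):

```python
def update_layer(layer, computed):
    """layer: 已构建的定位图层 {体素标识: 权重值},原地更新并返回。
    computed: 本次根据地图更新数据计算出的各体素权重值。"""
    # 选取所计算的权重值大于零的那些体素
    selected = {vid: w for vid, w in computed.items() if w > 0}
    for vid, w in selected.items():
        # 替换不一致的旧权重值;先前未存储的体素(如新增的静态目标)则直接加入
        layer[vid] = w
    for vid in list(layer):
        if vid not in selected:
            # 删除未在所选取的体素中出现的体素(如先前误入图层的运动目标)
            del layer[vid]
    return layer

old_layer = {1: 5, 2: 3, 9: 7}                     # 体素 9 为先前的运动目标残余
update_layer(old_layer, {1: 5, 2: 4, 3: 2, 4: 0})  # → {1: 5, 2: 4, 3: 2}
```

更新后,体素 2 的权重值被替换,新体素 3 被加入,权重值为零的体素 4 被忽略,而未再出现的体素 9 被删除。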
上文结合图2A-2E和图3A-3C,详细描述了本申请的车辆定位、定位图层生成和定位图层更新的方法实施例,下面结合图4A-4H详细描述本申请的车辆定位、定位图层生成和定位图层更新的装置实施例。应理解,方法实施例的描述与装置实施例的描述相互对应,因此,在装置实施例中未详细描述的部分可以参见前面方法实施例。
图4A示出了按照本申请实施例的用于车辆定位的装置的示意图。图4A所示的装置400可以由图1的计算机设备108或其它任何合适的装置来实现。装置400包括预测模块402、获取模块404和修正模块406。
预测模块402用于至少利用来自卫星导航系统和惯性测量单元的数据,获取在第一时间点处车辆的第一位置和第一姿态。
获取模块404用于利用所述车辆的激光雷达的传感器数据,获取包含所述第一位置的第一局部地理区域的第一激光点云数据。
修正模块406用于利用所述第一激光点云数据和预先构建的所述第一局部地理区域的第一定位图层来修正所述第一位置和所述第一姿态,以得到在所述第一时间点处所述车辆的修正位置和修正姿态,其中,所述第一定位图层构造成存储多个体素的标识和权重值,所述多个体素包括对在构建所述第一定位图层时获得的所述第一局部地理区域的第二激光点云数据进行体素化得到的各个体素中的至少一部分,以及,所述体素的权重值表示该体素被地理环境对象占据的可能程度。
在一些实施例中,所述多个体素包括对所述第二激光点云数据进行体素化得到的各个体素中的所述权重值大于零的体素。
在一些实施例中,所述多个体素中的每一个体素的所述权重值由该体素所包含的激光点的数量表示。
在一些实施例中,所述第一定位图层构造成以散列表的形式成对地存储所述多个体素的所述标识和所述权重值,以及,每一个体素的所述标识由该体素在所述第一定位图层所应用的空间坐标系下的位置信息的散列映射值表示。
在一些实施例中,如图4B所示,修正模块406可以包括抽样模块408、第一计算模块410和确定模块412。
抽样模块408用于在所述第一位置和所述第一姿态的周围空间上针对位置和姿态进行多次空间抽样,以得到多个位置姿态组,其中,每个位置姿态组包括其中一次抽样得到的抽样位置和抽样姿态。
第一计算模块410用于计算所述多个位置姿态组的相似性得分,其中,任一位置姿态组的相似性得分表示与所述任一位置姿态组关联的第二定位图层和所述第一定位图层的相似程度,所述第二定位图层是利用第三激光点云数据以生成所述第一定位图层的方式生成的,以及,所述第三激光点云数据是利用基于所述任一位置姿态组包括的抽样位置和抽样姿态生成的三维空间变换,将所述第一激光点云数据从与所述车辆关联的第一空间坐标系变换到与所述第一定位图层关联的第二空间坐标系而得到的。
确定模块412用于至少基于所述多个位置姿态组的所述相似性得分,确定所述修正位置和所述修正姿态。
在一些实施例中,如图4C所示,确定模块412可以包括选取模块414和第二计算模块416。
选取模块414用于从所述多个位置姿态组中,选取其相似性得分大于第一阈值的各个第一位置姿态组。
第二计算模块416用于将所述各个第一位置姿态组的相似性得分作为权重,对所述各个第一位置姿态组所包括的抽样位置和抽样姿态进行加权拟合,以得到所述修正位置和所述修正姿态。
在一些实施例中,如图4D所示,第一计算模块410可以包括查找模块420、第四计算模块422和第五计算模块424。
查找模块420用于查找每个位置姿态组的第一类型的体素和第二类型的体素,其中,所述第一类型的体素在与该位置姿态组关联的所述第二定位图层中存储的所述权重值和在所述第一定位图层中存储的所述权重值相同,而所述第二类型的体素在与该位置姿态组关联的所述第二定位图层中存储的所述权重值和在所述第一定位图层中存储的所述权重值不同。
第四计算模块422用于计算每个位置姿态组的第一总权重值和第二总权重值,其中,所述第一总权重值等于在所述第一定位图层中存储的所述第一类型的体素的所述权重值之和,以及,所述第二总权重值等于在所述第一定位图层中存储的所述第二类型的体素的所述权重值之和。
第五计算模块424用于计算每个位置姿态组的所述第一总权重值与所述第二总权重值之差,作为该位置姿态组的相似性得分,以得到所述多个位置姿态组的相似性得分。
在一些实施例中,如图4E所示,装置400还可以包括地图更新数据生成模块425,其包括获得模块426和生成模块428。
获得模块426用于获取在第二位置处所述车辆的所述激光雷达的传感器数据。
生成模块428用于生成地图更新数据,其包括所述第二位置和第二姿态以及所获取的传感器数据,其中,所述第二位置和所述第二姿态是在当前时间点处所述车辆的位置和姿态,其是根据在所述第一时间点处所述车辆的所述修正位置和所述修正姿态以及从所述第一时间点到所述当前时间点期间所述车辆的运动而确定的。
图4F示出了按照本申请实施例的定位图层生成的装置的示意图。图4F所示的装置450可以由图1中的地图生成设备30或其它任何合适的设备来实现。装置450可以包括获取模块452、体素化模块454、计算模块456和存储模块458。
获取模块452用于获取第二地理区域的激光点云数据。
体素化模块454用于对所述激光点云数据进行体素化,以得到多个体素。
计算模块456用于计算所述多个体素的权重值,每一个体素的权重值指示该体素被地理环境对象占据的可能程度。
存储模块458用于存储所述多个体素中的至少一部分体素的标识和权重值,以得到所述第二地理区域的定位图层。
在一些实施例中,计算模块456进一步用于计算所述多个体素中的每一个体素所包含的激光点的数量,作为该体素的权重值。
在一些实施例中,所述至少一部分体素是所述多个体素中的所述权重值大于零的各个体素。
在一些实施例中,所述定位图层构造成以散列表的形式成对地存储所述至少一部分体素的所述标识和所述权重值,以及,每一个体素的所述标识由该体素在与所述定位图层关联的空间坐标系下的位置信息的散列映射值表示。
图4G示出了按照本申请实施例的用于定位图层更新的装置的示意图。图4G所示的装置480可以由图1中的地图更新设备40或其它任何合适的设备来实现。装置480可以包括生成模块482、体素化模块484、计算模块486和更新模块488。
生成模块482用于利用地图更新数据来生成第三局部地理区域的第四激光点云数据,其中,所述地图更新数据包括车辆的第三位置和第三姿态以及在所述第三位置处所述车辆的激光雷达的第一传感器数据,所述第三局部地理区域是包含所述第三位置的局部地理区域,以及,所述第四激光点云数据是利用所述第一传感器数据形成的。
体素化模块484用于对第五激光点云数据进行体素化,以得到多个第三体素,其中,所述第五激光点云数据是利用基于所述第三位置和所述第三姿态而生成的三维空间变换,将所述第四激光点云数据从与所述车辆关联的空间坐标系变换到与第三定位图层关联的空间坐标系而得到的,以及,其中所述第三定位图层是以前构造的所述第三局部地理区域的定位图层,其存储所述多个第三体素中的至少一部分体素的标识和权重值,每一个体素的权重值表示该体素被地理环境对象占据的可能程度。
计算模块486用于计算所述多个第三体素的权重值。
更新模块488用于利用所述多个第三体素的计算的权重值,更新所述第三定位图层中存储的各体素的权重值。
在一些实施例中,所述第三定位图层存储所述多个第三体素中的所述权重值大于零的那些体素的所述标识和所述权重值,以及,如图4H中所示,更新模块488可以包括选取模块490、替换模块492、存储模块494和删除模块496。
其中,选取模块490用于从所述多个第三体素中,选取所计算的权重值大于零的那些体素。替换模块492用于对于所选取的每一个体素,如果所述第三定位图层存储有该体素的权重值且所存储的权重值和所计算的权重值不相同,则利用该体素的所计算的权重值替换所述第三定位图层中存储的该体素的权重值。存储模块494用于对于所选取的每一个体素,如果所述第三定位图层未存储有该体素的权重值,则将该体素的所述标识和所计算的权重值存储在所述第三定位图层中。删除模块496用于如果所述第三定位图层存储有未在所选取的体素中出现的第四体素的标识和权重值,则从所述第三定位图层中删除所述第四体素的标识和权重值。
需要说明的是,上述实施例提供的装置,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
图5A示出了按照本申请的实施例的用于定位图层生成、定位图层更新和车辆定位的系统的一种示例性具体实现。
如图5A所示,用于定位图层生成、定位图层更新和车辆定位的系统包括四个模块:定位图层的离线生成模块502、车辆全局位置姿态的预测模块506、车辆全局位置姿态的修正模块512和定位图层的离线更新模块514。
离线生成模块502以某个局部地理区域的原始激光点云数据为输入,输出该局部地理区域的用于定位的定位图层。
预测模块506以GNSS数据、IMU数据、车辆底盘轮速、激光雷达的传感器数据为输入,输出预测的在全局空间坐标系下车辆10的位置和姿态(即,车辆的位置和姿态的预测值),以及在车辆10的局部空间坐标系下车辆10的周围区域的激光点云数据。
修正模块512以加载的预先构建的定位图层、预测的车辆10的位置和姿态、在车辆10的空间坐标系下车辆10的周围区域的激光点云数据为输入,输出在全局空间坐标系下车辆10的修正位置和修正姿态。
离线更新模块514在判定车辆驶入已构建定位图层的区域之后,以修正后的定位结果、在车辆10的局部空间坐标系下车辆10的周围区域的激光点云数据、已构建的定位图层为输入,输出更新权重值的定位图层。
其中,离线生成模块502例如可以由图1中的地图生成设备30来实现,预测模块506和修正模块512例如可以由图1中的计算机设备108来实现,以及,离线更新模块514例如可以由图1中的地图更新设备40来实现。
图5B示出了按照本申请的实施例的定位图层生成方法的一种示例性具体实现。图5B所示的定位图层生成方法531由离线生成模块502实现。定位图层生成方法531可以包括步骤S532-步骤S540。
在步骤S532,对指定地理区域的激光点云数据进行体素化,以得到多个体素。
所述指定地理区域例如是但不限于一个或多个城市的区域,一个或多个省的区域,一个或多个国家的区域等。例如,可以由检测人员驾驶配置有激光雷达的车辆在所述指定地理区域中行驶以收集激光雷达的传感器数据,然后利用所收集的传感器数据来形成所述指定地理区域在全局空间坐标系下的激光点云数据。
在步骤S534,计算所述多个体素中的每一个体素的散列映射值和权重值,其中,每一个体素的权重值表示该体素被地理环境对象占据的可能程度,其等于该体素所包含的激光点的数量,以及,每一个体素的散列映射值用作该体素的标识,其是利用散列映射函数对该体素在全局空间坐标系下的位置信息进行散列映射计算得到的。所述位置信息例如可以但不限于由体素在全局空间坐标系中的经度坐标值、纬度坐标值和高度坐标值表示,或者,由体素在全局空间坐标系中的在经度方向上的序号、在纬度方向上的序号和在高度方向上的序号来表示。
在步骤S536,从所述多个体素中,选取其权重值大于零的那些体素。这里,权重值等于零的体素被认为对定位没有帮助,因此,并不会存储这些权重值等于零的体素的信息,以减少和压缩需要存储的数据量。
在步骤S538,以用作体素的标识的散列映射值和权重值分别作为键和值,将所选取的那些体素各自所包括的散列映射值和权重值以键值对的方式成对地存储到散列表中,从而得到所述指定地理区域的定位图层。
在步骤S540,考虑所述指定地理区域的范围大小以及以后要使用定位图层来定位的定位设备的运行时内存空间和运算能力等因素,将所述指定地理区域的定位图层分成多片以进行存储。
散列表形式的定位图层可以通过键值对的插入和删除操作,非常方便地加载期望区域的地图数据,同时删除期望区域外的地图数据。散列表形式的定位图层的加载和删除的过程可动态完成,因此,散列表形式的定位图层非常适于分片存储,而且无需确保相邻地理区域的分片定位图层之间存在重叠区域。
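分片定位图层的动态加载与卸载,本质上就是散列表键值对的批量插入与删除,可用如下草图说明(假设各分片之间的体素标识互不重叠,与前文“无需重叠区域”的描述一致;分片名等均为示例假设):

```python
class TiledLayer:
    """运行时定位图层:按需加载/卸载分片定位图层,各分片均为散列表。"""

    def __init__(self):
        self.table = {}               # 当前加载的全部 {体素标识: 权重值}
        self._tiles = {}              # 分片名 -> 该分片贡献的体素标识集合

    def load_tile(self, name, tile_table):
        # 插入期望区域(新分片)的全部键值对
        self.table.update(tile_table)
        self._tiles[name] = set(tile_table)

    def unload_tile(self, name):
        # 删除期望区域外(旧分片)的全部键值对
        for vid in self._tiles.pop(name, ()):
            self.table.pop(vid, None)

runtime = TiledLayer()
runtime.load_tile("tile_A", {101: 2, 102: 3})
runtime.load_tile("tile_B", {201: 4})
runtime.unload_tile("tile_A")         # 卸载后仅剩 tile_B 的数据
```

车辆行驶过程中,只需随位置变化重复 load/unload,即可把运行时内存占用限制在邻近分片之内。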
各个分片定位图层中的部分或全部可以存储在常规服务器或云服务器中以供各个车辆下载使用,或者,可以提供给车辆的用户、制造商、销售商或服务人员以直接存储在车辆中。
图5C示出了按照本申请的实施例的车辆位置姿态预测方法的一种示例性具体实现。图5C所示的车辆位置姿态预测方法540由预测模块506来实现。方法540可以包括步骤S542-步骤S546。
在步骤S542,利用从车辆10的GNSS接收机110、IMU 112和底盘轮速传感器接收的GNSS数据、IMU数据和车辆底盘轮速,预测在第一时间点T1处车辆10的位置和姿态。应理解,在第一时间点T1处车辆10的位置和姿态是全局空间坐标系下的位置和姿态,不是在车辆10的局部空间坐标系下的位置和姿态。
利用GNSS数据、IMU数据和车辆底盘轮速来确定车辆的位置和姿态是已知的技术,在此省略对其的详细描述。
在步骤S544,利用车辆10的传感器系统102所接收的GNSS数据、IMU数据和车辆底盘轮速,计算多帧传感器数据之间的相对运动,其中,所述多帧传感器数据包括在第一时间点T1处获取的车辆10的传感器系统102的激光雷达的传感器数据和在第一时间点T1之前的一次或多次中获取的车辆10的传感器系统102的激光雷达的传感器数据。
在步骤S546,根据所计算的相对运动,将所述多帧传感器数据进行叠加,以形成在车辆10的局部空间坐标系下车辆10的周围区域(即包含所预测的车辆10的位置的局部地理区域)的激光点云数据(为了便于描述,以下将其称为激光点云数据P1)。
图5D示出了按照本申请的实施例的车辆位置姿态修正方法的一种示例性具体实现。图5D所示的车辆位置姿态修正方法550由修正模块512来实现,其对图5C的车辆位置姿态预测方法540所预测的在第一时间点T1处车辆10的位置和姿态进行修正,以得到在第一时间点T1处车辆10的修正位置和修正姿态。修正方法550可以包括步骤S552-步骤S576。
在步骤S552,在所预测的在第一时间点T1处车辆10的位置和姿态的周围空间上,以一定空间抽样间隔,针对位置和姿态进行多次空间抽样,得到多个位置姿态组,其中,每个位置姿态组包括其中一次空间抽样得到的抽样位置和抽样姿态。
这里,在第一时间点T1处车辆10的位置和姿态的周围空间的范围不是固定的,而是基于在第一时间点T1处车辆10的车速估计得到的,因此无需通过多次迭代来确定和查找该周围空间的范围。
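围绕预测位姿的空间抽样可以示意如下(抽样半径与车速的比例关系、抽样间隔等数值均为假设的示例参数;此处的抽样网格包含预测位姿本身,对应前文所述把第一位置和第一姿态也纳入抽样的可选方式):

```python
import itertools

def sample_poses(x, y, yaw, speed):
    """在预测位置 (x, y) 与航向角 yaw 的周围空间按固定间隔进行空间抽样,
    返回多个 (抽样位置x, 抽样位置y, 抽样姿态yaw) 组。
    抽样半径随车速增大;半径与车速的比例关系(0.1 * speed)仅为示意。"""
    radius = 1.0 + 0.1 * speed
    step, yaw_step = 0.5, 0.02          # 位置与姿态的抽样间隔(示例值)
    n = int(radius / step)
    offsets = [i * step for i in range(-n, n + 1)]
    yaw_offsets = (-yaw_step, 0.0, yaw_step)
    return [(x + dx, y + dy, yaw + dyaw)
            for dx, dy, dyaw in itertools.product(offsets, offsets, yaw_offsets)]

poses = sample_poses(0.0, 0.0, 0.0, speed=10.0)   # 9 × 9 × 3 = 243 组位置姿态
```

由于抽样范围由车速一次性确定,抽样网格可以直接生成,无需迭代搜索。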
在步骤S554,利用每个位置姿态组所包括的抽样位置和抽样姿态,生成每个位置姿态组的从车辆10的局部空间坐标系到全局空间坐标系的三维空间变换,以得到所述多个位置姿态组各自的三维空间变换。
在步骤S556,利用每个位置姿态组的三维空间变换,将在车辆10的空间坐标系下车辆10的周围区域的激光点云数据P1变换到全局空间坐标系,以得到每个位置姿态组的变换的激光点云数据。
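步骤S554-S556中基于抽样位姿把激光点云从车辆局部坐标系变换到全局坐标系的过程,可简化示意如下(为简明起见仅保留平面旋转与平移,即只用抽样位置的 x、y 和航向角 yaw;实际实现是包含高度以及俯仰、横滚的完整三维刚体变换):

```python
import math

def transform_points(points, x, y, yaw):
    """把点云从车辆局部坐标系变换到全局坐标系(平面简化情形)。
    points: 局部坐标系下的 [(px, py, pz), ...];
    (x, y, yaw): 抽样位置与抽样航向角,共同确定刚体变换。"""
    c, s = math.cos(yaw), math.sin(yaw)
    # 先绕竖直轴旋转 yaw,再平移 (x, y);高度坐标 pz 在此简化中保持不变
    return [(c * px - s * py + x, s * px + c * py + y, pz)
            for px, py, pz in points]

# 车辆位于 (10, 20)、航向角 90°:车头前方 1 米处的点落在全局 (10, 21) 附近
pts_global = transform_points([(1.0, 0.0, 0.5)], x=10.0, y=20.0, yaw=math.pi / 2)
```

每个位置姿态组各自的变换作用于同一份点云数据P1,得到的多份变换点云再分别生成实时定位图层。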
在步骤S558,按照与图5B所示的定位图层生成方法相同的方法,利用每个位置姿态组的变换的激光点云数据,生成与每个位置姿态组关联的实时定位图层,以得到所述多个位置姿态组各自关联的实时定位图层。
在步骤S560,查找每个位置姿态组的第一类型的体素和第二类型的体素,其中,第一类型的体素是其在与该位置姿态组关联的实时定位图层中所存储的权重值和在预先构建的包含在第一时间点T1处车辆10的第一位置的局部地理区域的定位图层(以下称为定位图层M1)中所存储的权重值相同的各个体素,而第二类型的体素是其在与该位置姿态组关联的实时定位图层中所存储的权重值和在定位图层M1中所存储的权重值不同的各个体素。这里,如果某个体素仅在位置姿态组关联的实时定位图层和定位图层M1的其中一个存储有权重值,则其也属于第二类型的体素。
在步骤S562,计算每个位置姿态组的第一总权重值和第二总权重值,其中,第一总权重值等于该位置姿态组的第一类型的体素在定位图层M1中所存储的权重值之和,第二总权重值等于该位置姿态组的第二类型的体素在定位图层M1中所存储的权重值之和。
在步骤S564,计算每个位置姿态组的第一总权重值和第二总权重值之差,作为该位置姿态组的相似性得分,从而得到所述多个位置姿态组的相似性得分。这里,每个位置姿态组的相似性得分表示与该位置姿态组关联的实时定位图层和定位图层M1的相似程度。
在步骤S566,从所述多个位置姿态组中,选取其相似性得分大于指定阈值的那些位置姿态组。为了便于描述,以下将所选取的位置姿态组称为位置姿态组C1。
在步骤S568,将所选取的各个位置姿态组C1的相似性得分作为权重,对所选取的各个位置姿态组C1所包括的抽样位置和抽样姿态进行最小二乘加权拟合,以得到在第一时间点T1处车辆10的修正位置和修正姿态。
在步骤S572,根据在第一时间点T1处车辆10的修正位置和修正姿态,以及,基于车辆10的传感器系统102所接收的传感器数据而确定的从第一时间点T1到当前时间点期间车辆10的运动,确定在当前时间点处车辆10的修正位置和修正姿态,作为在当前时间点处车辆10的定位结果。为了便于描述,以下将在当前时间点处车辆10的修正位置和修正姿态称为修正位置AP和修正姿态AZ。
在步骤S574,获取在修正位置AP处车辆10的激光雷达的传感器数据。
在步骤S576,生成地图更新数据,其包括车辆10的修正位置AP和修正姿态AZ以及在步骤S574获取的传感器数据。所生成的地图更新数据例如可以提供给地图更新设备以更新已构建的定位图层。
车辆位置姿态预测方法540和车辆位置姿态修正方法550可以例如但不限于周期地执行以连续地对车辆10进行定位。
图5E示出了按照本申请的实施例的定位图层更新方法的一种示例性具体实现。图5E所示的定位图层更新方法580由离线更新模块514来实现,其利用车辆位置姿态修正方法550所生成的地图更新数据来更新已构建的定位图层。更新方法580可以包括步骤S582-步骤S592。
在步骤S582,如果地图更新数据所包含的车辆10的修正位置(以下称为修正位置KK)位于预先构建的定位图层覆盖的地理区域中,则利用地图更新数据所包含的传感器数据,形成包含修正位置KK的局部地理区域(以下将其称为局部地理区域A1)在车辆10的局部空间坐标系下的激光点云数据(以下将其称为激光点云数据M2)。
在步骤S584,使用基于地图更新数据所包含的车辆10的修正位置和修正姿态而生成的从车辆10的局部空间坐标系到全局空间坐标系的三维空间变换,将激光点云数据M2变换到全局空间坐标系,得到变换后的激光点云数据(以下将其称为激光点云数据M3)。
在步骤S586,对激光点云数据M3进行体素化,以得到多个体素。
在步骤S588,计算所述多个体素中的每一个体素的散列映射值和权重值,其中,每一个体素的权重值等于该体素所包含的激光点的数量,以及,每一个体素的散列映射值作为该体素的标识,其是利用已知的散列映射函数对该体素在全局空间坐标系下的位置信息进行散列映射计算得到的。
在步骤S590,从所述多个体素中,选取其权重值大于零的那些体素。
在步骤S592,利用所选取的体素的权重值和作为标识的散列映射值,更新局部地理区域A1的定位图层中的各体素的权重值。
具体地,对所选取的每一个体素,以其散列映射值作为标识来索引局部地理区域A1的定位图层,以查看在局部地理区域A1的定位图层中是否存储有该体素的权重值。
如果在局部地理区域A1的定位图层中存储有该体素的权重值且所计算的权重值与所存储的权重值相同,则不更新局部地理区域A1的定位图层中所存储的该体素的权重值。
如果在局部地理区域A1的定位图层中存储有该体素的权重值且所计算的权重值与所存储的权重值不同,则利用该体素的计算的权重值来替换局部地理区域A1的定位图层中所存储的该体素的权重值。
如果在局部地理区域A1的定位图层中未存储有该体素的权重值,则将该体素的作为标识的散列映射值和权重值以键值对的方式加入到局部地理区域A1的定位图层中进行存储。这种情形的出现大多数情况下是由于在先前生成局部地理区域A1的定位图层之后在局部地理区域A1中新增了诸如建筑物、路牌等之类的静态目标而导致的。因此,通过在局部地理区域A1的定位图层中加入先前未存储的体素的标识和权重值,能够在定位图层中增加静态目标的权重值,从而使定位图层匹配相应地理区域的真实环境,提升定位图层的可靠性。
此外,还检查局部地理区域A1的定位图层中是否存储有其未在所选取的体素中出现的体素的权重值。如果检查结果表明存储有这样的体素的权重值,则从局部地理区域A1的定位图层中删除这样的体素的标识和权重值。这样的体素在局部地理区域A1的定位图层中存在通常是由于先前生成局部地理区域A1的定位图层时在局部地理区域A1中存在诸如车辆和/或行人等此类运动目标导致的。因此,通过从局部地理区域A1的定位图层中删除其未在所选取的体素中出现的那些体素的权重值,能够从定位图层中删除运动目标的权重值,从而使定位图层匹配相应地理区域的真实环境,提升定位图层的可靠性。
图6示出了按照本申请的实施例的计算机设备的结构示意图。图6所示的计算机设备601例如可以是但不限于图1中的计算机设备108。
计算机设备601可以包括至少一个处理器602、至少一个存储器604、通信接口606和总线608。
处理器602、存储器604和通信接口606通过总线608相连。通信接口606用于计算机设备601与其他设备之间进行通信。存储器604用于存储程序代码和数据。处理器602用于执行存储器604中的程序代码以执行图2A-2E或图5C-5D所述的方法。存储器604可以是处理器602内部的存储单元,也可以是与处理器602独立的外部存储单元,还可以是包括处理器602内部的存储单元和与处理器602独立的外部存储单元的部件。
图7示出了按照本申请的实施例的地图生成设备的结构示意图。图7所示的地图生成设备701例如可以是但不限于图1中的地图生成设备30。
地图生成设备701可以包括至少一个处理器702、至少一个存储器704、输入输出接口706和总线708。
处理器702、存储器704和输入输出接口706通过总线708相连。输入输出接口706用于从外部接收数据和信息以及向外部输出数据和信息。输入输出接口706例如可以包括鼠标、键盘、显示器等。存储器704用于存储程序代码和数据。处理器702用于执行存储器704中的程序代码以执行图3A或图5B所述的方法。存储器704可以是处理器702内部的存储单元,也可以是与处理器702独立的外部存储单元,还可以是包括处理器702内部的存储单元和与处理器702独立的外部存储单元的部件。
图8示出了按照本申请的实施例的地图更新设备的结构示意图。图8所示的地图更新设备801例如可以是但不限于图1中的地图更新设备40。
地图更新设备801可以包括至少一个处理器802、至少一个存储器804、输入输出接口806和总线808。
处理器802、存储器804和输入输出接口806通过总线808相连。输入输出接口806用于从外部接收数据和信息以及向外部输出数据和信息。输入输出接口806例如可以包括鼠标、键盘、显示器等。存储器804用于存储程序代码和数据。处理器802用于执行存储器804中的程序代码以执行图3B或图5E所述的方法。存储器804可以是处理器802内部的存储单元,也可以是与处理器802独立的外部存储单元,还可以是包括处理器802内部的存储单元和与处理器802独立的外部存储单元的部件。
处理器602、702和802可以是但不限于通用处理器、数字信号处理器(digital signal processing:DSP)、专用集成电路(application specific integrated circuit:ASIC)、现场可编程门阵列(field programmable gate array:FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。存储器604、704和804例如可以是但不限于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、磁碟或者光盘等。
应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
本领域技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元、模块及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,并且这种实现不应认为超出本申请的范围。
本领域技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置、模块和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干程序指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。前述存储介质可以包括但不限于U盘、移动硬盘、只读存储器(read-only memory:ROM)、随机存取存储器(random access memory:RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (24)

  1. 一种车辆定位的方法,其特征在于,包括:
    至少利用来自卫星导航系统和惯性测量单元的数据,获取在第一时间点处车辆的第一位置和第一姿态;
    利用所述车辆的激光雷达的传感器数据,获取包含所述第一位置的第一局部地理区域的第一激光点云数据;以及
    利用所述第一激光点云数据和预先构建的所述第一局部地理区域的第一定位图层来修正所述第一位置和所述第一姿态,以得到在所述第一时间点处所述车辆的修正位置和修正姿态,
    其中,所述第一定位图层构造成存储多个体素的标识和权重值,所述多个体素包括对在构建所述第一定位图层时获得的所述第一局部地理区域的第二激光点云数据进行体素化得到的各个体素中的至少一部分,所述体素的权重值表示该体素被地理环境对象占据的可能程度。
  2. 如权利要求1所述的方法,其特征在于,
    所述多个体素包括对所述第二激光点云数据进行体素化得到的各个体素中的所述权重值大于零的体素,以及
    所述多个体素中的每一个体素的所述权重值由该体素所包含的激光点的数量表示。
  3. 如权利要求1所述的方法,其特征在于,
    所述第一定位图层构造成以散列表的形式成对地存储所述多个体素的所述标识和所述权重值,以及,每一个体素的所述标识由该体素在所述第一定位图层所应用的空间坐标系下的位置信息的散列映射值表示。
  4. 如权利要求1-3中任一项所述的方法,其特征在于,所述修正所述第一位置和所述第一姿态包括:
    在所述第一位置和所述第一姿态的周围空间上针对位置和姿态进行多次空间抽样,以得到多个位置姿态组,其中,每个位置姿态组包括其中一次空间抽样得到的抽样位置和抽样姿态;
    计算所述多个位置姿态组的相似性得分,其中,任一位置姿态组的相似性得分表示与所述任一位置姿态组关联的第二定位图层和所述第一定位图层的相似程度,所述第二定位图层是利用第三激光点云数据以生成所述第一定位图层的方式生成的,以及,所述第三激光点云数据是利用基于所述任一位置姿态组包括的抽样位置和抽样姿态而生成的三维空间变换,将所述第一激光点云数据从与所述车辆关联的第一空间坐标系变换到与所述第一定位图层关联的第二空间坐标系而得到的;以及
    至少基于所述多个位置姿态组的所述相似性得分,确定所述修正位置和所述修正姿态。
  5. 如权利要求4所述的方法,其特征在于,所述确定所述修正位置和所述修正姿态包括:
    从所述多个位置姿态组中,选取相似性得分大于第一阈值的各个第一位置姿态组;以及
    将所述各个第一位置姿态组的相似性得分作为权重,对所述各个第一位置姿态组所包括的抽样位置和抽样姿态进行加权拟合,以得到所述修正位置和所述修正姿态。
  6. 如权利要求4所述的方法,其特征在于,所述计算所述多个位置姿态组的相似性得分包括:
    查找每个位置姿态组的第一类型的体素和第二类型的体素,其中,所述第一类型的体素在与该位置姿态组关联的所述第二定位图层中存储的所述权重值和在所述第一定位图层中存储的所述权重值相同,而所述第二类型的体素在与该位置姿态组关联的所述第二定位图层中存储的所述权重值和在所述第一定位图层中存储的所述权重值不同;
    计算每个位置姿态组的第一总权重值和第二总权重值,其中,所述第一总权重值等于在所述第一定位图层中存储的所述第一类型的体素的所述权重值之和,以及,所述第二总权重值等于在所述第一定位图层中存储的所述第二类型的体素的所述权重值之和;以及
    计算每个位置姿态组的所述第一总权重值与所述第二总权重值之差,作为该位置姿态组的相似性得分,以得到所述多个位置姿态组的相似性得分。
  7. 如权利要求1所述的方法,其特征在于,还包括:
    获取在第二位置处所述车辆的所述激光雷达的传感器数据;以及
    生成地图更新数据,其包括所述第二位置和第二姿态以及所获取的传感器数据,
    其中,所述第二位置和所述第二姿态是在当前时间点处所述车辆的位置和姿态,其是根据在所述第一时间点处所述车辆的所述修正位置和所述修正姿态以及从所述第一时间点到所述当前时间点期间所述车辆的运动而确定的。
  8. 一种定位图层生成的方法,其特征在于,包括:
    获取第二地理区域的激光点云数据;
    对所述激光点云数据进行体素化,以得到多个体素;
    计算所述多个体素的权重值,每一个体素的权重值指示该体素被地理环境对象占据的可能程度;以及
    存储所述多个体素中的至少一部分体素的标识和权重值,以得到所述第二地理区域的定位图层。
  9. 如权利要求8所述的方法,其特征在于,
    所述计算所述多个体素的权重值包括:计算所述多个体素中的每一个体素所包含的激光点的数量,作为该体素的权重值,以及
    所述至少一部分体素是所述多个体素中的所述权重值大于零的各个体素。
  10. 如权利要求8所述的方法,其特征在于,
    所述定位图层构造成以散列表的形式成对地存储所述至少一部分体素的所述标识和所述权重值,以及,每一个体素的所述标识由该体素在与所述定位图层关联的空间坐标系下的位置信息的散列映射值表示。
  11. 一种定位图层更新的方法,其特征在于,包括:
    利用地图更新数据来生成第三局部地理区域的第四激光点云数据,其中,所述地图更新数据包括车辆的第三位置和第三姿态以及在所述第三位置处所述车辆的激光雷达的第一传感器数据,所述第三局部地理区域是包含所述第三位置的局部地理区域,以及,所述第四激光点云数据是利用所述第一传感器数据形成的;
    对第五激光点云数据进行体素化,以得到多个第三体素,其中,所述第五激光点云数据是利用基于所述第三位置和所述第三姿态而生成的三维空间变换,将所述第四激光点云数据从与所述车辆关联的空间坐标系变换到与第三定位图层关联的空间坐标系而得到的,以及,其中所述第三定位图层是以前构造的所述第三局部地理区域的定位图层,其存储所述多个第三体素中的至少一部分体素的标识和权重值,每一个第三体素的权重值表示该体素被地理环境对象占据的可能程度;以及
    计算所述多个第三体素的权重值;以及
    利用所述多个第三体素的计算的权重值,更新在所述第三定位图层中存储的各体素的权重值。
  12. 如权利要求11所述的方法,其特征在于,
    所述第三定位图层存储所述多个第三体素中的所述权重值大于零的那些体素的所述标识和所述权重值,以及
    所述更新在所述第三定位图层中存储的各体素的权重值包括:
    从所述多个第三体素中,选取所计算的权重值大于零的那些体素;
    对于所选取的每一个体素,如果所述第三定位图层存储有该体素的权重值且所存储的权重值和所计算的权重值不相同,则利用该体素的所计算的权重值替换所述第三定位图层中存储的该体素的权重值;
    对于所选取的每一个体素,如果所述第三定位图层未存储有该体素的权重值,则将该体素的所述标识和所计算的权重值存储在所述第三定位图层中;以及
    如果所述第三定位图层存储有未在所选取的体素中出现的第四体素的标识和权重值,则从所述第三定位图层中删除所述第四体素的标识和权重值。
  13. 一种车辆定位的装置,其特征在于,包括:
    预测模块,用于至少利用来自卫星导航系统和惯性测量单元的数据,获取在第一时间点处车辆的第一位置和第一姿态;
    获取模块,用于利用所述车辆的激光雷达的传感器数据,获取包含所述第一位置的第一局部地理区域的第一激光点云数据;以及
    修正模块,用于利用所述第一激光点云数据和预先构建的所述第一局部地理区域的第一定位图层来修正所述第一位置和所述第一姿态,以得到在所述第一时间点处所述车辆的修正位置和修正姿态,
    其中,所述第一定位图层构造成存储多个体素的标识和权重值,所述多个体素包括对在构建所述第一定位图层时获得的所述第一局部地理区域的第二激光点云数据进行体素化得到的各个体素中的至少一部分,以及,所述体素的权重值表示该体素被地理环境对象占据的可能程度。
  14. 如权利要求13所述的装置,其特征在于,
    所述多个体素包括对所述第二激光点云数据进行体素化得到的各个体素中的所述权重值大于零的体素,以及
    所述多个体素中的每一个体素的所述权重值由该体素所包含的激光点的数量表示。
  15. 如权利要求13所述的装置,其特征在于,
    所述第一定位图层构造成以散列表的形式成对地存储所述多个体素的所述标识和所述权重值,以及,每一个体素的所述标识由该体素在所述第一定位图层所应用的空间坐标系下的位置信息的散列映射值表示。
  16. 如权利要求13-15中任一项所述的装置,其特征在于,所述修正模块包括:
    抽样模块,用于在所述第一位置和所述第一姿态的周围空间上针对位置和姿态进行多次空间抽样,以得到多个位置姿态组,其中,每个位置姿态组包括其中一次空间抽样得到的抽样位置和抽样姿态;
    第一计算模块,用于计算所述多个位置姿态组的相似性得分,其中,任一位置姿态组的相似性得分表示与所述任一位置姿态组关联的第二定位图层和所述第一定位图层的相似程度,所述第二定位图层是利用第三激光点云数据以生成所述第一定位图层的方式生成的,以及,所述第三激光点云数据是利用基于所述任一位置姿态组包括的抽样位置和抽样姿态生成的三维空间变换,将所述第一激光点云数据从与所述车辆关联的第一空间坐标系变换到与所述第一定位图层关联的第二空间坐标系而得到的;以及
    确定模块,用于至少基于所述多个位置姿态组的所述相似性得分,确定所述修正位置和所述修正姿态。
  17. 如权利要求16所述的装置,其特征在于,所述确定模块包括:
    选取模块,用于从所述多个位置姿态组中,选取其相似性得分大于第一阈值的各个第一位置姿态组;以及
    第二计算模块,用于将所述各个第一位置姿态组的相似性得分作为权重,对所述各个第一位置姿态组所包括的抽样位置和抽样姿态进行加权拟合,以得到所述修正位置和所述修正姿态。
  18. 如权利要求16所述的装置,其特征在于,所述第一计算模块包括:
    查找模块,用于查找每个位置姿态组的第一类型的体素和第二类型的体素,其中,所述第一类型的体素在与该位置姿态组关联的所述第二定位图层中存储的所述权重值和在所述第一定位图层中存储的所述权重值相同,而所述第二类型的体素在与该位置姿态组关联的所述第二定位图层中存储的所述权重值和在所述第一定位图层中存储的所述权重值不同;
    第四计算模块,用于计算每个位置姿态组的第一总权重值和第二总权重值,其中,所述第一总权重值等于在所述第一定位图层中存储的所述第一类型的体素的所述权重值之和,以及,所述第二总权重值等于在所述第一定位图层中存储的所述第二类型的体素的所述权重值之和;以及
    第五计算模块,用于计算每个位置姿态组的所述第一总权重值与所述第二总权重值之差,作为该位置姿态组的相似性得分,以得到所述多个位置姿态组的相似性得分。
  19. 如权利要求13所述的装置,其特征在于,还包括:
    获得模块,用于获取在第二位置处所述车辆的所述激光雷达的传感器数据;以及
    生成模块,用于生成地图更新数据,其包括所述第二位置和第二姿态以及所获取的传感器数据,
    其中,所述第二位置和所述第二姿态是在当前时间点处所述车辆的位置和姿态,其是根据在所述第一时间点处所述车辆的所述修正位置和所述修正姿态以及从所述第一时间点到所述当前时间点期间所述车辆的运动而确定的。
  20. 一种定位图层生成的装置,其特征在于,包括:
    获取模块,用于获取第二地理区域的激光点云数据;
    体素化模块,用于对所述激光点云数据进行体素化,以得到多个体素;
    计算模块,用于计算所述多个体素的权重值,每一个体素的权重值指示该体素被地理环境对象占据的可能程度;以及
    存储模块,用于存储所述多个体素中的至少一部分体素的标识和权重值,以得到所述第二地理区域的定位图层。
  21. 如权利要求20所述的装置,其特征在于,
    所述计算模块进一步用于计算所述多个体素中的每一个体素所包含的激光点的数量,作为该体素的权重值,以及
    所述至少一部分体素是所述多个体素中的所述权重值大于零的各个体素。
  22. 如权利要求20所述的装置,其特征在于,
    所述定位图层构造成以散列表的形式成对地存储所述至少一部分体素的所述标识和所述权重值,以及,每一个体素的所述标识由该体素在与所述定位图层关联的空间坐标系下的位置信息的散列映射值表示。
  23. 一种定位图层更新的装置,其特征在于,包括:
    生成模块,用于利用地图更新数据来生成第三局部地理区域的第四激光点云数据,其中,所述地图更新数据包括车辆的第三位置和第三姿态以及在所述第三位置处所述车辆的激光雷达的第一传感器数据,所述第三局部地理区域是包含所述第三位置的局部地理区域,以及,所述第四激光点云数据是利用所述第一传感器数据形成的;
    体素化模块,用于对第五激光点云数据进行体素化,以得到多个第三体素,其中,所述第五激光点云数据是利用基于所述第三位置和所述第三姿态而生成的三维空间变换,将所述第四激光点云数据从与所述车辆关联的空间坐标系变换到与第三定位图层关联的空间坐标系而得到的,以及,其中所述第三定位图层是以前构造的所述第三局部地理区域的定位图层,其存储所述多个第三体素中的至少一部分体素的标识和权重值,每一个体素的权重值表示该体素被地理环境对象占据的可能程度;
    计算模块,用于计算所述多个第三体素的权重值;以及
    更新模块,用于利用所述多个第三体素的计算的权重值,更新所述第三定位图层中存储的各体素的权重值。
  24. 如权利要求23所述的装置,其特征在于,
    所述第三定位图层存储所述多个第三体素中的所述权重值大于零的那些体素的所述标识和所述权重值,以及
    所述更新模块包括:
    选取模块,用于从所述多个第三体素中,选取所计算的权重值大于零的那些体素;
    替换模块,用于对于所选取的每一个体素,如果所述第三定位图层存储有该体素的权重值且所存储的权重值和所计算的权重值不相同,则利用该体素的所计算的权重值替换所述第三定位图层中存储的该体素的权重值;
    存储模块,用于对于所选取的每一个体素,如果所述第三定位图层未存储有该体素的权重值,则将该体素的所述标识和所计算的权重值存储在所述第三定位图层中;以及
    删除模块,用于如果所述第三定位图层存储有未在所选取的体素中出现的第四体素的标识和权重值,则从所述第三定位图层中删除所述第四体素的标识和权重值。
PCT/CN2020/085060 2020-04-16 2020-04-16 车辆定位的方法和装置、定位图层生成的方法和装置 WO2021207999A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/085060 WO2021207999A1 (zh) 2020-04-16 2020-04-16 车辆定位的方法和装置、定位图层生成的方法和装置
CN202080004104.3A CN112703368B (zh) 2020-04-16 2020-04-16 车辆定位的方法和装置、定位图层生成的方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/085060 WO2021207999A1 (zh) 2020-04-16 2020-04-16 车辆定位的方法和装置、定位图层生成的方法和装置

Publications (1)

Publication Number Publication Date
WO2021207999A1 true WO2021207999A1 (zh) 2021-10-21

Family

ID=75514810

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/085060 WO2021207999A1 (zh) 2020-04-16 2020-04-16 车辆定位的方法和装置、定位图层生成的方法和装置

Country Status (2)

Country Link
CN (1) CN112703368B (zh)
WO (1) WO2021207999A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114459471A (zh) * 2022-01-30 2022-05-10 中国第一汽车股份有限公司 定位信息确定方法、装置、电子设备及存储介质
US20220355805A1 (en) * 2021-05-04 2022-11-10 Hyundai Motor Company Vehicle position correction apparatus and method thereof

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11803977B2 (en) * 2021-12-13 2023-10-31 Zoox, Inc. LIDAR point cloud alignment validator in HD mapping
CN114220053B (zh) * 2021-12-15 2022-06-03 Beijing University of Civil Engineering and Architecture UAV-video vehicle retrieval method based on vehicle feature matching
CN115527034B (zh) * 2022-10-26 2023-08-01 Beijing LiangDao Intelligent Vehicle Technology Co., Ltd. Vehicle-side point cloud dynamic and static segmentation method, apparatus, and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107709928A (zh) * 2015-04-10 2018-02-16 European Atomic Energy Community, represented by the European Commission Method and apparatus for real-time mapping and localization
CN108089572A (zh) * 2016-11-23 2018-05-29 Baidu USA LLC Algorithm and infrastructure for robust and efficient vehicle localization
CN108885105A (zh) * 2016-03-15 2018-11-23 Solfice Research, Inc. Systems and methods for providing vehicle cognition
CN109297510A (zh) * 2018-09-27 2019-02-01 Baidu Online Network Technology (Beijing) Co., Ltd. Relative pose calibration method, apparatus, device, and medium
CN110388924A (zh) * 2018-04-18 2019-10-29 Faraday & Future Inc. System and method for radar-based vehicle localization in connection with automatic navigation
US20200025578A1 (en) * 2017-12-12 2020-01-23 Maser Consulting, P.A. Tunnel mapping system and methods
JP2020034451A (ja) * 2018-08-30 2020-03-05 Pioneer Corporation Data structure, storage medium, and storage device
CN110889808A (zh) * 2019-11-21 2020-03-17 Guangzhou WeRide Technology Co., Ltd. Positioning method, apparatus, device, and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9996944B2 (en) * 2016-07-06 2018-06-12 Qualcomm Incorporated Systems and methods for mapping an environment
CN106338736B (zh) * 2016-08-31 2019-01-25 Southeast University Lidar-based full-3D occupancy-voxel terrain modeling method
EP3570253B1 (en) * 2017-02-17 2021-03-17 SZ DJI Technology Co., Ltd. Method and device for reconstructing three-dimensional point cloud
EP3633655A4 (en) * 2017-05-31 2021-03-17 Pioneer Corporation CARD GENERATION DEVICE, ORDERING PROCESS, PROGRAM AND STORAGE MEDIA
US10706611B2 (en) * 2018-06-15 2020-07-07 Beijing Jingdong Shangke Information Technology Co., Ltd. Three-dimensional representation by multi-scale voxel hashing
CN109059906B (zh) * 2018-06-26 2020-09-29 Shanghai Westwell Information Technology Co., Ltd. Vehicle positioning method and apparatus, electronic device, and storage medium
CN109341707B (zh) * 2018-12-03 2022-04-08 Nankai University Three-dimensional map construction method for mobile robots in unknown environments
CN110378997B (zh) * 2019-06-04 2023-01-20 Guangdong University of Technology ORB-SLAM2-based dynamic scene mapping and localization method
CN110807782B (zh) * 2019-10-25 2021-08-20 Sun Yat-sen University Map representation system for a vision robot and construction method thereof
CN110989619B (zh) * 2019-12-23 2024-01-16 Apollo Intelligent Technology (Beijing) Co., Ltd. Method, apparatus, device, and storage medium for locating an object


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220355805A1 (en) * 2021-05-04 2022-11-10 Hyundai Motor Company Vehicle position correction apparatus and method thereof
US11821995B2 (en) * 2021-05-04 2023-11-21 Hyundai Motor Company Vehicle position correction apparatus and method thereof
CN114459471A (zh) * 2022-01-30 2022-05-10 China FAW Co., Ltd. Positioning information determination method and apparatus, electronic device, and storage medium
CN114459471B (zh) * 2022-01-30 2023-08-11 China FAW Co., Ltd. Positioning information determination method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN112703368A (zh) 2021-04-23
CN112703368B (zh) 2022-08-09

Similar Documents

Publication Publication Date Title
WO2021207999A1 (zh) Method and apparatus for vehicle positioning, and method and apparatus for positioning layer generation
CN110914777B (zh) High-definition map and route storage management system for autonomous vehicles
CN108387241B (zh) Method and system for updating the localization map of autonomous driving vehicles
CN108089572B (zh) Method and apparatus for vehicle localization
CN108399218B (zh) Autonomous vehicle localization based on Walsh kernel projection technique
Hashemi et al. A critical review of real-time map-matching algorithms: Current issues and future directions
CN111161353B (zh) Vehicle positioning method and apparatus, readable storage medium, and computer device
CN109435955B (zh) Performance evaluation method, apparatus, and device for an autonomous driving system, and storage medium
JP2019527832A (ja) System and method for precise localization and mapping
CN110345955A (zh) Perception and planning collaboration framework for autonomous driving
CN110386142A (zh) Pitch angle calibration method for autonomous driving vehicles
CN111402339B (zh) Real-time localization method, apparatus, and system, and storage medium
CN108268481B (zh) Cloud map update method and electronic device
US9874450B2 (en) Referencing closed area geometry
CN111551186A (zh) Real-time vehicle localization method and system, and vehicle
US20220291012A1 (en) Vehicle and method for generating map corresponding to three-dimensional space
US11668573B2 (en) Map selection for vehicle pose system
CN114930122B (zh) Method and processor circuit for updating a digital road map
CN111241224B (zh) Target distance estimation method and system, computer device, and storage medium
CN114061611A (zh) Target object positioning method and apparatus, storage medium, and computer program product
CN113822944A (zh) Extrinsic parameter calibration method and apparatus, electronic device, and storage medium
KR102408981B1 (ko) Method for generating an ND map and map update method using the same
CN112132951B (zh) Vision-based grid semantic map construction method
KR20200032776A (ko) System for information fusion among multiple sensor platforms
CN113503883A (zh) Method for collecting data for map construction, storage medium, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20930961

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20930961

Country of ref document: EP

Kind code of ref document: A1