WO2022087916A1 - Positioning method and apparatus, electronic device and storage medium - Google Patents

Positioning method and apparatus, electronic device and storage medium

Info

Publication number
WO2022087916A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
point
current
pose
attribute
Prior art date
Application number
PCT/CN2020/124519
Other languages
English (en)
French (fr)
Inventor
赵小文 (Zhao Xiaowen)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to CN202080004463.9A (patent CN112543859B)
Priority to CN202210726454.1A (patent CN115143962A)
Priority to PCT/CN2020/124519 (WO2022087916A1)
Priority to EP20959079.3A (EP4215874A4)
Publication of WO2022087916A1


Classifications

    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01C21/383 Creation or updating of map data characterised by the type of data: indoor data
    • G01S13/06 Systems determining position data of a target
    • G01S13/426 Scanning radar, e.g. 3D radar
    • G01S13/867 Combination of radar systems with cameras
    • G01S13/89 Radar or analogous systems specially adapted for mapping or imaging
    • G01S13/931 Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S17/06 Systems determining position data of a target
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/4802 Analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S7/4817 Constructional features, e.g. arrangements of optical elements, relating to scanning
    • G01S7/41 Analysis of echo signal for target characterisation using radar

Definitions

  • The present application relates to the technical field of intelligent driving, and in particular to a positioning method, an apparatus, an electronic device and a storage medium.
  • Positioning technology is a key technology in intelligent driving.
  • Current positioning technologies include satellite positioning, inertial navigation positioning, visual positioning and lidar positioning.
  • Visual positioning and lidar positioning are mostly used for indoor positioning.
  • For indoor positioning, an indoor point cloud map can be pre-built, and the point cloud map can include the point clouds of multiple key frames.
  • The point cloud around the vehicle can be collected in real time, and the collected point cloud is then matched against the point cloud of each key frame in the point cloud map according to the iterative closest point (ICP) algorithm.
  • The pose corresponding to the key frame with the highest matching degree is used as the pose of the vehicle.
  • Embodiments of the present application provide a positioning method, apparatus, electronic device, and storage medium, which can identify similar scenes and improve positioning accuracy.
  • In a first aspect, an embodiment of the present application provides a positioning method. The method can be applied to an object to be located, and the object to be located can be a vehicle, a robot, a terminal device, or the like.
  • The object to be located can be integrated with a laser emitting device, such as a lidar or a millimeter-wave radar, which emits signals around the object to be located to collect the current point cloud.
  • The current point cloud may include the point cloud of the object to be located and the point cloud of the environment where the object to be located is located.
  • The point cloud of the object to be located consists of the points obtained when the object itself reflects the lidar or millimeter-wave radar signals, and the point cloud of the environment consists of the points obtained when objects in the surrounding environment reflect those signals.
  • The attribute feature of the point cloud in the embodiments of the present application may include at least one of the following: the projection feature of the point cloud, the normal vector feature of the point cloud, and the curvature feature of the point cloud.
  • The processes of extracting the projection feature, the normal vector feature and the curvature feature from the current point cloud are described in turn below.
  • In a first method, the attribute feature includes the projection feature of the point cloud.
  • The process of extracting the projection feature from the current point cloud may be: project the current point cloud onto at least one of the following coordinate planes: the xoy plane, the yoz plane and the xoz plane, to obtain the current projected point cloud; obtain the distance between the current projected point cloud and o, and classify the distance into preset distance intervals; and acquire the attribute feature of the current point cloud according to the number of points of the current point cloud in each distance interval. Here, o is the center position of the object to be located, and x, y and z are the coordinate axes constructed with o as the coordinate origin.
  • The preset distance intervals are related to the farthest and shortest distances of the points in the point clouds collected by the object to be located.
  • For example, the interval formed by the farthest distance and the shortest distance of the collected points may be divided to obtain the preset distance intervals.
  • The numbers of points of the current point cloud in the distance intervals may be assembled into a one-dimensional vector, and this one-dimensional vector may be used as the projection feature of the point cloud.
  • The current point cloud may be projected onto the xoy plane, the yoz plane and the xoz plane respectively; compared with projecting onto only one or two coordinate planes, this yields more detailed projection features and improves positioning accuracy.
  • Specifically, the current point cloud may be projected onto the xoy plane, the yoz plane and the xoz plane respectively, to obtain three current projected point clouds corresponding to the current point cloud, and the distance between each current projected point cloud and o is obtained.
  • Each coordinate plane (the xoy plane, the yoz plane and the xoz plane) may correspond to its own preset distance intervals.
  • In a second method, the attribute feature includes the normal vector feature of the point cloud.
  • The process of extracting the normal vector feature from the current point cloud may be as follows. For convenience of description, the points in the current point cloud are all referred to as first points. For each first point in the current point cloud, the first scan line to which the first point belongs can be determined; the first adjacent points closest to the first point are obtained on the first scan line, on both sides of the first point; and the second adjacent point closest to the first point is obtained on a second scan line adjacent to the first scan line. The normal vector of the first point is obtained according to the first point, the first adjacent points and the second adjacent point, and the attribute feature of the current point cloud is obtained according to the normal vectors of the first points.
  • Specifically, first vectors from the first point to the first adjacent points may be obtained, and second vectors from the first point to the second adjacent point may be obtained; cross-product calculations are then performed pairwise between the first vectors and the second vectors, and the mean value of the cross-product results is used as the normal vector of the first point.
  • Then, the length of the projection of the normal vector of the first point onto at least one coordinate axis can be obtained and classified into preset length intervals, and the attribute feature of the current point cloud is acquired according to the number of first points in each length interval.
  • The preset length intervals are related to the lengths of the projections of the normal vectors onto the coordinate axes; they can be preset empirically, or determined from the minimum and maximum lengths of the projections onto the axes.
  • The numbers of first points in the length intervals may be assembled into a one-dimensional vector, and this one-dimensional vector may be used as the normal vector feature of the point cloud.
  • When the length of the projection of the normal vector of the first point onto a single coordinate axis is obtained, the projection onto the z coordinate axis can be chosen, because in the current point cloud the height value (corresponding to the z-axis) has more distinctive characteristics than the horizontal values (corresponding to the x-axis and y-axis). Obtaining the length of the projection of the normal vector onto the z coordinate axis is therefore more conducive to the matching accuracy of features and the accuracy of positioning.
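  • A minimal Python sketch of this second method is given below, assuming scan-line-organized lidar data; the function name, the 20-bin histogram, the use of index neighbours as the closest on-line points, and the sign orientation of the cross products are illustrative assumptions rather than details fixed by the text:

```python
import numpy as np

def normal_vector_feature(scan_lines, num_bins=20):
    # scan_lines: list of (N, 3) arrays, one per lidar scan line
    z_lengths = []
    for i, line in enumerate(scan_lines):
        for j in range(1, len(line) - 1):
            p = line[j]
            # first adjacent points: closest points on both sides of p
            # on its own scan line (taken as the index neighbours here)
            v1 = [line[j - 1] - p, line[j + 1] - p]
            # second adjacent points: closest points on the adjacent scan lines
            v2 = []
            for k in (i - 1, i + 1):
                if 0 <= k < len(scan_lines):
                    nbr = scan_lines[k]
                    v2.append(nbr[np.argmin(np.linalg.norm(nbr - p, axis=1))] - p)
            if not v2:
                continue
            # pairwise cross products; their mean is taken as the normal vector
            crosses = []
            for a in v1:
                for b in v2:
                    c = np.cross(a, b)
                    crosses.append(-c if c[2] < 0 else c)  # orient consistently
            n = np.mean(crosses, axis=0)
            n /= np.linalg.norm(n) + 1e-9
            z_lengths.append(abs(n[2]))        # projection length on the z axis
    # classify the lengths into preset length intervals and count
    hist, _ = np.histogram(z_lengths, bins=num_bins, range=(0.0, 1.0))
    return hist / max(hist.max(), 1)           # normalized one-dimensional vector
```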
  • In a third method, the attribute feature includes the curvature feature of the point cloud.
  • The process of extracting the curvature feature from the current point cloud may be: for each first point in the current point cloud, determine the first scan line to which the first point belongs; acquire the curvature of the first point according to the position of the first point and the positions of the points on both sides of the first point on the first scan line; classify the curvature of the first point into preset curvature intervals; and acquire the attribute feature of the current point cloud according to the number of first points in each curvature interval.
  • The preset curvature intervals are related to the curvatures of the first points; they can be preset empirically, or determined from the minimum and maximum curvatures of the first points.
  • The numbers of points of the current point cloud in the curvature intervals may be assembled into a one-dimensional vector, and this one-dimensional vector may be used as the curvature feature of the point cloud.
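  • A minimal sketch of this third method follows; the text does not give the curvature formula, so a LOAM-style smoothness residual over the k points on both sides of each point is assumed, as are the bin count and the value of k:

```python
import numpy as np

def curvature_feature(scan_lines, num_bins=20, k=5):
    curvatures = []
    for line in scan_lines:                    # (N, 3) array per scan line
        for j in range(k, len(line) - k):
            p = line[j]
            # offsets to the k points on both sides of p on the same scan line;
            # a flat neighbourhood gives a small residual, a corner a large one
            nbrs = np.vstack([line[j - k:j], line[j + 1:j + k + 1]])
            curvatures.append(np.linalg.norm((nbrs - p).sum(axis=0))
                              / (np.linalg.norm(p) + 1e-9))
    # curvature intervals determined from the minimum and maximum curvature
    hist, _ = np.histogram(curvatures, bins=num_bins,
                           range=(min(curvatures), max(curvatures)))
    return hist / max(hist.max(), 1)           # normalized one-dimensional vector
```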
  • All of the above first to third methods may be used to extract the projection feature, the normal vector feature and the curvature feature of the current point cloud together, so that the attribute features of the current point cloud are described more accurately and comprehensively, further improving the positioning accuracy of the object to be located.
  • On this basis, the iterative closest point (ICP) algorithm can be used to match the object to be located further, so that the pose is determined more accurately, thereby improving the positioning accuracy.
  • The point cloud map may include the attribute features of the point clouds of multiple key frames.
  • The similarity between the attribute feature of the current point cloud and the attribute feature of the point cloud of each key frame can be obtained; in descending order of similarity, the target key frames corresponding to a preset number of the highest similarities are obtained; the pose corresponding to each target key frame is used as a candidate pose; and the pose of the object to be located is determined from among the multiple candidate poses.
  • The pose of the object to be located can be determined from among the multiple candidate poses in combination with the ICP algorithm.
  • Specifically, the current point cloud can be transformed into the point cloud map according to the transformation relationship corresponding to each candidate pose, and the transformed point cloud corresponding to each candidate pose is obtained.
  • In the point cloud map, the target point cloud closest to the transformed point cloud of each candidate pose is obtained, and iterative closest point (ICP) matching is performed between the transformed point cloud of each candidate pose and the corresponding target point cloud, to obtain the transformed point cloud with the highest matching degree.
  • The pose of the object to be located is then obtained according to the pose corresponding to the transformed point cloud with the highest matching degree and the transformation relationship of the ICP matching.
  • Each candidate pose corresponds to a transformation relationship, which can transform a point cloud in the actual space into the point cloud map.
  • Obtaining the target point cloud closest to the transformed point cloud of each candidate pose means: for the transformed point cloud of each candidate pose, finding in the point cloud map the point closest to each point in the transformed point cloud; these closest points form the target point cloud of that candidate pose.
  • In this way, the candidate pose with the highest matching degree under the ICP algorithm can be used as the pose of the object to be located.
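  • A minimal sketch of this candidate-refinement step, assuming candidate poses are 4x4 transforms and substituting a simplified point-to-point ICP (Kabsch alignment); the iteration count, the 0.2 m correspondence threshold and the fitness definition are assumptions, not details from the text:

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_with_icp(current_pc, map_pc, candidate_poses, iters=20):
    tree = cKDTree(map_pc)                         # map points for closest-point queries
    best = (None, -np.inf)
    for T in candidate_poses:                      # one 4x4 transform per candidate pose
        src = (T[:3, :3] @ current_pc.T).T + T[:3, 3]
        R_acc, t_acc = np.eye(3), np.zeros(3)
        for _ in range(iters):
            _, idx = tree.query(src)               # target: closest map point per point
            tgt = map_pc[idx]
            # Kabsch: best rigid transform aligning src onto tgt
            mu_s, mu_t = src.mean(0), tgt.mean(0)
            H = (src - mu_s).T @ (tgt - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:               # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            src = (R @ src.T).T + t
            R_acc, t_acc = R @ R_acc, R @ t_acc + t
        # matching degree: fraction of points with a close correspondence
        d, _ = tree.query(src)
        fitness = np.mean(d < 0.2)                 # 0.2 m threshold is an assumption
        if fitness > best[1]:
            T_icp = np.eye(4)
            T_icp[:3, :3], T_icp[:3, 3] = R_acc, t_acc
            best = (T_icp @ T, fitness)            # candidate pose refined by ICP
    return best[0]                                 # pose of the object to be located
```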
  • The environment where the object to be located is located may change at any time, and the point cloud map with it.
  • For example, in an underground garage, fixed objects such as pillars, the ground and lights remain unchanged, but vehicles may enter and exit at any time, which affects the point cloud map and in turn the accuracy of matching based on the point cloud map. Therefore, in the embodiments of the present application, when constructing the point cloud map, the changed point clouds (such as the point clouds of vehicles in the underground garage) can be deleted, so that the point clouds in the constructed point cloud map are unchanging, which is beneficial to positioning accuracy. Correspondingly, before extracting the attribute feature of the current point cloud, the changed point clouds in the current point cloud can be deleted.
  • Before that, filtering may be performed on the current point cloud.
  • The filtering can simplify a complex point cloud while retaining its characteristics, so as to reduce the computation of the subsequent processing.
  • The current point cloud can then be clustered to obtain the point cloud cluster of each object in the environment.
  • Clustering groups the points belonging to the same object in the current point cloud into one category to form the point cloud cluster of that object, such as the point cloud cluster of the ground, the point cloud cluster of a vehicle, or the point cloud cluster of a pillar.
  • Among the point cloud clusters, a cluster whose maximum z value is less than a preset value can be deleted, that is, the point clouds belonging to changed objects can be deleted, and the attribute features of the current point cloud after deletion can then be extracted.
  • However, a point cloud cluster whose maximum z value is small may belong to a static object in the underground garage, such as the ground, an isolation pier or an anti-collision pier, and deleting the clusters of these objects would lose much information. Therefore, in the embodiments of the present application, the static objects in the underground garage can be designated as preset objects.
  • Among the point cloud clusters of the objects other than the preset objects, the clusters whose maximum z value is less than the preset value are deleted; for example, excluding the point cloud clusters of the ground, isolation piers and anti-collision piers, every cluster whose maximum z value is below the preset value is deleted. This preserves a detailed and complete point cloud of the underground garage, improving the accuracy of the point cloud map.
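  • The sketch below illustrates this deletion step, assuming DBSCAN clustering, a caller-supplied mask marking the preset static objects, and a 2.0 m height threshold; none of these specifics are fixed by the text:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def remove_changed_objects(points, preset_mask, z_thresh=2.0):
    # points: (N, 3) array; preset_mask: boolean mask of preset static
    # objects (ground, isolation piers, anti-collision piers)
    keep = points[preset_mask]                     # preset objects are always kept
    rest = points[~preset_mask]
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(rest)
    kept_clusters = []
    for lbl in np.unique(labels):
        if lbl == -1:
            continue                               # drop DBSCAN noise points
        cluster = rest[labels == lbl]
        # a low cluster (e.g. a parked car) is treated as a changed object
        if cluster[:, 2].max() >= z_thresh:
            kept_clusters.append(cluster)
    return np.vstack([keep] + kept_clusters) if kept_clusters else keep
```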
  • The above embodiments describe the process of positioning according to the attribute features of the point clouds of the key frames in the point cloud map and the attribute features of the current point cloud. The construction of the point cloud map is described below.
  • The execution body that constructs the point cloud map in the embodiments of the present application may be the same as or different from the execution body that executes the above positioning method.
  • The object to be located may also obtain the point cloud map in advance.
  • The following describes the process of constructing the point cloud map, taking the object to be located as the execution body as an example.
  • During map construction, the point cloud corresponding to each pose of the object to be located may be collected, and the attribute features of the point cloud corresponding to each pose may be extracted; the attribute features of the point cloud corresponding to each pose are stored in correspondence with that pose to construct the point cloud map.
  • The method of extracting the attribute features of the point cloud corresponding to each pose may be the same as the method of extracting the attribute feature of the current point cloud described above.
  • While moving through the environment, the object to be located may collect the point cloud of one key frame every preset distance and obtain its own pose at the moment that key frame is collected, thereby obtaining the point cloud corresponding to each pose; the pose and the point cloud of each key frame are then associated to obtain the point cloud map.
  • Before the attribute features are extracted, the point cloud corresponding to each pose may be filtered.
  • The filtering can simplify a complex point cloud while retaining its characteristics, so as to reduce the computation of the subsequent processing.
  • The point cloud corresponding to each pose can then be clustered to obtain the point cloud cluster of each object in the environment.
  • Among the point cloud clusters of the objects other than the preset objects, the clusters whose maximum z value is less than the preset value may be deleted, and the attribute features of the point clouds corresponding to the poses after deletion are extracted, so that a complete point cloud that does not contain changed objects is obtained and the accuracy of positioning is improved.
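  • A minimal sketch of this key-frame map construction, assuming 4x4 poses and a hypothetical extract_features function that performs the same attribute-feature extraction used online; the 2 m key-frame spacing is likewise an assumption:

```python
import numpy as np

def build_point_cloud_map(drive_log, extract_features, keyframe_dist=2.0):
    # drive_log: iterable of (pose, cloud) pairs; pose is a 4x4 transform,
    # cloud an (N, 3) array collected at that pose
    map_entries, last_xy = [], None
    for pose, cloud in drive_log:
        xy = pose[:2, 3]
        # keep one key frame every preset distance travelled
        if last_xy is None or np.linalg.norm(xy - last_xy) >= keyframe_dist:
            # store the attribute feature in correspondence with its pose
            map_entries.append({"pose": pose,
                                "feature": extract_features(cloud)})
            last_xy = xy
    return map_entries
```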
  • In a second aspect, an embodiment of the present application provides a positioning device.
  • The positioning device is the object to be located in the first aspect above, and the positioning device includes:
  • a first processing module, configured to collect the current point cloud, where the current point cloud includes the point cloud of the object to be located and the point cloud of the environment where the object to be located is located; and
  • a second processing module, configured to extract the attribute feature of the current point cloud, and to obtain the pose of the object to be located according to the attribute feature of the current point cloud and the attribute features of the point clouds in the point cloud map of the environment where the object to be located is located.
  • the attribute feature includes at least one of the following: projection feature of the point cloud, normal vector feature of the point cloud, and curvature feature of the point cloud.
  • The attribute feature includes the projection feature of the point cloud, and the second processing module is specifically configured to: project the current point cloud onto at least one of the following coordinate planes: the xoy plane, the yoz plane and the xoz plane, to obtain the current projected point cloud, where o is the center position of the object to be located and x, y and z are the coordinate axes constructed with o as the coordinate origin; obtain the distance between the current projected point cloud and o, and classify the distance into preset distance intervals; and acquire the attribute feature of the current point cloud according to the number of points of the current point cloud in each distance interval.
  • Further, the second processing module is specifically configured to: project the current point cloud onto the xoy plane, the yoz plane and the xoz plane respectively, to obtain three current projected point clouds corresponding to the current point cloud; obtain the distance between each current projected point cloud and o, and classify the distance into the preset distance intervals of the corresponding coordinate plane; and acquire the attribute feature of the current point cloud according to the number of points of the current point cloud in each distance interval of each coordinate plane.
  • the preset distance interval is related to the farthest distance and the shortest distance of the points in the point cloud collected by the object to be positioned.
  • The attribute feature includes the normal vector feature of the point cloud, and the points in the current point cloud are all referred to as first points. For each first point in the current point cloud, the second processing module is specifically configured to: determine the first scan line to which the first point belongs; obtain, on the first scan line and on both sides of the first point, the first adjacent points closest to the first point; acquire, on a second scan line adjacent to the first scan line, the second adjacent point closest to the first point; obtain the normal vector of the first point according to the first point, the first adjacent points and the second adjacent point; and obtain the attribute feature of the current point cloud according to the normal vector of the first point.
  • Specifically, the second processing module is configured to: obtain first vectors from the first point to the first adjacent points; obtain second vectors from the first point to the second adjacent point; perform pairwise cross-product calculations between the first vectors and the second vectors; and use the mean value of the cross-product results as the normal vector of the first point.
  • The second processing module is specifically configured to acquire the length of the projection of the normal vector of the first point onto at least one coordinate axis, classify the length into preset length intervals, and acquire the attribute feature of the current point cloud according to the number of first points in each length interval.
  • The second processing module is specifically configured to acquire the length of the projection of the normal vector of the first point onto the z coordinate axis.
  • The attribute feature includes the curvature feature of the point cloud, and the points in the current point cloud are all referred to as first points. For each first point in the current point cloud, the second processing module is specifically configured to: determine the first scan line to which the first point belongs; acquire the curvature of the first point according to the position of the first point and the positions of the points on both sides of the first point on the first scan line; classify the curvature of the first point into preset curvature intervals; and acquire the attribute feature of the current point cloud according to the number of first points in each curvature interval.
  • The point cloud map includes the attribute features of the point clouds of multiple key frames. The second processing module is specifically configured to: obtain the similarity between the attribute feature of the current point cloud and the attribute feature of the point cloud of each key frame; obtain, in descending order of similarity, the target key frames corresponding to a preset number of the highest similarities; use the pose corresponding to each target key frame as a candidate pose; and determine the pose of the object to be located from among the multiple candidate poses.
  • The second processing module is specifically configured to: transform the current point cloud into the point cloud map according to the transformation relationship corresponding to each candidate pose, to obtain the transformed point cloud corresponding to each candidate pose; obtain, in the point cloud map, the target point cloud closest to the transformed point cloud of each candidate pose; perform iterative closest point (ICP) matching between the transformed point cloud of each candidate pose and the corresponding target point cloud, to obtain the transformed point cloud with the highest matching degree; and obtain the pose of the object to be located according to the pose corresponding to the transformed point cloud with the highest matching degree and the transformation relationship of the ICP matching.
  • The second processing module is further configured to cluster the current point cloud to obtain the point cloud cluster of each object in the environment, and to delete, among the point cloud clusters, the clusters whose maximum z value is less than the preset value, so as to extract the attribute features of the current point cloud after deletion.
  • A point cloud map construction module is configured to collect the point cloud corresponding to each pose of the object to be located, extract the attribute features of the point cloud corresponding to each pose, and store the attribute features of the point cloud corresponding to each pose in correspondence with that pose, so as to construct the point cloud map.
  • The point cloud map construction module is configured to collect the point cloud of one key frame every preset distance and obtain the pose of the object to be located when the point cloud of that key frame is collected, thereby obtaining the point cloud corresponding to each pose.
  • The second processing module is further configured to cluster the point clouds corresponding to the various poses to obtain the point cloud clusters of the objects in the environment, and to delete, from the point cloud clusters of the objects other than the preset objects, the clusters whose maximum z value is less than the preset value, so as to extract the attribute features of the point clouds corresponding to the poses after deletion.
  • An embodiment of the present application further provides an electronic device, where the electronic device includes a processor, a memory and a point cloud collection device;
  • the point cloud collection device is configured to emit lidar or millimeter-wave radar signals to collect the current point cloud, where the current point cloud includes the point cloud of the object to be located and the point cloud of the environment where the object to be located is located; and
  • the memory is configured to store computer-executable program code, the program code including instructions; when the processor executes the instructions, the instructions cause the electronic device to perform the method provided by the first aspect or each possible implementation of the first aspect.
  • An embodiment of the present application further provides a positioning apparatus, including a unit, a module or a circuit for executing the method provided by the first aspect or each possible implementation of the first aspect.
  • The positioning apparatus may be the object to be located, or a module applied to the object to be located, for example, a chip applied to the object to be located.
  • Embodiments of the present application further provide a computer program product containing instructions which, when run on a computer, cause the computer to execute the method in the first aspect or its various possible implementations.
  • In summary, the embodiments of the present application provide a positioning method, apparatus, electronic device and storage medium. Because the attribute features of a point cloud are features of the point cloud itself, even if scenes are similar, the attributes or attribute features of the point clouds in those similar scenes differ. The positioning method of the embodiments of the present application therefore performs positioning by matching the attribute features of the current point cloud around the object to be located against the attribute features of the point clouds in the point cloud map, so that similar scenes can be distinguished and the accuracy of indoor positioning is improved.
  • FIG. 1 is a schematic flowchart of a current indoor positioning method;
  • FIG. 2 is a schematic flowchart of another current indoor positioning method;
  • FIG. 3 is a schematic flowchart of an embodiment of a positioning method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of another embodiment of a positioning method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of projecting a current point cloud to an xoy plane in an embodiment of the present application
  • FIG. 6 is a schematic flowchart of another embodiment of a positioning method provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a point on a scan line provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of another embodiment of a positioning method provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a first point and points on both sides of the first point provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of another embodiment of a positioning method provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of an embodiment of constructing a point cloud map according to an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of another embodiment of constructing a point cloud map according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of obtaining an angle between projection points according to an embodiment of the present application.
  • FIG. 14 is a schematic flowchart of another embodiment of a positioning method provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a positioning device provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a current indoor positioning process.
  • a camera and a lidar can be provided on the vehicle. While the vehicle is moving indoors, the camera can collect images around the vehicle, and the lidar can emit laser light around the vehicle to collect point clouds around the vehicle.
  • An indoor image feature map may be pre-stored in the vehicle, and the image feature map includes image features corresponding to various poses of the vehicle.
  • The vehicle can match the image features in the collected image with the image features corresponding to each pose in the map to determine an initial pose; for example, the pose corresponding to the map image features with the highest matching degree is used as the initial pose.
  • the vehicle can also use particle filtering to determine the vehicle's pose according to the point cloud around the initial pose.
  • the method shown in Figure 1 makes full use of the rich features in the image, and combined with the point cloud, it can accurately locate the vehicle in an indoor environment.
  • However, when the illumination is weak, the features in the image are easily lost or difficult to obtain.
  • In addition, the illumination differs from location to location, which also affects the extraction of image features and can cause positioning to fail.
  • The robustness of this indoor positioning method is therefore insufficient.
  • FIG. 2 is a schematic flowchart of another current indoor positioning method.
  • At present, an indoor positioning method is also provided in which positioning is carried out according to the point cloud around the vehicle.
  • The vehicle may store a point cloud map in advance, and the point cloud map may include the point clouds corresponding to various poses of the vehicle.
  • The point cloud around the vehicle can be collected, the collected point cloud is matched against the point cloud corresponding to each pose in the point cloud map using the iterative closest point (ICP) algorithm, and the pose with the highest matching degree is taken as the pose of the vehicle.
  • Although this method avoids the influence of external light, the scenes at different locations of an indoor environment (such as an underground garage) are very similar and their point clouds are very close, so the pose with the highest matching degree may belong to another, similar location; the accuracy of this indoor positioning method is therefore low.
  • In view of this, an embodiment of the present application provides a positioning method in which positioning is performed by matching the attribute features of point clouds. Because the attribute features of a point cloud are features of the point cloud itself, even if scenes are similar, the attributes or attribute features of the point clouds in those scenes differ, which improves the accuracy of indoor positioning.
  • The positioning method in the embodiments of the present application can be applied to positioning scenarios of electronic devices such as vehicles or robots, and is not limited to these: it also applies to scenarios of other electronic devices that can emit lasers and perform positioning through point clouds.
  • the execution body for executing the positioning method may be a positioning device, and the positioning device may be a vehicle, a processing chip in a vehicle, a robot, a chip in a robot, a terminal device, or the like.
  • the positioning device may be integrated with or independently provided with a transmitting device, and the transmitting device may emit laser light to the surroundings of the positioning device to collect point clouds around the positioning device.
  • the transmitting device in the embodiment of the present application may be, but not limited to, a laser radar, a millimeter-wave radar, or the like.
  • The following embodiments are described by taking a robot as the positioning device and a lidar as the transmitting device as an example.
  • FIG. 3 is a schematic flowchart of an embodiment of a positioning method provided by an embodiment of the present application. As shown in FIG. 3 , the positioning method provided in this embodiment of the present application may include:
  • S301: Collect the current point cloud, where the current point cloud includes the point cloud of the object to be located and the point cloud of the environment where the object to be located is located.
  • S302: Extract the attribute feature of the current point cloud.
  • S303: Obtain the pose of the object to be located according to the attribute feature of the current point cloud and the attribute features of the point clouds in the point cloud map of the environment where the object to be located is located.
  • the lidar can emit laser light around the object to be located, so that the object to be located can obtain information of laser reflection, and then collect the current point cloud.
  • the current point cloud may include the point cloud of the object to be located and the point cloud of the environment where the object to be located is located.
  • The point cloud of the object to be located consists of the points obtained when the object itself reflects the lidar or millimeter-wave radar signals, and the point cloud of the environment consists of the points obtained when objects in the surrounding environment reflect those signals.
  • the object to be located may be a vehicle, a robot, or a terminal device.
  • the point cloud obtained by the robot at the indoor position A may be the current point cloud.
  • the current point cloud may include multiple points (eg, laser points, hereinafter referred to as points), and information of each point.
  • the point is the point where the emitted laser is reflected back when encountering an obstacle, and the information of the point can be the spatial three-dimensional coordinates of the point, the laser reflection intensity, and the like.
  • the point cloud acquired by the robot in real time may be used as the current point cloud, so as to locate the robot according to the current point cloud at each moment.
  • the attribute of the current point cloud may be a set of attributes of multiple points included in the current point cloud.
  • the attributes of each point can be the spatial three-dimensional coordinates of the point, the laser reflection intensity, the distance between the point and the object to be located, the distance between the point and surrounding points, the normal vector of the line segment formed by the point and surrounding points, and the like. It should be understood that the attribute of each point is the attribute of the point itself, so the attribute of the current point cloud is also the attribute of the current point cloud itself.
  • the attribute feature of the current point cloud may be extracted according to the attribute of the current point cloud, and the attribute feature of the current point cloud is used to represent the attribute of the current point cloud.
  • the attribute feature of the current point cloud may be a one-dimensional vector corresponding to a normal vector of a line segment formed by a point in the current point cloud and surrounding points.
  • the attribute feature of the current point cloud may be a one-dimensional vector composed of the distance between the point and the object to be located.
  • a point cloud map of the environment where the object to be located may be pre-stored in the object to be located.
  • For example, if the object to be located is a vehicle and the environment where it is located is an underground parking lot, the point cloud map is the point cloud map of the underground parking lot.
  • the vehicle can obtain the point cloud map of the underground parking lot when entering the underground parking lot, for example, by accessing a server corresponding to the underground parking lot, and downloading the point cloud map of the underground parking lot in the server.
  • For another example, if the object to be located is a robot, such as an indoor security robot (used to move indoors to obtain indoor information), the environment where the robot is located may be a shopping mall, and the point cloud map may be the point cloud map of the shopping mall.
  • the staff can store the point cloud map of the mall in the robot in advance, or the robot can collect the point cloud of the mall by itself to generate the point cloud map of the mall.
  • the point cloud map may include attribute features of the point cloud corresponding to each pose.
  • the poses can be each pose of the device that generates the point cloud map in the environment where the object to be located is located.
  • the device for generating the point cloud map may be the object to be located or other devices. That is to say, the point cloud map may include multiple poses, and attribute features of the point cloud corresponding to each pose.
  • the object to be located may obtain the pose of the object to be located according to the attribute characteristics of the current point cloud and the attribute characteristics of the point cloud in the point cloud map of the environment where the object to be located is located.
  • For example, the object to be located can match the attribute feature of the current point cloud against the attribute feature of the point cloud corresponding to each pose in the point cloud map of the environment where it is located, and the pose corresponding to the attribute feature with the highest matching degree is used as the pose of the object to be located.
  • For example, the object to be located can obtain the Euclidean distance (or another distance that characterizes feature similarity) between the attribute feature of the current point cloud and the attribute feature of the point cloud corresponding to each pose in the point cloud map, and use the pose corresponding to the attribute feature with the smallest Euclidean distance as the pose of the object to be located.
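  • A minimal sketch of this nearest-feature lookup, reusing the map_entries layout from the map-construction sketch above (that dictionary layout is an illustrative assumption):

```python
import numpy as np

def match_pose(current_feature, map_entries):
    # Euclidean distance between the current feature vector and the feature
    # stored for each pose in the map; the smallest distance wins
    dists = [np.linalg.norm(current_feature - e["feature"])
             for e in map_entries]
    return map_entries[int(np.argmin(dists))]["pose"]
```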
  • In summary, the positioning method provided by the embodiments of the present application includes: collecting the current point cloud, where the current point cloud may include the point cloud of the object to be located and the point cloud of the environment where the object to be located is located; extracting the attribute feature of the current point cloud; and obtaining the pose of the object to be located according to the attribute feature of the current point cloud and the attribute features of the point clouds in the point cloud map of the environment. Because the attribute features of a point cloud are features of the point cloud itself, even if scenes are similar, the attributes or attribute features of the point clouds in those similar scenes differ; positioning by matching the attribute features of the current point cloud around the object to be located against the attribute features of the point clouds in the point cloud map can therefore improve the accuracy of indoor positioning.
  • the attribute feature of the point cloud may include at least one of the following: projection feature of the point cloud, normal vector feature of the point cloud, and curvature feature of the point cloud.
  • the process of extracting the projection feature, the normal vector feature, and the curvature feature from the current point cloud will be described in sequence below.
  • FIG. 4 is a schematic flowchart of another embodiment of the positioning method provided by the embodiments of the present application. As shown in FIG. 4, the above S302 can be replaced with:
  • S401: Project the current point cloud onto at least one of the following coordinate planes: the xoy plane, the yoz plane and the xoz plane, to obtain the current projected point cloud.
  • S402: Obtain the distance between the current projected point cloud and o, and classify the distance into a preset distance interval.
  • S403: Acquire the attribute features of the current point cloud according to the number of points in the current point cloud in each distance interval.
  • the current point cloud may be projected onto at least one of the following coordinate planes: xoy plane, yoz plane, and xoz plane, to obtain the current projected point cloud.
  • projecting the current point cloud may include projecting each point included in the current point cloud to obtain a projection point corresponding to each point
  • the current projection point cloud may include a projection point corresponding to each point.
  • o is the center position of the object to be positioned
  • x, y, and z are respectively coordinate axes constructed with o as the coordinate origin.
  • the center position of the object to be positioned may be the position where the laser radar is located, specifically the position where the laser radar emits laser light.
  • the coordinate plane may take the center position of the object to be positioned as the coordinate origin, set mutually perpendicular x-axis and y-axis on the plane parallel to the horizontal plane, and set the z-axis passing the coordinate origin on the vertical plane perpendicular to the horizontal plane. It should be understood that the coordinate plane may also be set in other manners in this embodiment of the present application.
  • FIG. 5 is a schematic diagram of projecting the current point cloud to the xoy plane in the embodiment of the present application.
  • the following is an example of projecting the current point cloud to the xoy plane.
  • As shown in FIG. 5, the current projected point cloud can be obtained on the xoy plane.
  • The xoy plane contains the projected points of the multiple points in the current point cloud; the projected points are represented by black circles, and the ellipses indicate omitted distance intervals.
  • the distance between the current projected point cloud and o can be obtained, and the distance can be classified into a preset distance interval.
  • the distance between each point in the current projected point cloud and o can be obtained, that is, the distance between each point and the coordinate origin.
  • the distance is divided into preset distance intervals.
  • the preset distance interval is predefined, and the preset distance interval is related to the farthest distance and the shortest distance of the points in the point cloud collected by the object to be positioned.
  • For example, if the farthest distance of the points in the point clouds collected by the object to be located is 80 m and the shortest distance is 0 m, the preset distance range may be 0-80 m, and this range may be divided into a preset number of distance intervals. If the preset number is 20, then 0-4 m can be one distance interval, 4-8 m another, and so on, with 76-80 m as the last, yielding 20 distance intervals. It should be understood that, when dividing the distance intervals, the distance corresponding to each interval can differ.
  • For example, one distance interval can be 0-4 m, corresponding to a distance of 4 m, while another can be 4-6 m, corresponding to a distance of 2 m; the embodiments of the present application do not limit the manner of dividing the distance intervals, as long as the number of intervals satisfies the preset number.
  • Here, r represents the distance corresponding to a distance interval.
  • The number of points of the current point cloud in each distance interval may then be acquired, so as to acquire the attribute feature (projection feature) of the current point cloud according to these numbers.
  • For example, if the number of points falling in the interval 0-4 m is 6, the number falling in 4-8 m is 6, ..., and the number falling in 76-80 m is 2, then 20 counts are obtained, and the 20 counts can be arranged in interval order to obtain a one-dimensional vector.
  • the vector may be (6, 6..., 2), and in this embodiment of the present application, a one-dimensional vector may be used as the attribute feature of the current point cloud.
  • the embodiment of the present application may further normalize the number of points in the current point cloud in each distance interval, such as reducing the same multiple at the same time, so that the numbers are all in the range of 0-1 In the range.
  • the number of points in the current point cloud in each distance interval can be divided by the maximum number to obtain a normalized result, and then a one-dimensional vector can be generated according to the normalized result.
  • if the maximum among the numbers of points of the current point cloud in the distance intervals is 6, the number of points in each distance interval can be divided by 6 to obtain a one-dimensional vector (1, 1..., 1/3).
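  • As an illustration of the above S401-S403, the following is a minimal sketch in Python, assuming the current point cloud is an (N, 3) numpy array in the sensor coordinate system; the function name and the 20-interval/80m defaults are illustrative assumptions, not values fixed by the text:

    import numpy as np

    def projection_feature(points, num_bins=20, max_range=80.0):
        # S401: projecting onto the xoy plane amounts to dropping the z coordinate.
        xy = points[:, :2]
        # S402: distance between each projected point and the coordinate origin o.
        dist = np.linalg.norm(xy, axis=1)
        # Classify the distances into num_bins equal preset distance intervals.
        counts, _ = np.histogram(dist, bins=np.linspace(0.0, max_range, num_bins + 1))
        # S403: normalize so that all entries lie in the range 0-1 (optional).
        if counts.max() > 0:
            counts = counts / counts.max()
        return counts  # one-dimensional vector used as the projection feature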
  • the current point cloud can also be projected onto the xoy plane, the yoz plane and the xoz plane respectively; compared with projecting the current point cloud onto the xoy plane alone, this yields more detailed projection features, so more accurate results can be obtained when matching with the attribute features in the point cloud map.
  • the above S401 can be replaced with S401': project the current point cloud to the xoy plane, the yoz plane and the xoz plane respectively, and obtain three current projected point clouds corresponding to the current point cloud.
  • the above S402 may be replaced with S402': obtaining the distance between each current projected point cloud of the current point cloud and o, and classifying the distance into a preset distance interval of the corresponding coordinate plane.
  • the above S403 can be replaced with S403': according to the number of points in the current point cloud in each distance interval of each coordinate plane, the attribute feature of the current point cloud is obtained.
  • the distance between each current projected point cloud and o can be obtained in the same manner as in the above S402, and the distance between each current projected point cloud and o can be classified into the preset distance intervals of the corresponding coordinate plane.
  • obtaining the distance between each current projection point cloud and o may be obtaining the distance between each point in each current projection point cloud and o, and then obtaining the distance between each point and o in the three current projection point clouds.
  • each coordinate plane (xoy plane, yoz plane, and xoz plane) in the embodiment of the present application corresponds to a preset distance interval respectively.
  • for example, the distance intervals of the xoy plane are: 0-4m, 4m-8m, ..., 76m-80m; the distance intervals of the yoz plane are: 0-4m, 4m-6m, ..., 76m-80m; and the distance intervals of the xoz plane are: 0-1m, 1m-2m, ..., 7m-8m.
  • the preset number of distance intervals corresponding to each coordinate plane may be different or the same.
  • the number of points of the current point cloud in each distance interval of each coordinate plane is obtained, and the attribute feature of the current point cloud is then obtained according to these numbers.
  • the number of points in the current point cloud in each distance interval of each coordinate plane can be in the order of the coordinate plane (such as from the xoy plane to the yoz plane, and then to the xoz plane, which can be agreed in advance), and each The order of the distance interval of the coordinate plane, a one-dimensional vector is obtained, and the one-dimensional vector is used as the attribute feature of the current point cloud.
  • the preset number of distance intervals corresponding to each coordinate plane is 20
  • the number of points falling into the distance intervals in the xoy plane is 6, 6..., 2
  • the number of points falling into the distance intervals in the yoz plane is 1, 2..., 3
  • the number of points falling into the distance intervals in the xoz plane is 2, 4..., 8; then the one-dimensional vector can be (6, 6..., 2, 1, 2..., 3, 2, 4..., 8).
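  • A sketch of the three-plane variant S401'-S403' follows; the per-plane maximum ranges are assumptions mirroring the interval examples above, and the plane order (xoy, yoz, xoz) follows the pre-agreed order mentioned earlier:

    import numpy as np

    def three_plane_feature(points, num_bins=20, max_ranges=(80.0, 80.0, 8.0)):
        planes = [(0, 1), (1, 2), (0, 2)]  # xoy, yoz, xoz coordinate index pairs
        feats = []
        for (i, j), r in zip(planes, max_ranges):
            d = np.linalg.norm(points[:, [i, j]], axis=1)
            counts, _ = np.histogram(d, bins=np.linspace(0.0, r, num_bins + 1))
            feats.append(counts)
        # Concatenate in the agreed plane order, e.g. a 60-dimensional vector.
        return np.concatenate(feats)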
  • FIG. 6 is a schematic flowchart of another embodiment of the positioning method provided by the embodiment of the present application. As shown in Figure 6, the above S302 can be replaced with:
  • S601 Determine the first scan line to which the first point belongs, where the points in the current point cloud are all first points.
  • S602 On the first scan line, acquire, on both sides of the first point, the first adjacent points closest to the first point.
  • S603 Acquire a second adjacent point closest to the first point on a second scan line, where the second scan line is adjacent to the first scan line.
  • S604 Obtain the normal vector of the first point according to the first point, the first adjacent points and the second adjacent point.
  • S605 Obtain the attribute feature of the current point cloud according to the normal vector of the first point.
  • the points included in the current point cloud are all referred to as the first point, that is, the steps in S601-S604 are all performed on the first point in the embodiment of the present application.
  • when the lidar emits laser light, it can emit multiple laser scan lines at the same time, and each laser scan line can reach various objects in the room and be reflected.
  • a 16-line lidar can emit 16 laser scan lines simultaneously.
  • when the object to be located acquires the current point cloud, it can determine which scan line each point in the current point cloud belongs to. Accordingly, in the embodiment of the present application, the first scan line to which the first point belongs can be determined.
  • FIG. 7 is a schematic diagram of points on a scan line according to an embodiment of the present application. As shown in FIG. 7, for the first point a on the first scan line, the first adjacent points closest to the first point on the two sides of point a on the scan line are point b and point c, respectively. It should be understood that if the first point a has only one adjacent point on the first scan line, that one point may be used as the first adjacent point of point a.
  • a second adjacent point closest to the first point may also be acquired on a second scan line adjacent to the first scan line.
  • the second adjacent points on the second scan line are point d and point e, respectively. It should be understood that when the first scan line has only one adjacent second scan line, a second adjacent point can be acquired on the second scan line.
  • the normal vector of the first point may be obtained according to the first point, the first adjacent point, and the second adjacent point.
  • a first vector from the first point to each first adjacent point may be obtained, and a second vector from the first point to each second adjacent point may be obtained, and the normal vector of the first point is then obtained according to the first vectors and the second vectors.
  • the first vectors from point a to point b and point c are vector v1 and vector v2 respectively
  • the second vectors from point a to point d and point e are vector v3 and vector v4 respectively
  • any vector perpendicular to the vector v1, the vector v2, the vector v3, and the vector v4 is used as the normal vector of the point a.
  • pairwise cross products may be calculated among the first vectors and the second vectors, and the cross product results approximate vectors that are perpendicular to the vector v1, the vector v2, the vector v3 and the vector v4.
  • the mean value of the results of the cross product calculation may be used as the normal vector of the first point.
  • the cross product calculation is performed on the vector v1 and the vector v2, the cross product calculation is performed on the vector v1 and the vector v3..., the cross product calculation is performed on the vector v3 and the vector v4, and then multiple cross product results are obtained.
  • in an example of the present application, the mean value of the results of the multiple cross product calculations can be used as the normal vector of the first point.
  • the normal vector n_i of the first point can be calculated by the following formula 1:
  • n_i = (1/m) · Σ_{j=1..m} ( v_j × v_{(j mod 4)+1} )    (formula 1)
  • where i is any first point in the current point cloud;
  • m is the total number of first vectors and second vectors corresponding to the first point;
  • v_j is a first vector or a second vector;
  • v_{(j mod 4)+1} is the vector with which v_j is cross-multiplied.
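  • The following sketch implements one reading of formula 1, assuming the first and second adjacent points have already been found on the scan lines; the cyclic pairing of the vectors is taken from the (j mod 4)+1 index above:

    import numpy as np

    def point_normal(p, adjacent_points):
        # adjacent_points: list such as [b, c, d, e], each a length-3 array.
        v = [q - p for q in adjacent_points]  # first and second vectors
        m = len(v)
        # Pairwise (cyclic) cross products, then their mean as the normal vector.
        crosses = [np.cross(v[j], v[(j + 1) % m]) for j in range(m)]
        return np.mean(crosses, axis=0)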
  • point a may have no first vector, or only one first vector, while having second vectors; in such cases the normal vector of the first point a can still be obtained in the same manner as above.
  • the lidar in this embodiment of the present application may be a 16-line lidar, which may emit 16 laser beams simultaneously (for example, the emission points of the 16 laser beams may be the same point, the 16 laser beams form a fan-shaped plane, and the angles between adjacent laser beams may be the same).
  • the normal vectors of the first points on the two laser beams at the edge positions among the 16 laser beams may not be calculated; for example, if the 16 laser beams are arranged sequentially from top to bottom, the normal vectors of the first points on the uppermost and lowermost beams may not be calculated.
  • the attribute feature of the current point cloud may be acquired according to the normal vector of the first point.
  • the set of normal vectors of the first point may be used as the normal vector feature of the current point cloud.
  • the embodiment of the present application may acquire the length of the projection of the normal vector of the first point on at least one coordinate axis, and classify the length into a preset length interval.
  • the length of the projection of the normal vector of the first point on the x-axis is taken as an example.
  • the length of the projection of the normal vector of the first point on the x-axis can be obtained.
  • the length is equal to the value of the component of the normal vector of the first point on the x-axis.
  • if the length of the projection of the normal vector of the first point on one coordinate axis is obtained in the embodiment of the present application, the length of the projection on the z coordinate axis can be obtained, because in the current point cloud the height values of the points (corresponding to the z-axis) have more distinctive length characteristics than the length values on the horizontal plane (corresponding to the x-axis and y-axis). Therefore, when obtaining the length of the projection of the normal vector of the first point on one coordinate axis, obtaining the projection length on the z coordinate axis is more conducive to feature matching accuracy and positioning accuracy.
  • after the length of the projection of the normal vector of the first point on at least one coordinate axis is obtained, the length may be classified into preset length intervals. It should be understood that different coordinate axes may correspond to different preset length intervals (similar to the above preset distance intervals). The number of first points in each length interval corresponding to at least one coordinate axis can thus be obtained. Similar to obtaining the projection feature of the current point cloud above, in this embodiment of the present application the attribute feature (normal vector feature) of the current point cloud may also be obtained according to the number of first points in each length interval.
  • the preset length interval in the embodiment of the present application may be an empirical value, wherein the preset length interval is related to the length of the projection of the normal vector of the first point to the coordinate axis.
  • the preset length intervals may also be determined according to the lengths of the projections of the normal vectors of the first points onto the coordinate axis. Exemplarily, if the maximum projection length of the first points onto the coordinate axis is 10m and the minimum length is 0m, then 0-10m can be divided into preset length intervals. If 0-10m is divided into 20 length intervals of equal length, the length of each preset length interval is 0.5m.
  • for example, if the numbers of first points in the length intervals of the x-axis are 3, 4..., 8, the numbers of first points in the length intervals of the y-axis are 1, 2..., 4, and the numbers of first points in the length intervals of the z-axis are 3, 4..., 6, then the attribute feature of the current point cloud can be the one-dimensional vector (3, 4..., 8, 1, 2..., 4, 3, 4..., 6) formed by the numbers of first points in the length intervals of each coordinate axis.
  • if only the z-axis is used, the attribute feature of the current point cloud may be (3, 4..., 6). It should be understood that in the embodiment of the present application, after the number of first points in each length interval is obtained, normalization can also be performed to obtain the attribute feature of the current point cloud; refer to the above description of normalizing the numbers of points.
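  • A sketch of turning the per-point normal vectors into the normal vector feature; unit-length normals are assumed here so that the projection lengths on the z-axis fall in [0, 1], which is an assumption rather than something fixed by the text:

    import numpy as np

    def normal_vector_feature(normals, num_bins=20, max_len=1.0):
        # Projection length of each normal vector on the z coordinate axis.
        z_len = np.abs(normals[:, 2])
        counts, _ = np.histogram(z_len, bins=np.linspace(0.0, max_len, num_bins + 1))
        # Normalize as described above so all entries lie in 0-1.
        return counts / counts.max() if counts.max() > 0 else counts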
  • FIG. 8 is a schematic flowchart of another embodiment of the positioning method provided by the embodiment of the present application. As shown in Figure 8, the above S302 can be replaced with:
  • S801 Determine the first scan line to which the first point belongs, where the points in the current point cloud are all first points.
  • S802 Acquire the curvature of the first point according to the position of the first point and the positions of the points on both sides of the first point on the first scan line.
  • S803 Classify the curvature of the first point into a preset curvature interval.
  • S804 Obtain the attribute feature of the current point cloud according to the number of first points in each curvature interval.
  • the points on both sides of the first point may be a preset number of points on each side of the first point. If the preset number is 3, the points on both sides of the first point may be the 3 points on each side, giving, together with the first point, a total of 7 points.
  • FIG. 9 is a schematic diagram of a first point and points on both sides of the first point according to an embodiment of the present application. As shown in FIG. 9 , on the first scan line, the first point is point a, and the points on both sides of point a are point b, point c, point f, point g, point h, and point k, respectively.
  • the curvature of the first point may be acquired according to the position of the first point and the positions of points on both sides of the first point on the first scan line.
  • the position of the first point may be the spatial three-dimensional coordinates of the first point
  • the positions of the points on both sides of the first point may be the spatial three-dimensional coordinates of the points on both sides of the first point.
  • the curvature c_i of the first point can be calculated according to the following formula 2:
  • c_i = || Σ_j ( p_i − p_j ) || / ( 2S · ||p_i|| )    (formula 2)
  • where p_i is the position of the first point;
  • p_j is the position of any one of the points on both sides of the first point, and the sum runs over those points;
  • S is the preset number of points on each side of the first point.
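  • A sketch of formula 2 for a single first point, assuming the points of the first scan line are ordered and that the first point has at least S neighbors on each side:

    import numpy as np

    def point_curvature(line_points, idx, s=3):
        # line_points: (N, 3) array of consecutive points on the first scan line.
        p_i = line_points[idx]
        window = np.vstack([line_points[idx - s:idx], line_points[idx + 1:idx + s + 1]])
        # Sum of (p_i - p_j) over the 2*s points on both sides of the first point.
        diff_sum = np.sum(p_i - window, axis=0)
        return np.linalg.norm(diff_sum) / (2 * s * np.linalg.norm(p_i))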
  • the curvature of the first point may be classified into a preset curvature interval.
  • the preset curvature interval is also predefined.
  • the number of the preset curvature intervals may be the same as the number of the above distance intervals and length intervals, for example, 20.
  • the number of first points in each curvature interval can be obtained, and the attribute feature (curvature feature) of the current point cloud can then be obtained according to the number of first points in each curvature interval.
  • the process of acquiring the attribute feature of the current point cloud is similar to the process of acquiring the projection feature and the normal vector feature, and reference may be made to the relevant descriptions in the above embodiments.
  • the curvature feature can be a vector (1, 1..., 1).
  • normalization processing can also be performed to obtain the attribute feature of the current point cloud; refer to the above description of normalizing the numbers of points.
  • the attribute feature of the point cloud in this embodiment of the present application may include at least one of a projection feature, a normal vector feature, and a curvature feature.
  • projection features, normal vector features and curvature features can be extracted from the current point cloud. The more accurate the attribute features, the more accurate the positioning of the object to be located.
  • the attribute features of the point cloud can include projection features, normal vector features and curvature features
  • when the above S302 is replaced by S401-S403, S601-S605 and S801-S804 together, the attribute feature of the current point cloud can correspondingly be a concatenation of the one-dimensional vectors corresponding to the projection feature, the normal vector feature and the curvature feature; for example, the vector corresponding to the attribute feature of the current point cloud can be (6, 6..., 2, 1, 2..., 3, 2, 4..., 8, 3, 4..., 6, 1, 1..., 1)
  • in this example the attribute feature of the current point cloud is a one-dimensional vector with a length of 100, where the length of the vector corresponding to the projection feature is 60, and the lengths of the vectors corresponding to the normal vector feature and the curvature feature are each 20.
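  • A hypothetical composition of the 100-dimensional attribute feature from the sketches above (the curvature binning is shown inline; all names are illustrative and assume the arrays points, normals and curvatures have been computed as in the earlier sketches):

    import numpy as np

    proj = three_plane_feature(points)           # projection feature, 60 values
    norm = normal_vector_feature(normals)        # normal vector feature, 20 values
    curv, _ = np.histogram(curvatures, bins=20)  # curvature feature, 20 values
    attribute_feature = np.concatenate([proj, norm, curv.astype(float)])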
  • the attribute features of the current point cloud may include at least one of the following: the projection feature of the point cloud, the normal vector feature of the point cloud, and the curvature feature of the point cloud; any of these attribute features can be used to effectively distinguish similar scenes, and accurate positioning can therefore be achieved.
  • the attribute features of the current point cloud include projection features, normal vector features and curvature features. The attribute features of the current point cloud are described more accurately and comprehensively, so the positioning accuracy of the object to be positioned can be further improved.
  • FIG. 10 is a schematic flowchart of another embodiment of the positioning method provided by the embodiment of the present application. As shown in FIG. 10 , the positioning method in this embodiment of the present application may include:
  • S1001 Collect a current point cloud, where the current point cloud includes a point cloud of an object to be positioned and a point cloud of an environment where the object to be positioned is located.
  • the pose corresponding to each target key frame is used as a candidate pose.
  • the point cloud map may include attribute features of point clouds of multiple key frames.
  • the key frame may be the frame when the point cloud is collected, for example, the point cloud of one frame collected is the point cloud of the key frame. That is to say, the point cloud map includes attribute features of multiple frames of point clouds. It should be understood that the point clouds of different frames are collected when the object to be positioned is in different poses.
  • for the process of acquiring a point cloud map in this embodiment of the present application, reference may be made to the following related descriptions of FIGS. 11 and 12.
  • the similarity between the attribute features of the current point cloud and the attribute features of the point cloud of each key frame can be obtained, that is, the similarity (such as the Euclidean distance) between the one-dimensional vector corresponding to the current point cloud and the one-dimensional vector corresponding to the point cloud of each key frame.
  • the embodiment of the present application may obtain the target key frames corresponding to the top preset number of similarities in descending order of similarity. If the preset number is 10, the key frames in the point cloud map with the top 10 similarities can be used as target key frames. That is to say, in the embodiment of the present application, several candidate key frames are first determined by matching the attribute features of the point clouds.
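  • A sketch of the candidate key frame selection, assuming the map stores one feature vector per key frame as the rows of a (K, D) numpy array; the function name and top_k default are illustrative:

    import numpy as np

    def candidate_keyframes(query_feature, keyframe_features, top_k=10):
        # Euclidean distance between the current feature and every key frame feature.
        dists = np.linalg.norm(keyframe_features - query_feature, axis=1)
        # Smallest distances = highest similarity; keep the top_k target key frames.
        return np.argsort(dists)[:top_k]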
  • the point clouds of the key frames in the point cloud map are collected at different poses, so each key-frame point cloud in the point cloud map corresponds to a pose; correspondingly, each target key frame also corresponds to a pose.
  • the pose corresponding to each target key frame may be used as a candidate pose.
  • the embodiment of the present application may determine the pose of the object to be positioned from among multiple candidate poses.
  • the average value of the multiple candidate poses may be used as the pose of the object to be positioned.
  • the embodiment of the present application may also combine the ICP algorithm to determine the pose of the object to be positioned from among the multiple candidate poses, as follows.
  • each candidate pose corresponds to a conversion relationship, where R is a rotation matrix and T is a translation matrix; the point cloud in the actual space can be converted into the point cloud map through this conversion relationship. That is to say, the conversion relationship can be understood as the conversion relationship between the actual space coordinate system and the coordinate system of the point cloud map.
  • the current point cloud can be converted into the point cloud map according to the conversion relationship corresponding to each candidate pose, and the converted point cloud corresponding to each candidate pose can be obtained.
  • the target point cloud with the closest distance to the converted point cloud corresponding to each candidate pose can be obtained in the point cloud map.
  • for example, the transformed point cloud corresponding to candidate pose 1 includes multiple points, and the point closest to each of these points can be obtained in the point cloud map; the point cloud composed of these closest points is the target point cloud. Each point in the target point cloud is thus closest to the corresponding point in the transformed point cloud of the candidate pose.
  • the object to be located can perform ICP matching between the transformation point cloud corresponding to each candidate pose and the corresponding target point cloud to obtain the transformation point cloud with the highest matching degree.
  • the iterative closest point ICP matching between the transformation point cloud corresponding to each candidate pose and the corresponding target point cloud can obtain a matching degree.
  • the transformation point cloud with the highest matching degree can be determined.
  • the object to be positioned can obtain the pose of the object to be positioned according to the pose corresponding to the converted point cloud with the highest matching degree and the conversion relationship matched by the ICP.
  • the pose corresponding to the transformation point cloud with the highest matching degree may be the pose corresponding to the transformation relationship corresponding to the transformation point cloud.
  • the result of the ICP matching may include not only the above-mentioned matching degree, but also the matching transformation relationship (ie, the transformation relationship obtained between each point and its closest point). Therefore, in the embodiment of the present application, the pose of the object to be positioned can be obtained according to the pose corresponding to the transformation point cloud with the highest matching degree and the transformation relationship of ICP matching. It should be understood that the ICP matching algorithm is not described repeatedly in the embodiments of the present application, and reference may be made to the relevant description in the current technical solution.
  • if the pose corresponding to the converted point cloud with the highest matching degree is P_i and the conversion relationship of the ICP matching is T, the pose P of the object to be positioned can be calculated by the following formula 3:
  • P = T · P_i    (formula 3)
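  • The candidate evaluation around formula 3 can be sketched as below, with a simple point-to-point ICP standing in for the ICP matching; representing poses and conversion relationships as 4x4 homogeneous matrices is an assumption:

    import numpy as np
    from scipy.spatial import cKDTree

    def refine_pose(current_pts, map_pts, candidate_poses, iters=20):
        tree = cKDTree(map_pts)
        best_err, best_pose, best_T = np.inf, None, None
        for P_i in candidate_poses:
            T = np.eye(4)
            pts = (P_i[:3, :3] @ current_pts.T).T + P_i[:3, 3]  # converted point cloud
            for _ in range(iters):
                _, idx = tree.query(pts)                  # closest target point cloud
                src_c, dst_c = pts.mean(0), map_pts[idx].mean(0)
                H = (pts - src_c).T @ (map_pts[idx] - dst_c)
                U, _, Vt = np.linalg.svd(H)
                R = Vt.T @ U.T
                if np.linalg.det(R) < 0:                  # enforce a proper rotation
                    Vt[-1] *= -1.0
                    R = Vt.T @ U.T
                t = dst_c - R @ src_c
                pts = (R @ pts.T).T + t
                step = np.eye(4)
                step[:3, :3], step[:3, 3] = R, t
                T = step @ T
            err = tree.query(pts)[0].mean()               # matching degree (lower is better)
            if err < best_err:
                best_err, best_pose, best_T = err, P_i, T
        return best_T @ best_pose                         # formula 3: P = T * P_i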
  • in this embodiment, the candidate poses of the object to be positioned may be preliminarily determined by matching the attribute features of the point clouds, and the pose of the object to be positioned is then determined among the candidate poses in combination with the ICP algorithm; compared with the above embodiment, this can further improve the accuracy of the pose of the object to be located.
  • Table 1 below shows the positioning accuracy using the positioning method shown in FIG. 10 and the positioning methods shown in FIG. 1 and FIG. 2 .
  • FIG. 11 is a schematic flowchart of an embodiment of constructing a point cloud map according to an embodiment of the present application. As shown in FIG. 11 , the construction of a point cloud map provided by this embodiment of the present application may include:
  • the attribute features of the point cloud in the point cloud map are consistent with the attribute features extracted from the current point cloud. If the attribute features of the point cloud in the map are projection features, the attribute features extracted from the current point cloud are also projection features. Which attribute features to extract can be pre-agreed.
  • the object to be positioned may move in the environment where it is located, and point clouds corresponding to various poses of the object to be positioned may be collected during the movement.
  • the point cloud corresponding to each pose can be a point cloud of a key frame.
  • the object to be positioned may collect a point cloud of a key frame every time the object moves a preset distance.
  • the object to be positioned may also collect a point cloud of a key frame for each preset duration.
  • the pose of the object to be positioned may be mapped to the key frame, so that the point cloud of the key frame corresponding to each pose can be determined.
  • the object to be located may obtain its pose according to an open-source simultaneous localization and mapping (SLAM) algorithm; for this process, reference may be made to the relevant description in current technical solutions, and details are not repeated here.
  • the point cloud map may include attribute features of the point cloud corresponding to each pose (attribute features of the point cloud corresponding to each key frame) and the point cloud.
  • the point cloud map of the environment where the object to be located is situated may change at any time.
  • for example, in an underground garage, fixed objects such as pillars, the ground and lights do not change, but vehicles may enter and exit at any time, which affects the point cloud map and in turn affects the accuracy of matching based on the point cloud map. Therefore, in the embodiment of the present application, when constructing a point cloud map, the changing point clouds (such as the point clouds of vehicles in an underground garage) can be deleted, so that the point cloud in the point cloud map constructed from the remaining points is unchanging, which is beneficial to positioning accuracy.
  • FIG. 12 is a schematic flowchart of another embodiment of constructing a point cloud map according to an embodiment of the present application. As shown in FIG. 12 , before the above S1102, it may further include:
  • S1105 Cluster the point clouds corresponding to the various poses to obtain point cloud clusters of each object in the environment.
  • S1102 can be replaced with S1102': extract the attribute features of the point cloud corresponding to each pose after deletion processing.
  • the point cloud of each pose may be filtered based on the voxel grid filter in the point cloud library (PCL), which simplifies the voluminous point cloud while retaining its features, reducing the computation of subsequent processing.
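  • A minimal stand-in for the voxel grid filtering step (PCL's VoxelGrid keeps the centroid per voxel; keeping one representative point per voxel, as below, is a simplification, and the 0.2m voxel size is an illustrative assumption):

    import numpy as np

    def voxel_downsample(points, voxel=0.2):
        keys = np.floor(points / voxel).astype(np.int64)
        # Keep the first point that falls into each occupied voxel.
        _, first = np.unique(keys, axis=0, return_index=True)
        return points[np.sort(first)]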
  • after the point cloud corresponding to each pose is filtered, in order to delete the changing point clouds caused by objects that change in the environment where the object to be located is situated, the point cloud corresponding to each pose can be clustered to obtain the point cloud clusters of the objects in the environment.
  • the clustering in the embodiment of the present application is to group the point clouds belonging to the same object in the point clouds corresponding to each pose into one category to form a point cloud cluster.
  • if the environment where the object to be located is situated is an underground garage, and the point cloud corresponding to each pose is clustered, then the point cloud cluster belonging to the ground, the point cloud cluster belonging to the vehicles, the point cloud cluster belonging to the pillars, and so on, corresponding to each pose can be obtained.
  • the method of clustering the point cloud corresponding to each pose can be (described for the point cloud corresponding to any one pose): project each point in the point cloud onto a coordinate plane, calculate the angle between the projected point of each point on the coordinate plane and the surrounding projected points, and then cluster the points in the point cloud according to this angle.
  • the xoy coordinate plane may be pre-divided into a matrix; for example, for a 16-line lidar, the size of the matrix into which the xoy coordinate plane is pre-divided may be 16 × 1800, where 1800 is related to the horizontal angular resolution of the lidar.
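  • A sketch of mapping points into the pre-divided 16 × 1800 matrix; the vertical field of view of ±15 degrees is an assumption for a typical 16-line lidar, not a value given in the text:

    import numpy as np

    def range_image_indices(points, rows=16, cols=1800, vert_fov=(-15.0, 15.0)):
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        azimuth = np.degrees(np.arctan2(y, x)) % 360.0
        col = np.minimum((azimuth / 360.0 * cols).astype(int), cols - 1)
        elev = np.degrees(np.arctan2(z, np.hypot(x, y)))
        lo, hi = vert_fov
        row = np.clip(np.round((elev - lo) / (hi - lo) * (rows - 1)).astype(int), 0, rows - 1)
        return row, col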
  • FIG. 13 is a schematic diagram of acquiring an angle between projection points according to an embodiment of the present application.
  • the xoy coordinate plane includes a projection point p and a projection point b.
  • the distance d_1 from the projection point p to the coordinate origin o and the distance d_2 from the projection point b to the coordinate origin o can be obtained respectively, and then the following formula 4 can be used to obtain the angle between the projection point p and the projection point b:
  • angle = arctan( ( d_2 · sin α ) / ( d_1 − d_2 · cos α ) )    (formula 4)
  • where α represents the horizontal angular resolution or the vertical angular resolution of the lidar.
  • when the angle between a projected point and a surrounding projected point is greater than a preset angle (such as 30 degrees), the point corresponding to the surrounding projected point and the point corresponding to the projected point can be regarded as belonging to the same object and grouped into the same point cloud cluster.
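  • Formula 4 and the clustering criterion can be sketched as follows; the 30-degree threshold comes from the example above, while the 0.2-degree resolution and the sample distances are illustrative assumptions:

    import numpy as np

    def neighbor_angle(d1, d2, alpha_deg):
        # Formula 4: angle between two neighboring projection points at
        # distances d1 and d2 from o, separated by the angular resolution alpha.
        a = np.radians(alpha_deg)
        return np.degrees(np.arctan2(d2 * np.sin(a), d1 - d2 * np.cos(a)))

    # Neighbors whose angle exceeds the preset angle join the same cluster.
    same_cluster = neighbor_angle(5.0, 4.9, 0.2) > 30.0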
  • because the vehicles in an underground garage are parked on the ground, a vehicle parked in the underground garage has a maximum height in the spatial coordinate system; clustering yields the point cloud clusters of the objects, but clustering alone cannot determine which object each point cloud cluster corresponds to. Therefore, in the embodiment of the present application, after the point cloud clusters corresponding to the poses are obtained, the point cloud clusters corresponding to vehicles can be deleted according to the z values of the points included in each point cloud cluster.
  • the preset value may be 2.5m.
  • for example, the point cloud clusters whose maximum point height is less than 2.5m may be deleted, so that the point cloud clusters of objects other than vehicles are obtained; the point cloud of the static objects in the environment where the object to be located is situated can then be obtained, improving the accuracy of the point cloud map.
  • however, a point cloud cluster whose maximum z value is less than the preset value may be the point cloud cluster of a static object in the underground garage, such as the point cloud clusters of the ground, isolation piers on the ground, anti-collision barriers and the like. Therefore, in the embodiment of the present application, the static objects in the underground garage can be used as preset objects, and the point cloud clusters whose maximum z value is less than the preset value are deleted only among the point cloud clusters of objects other than the preset objects.
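  • A sketch of this deletion step, assuming clusters are (label, points) pairs and that the labels of the preset (static) objects are known; both assumptions are illustrative rather than taken from the text:

    import numpy as np

    def drop_changing_clusters(clusters, preset_objects, z_threshold=2.5):
        kept = []
        for label, pts in clusters:  # pts: (N, 3) array per cluster
            # Keep preset static objects, and any cluster tall enough to
            # not be a parked vehicle (maximum z value >= the preset value).
            if label in preset_objects or pts[:, 2].max() >= z_threshold:
                kept.append((label, pts))
        return kept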
  • FIG. 14 is a schematic flowchart of another embodiment of the positioning method provided by the embodiment of the present application. As shown in FIG. 14 , before the above S1002, it may further include:
  • S1008 Cluster the current point cloud to obtain point cloud clusters of each object in the environment.
  • S1002 can be replaced with S1002': extract the attribute features of the current point cloud after deletion processing.
  • Table 2 below shows the positioning accuracy using the positioning method shown in FIG. 14 and the positioning methods shown in FIG. 1 and FIG. 2 .
  • in the embodiment of the present application, the matching of the attribute features of point clouds and the ICP algorithm can be combined to improve the positioning accuracy in similar scenes;
  • in addition, when constructing the point cloud map, the point clouds of objects whose position changes in the scene are deleted, thereby improving the accuracy of the point cloud map and hence the positioning accuracy; and before the attribute features of the current point cloud are extracted, the point clouds of objects whose position has changed in the scene are also deleted, which can further improve the positioning accuracy of the object to be located.
  • FIG. 15 is a schematic structural diagram of a positioning apparatus provided by an embodiment of the present application.
  • the positioning device shown in FIG. 15 may be the object to be positioned in the foregoing embodiment, and may execute the positioning methods in the foregoing FIGS. 3 , 4 , 6 , 8 , 10 to 12 , and 14 .
  • the positioning apparatus 1500 may include: a first processing module 1501 , a second processing module 1502 and a point cloud map building module 1503 .
  • the first processing module 1501 is configured to collect a current point cloud, where the current point cloud includes the point cloud of the object to be positioned and the point cloud of the environment where the object to be positioned is located.
  • the second processing module 1502 is used to extract the attribute features of the current point cloud, and obtain the position of the object to be positioned according to the attribute features of the current point cloud and the attribute features of the point cloud in the point cloud map of the environment where the object to be positioned is located. posture.
  • the attribute feature includes at least one of the following: a projection feature of the point cloud, a normal vector feature of the point cloud, and a curvature feature of the point cloud.
  • the attribute feature includes the projection feature of the point cloud
  • the second processing module 1502 is specifically configured to project the current point cloud onto at least one of the following coordinate planes: the xoy plane, the yoz plane and the xoz plane, to obtain the current projected point cloud, where o is the center position of the object to be positioned, and x, y and z are the coordinate axes constructed with o as the coordinate origin; obtain the distance between the current projected point cloud and o, and classify the distance into preset distance intervals; and obtain the attribute feature of the current point cloud according to the number of points of the current point cloud in each distance interval.
  • the second processing module 1502 is specifically configured to project the current point cloud onto the xoy plane, the yoz plane and the xoz plane respectively to obtain three current projected point clouds corresponding to the current point cloud; obtain the distance between each current projected point cloud and o, and classify the distance into the preset distance intervals of the corresponding coordinate plane; and obtain the attribute feature of the current point cloud according to the number of points of the current point cloud in each distance interval of each coordinate plane.
  • the preset distance interval is related to the farthest distance and the shortest distance of the points in the point cloud collected by the object to be positioned.
  • the attribute feature includes the normal vector feature of the point cloud, and the points in the current point cloud are all first points; for each first point in the current point cloud, the second processing module 1502 is specifically configured to determine the first scan line to which the first point belongs; on the first scan line, obtain, on both sides of the first point, the first adjacent points closest to the first point, and obtain, on a second scan line adjacent to the first scan line, a second adjacent point closest to the first point; obtain the normal vector of the first point according to the first point, the first adjacent points and the second adjacent point; and obtain the attribute feature of the current point cloud according to the normal vector of the first point.
  • the second processing module 1502 is specifically configured to obtain first vectors from the first point to the first adjacent points; obtain second vectors from the first point to the second adjacent points; and perform pairwise cross product calculations among the first vectors and the second vectors, taking the mean of the cross product results as the normal vector of the first point.
  • the second processing module 1502 is specifically configured to obtain the length of the projection of the normal vector of the first point on at least one coordinate axis, and classify the length into a preset length interval; The number of the first points in each length interval to obtain the attribute characteristics of the current point cloud.
  • the second processing module 1502 is specifically configured to acquire the length of the projection of the normal vector of the first point on the z-coordinate axis.
  • the attribute feature includes the curvature feature of the point cloud, and the points in the current point cloud are all first points; for each first point in the current point cloud, the second processing module 1502 is specifically configured to determine the first scan line to which the first point belongs; obtain the curvature of the first point according to the position of the first point and the positions of the points on both sides of the first point on the first scan line; classify the curvature of the first point into a preset curvature interval; and obtain the attribute feature of the current point cloud according to the number of first points in each curvature interval.
  • the point cloud map includes the attribute features of the point clouds of multiple key frames; the second processing module 1502 is specifically configured to obtain the attribute features of the current point cloud and the attributes of the point cloud of each key frame The similarity of the features; obtain the target key frames corresponding to the preset number of similarities before the ranking according to the order of similarity; take the pose corresponding to each target key frame as a candidate pose; in multiple candidate poses Determine the pose of the object to be positioned.
  • the second processing module 1502 is specifically configured to convert the current point cloud into the point cloud map according to the conversion relationship corresponding to each candidate pose, and obtain the conversion point corresponding to each candidate pose Cloud; in the point cloud map, obtain the target point cloud with the closest distance to the converted point cloud corresponding to each candidate pose; perform iterative closest point ICP matching between the converted point cloud corresponding to each candidate pose and the corresponding target point cloud , to obtain the transformation point cloud with the highest matching degree; according to the pose corresponding to the transformation point cloud with the highest matching degree and the transformation relationship of ICP matching, obtain the pose of the object to be positioned.
  • the second processing module 1502 is further configured to cluster the current point cloud to obtain the point cloud clusters of each object in the environment; in the point cloud clusters of each object except the preset object, Delete the point cloud clusters whose maximum z value is less than the preset value, so as to extract the attribute features of the current point cloud after deletion.
  • the point cloud map construction module 1503 is used to collect the point cloud corresponding to each pose of the object to be positioned, extract the attribute features of the point cloud corresponding to each pose, and store the attribute features of the point cloud corresponding to each pose together with the corresponding poses, to construct the point cloud map.
  • the point cloud map construction module 1503 is used to collect a point cloud of one key frame every preset distance, and obtain the pose of the object to be located when collecting the point cloud of that key frame, so as to obtain the point cloud corresponding to each pose.
  • the second processing module 1502 is further configured to cluster the point clouds corresponding to the poses to obtain the point cloud clusters of the objects in the environment; and among the point cloud clusters of the objects other than the preset objects, delete the point cloud clusters whose maximum z value is less than the preset value, so as to extract the attribute features of the point clouds corresponding to the poses after the deletion processing.
  • the positioning apparatus provided in the embodiment of the present application can perform the action of the object to be positioned in the foregoing method embodiment, and the implementation principle and technical effect thereof are similar, and are not repeated here.
  • the processing modules may be implemented in the form of software called by a processing element, or in the form of hardware.
  • the processing module may be a separately established processing element, or may be integrated into a chip of the above-mentioned apparatus; in addition, it may also be stored in the memory of the above-mentioned apparatus in the form of program code, and a processing element of the above-mentioned apparatus may call and execute the functions of the above processing module.
  • all or part of these modules can be integrated together, and can also be implemented independently.
  • the processing element described here may be an integrated circuit with signal processing capability.
  • each step of the above-mentioned method or each of the above-mentioned modules can be completed by an integrated logic circuit of hardware in the processor element or an instruction in the form of software.
  • the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA), etc.
  • the processing element may be a general-purpose processor, such as a central processing unit (central processing unit, CPU) or other processors that can call program codes.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • FIG. 16 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device is, for example, the robot, the vehicle, or the like described above.
  • the electronic device 1600 may include: a processor 1601 (eg, a CPU), a memory 1602 and a point cloud collection device 1603 .
  • the point cloud collecting device 1603 can be coupled to the processor 1601, and the processor 1601 controls the point cloud collecting device 1603 to emit lidar or millimeter-wave radar signals, so as to collect the point cloud around the electronic device and allow the electronic device to obtain the surrounding point cloud.
  • the memory 1602 may include high-speed random-access memory (RAM), and may also include non-volatile memory (NVM), such as at least one disk memory, in which various instructions can be stored to complete various processing functions and implement the method steps of the present application.
  • the electronic device 1600 involved in this application may further include: a power supply 1604 , a communication bus 1605 and a communication port 1606 .
  • the above-mentioned communication port 1606 is used to implement connection and communication between the electronic device and other peripheral devices.
  • the above-mentioned memory 1602 is used to store computer-executable program code, and the program code includes instructions; when the processor 1601 executes the instructions, the instructions cause the processor 1601 of the electronic device to perform the actions in the foregoing method embodiments; the implementation principles and technical effects are similar and are not repeated here.
  • a computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • Computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave).
  • a computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • Usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk (SSD)), among others.
  • plural refers to two or more.
  • the term "and/or" in this document is only an association relationship describing the associated objects, indicating that three kinds of relationships can exist; for example, A and/or B can mean: A exists alone, both A and B exist, or B exists alone.
  • the character "/" in this document generally indicates that the objects before and after it are in an "or" relationship; in formulas, the character "/" indicates that the objects are in a "division" relationship.

Abstract

A positioning method, apparatus, electronic device and storage medium. The method includes: collecting a current point cloud, where the current point cloud includes a point cloud of an object to be positioned and a point cloud of the environment where the object to be positioned is located (S301); extracting attribute features of the current point cloud (S302); and obtaining the pose of the object to be positioned according to the attribute features of the current point cloud and the attribute features of the point cloud in a point cloud map of the environment where the object to be positioned is located (S303). Because the attribute features of a point cloud are features of the point cloud itself, even if scenes are similar, the attributes or attribute features of the point clouds in the similar scenes differ; therefore, the positioning method performs positioning by matching the attribute features of the current point cloud around the object to be positioned with the attribute features of the point cloud in the point cloud map, which can improve the accuracy of indoor positioning.

Description

Positioning method, apparatus, electronic device and storage medium

Technical Field

This application relates to the technical field of intelligent driving, and in particular to a positioning method, apparatus, electronic device and storage medium.
Background

Positioning technology is a key technology in intelligent driving. Current positioning technologies include satellite positioning, inertial navigation positioning, visual positioning and lidar positioning. Visual positioning and lidar positioning are mostly used for indoor positioning.

In current indoor positioning, an indoor point cloud map can be constructed in advance, and the point cloud map can include point clouds of multiple key frames. When a vehicle moves indoors, the point cloud around the vehicle can be collected in real time, the collected point cloud can then be matched with the point cloud of each key frame in the point cloud map by the iterative closest point (ICP) algorithm, and the pose corresponding to the key frame with the highest matching degree is taken as the pose of the vehicle.

However, in indoor scenes such as underground garages, the scenes at various positions are very similar, so the point clouds at these positions are very close to each other, and the accuracy of positioning by ICP matching is therefore low.
Summary

Embodiments of the present application provide a positioning method, apparatus, electronic device and storage medium, which can identify similar scenes and improve positioning accuracy.
In a first aspect, an embodiment of the present application provides a positioning method. The method may be applied to an object to be positioned, which may be a vehicle, a robot, a terminal device, or the like. In the positioning method, the object to be positioned may be integrated with a laser emitting apparatus, which may emit lidar or millimeter-wave radar signals around the object to be positioned to collect a current point cloud. It should be understood that the current point cloud may include a point cloud of the object to be positioned and a point cloud of the environment where the object to be positioned is located. The point cloud of the object to be positioned is obtained from the lidar or millimeter-wave radar reflected by the object to be positioned, and the point cloud of the environment is obtained from the lidar or millimeter-wave radar reflected by objects in that environment. In indoor positioning scenes, the scenes at many positions are very similar, but the attribute features of the point clouds in similar scenes are not the same; therefore, in the embodiments of the present application, the attribute features of the current point cloud may be extracted, and the pose of the object to be positioned may then be obtained according to the attribute features of the current point cloud and the attribute features of the point cloud in a point cloud map of the environment where the object to be positioned is located. Because the attribute features of a point cloud are features of the point cloud itself, even if scenes are similar, the attributes or attribute features of the point clouds in the similar scenes differ, so the positioning method based on point cloud attribute features in the embodiments of the present application has high accuracy.

The attribute features of a point cloud in the embodiments of the present application may include at least one of the following: a projection feature of the point cloud, a normal vector feature of the point cloud, and a curvature feature of the point cloud. The processes of extracting the projection feature, the normal vector feature and the curvature feature of the current point cloud are described in turn below.

In a first possible implementation, the attribute feature includes the projection feature of the point cloud. The process of extracting the projection feature of the current point cloud may be: projecting the current point cloud onto at least one of the following coordinate planes: the xoy plane, the yoz plane and the xoz plane, to obtain a current projected point cloud; obtaining the distance between the current projected point cloud and o, and classifying the distance into preset distance intervals; and obtaining the attribute feature of the current point cloud according to the number of points of the current point cloud in each distance interval, where o is the center position of the object to be positioned, and x, y and z are coordinate axes constructed with o as the coordinate origin.

In this method, the preset distance intervals are related to the farthest distance and the nearest distance of the points in the point cloud collected by the object to be positioned. For example, in the embodiments of the present application, the interval formed by the farthest and nearest distances may be divided to obtain the preset distance intervals. The numbers of points of the current point cloud in the distance intervals may form a one-dimensional vector, which is then used as the projection feature of the point cloud.

When obtaining the projection feature of the current point cloud, the current point cloud may be projected onto the xoy plane, the yoz plane and the xoz plane respectively, so that more detailed projection features can be obtained than when projecting onto one or two coordinate planes, improving positioning accuracy. For example, the current point cloud may be projected onto the xoy plane, the yoz plane and the xoz plane respectively to obtain three current projected point clouds corresponding to the current point cloud; the distance between each current projected point cloud and o is obtained and classified into the preset distance intervals of the corresponding coordinate plane; and the attribute feature of the current point cloud is obtained according to the number of points of the current point cloud in each distance interval of each coordinate plane. It should be understood that each coordinate plane (the xoy plane, the yoz plane and the xoz plane) may correspond to its own preset distance intervals.

In a second possible implementation, the attribute feature includes the normal vector feature of the point cloud. For ease of description, the points in the current point cloud are all referred to as first points. For each first point in the current point cloud, the first scan line to which the first point belongs may be determined; on the first scan line, the first adjacent points closest to the first point are obtained on both sides of the first point; a second adjacent point closest to the first point is obtained on a second scan line adjacent to the first scan line; the normal vector of the first point is obtained according to the first point, the first adjacent points and the second adjacent point; and the attribute feature of the current point cloud is obtained according to the normal vectors of the first points.

In the embodiments of the present application, first vectors from the first point to the first adjacent points and second vectors from the first point to the second adjacent points may be obtained, pairwise cross products may then be computed among the first vectors and the second vectors, and the mean of the cross product results may be taken as the normal vector of the first point. The length of the projection of the normal vector of the first point on at least one coordinate axis may be obtained and classified into preset length intervals, and the attribute feature of the current point cloud may be obtained according to the number of first points in each length interval.

In this method, the preset length intervals are related to the lengths of the projections of the normal vectors of the first points onto the coordinate axes; the preset length intervals may be set in advance based on experience, or determined from the minimum and maximum projection lengths. The numbers of points of the current point cloud in the length intervals may form a one-dimensional vector, which is used as the normal vector feature of the point cloud.

In a possible implementation, if the length of the projection of the normal vector of the first point on one coordinate axis is obtained, the length of the projection on the z coordinate axis may be obtained, because in the current point cloud, the height values of the points (corresponding to the z-axis) have more distinctive length characteristics than the length values on the horizontal plane (corresponding to the x and y axes). Therefore, when obtaining the length of the projection of the normal vector of the first point on one coordinate axis, obtaining the projection length on the z coordinate axis is more conducive to feature matching accuracy and positioning accuracy.

In a third possible implementation, the attribute feature includes the curvature feature of the point cloud. For each first point in the current point cloud, the first scan line to which the first point belongs may be determined; the curvature of the first point is obtained according to the position of the first point and the positions of the points on both sides of the first point on the first scan line; the curvature of the first point is classified into preset curvature intervals; and the attribute feature of the current point cloud is obtained according to the number of first points in each curvature interval.

Similar to the projection feature and the normal vector feature, in this method the preset curvature intervals are related to the curvatures of the first points; they may be set in advance based on experience, or determined from the minimum and maximum curvatures of the first points. The numbers of points of the current point cloud in the curvature intervals may form a one-dimensional vector, which is used as the curvature feature of the point cloud.

It should be understood that the above first to third manners may all be adopted in the embodiments of the present application to extract the projection feature, the normal vector feature and the curvature feature of the current point cloud, so that the attribute features of the current point cloud are described more accurately and comprehensively, further improving the positioning accuracy of the object to be positioned.

In a possible implementation, after the pose of the object to be positioned is obtained according to the attribute features of the current point cloud, iterative closest point (ICP) matching may be combined to further refine the pose and improve positioning accuracy. The point cloud map may include attribute features of point clouds of multiple key frames. The similarity between the attribute features of the current point cloud and the attribute features of the point cloud of each key frame may be obtained; the target key frames corresponding to the top preset number of similarities in descending order are obtained; the pose corresponding to each target key frame is taken as a candidate pose; and the pose of the object to be positioned is determined from the multiple candidate poses.

The process of determining the pose of the object to be positioned from the multiple candidate poses may be combined with the ICP algorithm. The current point cloud may be transformed into the point cloud map according to the transformation relationship corresponding to each candidate pose to obtain the transformed point cloud corresponding to each candidate pose; in the point cloud map, the target point cloud closest to the transformed point cloud corresponding to each candidate pose is obtained; iterative closest point ICP matching is performed between the transformed point cloud corresponding to each candidate pose and the corresponding target point cloud to obtain the transformed point cloud with the highest matching degree; and the pose of the object to be positioned is obtained according to the pose corresponding to the transformed point cloud with the highest matching degree and the transformation relationship of the ICP matching.

It should be noted that each candidate pose corresponds to a transformation relationship that can transform a point cloud in the actual space into the point cloud map. Obtaining the target point cloud closest to the transformed point cloud of each candidate pose means: for the transformed point cloud of each candidate pose, obtaining in the point cloud map the point closest to each point of the transformed point cloud, forming the target point cloud of that candidate pose. After the target point cloud of each candidate pose is obtained, the candidate pose with the highest matching degree according to the ICP algorithm may be taken as the pose of the object to be positioned.

The point cloud map of the environment where the object to be positioned is located may change at any time. For example, in an underground garage, fixed objects such as pillars, the ground and lights do not change, but vehicles may enter and exit at any time, which affects the point cloud map and hence the accuracy of matching based on it. Therefore, when constructing the point cloud map, changing point clouds (such as the point clouds of vehicles in an underground garage) may be deleted, so that the point cloud in the map constructed from the remaining points is unchanging, which is beneficial to positioning accuracy. Correspondingly, before extracting the attribute features of the current point cloud, the changing point clouds in the current point cloud may be deleted.

After the current point cloud is obtained, it may be filtered. The filtering simplifies the voluminous point cloud while retaining its features, reducing the computation of subsequent processing. After filtering, the current point cloud may be clustered to obtain point cloud clusters of the objects in the environment. Clustering groups the points of the current point cloud that belong to the same object into one class, forming a point cloud cluster of an object; for example, the point cloud clusters belonging to the ground, to vehicles, to pillars and so on in the current point cloud can be obtained.

Among the point cloud clusters of the objects, the point cloud clusters whose maximum z value is less than a preset value may be deleted, that is, the point clouds belonging to changing objects may be deleted, and the attribute features of the current point cloud after the deletion processing are then extracted. However, because a point cloud cluster in an underground garage whose maximum z value is less than the preset value may be the cluster of a static object, such as the ground, isolation piers on the ground or anti-collision barriers, deleting such clusters would lose much information. Therefore, the static objects in the underground garage may be taken as preset objects, and the point cloud clusters whose maximum z value is less than the preset value are deleted only among the clusters of objects other than the preset objects, for example among the clusters other than those of the ground, the isolation piers on the ground, the anti-collision barriers and the like. In this way, a detailed and complete point cloud of the underground garage is retained, improving the accuracy of the point cloud map.

The above embodiments describe the process of positioning by matching the attribute features of the point clouds corresponding to the key frames in the point cloud map with the attribute features of the current point cloud. The process of constructing the point cloud map in the embodiments of the present application is described here:

It should be understood that the subject that constructs the point cloud map may be the same as or different from the subject that executes the above positioning method. When they are different, the object to be positioned may obtain the point cloud map in advance. The following describes the construction of the point cloud map taking the object to be positioned as an example. During the movement of the object to be positioned in the environment, the point clouds corresponding to the poses of the object to be positioned may be collected, the attribute features of the point cloud corresponding to each pose may be extracted, and the attribute features of the point clouds and the corresponding poses are stored correspondingly to construct the point cloud map. It should be understood that in this process, the manner of extracting the attribute features of the point cloud corresponding to each pose may be the same as the manner of extracting the attribute features of the current point cloud described above.

During the movement in the environment, the object to be positioned may collect a point cloud of one key frame every preset distance, and obtain the pose of the object to be positioned when collecting the point cloud of that key frame, so as to obtain the point clouds corresponding to the poses; the pose and the point cloud corresponding to each key frame are then mapped to obtain the point cloud map.

In a possible implementation, after the point cloud corresponding to each pose is obtained, filtering may be performed, which simplifies the voluminous point cloud while retaining its features to reduce subsequent computation. After filtering, the point cloud corresponding to each pose may be clustered to obtain the point cloud clusters of the objects in the environment. In the same way as the processing of the current point cloud above, among the clusters of objects other than the preset objects, the point cloud clusters whose maximum z value is less than the preset value may be deleted, so as to extract the attribute features of the point cloud corresponding to each pose after the deletion processing, obtaining a complete point cloud that does not contain changing objects and improving positioning accuracy.
In a second aspect, an embodiment of the present application provides a positioning apparatus, such as the object to be positioned of the above first aspect, the positioning apparatus including:

a first processing module, configured to collect a current point cloud, where the current point cloud includes a point cloud of an object to be positioned and a point cloud of the environment where the object to be positioned is located; and

a second processing module, configured to extract attribute features of the current point cloud, and obtain the pose of the object to be positioned according to the attribute features of the current point cloud and the attribute features of the point cloud in a point cloud map of the environment where the object to be positioned is located.

In a possible implementation, the attribute features include at least one of the following: a projection feature of the point cloud, a normal vector feature of the point cloud, and a curvature feature of the point cloud.

In a possible implementation, the attribute feature includes the projection feature of the point cloud, and the second processing module is specifically configured to project the current point cloud onto at least one of the following coordinate planes: the xoy plane, the yoz plane and the xoz plane, to obtain a current projected point cloud, where o is the center position of the object to be positioned, and x, y and z are coordinate axes constructed with o as the coordinate origin; obtain the distance between the current projected point cloud and o, and classify the distance into preset distance intervals; and obtain the attribute feature of the current point cloud according to the number of points of the current point cloud in each distance interval.

In a possible implementation, the second processing module is specifically configured to project the current point cloud onto the xoy plane, the yoz plane and the xoz plane respectively to obtain three current projected point clouds corresponding to the current point cloud; obtain the distance between each current projected point cloud and o, and classify the distance into the preset distance intervals of the corresponding coordinate plane; and obtain the attribute feature of the current point cloud according to the number of points of the current point cloud in each distance interval of each coordinate plane.

In a possible implementation, the preset distance intervals are related to the farthest distance and the nearest distance of the points in the point cloud collected by the object to be positioned.

In a possible implementation, the attribute feature includes the normal vector feature of the point cloud, and the points in the current point cloud are all first points; for each first point in the current point cloud, the second processing module is specifically configured to determine the first scan line to which the first point belongs; on the first scan line, obtain the first adjacent points closest to the first point on both sides of the first point, and obtain, on a second scan line adjacent to the first scan line, a second adjacent point closest to the first point; obtain the normal vector of the first point according to the first point, the first adjacent points and the second adjacent point; and obtain the attribute feature of the current point cloud according to the normal vectors of the first points.

In a possible implementation, the second processing module is specifically configured to obtain first vectors from the first point to the first adjacent points; obtain second vectors from the first point to the second adjacent points; and compute pairwise cross products among the first vectors and the second vectors, taking the mean of the cross product results as the normal vector of the first point.

In a possible implementation, the second processing module is specifically configured to obtain the length of the projection of the normal vector of the first point on at least one coordinate axis, and classify the length into preset length intervals; and obtain the attribute feature of the current point cloud according to the number of first points in each length interval.

In a possible implementation, the second processing module is specifically configured to obtain the length of the projection of the normal vector of the first point on the z coordinate axis.

In a possible implementation, the attribute feature includes the curvature feature of the point cloud, and the points in the current point cloud are all first points; for each first point in the current point cloud, the second processing module is specifically configured to determine the first scan line to which the first point belongs; obtain the curvature of the first point according to the position of the first point and the positions of the points on both sides of the first point on the first scan line; classify the curvature of the first point into preset curvature intervals; and obtain the attribute feature of the current point cloud according to the number of first points in each curvature interval.

In a possible implementation, the point cloud map includes attribute features of point clouds of multiple key frames; the second processing module is specifically configured to obtain the similarity between the attribute features of the current point cloud and the attribute features of the point cloud of each key frame; obtain the target key frames corresponding to the top preset number of similarities in descending order of similarity; take the pose corresponding to each target key frame as a candidate pose; and determine the pose of the object to be positioned from the multiple candidate poses.

In a possible implementation, the second processing module is specifically configured to transform the current point cloud into the point cloud map according to the transformation relationship corresponding to each candidate pose to obtain the transformed point cloud corresponding to each candidate pose; obtain, in the point cloud map, the target point cloud closest to the transformed point cloud corresponding to each candidate pose; perform iterative closest point ICP matching between the transformed point cloud corresponding to each candidate pose and the corresponding target point cloud to obtain the transformed point cloud with the highest matching degree; and obtain the pose of the object to be positioned according to the pose corresponding to the transformed point cloud with the highest matching degree and the transformation relationship of the ICP matching.

In a possible implementation, the second processing module is further configured to cluster the current point cloud to obtain point cloud clusters of the objects in the environment; and among the point cloud clusters of the objects other than preset objects, delete the point cloud clusters whose maximum z value is less than a preset value, so as to extract the attribute features of the current point cloud after the deletion processing.

In a possible implementation, during the movement of the object to be positioned in the environment, a point cloud map construction module is configured to collect the point clouds corresponding to the poses of the object to be positioned, extract the attribute features of the point cloud corresponding to each pose, and store the attribute features of the point clouds and the corresponding poses correspondingly to construct the point cloud map.

In a possible implementation, during the movement of the object to be positioned in the environment, the point cloud map construction module is configured to collect a point cloud of one key frame every preset distance, and obtain the pose of the object to be positioned when collecting the point cloud of that key frame, so as to obtain the point clouds corresponding to the poses.

In a possible implementation, the second processing module is further configured to cluster the point clouds corresponding to the poses to obtain point cloud clusters of the objects in the environment, and among the point cloud clusters of the objects other than preset objects, delete the point cloud clusters whose maximum z value is less than the preset value, so as to extract the attribute features of the point clouds corresponding to the poses after the deletion processing.

In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory and a point cloud collection apparatus;

the point cloud collection apparatus is configured to emit lidar or millimeter-wave radar signals to collect a current point cloud, where the current point cloud includes a point cloud of an object to be positioned and a point cloud of the environment where the object to be positioned is located; and

the memory is configured to store computer-executable program code, the program code including instructions; when the processor executes the instructions, the instructions cause the electronic device to perform the method provided by the first aspect or the possible implementations of the first aspect.

In a fourth aspect, an embodiment of the present application provides a positioning apparatus, including units, modules or circuits for performing the method provided by the above first aspect or the possible implementations of the first aspect. The positioning apparatus may be the object to be positioned, or a module applied to the object to be positioned, for example, a chip applied to the object to be positioned.

In a fifth aspect, an embodiment of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method in the above first aspect or the various possible implementations of the first aspect.

Embodiments of the present application provide a positioning method, apparatus, electronic device and storage medium. Because the attribute features of a point cloud are features of the point cloud itself, even if scenes are similar, the attributes or attribute features of the point clouds in the similar scenes differ. Therefore, the positioning method of the embodiments of the present application performs positioning by matching the attribute features of the current point cloud around the object to be positioned with the attribute features of the point cloud in the point cloud map, can identify similar scenes, and thus improves the accuracy of indoor positioning.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a current indoor positioning method;

FIG. 2 is a schematic flowchart of another current indoor positioning method;

FIG. 3 is a schematic flowchart of an embodiment of the positioning method provided by an embodiment of the present application;

FIG. 4 is a schematic flowchart of another embodiment of the positioning method provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of projecting the current point cloud onto the xoy plane in an embodiment of the present application;

FIG. 6 is a schematic flowchart of another embodiment of the positioning method provided by an embodiment of the present application;

FIG. 7 is a schematic diagram of points on a scan line provided by an embodiment of the present application;

FIG. 8 is a schematic flowchart of another embodiment of the positioning method provided by an embodiment of the present application;

FIG. 9 is a schematic diagram of a first point and points on both sides of the first point provided by an embodiment of the present application;

FIG. 10 is a schematic flowchart of another embodiment of the positioning method provided by an embodiment of the present application;

FIG. 11 is a schematic flowchart of an embodiment of constructing a point cloud map provided by an embodiment of the present application;

FIG. 12 is a schematic flowchart of another embodiment of constructing a point cloud map provided by an embodiment of the present application;

FIG. 13 is a schematic diagram of obtaining the angle between projection points provided by an embodiment of the present application;

FIG. 14 is a schematic flowchart of another embodiment of the positioning method provided by an embodiment of the present application;

FIG. 15 is a schematic structural diagram of a positioning apparatus provided by an embodiment of the present application;

FIG. 16 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
具体实施方式
图1为目前一种室内定位的流程示意图。其中,车辆上可以设置有摄像头和激光雷达。车辆在室内移动的过程中,摄像头可以采集车辆周围的图像,激光雷达可以向车辆周围发射激光,以采集车辆周围的点云。车辆中可以预先存储有室内的图像特征地图,该图像特征地图中包括车辆的各位姿对应的图像特征。在进行定位时,如图1所示,车辆可以将采集到的图像中的图像特征与地图中的各位姿对应的图像特征进行匹配,确定初始位姿,如可以将匹配度最高的地图中的图像特征对应的位姿作为初始位姿。车辆还可以根据该初始位姿周围的点云,采用粒子滤波的方式确定车辆的位姿。图1所示的方法充分利用图像中的丰富的特征,且结合点云,能够在室内环境下对车辆准确定位。但在光照较弱时,图像中的特征容易丢失或不易获取,尤其在地下车库的环境中,不同位置的光照不一样,也会影响图像特征的提取,导致定位失败,该室内定位方法的鲁棒性不足。
图2为目前另一种室内定位的流程示意图。为了解决依据图像特征进行定位受光线影响的问题,目前还提供了一种室内定位的方法,在该方法中可以根据车辆周围的点云进行定位。其中,车辆可以预先存储有点云地图,该点云地图中可以包括车辆的各位姿对应的点云。车辆在室内移动的过程中,可以采集车辆周围的点云,进而将采集到的点云与点云地图中的各位姿对应的点云进行迭代最近点(iterative closest point,ICP)算法匹配,进而将匹配度最高的位姿作为车辆的位姿。这种方法虽然避免了外界光线的影响,但是因为室内环境(如地下车库)的各位置的场景很相似,导致各位置处的点云非常接近,匹配度最高的位姿可能是场景相似的其他位置,因此该室内定位方法的准确性低。
为了解决目前室内定位中的问题,本申请实施例提供了一种定位方法,该定位方法中可以通过点云的属性特征的匹配来进行定位。因为点云的属性特征为点云本身的特征,即使场景相似,该场景中点云的属性或属性特征也是不同的,因此根据点云的属性特征的匹配可以识别相似度高的场景,进而提高室内定位的准确性。
应理解,本申请实施例中的定位方法可以应用于车辆或者机器人等电子设备的定位场景中,也不限于应用于其他可以发射激光点云,通过点云进行定位的电子设备的场景中。本申请实施例中执行定位方法的执行主体可以为定位装置,该定位装置可以为车辆、车辆中的处理芯片、机器人、机器人中的芯片、终端设备等。应理解,该定位装置上可以集成 或单独设置有发射装置,该发射装置可以向定位装置的周围发射激光,以采集定位装置周围的点云。本申请实施例中的发射装置可以但不限于为激光雷达、毫米波雷达等。下述实施例中以定位装置是机器人、以发射装置为激光雷达为例进行说明。
下面结合具体的实施例对本申请实施例提供的定位方法进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。图3为本申请实施例提供的定位方法的一实施例的流程示意图。如图3所示,本申请实施例提供的定位方法可以包括:
S301,采集当前点云,当前点云包括待定位对象的点云和待定位对象所处环境的点云。
S302,提取当前点云的属性特征。
S303,根据当前点云的属性特征,以及待定位对象所处环境的点云地图中的点云的属性特征,获取待定位对象的位姿。
上述S301中,激光雷达可以向待定位对象的周围发射激光,使得待定位对象可以获取激光反射的信息,进而采集当前点云。应理解,当前点云可以包括待定位对象的点云和待定位对象所处环境的点云。其中,待定位对象的点云是由待定位对象反射激光雷达或者毫米波雷达得到的点云,待定位对象所处环境的点云为所述待定位对象所处环境中的物体反射激光雷达或者毫米波雷达得到的点云。本申请实施例中对待定位对象如何控制激光雷达获取点云的过程不做详述,可以参照目前的技术方案。其中,待定位对象可以为车辆、机器人或终端设备等。
示例性的,如待定位对象为机器人,机器人在室内的位置A处获取的点云可以为当前点云。当前点云中可以包括多个点(如激光点,下述称为点)、以及每个点的信息。其中,点是发射的激光遇到障碍物反射回来的点,点的信息可以为点的空间三维坐标、激光反射强度等。本申请实施例中可以将机器人实时获取的点云作为当前点云,以根据各时刻的当前点云对机器人进行定位。
上述S302中,本申请实施例中,当前点云的属性可以为当前点云中包括的多个点的属性的集合。其中,每个点的属性可以为点的空间三维坐标、激光反射强度、点距离待定位对象的距离、点与周围点的距离、点与周围的点构成的线段的法向量等。应理解,每个点的属性为点自身的属性,因此当前点云的属性也为当前点云自身的属性。
本申请实施例中,可以根据当前点云的属性,提取当前点云的属性特征,当前点云的属性特征用于表征当前点云的属性。可选的,在一种可能的实现方式中,当前点云的属性特征可以为当前点云中的点与周围的点构成的线段的法向量对应的一维向量。可选的,在一种可能的实现方式中,当前点云的属性特征可以为点距离待定位对象的距离组成的一维向量。
上述S303中,待定位对象中可以预先存储有待定位对象所处环境的点云地图。示例性的,如待定位对象为车辆,车辆所处环境为地下停车场,则该点云地图为地下停车场的点云地图。可选的,车辆可以在进入地下停车场时获取地下停车场的点云地图,如访问地下停车场对应的服务器,在该服务器中下载地下停车场的点云地图。示例性的,如待定位对象为机器人,该机器人为室内的防卫卫士(用于在室内移动获取室内的信息),机器人所处的环境可以为商场,则该点云地图可以为商场的点云地图。可选的,工作人员可以预 先将商场的点云地图存储至机器人中,或者机器人可以自己采集商场的点云,生成该商场的点云地图。本申请实施例中对待定位对象如何获取待定位对象所处环境的点云地图的方式不做限制。
应理解,点云地图中可以包括各位姿对应的点云的属性特征。各位姿可以为生成点云地图的设备在待定位对象所处环境中每个位姿。其中,生成点云地图的设备可以为待定位对象或者其他设备。也就是说,点云地图可以包括多个位姿,以及每个位姿对应的点云的属性特征。
本申请实施例中,待定位对象可以根据当前点云的属性特征,以及待定位对象所处环境的点云地图中的点云的属性特征,获取待定位对象的位姿。其中,待定位对象可以将当前点云的属性特征、以及待定位对象所处环境的点云地图中的各位姿对应的点云的属性特征进行匹配,将匹配度最高的点云的属性特征对应的位姿作为待定位对象的位姿。示例性的,待定位设备可以获取当前点云的属性特征和点云地图中的每个位姿对应的点云的属性特征的欧式距离(或者其他表征属性特征的相似度的距离),且将欧式距离最小的点云的属性特征对应的位姿作为待定位对象的位姿。
The positioning method provided in the embodiments of this application thus includes: collecting a current point cloud, which may include a point cloud of the object to be positioned and a point cloud of its environment; extracting the attribute features of the current point cloud; and obtaining the pose of the object to be positioned from those attribute features and the attribute features of the point clouds in the environment's point cloud map. Because the attribute features of a point cloud are features of the point cloud itself, the attributes or attribute features of the point clouds in similar scenes still differ, so positioning by matching the attribute features of the current point cloud around the object against those in the map improves the accuracy of indoor positioning.
In the embodiments, the attribute features of a point cloud may include at least one of the following: the projection features of the point cloud, the normal vector features of the point cloud, and the curvature features of the point cloud. The processes of extracting the projection, normal vector, and curvature features from the current point cloud are described in turn below. First, extracting projection features is explained with FIG. 4, a schematic flowchart of another embodiment of the positioning method. As shown in FIG. 4, S302 may be replaced with:
S401: Project the current point cloud onto at least one of the following coordinate planes: the xoy plane, the yoz plane, and the xoz plane, to obtain a current projected point cloud, where o is the center position of the object to be positioned, and x, y, and z are the coordinate axes constructed with o as the origin.
S402: Obtain the distances between the current projected point cloud and o, and sort the distances into preset distance intervals.
S403: Obtain the attribute features of the current point cloud from the number of points of the current point cloud in each distance interval.
In S401, the current point cloud may be projected onto at least one of the xoy, yoz, and xoz planes to obtain a current projected point cloud. Projecting the current point cloud means projecting every point it contains to obtain that point's projected point; the current projected point cloud includes the projected point of every point. Here o is the center position of the object to be positioned, and x, y, and z are the coordinate axes constructed with o as the origin. The center position may be the location of the lidar, specifically where the lidar emits its beams. The coordinate planes may be defined by taking the center position as the origin, placing mutually perpendicular x and y axes in the plane parallel to the horizontal plane, and placing the z axis through the origin in the vertical plane perpendicular to it. The coordinate planes may of course also be set in other ways.
For S402, FIG. 5 is a schematic diagram of the current point cloud projected onto the xoy plane. Taking the projection onto the xoy plane as an example: after the current point cloud is projected onto the xoy plane, the current projected point cloud is obtained on it; as shown in FIG. 5, the plane contains the points of the current projected point cloud. In the figure, points are drawn as black dots, and a run of three consecutive dots is an ellipsis for omitted distance intervals.
In the embodiments, the distances between the current projected point cloud and o are obtained and sorted into preset distance intervals: the distance of each point of the current projected point cloud to o, that is, to the coordinate origin, is computed, and each distance is assigned to one of the preset intervals. The preset distance intervals are predefined and relate to the farthest and nearest distances of the points in the point clouds collected by the object to be positioned.
For example, if the farthest point collected is 80 m away and the nearest 0 m, the preset range may be 0-80 m, which can be divided into a preset number of distance intervals. If that number is 20, then 0-4 m can be one interval, 4-8 m another, and so on up to 76-80 m, giving 20 intervals. When dividing, the width of each interval may differ: one interval may be 0-4 m (width 4 m) and another 4-6 m (width 2 m); the embodiments place no restriction on how the intervals are divided, as long as their count equals the preset number. In the figure, r denotes the width of a distance interval.
In S403, the number of points of the current point cloud falling in each distance interval is obtained, and the attribute features (projection features) of the current point cloud are derived from these counts. For example, if 6 points fall in 0-4 m, 6 points in 4-8 m, ..., and 2 points in 76-80 m, 20 counts are obtained; arranging them in interval order yields a one-dimensional vector such as (6, 6, ..., 2), which can be taken as the attribute features of the current point cloud.
In one possible implementation, the counts per distance interval can also be normalized, e.g. scaled down by the same factor so that they all lie in the range 0-1: each count can be divided by the largest count, and the normalized results form the one-dimensional vector. For example, if the largest count across the intervals is 6, dividing every count by 6 gives the vector (1, 1, ..., 1/3).
In this way of obtaining the projection features, the current point cloud can also be projected onto the xoy, yoz, and xoz planes separately, which yields more detailed projection features than projecting onto the xoy plane alone, and therefore a more accurate result when matching against the attribute features in the point cloud map. For this, S401 can be replaced with S401': project the current point cloud onto the xoy, yoz, and xoz planes separately to obtain the three current projected point clouds of the current point cloud. Correspondingly, S402 can be replaced with S402': obtain the distances between each current projected point cloud of the current point cloud and o, and sort the distances into the preset distance intervals of the corresponding coordinate plane. S403 can be replaced with S403': obtain the attribute features of the current point cloud from the number of points of the current point cloud in each distance interval of each coordinate plane.
The projection in S401' proceeds as described above, except that the current point cloud is projected onto three coordinate planes, yielding three current projected point clouds.
In S402', because there are three current projected point clouds, the distances of each to o can be obtained in the same way as in S402, and the distances of each projected point cloud to o are sorted into the preset distance intervals of its coordinate plane. Obtaining the distances of each projected point cloud to o means obtaining the distance of every point in it to o, yielding the distance of every point in the three projected point clouds to o.
Unlike before, each coordinate plane (xoy, yoz, and xoz) here has its own preset distance intervals. For example, the xoy plane's intervals may be 0-4 m, 4-8 m, ..., 76-80 m; the yoz plane's 0-4 m, 4-6 m, ..., 76-80 m; and the xoz plane's 0-1 m, 1-2 m, ..., 7-8 m. Note that the preset number of intervals may be the same or different across planes.
In S403', the number of points of the current point cloud in each distance interval of each plane is obtained, and the attribute features are derived from these counts. The counts can be arranged into a one-dimensional vector following an agreed plane order (e.g. xoy, then yoz, then xoz) and each plane's interval order, and this vector is taken as the attribute features of the current point cloud. For instance, if each plane has 20 intervals and the counts are 6, 6, ..., 2 on the xoy plane, 1, 2, ..., 3 on the yoz plane, and 2, 4, ..., 8 on the xoz plane, the vector may be (6, 6, ..., 2, 1, 2, ..., 3, 2, 4, ..., 8).
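For illustration only, the full three-plane projection feature with normalization can be sketched as below (a minimal Python/NumPy sketch; projection_feature, PLANES, and edges_per_plane are hypothetical names, and the bin edges are assumptions standing in for the preset distance intervals):

```python
import numpy as np

# Axis index pairs kept when projecting onto the xoy, yoz and xoz planes.
PLANES = [(0, 1), (1, 2), (0, 2)]

def projection_feature(points, edges_per_plane):
    """points: (N, 3) array in the sensor frame (origin o at the sensor).
    edges_per_plane: one array of bin edges per plane, e.g.
    [np.linspace(0, 80, 21)] * 3 for 20 intervals of 4 m each.
    Returns the concatenated, normalized per-plane histograms."""
    parts = []
    for (a, b), edges in zip(PLANES, edges_per_plane):
        d = np.hypot(points[:, a], points[:, b])   # distance of projection to o
        counts, _ = np.histogram(d, bins=edges)
        m = counts.max()                           # normalize counts into [0, 1]
        parts.append(counts / m if m > 0 else counts.astype(float))
    return np.concatenate(parts)
```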
Next, extracting normal vector features from the current point cloud is explained with FIG. 6, a schematic flowchart of another embodiment of the positioning method. As shown in FIG. 6, S302 may be replaced with:
S601: Determine the first scan line to which the first point belongs, where every point in the current point cloud is a first point.
S602: On the first scan line, obtain, on each of the two sides of the first point, the first neighboring point closest to the first point.
S603: On a second scan line, obtain the second neighboring point closest to the first point, where the second scan line is adjacent to the first scan line.
S604: Obtain the normal vector of the first point from the first point, the first neighboring points, and the second neighboring points.
S605: Obtain the attribute features of the current point cloud from the normal vectors of the first points.
In S601, for ease of description every point in the current point cloud is called a first point; that is, S601-S604 are executed for every first point. When a lidar emits laser beams, it can emit several scan lines simultaneously, each of which hits the indoor objects and is reflected; a 16-line lidar, for example, emits 16 scan lines at once. When the object to be positioned acquires the current point cloud, it can determine which scan line each point belongs to, so the first scan line of each first point can be determined.
In S602, a scan line can contain multiple points. For each first point, the first neighboring points closest to it on both sides along the first scan line are obtained. FIG. 7 is a schematic diagram of points on scan lines. As shown in FIG. 7, for the first point a on the first scan line, the closest first neighboring points on its two sides are points b and c. If the first point a has only one adjacent point on the first scan line, that single point can serve as its first neighboring point.
In S603, the second neighboring points closest to the first point are also obtained on the second scan lines adjacent to the first scan line. As shown in FIG. 7, the second neighboring points on the second scan lines are points d and e. When the first scan line has only one adjacent second scan line, a single second neighboring point can be obtained on it.
In S604, the normal vector of the first point is obtained from the first point, the first neighboring points, and the second neighboring points: the first vectors from the first point to the first neighboring points and the second vectors from the first point to the second neighboring points are obtained, and the normal vector is derived from them.
In one possible implementation, if the first vectors from point a to points b and c are v1 and v2, and the second vectors from point a to points d and e are v3 and v4, any vector perpendicular to v1, v2, v3, and v4 can be taken as the normal vector of point a.
In another possible implementation, a vector perpendicular to all of v1, v2, v3, and v4 does not necessarily exist, so pairwise cross products can be computed among the first and second vectors; the cross products approximate a vector perpendicular to v1, v2, v3, and v4, and their mean is taken as the normal vector of the first point. For example, the cross product of v1 and v2 is computed, then of v1 and v3, ..., then of v3 and v4, yielding multiple cross products whose mean is taken as the normal vector of the first point.
For instance, the normal vector $n_i$ of a first point can be computed by Formula 1 below:

$$n_i = \frac{1}{m}\sum_{j=1}^{m} v_j \times v_{(j \bmod 4)+1} \qquad \text{(Formula 1)}$$

where i denotes any first point of the current point cloud, m is the total number of first and second vectors of that first point, $v_j$ is a first or second vector, and $v_{(j \bmod 4)+1}$ is the vector with which $v_j$ is cross-multiplied.
Note that point a may have no first vector, or only one, and may have a single second vector; its normal vector can still be obtained in the same way. As an example, the lidar in the embodiments may be a 16-line lidar emitting 16 beams simultaneously (the 16 beams may share one emission point and form a fan-shaped plane, with equal angles between adjacent beams). So that every first point has two first vectors and two second vectors, the first points on the two edge beams of the 16 can be skipped: if the 16 beams are arranged from top to bottom, the normal vectors of the first points on the topmost and bottommost beams need not be computed.
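For illustration only, the per-point normal estimate of Formula 1 can be sketched as below (a minimal Python/NumPy sketch; point_normal is a hypothetical name, and the cyclic pairing assumes the typical case of four neighbor vectors):

```python
import numpy as np

def point_normal(p, neighbors):
    """p: (3,) position of a first point; neighbors: its closest points on the
    same scan line and on the adjacent scan lines (typically four points).
    Returns the mean of the cyclic pairwise cross products (cf. Formula 1)."""
    vs = [np.asarray(q) - np.asarray(p) for q in neighbors]  # first/second vectors
    m = len(vs)
    return np.mean([np.cross(vs[j], vs[(j + 1) % m]) for j in range(m)], axis=0)
```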
In S605, the attribute features of the current point cloud are obtained from the normal vectors of the first points. In one possible implementation, the set of the first points' normal vectors can be taken as the normal vector features of the current point cloud.
In another possible implementation, the length of the projection of each first point's normal vector onto at least one coordinate axis can be obtained and sorted into preset length intervals. Taking the x axis as the example: projecting the first point's normal vector onto the x axis gives the projection length, which equals the magnitude of the normal vector's component on the x axis. If the projection length onto only one axis is used, the z axis should be chosen, because in the current point cloud the height values (z axis) carry a more distinctive length signature than the horizontal lengths (x and y axes); using the z-axis projection therefore benefits the accuracy of feature matching and of positioning.
After the projection lengths onto at least one axis are obtained, they are sorted into preset length intervals. Different axes may have different preset length intervals (analogous to the preset distance intervals above), yielding, for each axis, the number of first points in each of its length intervals. Analogously to the projection features, the attribute features (normal vector features) of the current point cloud are then obtained from the counts per length interval. The preset length intervals may be empirical values related to the projection lengths of the first points' normal vectors onto the axes, and can also be determined from those lengths. For example, if the maximum projection length onto an axis is 10 m and the minimum 0 m, 0-10 m can be taken as the preset range; dividing it into 20 equal intervals gives intervals of 0.5 m each.
For example, with 20 length intervals per axis, if the per-interval counts are 3, 4, ..., 8 on the x axis, 1, 2, ..., 4 on the y axis, and 3, 4, ..., 6 on the z axis, the attribute features of the current point cloud may be the one-dimensional vector (3, 4, ..., 8, 1, 2, ..., 4, 3, 4, ..., 6) built from the counts of every length interval of every axis. Optionally, when only the z-axis projection lengths are obtained, the attribute features may be (3, 4, ..., 6). The counts per length interval can also be normalized to obtain the attribute features, following the normalization of the per-distance-interval counts described above.
Finally, extracting curvature features from the current point cloud is explained with FIG. 8, a schematic flowchart of another embodiment of the positioning method. As shown in FIG. 8, S302 may be replaced with:
S801: Determine the first scan line to which the first point belongs.
S802: Obtain the curvature of the first point from the position of the first point and the positions of the points on both sides of the first point on the first scan line.
S803: Sort the curvature of the first point into preset curvature intervals.
S804: Obtain the attribute features of the current point cloud from the number of first points in each curvature interval.
The implementation of S801 can follow the description of S601 above and is not repeated here.
In S802, the points on both sides of the first point may be a preset number of points on each side. If the preset number is 3, there are 3 points on each side, 7 points in total including the first point. FIG. 9 is a schematic diagram of a first point and the points on both sides of it. As shown in FIG. 9, on the first scan line the first point is point a, and the points on its two sides are points b, c, f, g, h, and k. The curvature of the first point is obtained from its position and the positions of the points on both sides of it on the first scan line, where a position is the point's three-dimensional spatial coordinates. The curvature $c_i$ of the first point can be computed by Formula 2 below:

$$c_i = \frac{1}{2S \cdot \lVert p_i \rVert} \left\lVert \sum_{j=1}^{2S} \left( p_i - p_j \right) \right\rVert \qquad \text{(Formula 2)}$$

where $p_i$ is the position of the first point, $p_j$ is the position of any one of the points on its two sides, and S is the preset number of points on each side of the first point.
In S803, after the curvatures of the first points of the current point cloud are obtained, they are sorted into preset curvature intervals. These intervals are also predefined; their count can equal that of the distance and length intervals above, e.g. 20.
In S804, after the curvatures are sorted into the preset curvature intervals, the number of first points in each interval is obtained, and the attribute features (curvature features) of the current point cloud are derived from these counts. This proceeds like obtaining the projection and normal vector features; see the descriptions in the embodiments above. The curvature features may, for instance, be the vector (1, 1, ..., 1). The counts per curvature interval can also be normalized to obtain the attribute features, following the normalization of the per-distance-interval counts described above.
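For illustration only, the curvature of Formula 2 and its binning can be sketched as below (a minimal Python/NumPy sketch; curvature, curvature_feature, and line_pts are hypothetical names, and the sketch assumes the point index i leaves S valid neighbors on each side):

```python
import numpy as np

def curvature(line_pts, i, S):
    """Curvature of the i-th point on one scan line (cf. Formula 2), using the
    S points on each side of it; assumes S <= i < len(line_pts) - S."""
    p_i = line_pts[i]
    window = np.vstack([line_pts[i - S:i], line_pts[i + 1:i + 1 + S]])
    return float(np.linalg.norm(np.sum(p_i - window, axis=0))
                 / (2 * S * np.linalg.norm(p_i)))

def curvature_feature(curvatures, edges):
    """Sort per-point curvatures into preset intervals and normalize."""
    counts, _ = np.histogram(curvatures, bins=edges)
    m = counts.max()
    return counts / m if m > 0 else counts.astype(float)
```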
The processes of extracting the projection, normal vector, and curvature features of the current point cloud have now each been described. The attribute features of a point cloud in the embodiments may include at least one of them. To capture the attribute features of the current point cloud more accurately, all three can be extracted; the more accurate the attribute features, the more accurate the positioning of the object to be positioned.
When the attribute features include the projection, normal vector, and curvature features, S302 can be replaced with S401-S403, S601-S605, and S801-S804, and the attribute features of the current point cloud are then the concatenation of the one-dimensional vectors of the three features. For instance, the vector may be (6, 6, ..., 2, 1, 2, ..., 3, 2, 4, ..., 8, 3, 4, ..., 6, 1, 1, ..., 1), a one-dimensional vector of length 100 in which the projection features contribute a vector of length 60 and the normal vector and curvature features each contribute a vector of length 20.
In the embodiments, the attribute features of the current point cloud may include at least one of the projection, normal vector, and curvature features of the point cloud; any one of them can effectively tell similar scenes apart, so accurate positioning can be achieved. When all three are included, the attribute features describe the current point cloud more accurately and completely, which further improves the positioning accuracy of the object to be positioned.
Building on the embodiments above, after the pose of the object to be positioned is obtained from the attribute features of the current point cloud, it can be refined with ICP matching to further improve positioning accuracy. FIG. 10 is a schematic flowchart of another embodiment of the positioning method. As shown in FIG. 10, the method may include:
S1001: Collect a current point cloud, which includes a point cloud of the object to be positioned and a point cloud of its environment.
S1002: Extract the attribute features of the current point cloud.
S1003: Obtain the similarity between the attribute features of the current point cloud and those of the point cloud of each key frame.
S1004: Obtain the target key frames corresponding to the top preset number of similarities, in descending order of similarity.
S1005: Take the pose corresponding to each target key frame as a candidate pose.
S1006: Determine the pose of the object to be positioned among the candidate poses.
The implementations of S1001-S1002 can follow the descriptions of S301-S302 above and are not repeated here.
In S1003, the point cloud map may include the attribute features of the point clouds of multiple key frames. A key frame may be a frame captured during point cloud collection; that is, one collected frame of point cloud is a key frame's point cloud. In other words, the map includes the attribute features of multiple frames of point clouds, captured when the object to be positioned was at different poses. The process of obtaining the point cloud map can follow the descriptions of FIG. 11 and FIG. 12 below.
When obtaining its pose from the attribute features of the current point cloud and those in the map, the object to be positioned can compute the similarity between the attribute features of the current point cloud and those of every key frame's point cloud, i.e. the similarity (such as the Euclidean distance) between the one-dimensional vector of the current point cloud and that of each key frame's point cloud.
In S1004, after these similarities are obtained, the target key frames corresponding to the top preset number of similarities are selected in descending order of similarity. If the preset number is 10, the 10 key frames of the map with the highest similarity become the target key frames. That is, matching on the attribute features of the point clouds first determines a few candidate key frames.
In S1005, the key frame point clouds of the map were captured at different poses, so each key frame point cloud of the map corresponds to one pose, and each target key frame therefore also corresponds to a pose. After determining the target key frames with high similarity to the attribute features of the current point cloud, the object to be positioned can take each target key frame's pose as a candidate pose.
In S1006, the pose of the object to be positioned is determined among the candidate poses. In one possible implementation, the mean of the candidate poses can be taken as the pose: for example, the mean of the positions (three-dimensional spatial coordinates) as the position and the mean of the orientations (heading, etc.) as the orientation, yielding the pose of the object to be positioned.
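For illustration only, the top-k candidate selection and the simple mean-pose resolution can be sketched as below (a minimal Python/NumPy sketch; candidate_poses and mean_pose are hypothetical names, and representing a pose as a flat [x, y, z, yaw] array is an assumption of the sketch, not of this application):

```python
import numpy as np

def candidate_poses(current_feature, keyframe_features, keyframe_poses, k=10):
    """Return the poses of the k key frames whose attribute features are most
    similar to the current feature (smallest Euclidean distance)."""
    dists = np.linalg.norm(keyframe_features - current_feature, axis=1)
    return [keyframe_poses[i] for i in np.argsort(dists)[:k]]

def mean_pose(candidates):
    """Simplest resolution: average the candidate [x, y, z, yaw] arrays.
    Note a production system would average orientations more carefully
    (naive yaw averaging breaks near the +/- pi wrap-around)."""
    return np.asarray(candidates, dtype=float).mean(axis=0)
```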
To determine the pose more accurately, in one possible implementation the ICP algorithm can also be combined to choose the pose of the object to be positioned among the candidates. Each candidate pose corresponds to a transformation (R, T), where R is a rotation matrix and T a translation matrix; through this transformation the point cloud in real space can be transformed into the point cloud map's coordinate system, so the transformation can be understood as the transformation between the real-space coordinate system and the map's coordinate system.
Using each candidate pose's transformation, the current point cloud can be transformed into the map, yielding each candidate pose's transformed point cloud. For each candidate pose, the target point cloud closest to its transformed point cloud can then be obtained in the map. Taking the transformed point cloud of candidate pose 1 as an example: it contains multiple points, and for each of them the closest point in the map can be found; the point cloud formed by these closest points is the target point cloud. Each point of the target point cloud is closest to the corresponding point of the candidate pose's transformed point cloud.
After determining the target point cloud of each pose's transformed point cloud, the object to be positioned performs ICP matching between each candidate pose's transformed point cloud and its target point cloud, obtaining the transformed point cloud with the highest matching score; each ICP run between a transformed point cloud and its target point cloud yields one matching score, from which the best-matching transformed point cloud is identified. The pose of the object to be positioned is then obtained from the pose corresponding to the best-matching transformed point cloud and the transformation found by ICP matching. The pose corresponding to the best-matching transformed point cloud may be the pose corresponding to that point cloud's transformation. Note that the ICP result includes both the matching score and the matched transformation (derived from each point and the point closest to it), so the pose of the object to be positioned can be obtained from the best-matching pose and the ICP transformation. The ICP matching algorithm itself is not detailed here; see the descriptions in existing technical solutions.
For example, if the pose corresponding to the best-matching transformed point cloud is P_i and the transformation found by ICP matching is T, the pose P of the object to be positioned can be computed by Formula 3:

P = T · P_i    (Formula 3)
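For illustration only, selecting the best candidate with ICP and composing P = T · P_i can be sketched as below. This is a minimal point-to-point ICP using a closed-form (Kabsch/SVD) alignment step and SciPy's cKDTree for nearest neighbors; the name icp_refine and the representation of poses as 4x4 homogeneous matrices are assumptions of the sketch, not the implementation of this application:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(candidates, current_pts, map_pts, iters=10):
    """candidates: 4x4 matrices P_i mapping the sensor frame into the map
    frame; current_pts: (N, 3) current cloud; map_pts: (M, 3) map points.
    Runs a small ICP per candidate and returns T @ P_i (Formula 3) for the
    best-scoring candidate."""
    tree = cKDTree(map_pts)
    best_score, best_pose = np.inf, None
    for P_i in candidates:
        # Transform the current cloud into the map with this candidate pose.
        pts = current_pts @ P_i[:3, :3].T + P_i[:3, 3]
        T = np.eye(4)
        for _ in range(iters):
            _, idx = tree.query(pts)                 # closest map points
            tgt = map_pts[idx]
            mu_s, mu_t = pts.mean(0), tgt.mean(0)
            # Closed-form rigid alignment (Kabsch/SVD) of pts onto tgt.
            U, _, Vt = np.linalg.svd((pts - mu_s).T @ (tgt - mu_t))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                 # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            pts = pts @ R.T + t
            step = np.eye(4)
            step[:3, :3], step[:3, 3] = R, t
            T = step @ T                             # accumulate correction
        score = tree.query(pts)[0].mean()            # mean residual distance
        if score < best_score:
            best_score, best_pose = score, T @ P_i   # P = T . P_i
    return best_pose
```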
In the embodiments of this application, the candidate poses of the object to be positioned can first be roughly determined by matching the attribute features of the point clouds, and the pose is then chosen among the candidates with the ICP algorithm; compared with the embodiments above, this further improves the accuracy of the pose of the object to be positioned.
Table 1 below lists the positioning accuracy of the method of FIG. 10 and of the methods of FIG. 1 and FIG. 2.
Table 1

Method | Location 1 | Location 2 | Location 3 | Location 4
Method of FIG. 10 | 91.1% | 93.2% | 90.3% | 95.2%
Method of FIG. 1 | 60.2% | 83.7% | 66.5% | 50.1%
Method of FIG. 2 | 48.5% | 77.2% | 46.3% | 78%
Since the construction of the point cloud map has not yet been described in the embodiments above, it is explained here first with FIG. 11, a schematic flowchart of an embodiment of constructing a point cloud map. As shown in FIG. 11, constructing the point cloud map may include:
S1101: While the object to be positioned moves through the environment, collect the point clouds corresponding to its poses.
S1102: Extract the attribute features of the point clouds corresponding to the poses.
S1103: Store the attribute features of the point clouds in correspondence with the poses, constructing the point cloud map.
S1102 can follow the description of S302 above. Note that the attribute features of the point clouds in the map must agree with those extracted from the current point cloud: if the map stores projection features, projection features are also extracted from the current point cloud; which attribute features to extract can be agreed in advance.
In S1101, the object to be positioned can move through its environment and, while moving, collect the point clouds corresponding to its poses; the point cloud of each pose may be the point cloud of one key frame. Optionally, the object may capture one key frame's point cloud every preset distance moved, or every preset time interval.
When a key frame's point cloud is captured, the pose of the object to be positioned can be associated with that key frame, determining the key frame point cloud of one pose. The object can obtain its pose with an open-source simultaneous localization and mapping (SLAM) algorithm; this process can follow the descriptions in existing technical solutions and is not repeated here.
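For illustration only, the distance-triggered key frame capture can be sketched as below (a minimal Python/NumPy sketch; maybe_add_keyframe is a hypothetical name and the 2.0 m spacing is a hypothetical preset distance):

```python
import numpy as np

KEYFRAME_SPACING = 2.0   # hypothetical preset distance, in meters

def maybe_add_keyframe(position, cloud, keyframes):
    """Record (pose position, point cloud) as a new key frame whenever the
    object has moved at least KEYFRAME_SPACING since the last key frame."""
    position = np.asarray(position, dtype=float)
    if not keyframes or np.linalg.norm(position - keyframes[-1][0]) >= KEYFRAME_SPACING:
        keyframes.append((position, cloud))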
In S1103, once the poses of the object to be positioned and the attribute features of their point clouds are obtained, the attribute features are stored in correspondence with the poses, constructing the point cloud map. The map can include the attribute features of the point clouds of each pose (i.e. of each key frame) as well as the point clouds themselves.
In one possible situation, the point cloud map of the environment may change at any time. In an underground garage, for example, fixed objects such as pillars, the floor, and lamps do not change, but vehicles may enter and leave at any moment, which affects the point cloud map and hence the accuracy of matching against it. Therefore, when constructing the map, the changeable point clouds (such as those of the vehicles in the garage) can be deleted; the map built from the point clouds after deletion is then invariant, which benefits positioning accuracy. FIG. 12 is a schematic flowchart of another embodiment of constructing a point cloud map. As shown in FIG. 12, the following can be added before S1102:
S1104: Filter the point clouds corresponding to the poses.
S1105: Cluster the point clouds corresponding to the poses to obtain the point cloud clusters of the objects in the environment.
S1106: Among the clusters of the objects, delete the clusters whose maximum z value is below a preset value.
Correspondingly, S1102 can be replaced with S1102': extract the attribute features of the point clouds of the poses after the deletion.
Extracting these attribute features in S1102' can follow the description of S302 above.
In S1104, the point clouds of the poses can be filtered with the voxel grid filter of the Point Cloud Library (PCL), simplifying the bulky point clouds while preserving their features so as to reduce the computation of subsequent processing. The filtering can follow the descriptions in existing technical solutions.
In S1105, after filtering, the point clouds of the poses are clustered to remove the changeable point clouds caused by the changing objects in the environment, yielding the clusters of the objects in the environment. Clustering here groups the points of each pose's point cloud that belong to the same object into one class, forming one cluster. For example, if the environment is an underground garage, clustering the point clouds of the poses can yield, for each pose, the clusters belonging to the floor, to vehicles, to pillars, and so on.
The clustering of the point clouds of the poses can proceed as follows (illustrated on the point cloud of any one pose): project every point of the point cloud onto one coordinate plane, compute the angle between each point's projection and the surrounding projections on that plane, and cluster the points of the point cloud by that angle. For example, the xoy plane can be pre-divided into a matrix; for a 16-line lidar the matrix may be 16×1800, where 1800 relates to the lidar's horizontal angular resolution. After every point is projected onto the xoy plane, i.e. placed into the plane's matrix, the angle between each projection and its three closest projections can be obtained. FIG. 13 is a schematic diagram of obtaining the angle between projected points. As shown in FIG. 13, the xoy plane contains projected points p and b; the distance d1 from p to the origin o and the distance d2 from b to o are obtained, and the angle between p and b is computed by Formula 4:

$$\text{angle} = \arctan\!\left(\frac{d_2 \sin\alpha}{d_1 - d_2 \cos\alpha}\right) \qquad \text{(Formula 4)}$$

where α is the horizontal or vertical angular resolution of the lidar.
After the angles between each point's projection and its three surrounding projections are obtained, if any angle exceeds a preset angle (e.g. 30 degrees), the point of that surrounding projection and the point of this projection are considered to belong to the clusters of different objects. Traversing all points of the point cloud in this way and dividing them by whether they belong to the same object's cluster yields the clusters of the different objects.
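For illustration only, the angle test of Formula 4 can be sketched as below (a minimal Python/NumPy sketch; neighbor_angle is a hypothetical name, and the numeric values are made-up inputs):

```python
import numpy as np

def neighbor_angle(d1, d2, alpha):
    """Angle (in degrees) between two adjacent projected points at distances
    d1 and d2 from o, separated by the angular resolution alpha in radians
    (cf. Formula 4)."""
    return np.degrees(np.arctan2(d2 * np.sin(alpha), d1 - d2 * np.cos(alpha)))

# Following the text above: an angle above the preset threshold (e.g. 30
# degrees) assigns the two points to different objects' clusters.
different_objects = neighbor_angle(10.0, 6.0, np.radians(0.2)) > 30.0
```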
In S1106, because vehicles in an underground garage are parked on the floor, a parked vehicle has a maximum height in the spatial coordinate system; clustering yields multiple clusters but does not identify which object each cluster belongs to. Therefore, after the clusters of the poses are obtained, the vehicle clusters can be deleted based on the z values of the points they contain: the clusters whose maximum z value is below a preset value (i.e. the vehicle clusters) can be deleted. For example, with a preset value of 2.5 m, the clusters whose maximum height is below 2.5 m can be deleted from the clusters, leaving the clusters of the objects other than vehicles; the point clouds of the static objects of the environment are thus obtained, improving the accuracy of the point cloud map.
Note that clusters in the garage whose maximum height is below 2.5 m may also belong to stationary objects, such as the floor and the isolation piers and crash barriers on it. These stationary objects of the garage can therefore be designated preset objects, and the deletion of clusters whose maximum z value is below the preset value is applied only among the clusters of the objects other than the preset objects; that is, among the clusters other than those of the floor and of the isolation piers, crash barriers, etc. on it, the clusters whose maximum z value is below the preset value are deleted. This preserves a detailed and complete point cloud of the garage and improves the accuracy of the point cloud map.
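For illustration only, the cluster filtering can be sketched as below (a minimal Python/NumPy sketch; drop_low_clusters and preset_ids are hypothetical names, and the 2.5 m threshold is the example preset value from the text above):

```python
import numpy as np

Z_MAX_THRESHOLD = 2.5   # preset value in meters, from the example above

def drop_low_clusters(clusters, preset_ids=frozenset()):
    """clusters: list of (N_i, 3) point arrays. Keep a cluster if it belongs
    to a preset static object (floor, isolation piers, ...) or if its highest
    point reaches the preset z value; otherwise (e.g. parked cars) drop it."""
    return [c for i, c in enumerate(clusters)
            if i in preset_ids or c[:, 2].max() >= Z_MAX_THRESHOLD]
```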
Correspondingly, in that situation, FIG. 14 is a schematic flowchart of another embodiment of the positioning method. As shown in FIG. 14, the following can be added before S1002:
S1007: Filter the current point cloud.
S1008: Cluster the current point cloud to obtain the point cloud clusters of the objects in the environment.
S1009: Among the clusters of the objects, delete the clusters whose maximum z value is below the preset value.
Correspondingly, S1002 can be replaced with S1002': extract the attribute features of the current point cloud after the deletion.
The implementations of S1007-S1009 can follow the descriptions of S1104-S1106 above; the difference is that S1104-S1106 process the point clouds of multiple key frames, whereas S1007-S1009 process the current point cloud.
Table 2 below lists the positioning accuracy of the method of FIG. 14 and of the methods of FIG. 1 and FIG. 2.

Table 2

Method | 0 vehicles | 15 vehicles | 30 vehicles | 45 vehicles
Method of FIG. 14 | 94.7% | 95.6% | 96.3% | 94.9%
Method of FIG. 1 | 70.2% | 83.4% | 86.6% | 77.9%
Method of FIG. 2 | 44.6% | 58.2% | 60.4% | 66.7%
In the embodiments of this application, on the one hand, combining the matching of point cloud attribute features with the ICP algorithm improves positioning accuracy in similar scenes; on the other hand, the point clouds of objects that change position can be deleted when the point cloud map is constructed, improving the map's accuracy and hence the positioning accuracy, and deleting those point clouds again before extracting the attribute features of the current point cloud further improves the positioning accuracy of the object to be positioned.
FIG. 15 is a schematic structural diagram of the positioning apparatus provided in the embodiments of this application. The positioning apparatus of FIG. 15 may be the object to be positioned in the embodiments above and can execute the positioning methods of FIG. 3, FIG. 4, FIG. 6, FIG. 8, FIG. 10-FIG. 12, and FIG. 14. The positioning apparatus 1500 may include a first processing module 1501, a second processing module 1502, and a point cloud map construction module 1503.
The first processing module 1501 is configured to collect a current point cloud, which includes a point cloud of the object to be positioned and a point cloud of the environment in which the object to be positioned is located.
The second processing module 1502 is configured to extract the attribute features of the current point cloud and obtain the pose of the object to be positioned from those attribute features and the attribute features of the point clouds in the point cloud map of the environment in which the object to be positioned is located.
In one possible implementation, the attribute features include at least one of the following: the projection features of the point cloud, the normal vector features of the point cloud, and the curvature features of the point cloud.
In one possible implementation, where the attribute features include the projection features of a point cloud, the second processing module 1502 is specifically configured to: project the current point cloud onto at least one of the xoy, yoz, and xoz planes to obtain a current projected point cloud, where o is the center position of the object to be positioned and x, y, and z are the coordinate axes constructed with o as the origin; obtain the distances between the current projected point cloud and o and sort them into preset distance intervals; and obtain the attribute features of the current point cloud from the number of points of the current point cloud in each distance interval.
In one possible implementation, the second processing module 1502 is specifically configured to: project the current point cloud onto the xoy, yoz, and xoz planes separately to obtain the three current projected point clouds of the current point cloud; obtain the distances between each current projected point cloud and o and sort them into the preset distance intervals of the corresponding coordinate plane; and obtain the attribute features of the current point cloud from the number of points of the current point cloud in each distance interval of each coordinate plane.
In one possible implementation, the preset distance intervals relate to the farthest and nearest distances of the points in the point clouds collected by the object to be positioned.
In one possible implementation, the attribute features include the normal vector features of a point cloud, and every point in the current point cloud is a first point; for each first point of the current point cloud, the second processing module 1502 is specifically configured to: determine the first scan line to which the first point belongs; on the first scan line, obtain the first neighboring point closest to the first point on each of its two sides, and on a second scan line adjacent to the first scan line, obtain the second neighboring point closest to the first point; obtain the normal vector of the first point from the first point, the first neighboring points, and the second neighboring points; and obtain the attribute features of the current point cloud from the normal vectors of the first points.
In one possible implementation, the second processing module 1502 is specifically configured to: obtain the first vectors from the first point to the first neighboring points; obtain the second vectors from the first point to the second neighboring points; compute pairwise cross products among the first and second vectors; and take the mean of the cross product results as the normal vector of the first point.
In one possible implementation, the second processing module 1502 is specifically configured to obtain the lengths of the projections of the first points' normal vectors onto at least one coordinate axis, sort the lengths into preset length intervals, and obtain the attribute features of the current point cloud from the number of first points in each length interval.
In one possible implementation, the second processing module 1502 is specifically configured to obtain the lengths of the projections of the first points' normal vectors onto the z coordinate axis.
In one possible implementation, the attribute features include the curvature features of a point cloud, and every point in the current point cloud is a first point; for each first point of the current point cloud, the second processing module 1502 is specifically configured to: determine the first scan line to which the first point belongs; obtain the curvature of the first point from its position and the positions of the points on both sides of it on the first scan line; sort the curvature into preset curvature intervals; and obtain the attribute features of the current point cloud from the number of first points in each curvature interval.
In one possible implementation, the point cloud map includes the attribute features of the point clouds of multiple key frames, and the second processing module 1502 is specifically configured to: obtain the similarity between the attribute features of the current point cloud and those of each key frame's point cloud; obtain the target key frames corresponding to the top preset number of similarities in descending order of similarity; take the pose corresponding to each target key frame as a candidate pose; and determine the pose of the object to be positioned among the candidate poses.
In one possible implementation, the second processing module 1502 is specifically configured to: transform the current point cloud into the point cloud map using the transformation corresponding to each candidate pose, obtaining each candidate pose's transformed point cloud; obtain, in the map, the target point cloud closest to each candidate pose's transformed point cloud; perform iterative closest point (ICP) matching between each candidate pose's transformed point cloud and the corresponding target point cloud, obtaining the transformed point cloud with the highest matching score; and obtain the pose of the object to be positioned from the pose corresponding to that transformed point cloud and the transformation of the ICP matching.
In one possible implementation, the second processing module 1502 is further configured to cluster the current point cloud to obtain the clusters of the objects in the environment and, among the clusters of the objects other than the preset objects, delete the clusters whose maximum z value is below the preset value, so as to extract the attribute features of the current point cloud after the deletion.
In one possible implementation, while the object to be positioned moves through the environment, the point cloud map construction module 1503 is configured to collect the point clouds corresponding to the poses of the object to be positioned, extract their attribute features, and store the attribute features in correspondence with the poses, constructing the point cloud map.
In one possible implementation, while the object to be positioned moves through the environment, the point cloud map construction module 1503 is configured to capture one key frame's point cloud every preset distance and obtain the pose of the object to be positioned when capturing it, so as to obtain the point clouds corresponding to the poses.
In one possible implementation, the second processing module 1502 is further configured to cluster the point clouds of the poses to obtain the clusters of the objects in the environment and, among the clusters of the objects other than the preset objects, delete the clusters whose maximum z value is below the preset value, so as to extract the attribute features of the point clouds of the poses after the deletion.
The positioning apparatus provided in the embodiments of this application can perform the actions of the object to be positioned in the method embodiments above; its implementation principle and technical effects are similar and are not repeated here.
It should be noted that the above processing modules may be implemented as software invoked by a processing element, or as hardware. For example, a processing module may be a separately established processing element, may be integrated into a chip of the above apparatus, or may be stored in the apparatus's memory in the form of program code that a processing element of the apparatus invokes to execute the module's functions. Moreover, these modules may be wholly or partly integrated together, or implemented independently. The processing element referred to here may be an integrated circuit with signal processing capability. In implementation, the steps of the methods or the modules above can be completed by integrated logic circuits of hardware in a processor element or by instructions in the form of software.
For example, the modules above may be one or more integrated circuits configured to implement the methods above, such as one or more application specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field programmable gate arrays (FPGA). As another example, when a module is implemented in the form of program code scheduled by a processing element, the element may be a general-purpose processor such as a central processing unit (CPU) or another processor that can invoke program code. As yet another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
FIG. 16 is a schematic structural diagram of the electronic device provided in the embodiments of this application, e.g. the robot or vehicle described above. As shown in FIG. 16, the electronic device 1600 may include a processor 1601 (e.g. a CPU), a memory 1602, and a point cloud collection apparatus 1603. The point cloud collection apparatus 1603 can be coupled to the processor 1601, which controls it to perform the actions of emitting lidar or millimeter-wave radar signals so as to collect the point cloud around the electronic device, allowing the electronic device to obtain its surrounding point cloud. The memory 1602 may contain high-speed random-access memory (RAM) and may also contain non-volatile memory (NVM), e.g. at least one disk store; it can hold various instructions for completing the processing functions and implementing the method steps of this application. Optionally, the electronic device 1600 may further include a power supply 1604, a communication bus 1605, and a communication port 1606; the communication port 1606 connects the electronic device to other peripherals for communication.
In the embodiments of this application, the memory 1602 stores computer-executable program code comprising instructions; when the processor 1601 executes the instructions, they cause the processor 1601 of the electronic device to perform the actions of the method embodiments above, with similar implementation principles and technical effects, not repeated here.
The embodiments above may be implemented wholly or partly in software, hardware, firmware, or any combination of them. When software is used, they may be implemented wholly or partly in the form of a computer program product comprising one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions of the embodiments of this application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g. coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g. infrared, radio, microwave). The computer-readable storage medium may be any usable medium the computer can access, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g. floppy disk, hard disk, magnetic tape), an optical medium (e.g. DVD), or a semiconductor medium (e.g. a solid state disk (SSD)), etc.
The term "multiple" herein means two or more. The term "and/or" herein merely describes an association between associated objects and indicates three possible relationships; for example, "A and/or B" can mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it; in formulas, the character "/" indicates a "division" relationship between them.
It can be understood that the various numeric labels in the embodiments of this application are distinctions made merely for convenience of description and do not limit the scope of the embodiments of this application. It can also be understood that the sequence numbers of the processes above do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of this application.

Claims (19)

  1. A positioning method, characterized by comprising:
    collecting a current point cloud, wherein the current point cloud comprises a point cloud of an object to be positioned and a point cloud of an environment in which the object to be positioned is located;
    extracting attribute features of the current point cloud; and
    obtaining a pose of the object to be positioned according to the attribute features of the current point cloud and attribute features of point clouds in a point cloud map of the environment in which the object to be positioned is located.
  2. The method according to claim 1, characterized in that the attribute features comprise at least one of the following: projection features of a point cloud, normal vector features of a point cloud, and curvature features of a point cloud.
  3. The method according to claim 2, characterized in that the attribute features comprise the projection features of a point cloud, and the extracting the attribute features of the current point cloud comprises:
    projecting the current point cloud onto at least one of the following coordinate planes: an xoy plane, a yoz plane, and an xoz plane, to obtain a current projected point cloud, wherein o is a center position of the object to be positioned, and x, y, and z are coordinate axes constructed with o as the coordinate origin;
    obtaining distances between the current projected point cloud and o, and sorting the distances into preset distance intervals; and
    obtaining the attribute features of the current point cloud according to the number of points of the current point cloud in each distance interval.
  4. The method according to claim 3, characterized in that the projecting the current point cloud onto at least one of the coordinate planes comprises:
    projecting the current point cloud onto the xoy plane, the yoz plane, and the xoz plane separately, to obtain three current projected point clouds corresponding to the current point cloud;
    the obtaining the distances between the current projected point cloud and o and sorting the distances into preset distance intervals comprises:
    obtaining distances between each current projected point cloud of the current point cloud and o, and sorting the distances into the preset distance intervals of the corresponding coordinate plane; and
    the obtaining the attribute features of the current point cloud according to the number of points of the current point cloud in each distance interval comprises:
    obtaining the attribute features of the current point cloud according to the number of points of the current point cloud in each distance interval of each coordinate plane.
  5. The method according to claim 3 or 4, characterized in that the preset distance intervals relate to a farthest distance and a nearest distance of the points in the point clouds collected by the object to be positioned.
  6. The method according to any one of claims 2-5, characterized in that the attribute features comprise the normal vector features of a point cloud, and every point in the current point cloud is a first point;
    for each first point in the current point cloud, the extracting the attribute features of the current point cloud comprises:
    determining a first scan line to which the first point belongs;
    on the first scan line, obtaining, on each of the two sides of the first point, a first neighboring point closest to the first point;
    obtaining, on a second scan line, a second neighboring point closest to the first point, wherein the second scan line is adjacent to the first scan line;
    obtaining a normal vector of the first point according to the first point, the first neighboring points, and the second neighboring points; and
    obtaining the attribute features of the current point cloud according to the normal vector of the first point.
  7. The method according to claim 6, characterized in that the obtaining the normal vector of the first point according to the first point, the first neighboring points, and the second neighboring points comprises:
    obtaining first vectors from the first point to the first neighboring points;
    obtaining second vectors from the first point to the second neighboring points; and
    computing pairwise cross products among the first vectors and the second vectors, and taking the mean of the results of the cross product computation as the normal vector of the first point.
  8. The method according to claim 6 or 7, characterized in that the obtaining the attribute features of the current point cloud according to the normal vector of the first point comprises:
    obtaining a length of a projection of the normal vector of the first point onto at least one coordinate axis, and sorting the length into preset length intervals; and
    obtaining the attribute features of the current point cloud according to the number of first points in each length interval.
  9. The method according to claim 8, characterized in that the obtaining the length of the projection of the normal vector of the first point onto at least one coordinate axis comprises:
    obtaining the length of the projection of the normal vector of the first point onto the z coordinate axis.
  10. The method according to any one of claims 2-9, characterized in that the attribute features comprise the curvature features of a point cloud, and every point in the current point cloud is a first point;
    for each first point in the current point cloud, the extracting the attribute features of the current point cloud comprises:
    determining a first scan line to which the first point belongs;
    obtaining a curvature of the first point according to the position of the first point and the positions of the points on both sides of the first point on the first scan line;
    sorting the curvature of the first point into preset curvature intervals; and
    obtaining the attribute features of the current point cloud according to the number of first points in each curvature interval.
  11. The method according to any one of claims 1-10, characterized in that the point cloud map comprises attribute features of point clouds of multiple key frames, and the obtaining the pose of the object to be positioned according to the attribute features of the current point cloud and the attribute features of the point clouds in the point cloud map of the environment in which the object to be positioned is located comprises:
    obtaining a similarity between the attribute features of the current point cloud and the attribute features of the point cloud of each key frame;
    obtaining, in descending order of similarity, target key frames corresponding to a top preset number of similarities;
    taking the pose corresponding to each target key frame as a candidate pose; and
    determining the pose of the object to be positioned among the multiple candidate poses.
  12. The method according to claim 11, characterized in that the determining the pose of the object to be positioned among the multiple candidate poses comprises:
    transforming the current point cloud into the point cloud map according to the transformation corresponding to each candidate pose, to obtain a transformed point cloud corresponding to each candidate pose;
    obtaining, in the point cloud map, a target point cloud closest to the transformed point cloud corresponding to each candidate pose;
    performing iterative closest point (ICP) matching between the transformed point cloud corresponding to each candidate pose and the corresponding target point cloud, to obtain the transformed point cloud with the highest matching score; and
    obtaining the pose of the object to be positioned according to the pose corresponding to the transformed point cloud with the highest matching score and the transformation of the ICP matching.
  13. The method according to any one of claims 1-12, characterized in that, before the extracting the attribute features of the current point cloud, the method further comprises:
    clustering the current point cloud to obtain point cloud clusters of the objects in the environment; and
    among the point cloud clusters of the objects other than preset objects, deleting the point cloud clusters whose maximum z value is below a preset value;
    the extracting the attribute features of the current point cloud comprises:
    extracting the attribute features of the current point cloud after the deletion.
  14. The method according to any one of claims 1-13, characterized in that, before the collecting the current point cloud, the method further comprises:
    while the object to be positioned moves through the environment, collecting point clouds corresponding to the poses of the object to be positioned;
    extracting attribute features of the point clouds corresponding to the poses; and
    storing the attribute features of the point clouds corresponding to the poses in correspondence with the poses, to construct the point cloud map.
  15. The method according to claim 14, characterized in that the collecting, while the object to be positioned moves through the environment, the point clouds corresponding to the poses comprises:
    capturing, while the object to be positioned moves through the environment, a point cloud of one key frame every preset distance; and
    obtaining the pose of the object to be positioned when capturing the point cloud of the one key frame, to obtain the point clouds corresponding to the poses.
  16. The method according to claim 14, characterized in that, before the extracting the attribute features of the point clouds corresponding to the poses, the method further comprises:
    clustering the point clouds corresponding to the poses to obtain point cloud clusters of the objects in the environment; and
    among the point cloud clusters of the objects other than preset objects, deleting the point cloud clusters whose maximum z value is below a preset value;
    the extracting the attribute features of the point clouds corresponding to the poses comprises:
    extracting the attribute features of the point clouds corresponding to the poses after the deletion.
  17. A positioning apparatus, characterized by comprising:
    a first processing module, configured to collect a current point cloud, wherein the current point cloud comprises a point cloud of an object to be positioned and a point cloud of an environment in which the object to be positioned is located; and
    a second processing module, configured to extract attribute features of the current point cloud and obtain a pose of the object to be positioned according to the attribute features of the current point cloud and attribute features of point clouds in a point cloud map of the environment in which the object to be positioned is located.
  18. An electronic device, characterized by comprising a memory, a processor, and a point cloud collection apparatus;
    the processor is configured to couple with the memory and to read and execute the instructions in the memory, to implement the method according to any one of claims 1-16; and
    the point cloud collection apparatus is configured to collect the point cloud around the object to be positioned.
  19. A computer-readable storage medium, characterized in that the computer storage medium stores computer instructions which, when executed by a computer, cause the computer to perform the method according to any one of claims 1-16.
PCT/CN2020/124519 2020-10-28 2020-10-28 Positioning method and apparatus, electronic device and storage medium WO2022087916A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202080004463.9A CN112543859B (zh) 2020-10-28 2020-10-28 Positioning method and apparatus, electronic device and storage medium
CN202210726454.1A CN115143962A (zh) 2020-10-28 2020-10-28 Positioning method and apparatus, electronic device and storage medium
PCT/CN2020/124519 WO2022087916A1 (zh) 2020-10-28 2020-10-28 Positioning method and apparatus, electronic device and storage medium
EP20959079.3A EP4215874A4 (en) 2020-10-28 2020-10-28 POSITIONING METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/124519 WO2022087916A1 (zh) 2020-10-28 2020-10-28 Positioning method and apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2022087916A1 true WO2022087916A1 (zh) 2022-05-05

Family

ID=75017370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/124519 WO2022087916A1 (zh) Positioning method and apparatus, electronic device and storage medium 2020-10-28 2020-10-28

Country Status (3)

Country Link
EP (1) EP4215874A4 (zh)
CN (2) CN115143962A (zh)
WO (1) WO2022087916A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115856931A * 2023-03-01 2023-03-28 陕西欧卡电子智能科技有限公司 Lidar-based berth relocation method for unmanned surface vessels

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990235B (zh) * 2021-05-06 2021-08-20 北京云圣智能科技有限责任公司 Point cloud data processing method and apparatus, and electronic device
CN113343840B (zh) * 2021-06-02 2022-03-08 合肥泰瑞数创科技有限公司 Object recognition method and apparatus based on three-dimensional point clouds
CN116229040A (zh) * 2022-07-15 2023-06-06 深圳市速腾聚创科技有限公司 Target area positioning method and target area positioning apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108917761A * 2018-05-07 2018-11-30 Xi'an Jiaotong University Precise positioning method for an unmanned vehicle in an underground garage
CN109709801A * 2018-12-11 2019-05-03 智灵飞(北京)科技有限公司 Lidar-based indoor unmanned aerial vehicle positioning system and method
CN109785389A * 2019-01-18 2019-05-21 Sichuan Changhong Electric Co., Ltd. Three-dimensional object detection method based on hash description and iterative closest point
CN110223379A * 2019-06-10 2019-09-10 Yu Xinghu Lidar-based three-dimensional point cloud reconstruction method
US20190340781A1 * 2018-09-07 2019-11-07 Baidu Online Network Technology (Beijing) Co., Ltd. Obstacle detecting method and obstacle detecting apparatus based on unmanned vehicle, and device, and storage medium
CN111583369A * 2020-04-21 2020-08-25 Tianjin University Laser SLAM method based on the extraction of plane, line, and corner features

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700398A * 2014-12-31 2015-06-10 Xi'an University of Technology Point cloud scene object extraction method
US20180247122A1 * 2017-02-28 2018-08-30 VimAl Oy Method and system of providing information pertaining to objects within premises
US10895971B2 * 2017-05-12 2021-01-19 Irobot Corporation Methods, systems, and devices for mapping, controlling, and displaying device status
CN109974742A * 2017-12-28 2019-07-05 沈阳新松机器人自动化股份有限公司 Laser odometry calculation method and map construction method
CN110458854B * 2018-05-02 2022-11-15 北京图森未来科技有限公司 Road edge detection method and apparatus
CN109974707B * 2019-03-19 2022-09-23 Chongqing University of Posts and Telecommunications Visual navigation method for an indoor mobile robot based on an improved point cloud matching algorithm
CN110084840B * 2019-04-24 2022-05-13 阿波罗智能技术(北京)有限公司 Point cloud registration method and apparatus, server, and computer-readable medium
CN110705574B * 2019-09-27 2023-06-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Positioning method and apparatus, device, and storage medium
CN111429574B * 2020-03-06 2022-07-15 Shanghai Jiao Tong University Mobile robot positioning method and system based on fusion of three-dimensional point clouds and vision
CN111311684B * 2020-04-01 2021-02-05 亮风台(上海)信息科技有限公司 Method and device for performing SLAM initialization
CN111161347B * 2020-04-01 2020-09-29 亮风台(上海)信息科技有限公司 Method and device for performing SLAM initialization
CN111627114A * 2020-04-14 2020-09-04 Beijing Megvii Technology Co., Ltd. Indoor visual navigation method, apparatus and system, and electronic device
CN111505662B * 2020-04-29 2021-03-23 Beijing Institute of Technology Unmanned vehicle positioning method and system
CN111596298B * 2020-05-13 2022-10-14 北京百度网讯科技有限公司 Target object positioning method, apparatus, device, and storage medium
CN111638528B * 2020-05-26 2023-05-30 北京百度网讯科技有限公司 Positioning method and apparatus, electronic device, and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4215874A4 *


Also Published As

Publication number Publication date
EP4215874A1 (en) 2023-07-26
CN112543859A (zh) 2021-03-23
CN112543859B (zh) 2022-07-15
CN115143962A (zh) 2022-10-04
EP4215874A4 (en) 2023-11-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20959079; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020959079; Country of ref document: EP; Effective date: 20230421)
NENP Non-entry into the national phase (Ref country code: DE)