WO2021253430A1 - Absolute pose determination method, electronic device, and movable platform - Google Patents

Absolute pose determination method, electronic device, and movable platform

Info

Publication number: WO2021253430A1
Authority: WO (WIPO PCT)
Prior art keywords: map, key frame, point cloud, local, point
Application number: PCT/CN2020/097198
Other languages: English (en), French (fr)
Inventor: 朱晏辰 (Zhu Yanchen), 李延召 (Li Yanzhao)
Original Assignee: 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/097198 (WO2021253430A1)
Priority to CN202080006249.7A (CN114080625A)
Publication of WO2021253430A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Definitions

  • the embodiment of the present invention relates to the technical field of distance measurement, and more specifically, to a method for determining an absolute pose, an electronic device, and a movable platform.
  • In the background, RTK (Real-Time Kinematic) carrier-phase differential technology built on GPS (Global Positioning System) can obtain centimeter-level positioning accuracy, but it is expensive.
  • Moreover, the above technologies can only obtain position information, not attitude information.
  • the first aspect of the embodiments of the present invention provides an absolute pose determination method based on lidar, including:
  • the basic map includes a plurality of key frame maps, each key frame map corresponds to a key frame pose, and each key frame map includes information of the first point cloud data collected by the first lidar in the key frame pose;
  • the second point cloud data is collected by the second lidar mounted on the movable platform, and a local map in the current pose is obtained according to the second point cloud data;
  • the local map is matched with the multiple key frame maps to determine a key frame map that matches the local map, and the current pose of the second lidar is determined according to the key frame pose corresponding to that key frame map.
  • a second aspect of the embodiments of the present invention provides an electronic device.
  • the electronic device includes a storage device and a processor, wherein the storage device is used to store program code; the processor is used to execute the program code.
  • when the program code is executed, it is used for:
  • the basic map includes a plurality of key frame maps, each key frame map corresponds to a key frame pose, and each key frame map includes information of the first point cloud data collected by the first lidar in the key frame pose;
  • the second point cloud data is collected by the second lidar mounted on the movable platform, and a local map in the current pose is obtained according to the second point cloud data;
  • the local map is matched with the multiple key frame maps to determine a key frame map that matches the local map, and the current pose of the second lidar is determined according to the key frame pose corresponding to that key frame map.
  • a third aspect of the embodiment of the present invention provides a movable platform on which a lidar is mounted, and the movable platform further includes the electronic device provided in the second aspect of the embodiment of the present invention.
  • a fourth aspect of the embodiments of the present invention provides a computer storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the steps of the absolute pose determination method provided in the first aspect of the embodiments of the present invention are implemented.
  • The absolute pose determination method, electronic device, and movable platform of the embodiments of the present invention construct a basic map based on point cloud data in advance, and match the local map generated from point cloud data collected in real time against the basic map to determine the current absolute pose; the pose calculation accuracy is high, there is no need to rely on base station deployment, and the approach has strong versatility.
  • FIG. 1 is a schematic flowchart of a method for determining an absolute pose based on lidar provided by an embodiment of the present invention
  • Fig. 2 is a schematic diagram of a scene for constructing a basic map according to an embodiment of the present invention
  • Figure 3 is a schematic block diagram of an electronic device provided by an embodiment of the present invention.
  • Figure 4 is a schematic block diagram of a movable platform provided by an embodiment of the present invention.
  • FIG. 5 is a schematic block diagram of a distance measuring device related to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an embodiment in which the distance measuring device involved in the embodiment of the present invention adopts a coaxial optical path
  • Fig. 7 is a schematic diagram of a scanning pattern of a lidar according to an embodiment of the present invention.
  • FIG. 1 shows a schematic flowchart of a method 100 for determining an absolute pose based on lidar according to an embodiment of the present application.
  • the absolute pose determination method 100 includes the following steps:
  • Step S110: load a pre-built basic map of the current scene, where the basic map includes a plurality of key frame maps, each key frame map corresponds to a key frame pose, and each key frame map includes information of the first point cloud data collected by the first lidar under the key frame pose;
  • Step S120: in the current pose, collect second point cloud data with the second lidar mounted on the movable platform, and obtain a local map in the current pose according to the second point cloud data;
  • Step S130: match the local map with the multiple key frame maps to determine a key frame map that matches the local map, and determine the current pose of the second lidar according to the key frame pose corresponding to that key frame map.
  • The absolute pose determination method 100 of the embodiment of the present invention completes the basic map construction by scanning the current scene with a lidar in the map construction phase, and in the pose determination phase directly determines the current absolute pose algorithmically from point cloud data collected in real time. It offers high pose calculation accuracy, no dependence on base station deployment, and strong versatility, and is especially suitable for indoor environments without GPS or with weak GPS signals, such as shopping malls, airports, warehouses, and hotels.
  • the method 100 of the embodiment of the present invention can be applied to a movable platform equipped with lidar.
  • the movable platform includes but is not limited to unmanned vehicles, robots, etc., and may further include indoor robots running in the aforementioned indoor scenes.
  • The lidar carried on the movable platform is the second lidar in step S120. After the second point cloud data is collected by the second lidar, the current pose of the second lidar can be determined according to the second point cloud data and the pre-built basic map of the current scene, and thereby the current pose of the movable platform carrying the second lidar is determined.
  • the loaded basic map is constructed in advance based on the first point cloud data collected by the first lidar in the current scene.
  • the first lidar may be a mechanical lidar, or a solid or semi-solid lidar.
  • the mobile platform can carry the first lidar to move in the current scene, and the first lidar collects the first point cloud data during the movement, and then builds the basic map based on the first point cloud data.
  • the first lidar and the second lidar can be the same lidar, thereby improving the degree of matching between the local map and the base map.
  • the first lidar and the second lidar can also be different lidars.
  • the same or similar movable platforms can be used for basic map construction and absolute pose determination, thereby further improving the degree of matching between the local map and the basic map.
  • The first lidar actively emits laser pulses toward the detected object, captures the laser echo signal, and calculates the distance of the detected object according to the time difference between laser emission and reception; the angle information of the detected object is obtained from the known emission direction of the laser; and the reflectivity information is obtained from the pulse width and the distance.
  • The first lidar can detect the distance from the measured object to the lidar through Time-of-Flight (TOF).
  • Alternatively, the first lidar may use other technologies to detect this distance, such as a method based on phase shift or frequency shift measurement, which is not limited here.
  • Fig. 2 shows a schematic diagram of a scene for constructing a basic map according to an embodiment of the present invention.
  • The mobile platform carries the first lidar to roam in the current scene, obtains the mapping trajectory through a SLAM (Simultaneous Localization and Mapping) algorithm, and converts the point cloud data collected during roaming into the same coordinate system to obtain the first point cloud data.
  • the movable platform may be a warehousing logistics trolley, and the current scenario may be a warehouse where the warehousing logistics trolley runs.
  • The mapping trajectory of the movable platform should ensure that the first point cloud data covers as much of the current scene as possible; it includes but is not limited to the closed curve shown in FIG. 2.
  • the basic map constructed according to the above-mentioned first point cloud data includes a key frame map corresponding to the key frame pose at the key frame moment.
  • the key frame map is in the form of an image, that is, an object is represented by pixel values.
  • the smallest processing unit of the first point cloud data is called a frame.
  • For example, for a first lidar with an output frequency of 10 Hz, the first point cloud data collected every 100 ms is called a frame.
  • One frame is selected as a key frame at predetermined time intervals, and the time interval between adjacent key frames can be set dynamically according to the size of the current scene and the movement speed of the movable platform.
  • the last frame of each time period may be determined as a key frame.
  • the key frame may also be the first frame or a certain frame in the middle of each time period.
  • When the first lidar is in the key frame pose (i.e., position and attitude), its covered field of view (FOV) is the field of view under the key frame pose.
  • the first point cloud data collected at this time includes the point cloud covered by the angle of view.
  • Since the key frame map is generated based on the point cloud data at the key frame time, the key frame map is associated with the key frame pose at that time.
  • The key frame map may include not only the point cloud data collected at the key frame time: the first point cloud data within a period of time before and after the key frame time can also be superimposed onto the key frame time, and the key frame map is then generated based on the first point cloud data superimposed at the key frame time. Superimposing this data onto the key frame time makes the point cloud data at the key frame time denser.
  • The key frame map generated from the superimposed first point cloud data therefore contains more spatial information under the key frame pose.
  • Within a short period of time, the field of view covered by the lidar changes relatively little, so the target objects contained in the first point cloud data within that period differ only slightly.
  • The first point cloud data collected at other times is converted into the point cloud coordinate system at the key frame time to realize the superposition. This conversion is equivalent to treating the first point cloud data collected at other times as point cloud data collected under the key frame pose, so the superimposed first point cloud data still corresponds to the key frame pose.
  • "superimposing the first point cloud data within a period of time before and after the key frame time on the key frame time” may include superimposing the first point cloud data within a period of time before the key frame time on the key frame time, and adding the key frame time.
  • All the point cloud frames between two adjacent key frames can be superimposed onto one of the key frame moments, so that the point cloud data superimposed at the key frame moment contains as much information as possible, which improves the accuracy of the subsequent matching.
  • part of the point cloud frames between adjacent key frames can also be superimposed on the key frame moments, thereby reducing the amount of calculation.
  • For example, the last frame of each time interval may be determined as a key frame, and the first point cloud data within the period of time before the key frame is superimposed onto the key frame time.
  • Superimposing only the data from before the key frame avoids the situation where the first point cloud data cannot be superimposed because no further point cloud frames exist after the last key frame.
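  • As an illustration of this superposition, the following Python sketch transforms several point cloud frames into the key frame's coordinate system and merges them; the inputs (frames as N×3 arrays, poses as 4×4 world-from-lidar matrices taken from the mapping trajectory) and the function name are illustrative assumptions, not the patent's exact data layout.

      import numpy as np

      def superimpose_on_keyframe(frames, poses, key_idx):
          # Transform every frame into the key frame's coordinate system and
          # stack the results, so the superimposed cloud still corresponds
          # to the key frame pose.
          T_key_world = np.linalg.inv(poses[key_idx])
          merged = []
          for pts, T_world_i in zip(frames, poses):
              T_key_i = T_key_world @ T_world_i   # key <- world <- frame i
              homo = np.hstack([pts, np.ones((len(pts), 1))])
              merged.append((homo @ T_key_i.T)[:, :3])
          return np.vstack(merged)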
  • the key frame map corresponding to each key frame pose includes at least one of a key frame reflectivity map and a key frame depth map.
  • the key frame reflectivity map is constructed based on the reflectivity information of the first point cloud data superimposed at the key frame time;
  • the key frame depth map is constructed based on the depth information of the first point cloud data superimposed at the key frame time.
  • Both reflectivity information and depth information can characterize the target object, thereby forming a unique key frame map corresponding to each key frame pose.
  • The method of constructing a key frame reflectivity map from reflectivity information, or a key frame depth map from depth information, includes: converting the point cloud points in the first point cloud data superimposed at the key frame moment into a spherical coordinate system, mapping the azimuth and elevation angles to the X and Y directions of the map respectively, and determining the pixel values (such as pixel gray values) from the reflectivity or depth values, so that the point cloud can be converted into a grayscale image at a certain angular resolution (for example, 0.1 degree × 0.1 degree).
  • the key frame map can be matched by image processing in the subsequent steps to reduce the difficulty of matching.
  • In addition, the key frame map carries the position information of the point cloud data, thus ensuring the accuracy of the absolute pose calculation.
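  • A minimal sketch of this image construction, assuming the superimposed points are given as an N×3 array in the key frame coordinate system and that pixel values are normalized linearly to 8-bit gray (a choice not specified by the patent):

      import numpy as np

      def cloud_to_keyframe_map(points, values=None, res_deg=0.1):
          # Spherical coordinates: azimuth -> image X, elevation -> image Y.
          x, y, z = points[:, 0], points[:, 1], points[:, 2]
          depth = np.linalg.norm(points, axis=1)
          azim = np.degrees(np.arctan2(y, x))
          elev = np.degrees(np.arcsin(z / np.maximum(depth, 1e-9)))
          vals = depth if values is None else values   # depth map or reflectivity map
          # Rasterize at the chosen angular resolution (e.g. 0.1 deg * 0.1 deg).
          u = ((azim - azim.min()) / res_deg).astype(int)
          v = ((elev - elev.min()) / res_deg).astype(int)
          img = np.zeros((v.max() + 1, u.max() + 1), dtype=np.float32)
          img[v, u] = vals              # one value per pixel; later points overwrite
          return (255 * img / max(img.max(), 1e-9)).astype(np.uint8)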
  • the pre-built base map may further include a global point cloud map in the current scene generated according to the first point cloud data.
  • the global point cloud map is in the form of a point cloud map, which is composed of a large number of point cloud points.
  • Specifically, the mobile platform carries the first lidar to roam in the current scene and collects the first point cloud data in different poses; each frame of the first point cloud data is then converted into the world coordinate system and superimposed. Since the superimposed first point cloud data contains the global point cloud of the current scene, the generated point cloud map is called the global point cloud map.
  • the global point cloud map can be constructed as a global planar point map and a global edge point map, respectively.
  • the global plane point map includes the plane points in the point cloud, which correspond to the points on a plane in the real scene;
  • the global edge point map includes the edge points in the point cloud, which correspond to points on edges in the real scene, such as the boundary lines of intersecting planes, the edges of isolated planes and objects, and thin rods.
  • In each frame of the first point cloud data, edge points and plane points are extracted respectively; finally, all edge points are converted into the world coordinate system to form the global edge point map, and the global plane point map is generated from the plane points in the same way.
  • the first laser radar used in the embodiment of the present invention includes not only a mechanical laser radar, but also a solid or semi-solid laser radar.
  • the point cloud scanning pattern of mechanical lidar is regular, and the extraction of plane points and edge points is relatively simple, while the point cloud scanning pattern of semi-solid or solid-state lidar is irregular.
  • The feature extraction methods used in the past cannot accurately extract plane points and edge points from such irregular scanning patterns. Therefore, the embodiment of the present invention adopts the following method to extract plane points and edge points.
  • the current frame point cloud data in the first point cloud data is preprocessed first.
  • the preprocessing may include the following steps: First, traverse the point cloud data of the current frame to obtain and mark the zero point in the point cloud data of the current frame.
  • the zero point may be a point in the blind zone, a point at infinity, etc.
  • the point cloud data of the current frame is traversed to determine the noise points, and the plane points and edge points will be extracted from the point cloud points other than the noise points.
  • The method for determining the noise points includes: calculating the distance between each point cloud point and the two points before and after it; if both distances exceed a certain threshold, the point cloud point is marked as a noise point.
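  • The preprocessing can be sketched as follows; the blind-zone radius and the noise threshold are placeholder values, since the patent does not state them:

      import numpy as np

      def preprocess_frame(points, blind_radius=0.5, noise_thresh=1.0):
          # points: (N, 3) xyz of the current frame, in scan (time) order.
          dist = np.linalg.norm(points, axis=1)
          # Zero points: points in the blind zone, invalid returns, points at infinity.
          is_zero = (dist < blind_radius) | ~np.isfinite(dist)
          # Noise points: distance to BOTH the previous and the next point
          # exceeds the threshold.
          gap = np.linalg.norm(points[1:] - points[:-1], axis=1)  # gap[i] = |p[i+1] - p[i]|
          is_noise = np.zeros(len(points), dtype=bool)
          is_noise[1:-1] = (gap[:-1] > noise_thresh) & (gap[1:] > noise_thresh)
          is_noise &= ~is_zero
          return is_zero, is_noise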
  • the plane points are extracted from the first point cloud data after the preprocessing.
  • the extraction of plane points can be realized by the sliding window method.
  • The method of extracting plane points includes: first, according to the sliding window size determined in the preprocessing step (for example, a sliding window size of 5), a group of a first predetermined number of point cloud points (for example, 5 point cloud points) is obtained in time order from the first point cloud data of the current frame, and it is determined whether the obtained group of point cloud points meets the first preset condition.
  • The first preset condition includes: the spatial distribution of the group of point cloud points is approximately a straight line, and the group of point cloud points is approximately symmetric about its center point.
  • a principal component analysis method may be used to determine whether the acquired set of point cloud points meets the first preset condition.
  • If the condition is met, the group of point cloud points is determined as plane point candidates. After that, the sliding window moves backward to obtain the next group of the same number of point cloud points for judgment, where the next group includes at least one point cloud point of the previous group. After the sliding window has traversed all the point cloud points and extracted all plane point candidates meeting the first preset condition, the final plane point extraction result is determined among the candidates.
  • Specifically, the point cloud data of the current frame can be divided into several regions, the plane point candidates in each region can be sorted according to the degree to which they satisfy the first preset condition, and based on the sorting result, part of the candidates in each region are selected as the final plane point extraction result.
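  • The sliding-window test of the first preset condition can be sketched with a principal component analysis of each window; the eigenvalue ratio and symmetry tolerance below are illustrative thresholds, not values from the patent:

      import numpy as np

      def plane_point_candidates(points, window=5, line_ratio=25.0, sym_tol=0.2):
          candidates = []
          for start in range(len(points) - window + 1):
              group = points[start:start + window]    # window slides by one point,
              centered = group - group.mean(axis=0)   # so adjacent groups overlap
              # PCA: one dominant eigenvalue means the points lie near a line.
              eig = np.sort(np.linalg.eigvalsh(centered.T @ centered))[::-1]
              is_line = eig[0] > line_ratio * max(eig[1], 1e-12)
              # Approximate symmetry about the middle point: the group mean
              # should lie close to the center point.
              scale = np.linalg.norm(group[-1] - group[0]) + 1e-12
              off = np.linalg.norm(group.mean(axis=0) - group[window // 2])
              if is_line and off / scale < sym_tol:
                  candidates.append(start + window // 2)
          return candidates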
  • Edge points can be extracted based on the extraction results of plane points.
  • The embodiment of the present invention divides the edge points into three types during feature extraction: face-intersection edge points, jumping edge points, and small-object edge points.
  • A face-intersection edge point corresponds to a point on the boundary line of intersecting faces in three-dimensional space;
  • a jumping edge point corresponds to a point on the edge of an isolated face in three-dimensional space;
  • a small-object edge point corresponds to a point on the edge of a small object in three-dimensional space.
  • The extraction of face-intersection edge points includes at least the following steps:
  • the extraction of jumping edge points includes the following steps:
  • the current point cloud point is determined as the candidate jumping edge point.
  • the point cloud point pair formed by the candidate jumping edge point and the near side point is defined as the near side group
  • the point cloud point pair formed by the candidate jumping edge point and the far side point is defined as the far side group
  • It is then judged whether the following conditions are met: the near-side group satisfies the first preset condition; the angle between the direction vector formed by the near-side group and the direction vector formed by the far-side group satisfies the fourth threshold range; the maximum distance between any two points in the near-side group satisfies the fifth threshold range; and the angle between the direction vector formed by the near-side group and the exit direction of the candidate jumping edge point satisfies the sixth threshold range. Optionally, it can also be determined whether the far-side point is a non-zero point or a zero point outside the blind zone.
  • If these conditions are met, the candidate jumping edge point is determined as a final jumping edge point.
  • In addition, a dedicated extraction method is used for small-object edge points.
  • Specifically, isolated clusters of a second predetermined number of consecutive point cloud points are first determined as candidate small-object edge points, where the maximum distance between any two points in the cluster satisfies the seventh threshold range, and the second predetermined number (for example, two or three) is smaller than the first predetermined number (for example, five).
  • It is then judged whether the candidate small-object edge point and the edge points in the previous frame together constitute a slender edge; if so, the candidate small-object edge point is determined as a final small-object edge point.
  • This edge point and plane point extraction method has strong adaptability to the scanning pattern of the lidar: besides mechanical lidar, it also achieves a good extraction effect on the scanning patterns of solid-state and semi-solid lidars.
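  • As a greatly simplified sketch of the jumping-edge idea only: a point whose far-side neighbor lies much deeper than itself sits on the edge of an isolated surface. The full method above additionally checks the near-side group against the first preset condition and the fourth to sixth threshold ranges, which are omitted here because the patent does not give their values; the thresholds below are placeholders.

      import numpy as np

      def jumping_edge_candidates(points, jump_ratio=2.0, smooth_tol=0.1):
          depth = np.linalg.norm(points, axis=1)
          cands = []
          for i in range(1, len(points) - 1):
              # Near side stays smooth while the far side jumps far away.
              smooth_near = abs(depth[i - 1] - depth[i]) < smooth_tol * depth[i]
              deep_far = depth[i + 1] > jump_ratio * depth[i]
              if smooth_near and deep_far:
                  cands.append(i)
          return cands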
  • In step S120, in the current pose, the second lidar mounted on the movable platform collects the second point cloud data, and a local map in the current pose is obtained according to the second point cloud data.
  • In step S130, the local map is matched with the multiple key frame maps to determine a key frame map that matches the local map, and the current pose of the second lidar is determined according to the key frame pose corresponding to that key frame map.
  • For example, a trolley that needs to enter the warehouse to perform tasks first loads the basic map, turns on the second lidar, and starts point cloud data collection and absolute pose determination.
  • In one embodiment, the second lidar remains stationary in the current pose and collects the second point cloud data continuously for a period of time (for example, 1 s). If the second lidar is a non-repetitive scanning lidar, the coverage of the point cloud within its FOV keeps increasing during this collection time, so richer information can be collected to better determine the current pose.
  • the local map in the current pose is also in the form of an image.
  • The matching between the local map and the key frame map is a matching between images. Therefore, the type of the local map in the current pose corresponds to the type of the key frame map: if the key frame map includes a key frame depth map, the local map includes a local depth map, and the matching includes matching the local depth map with the key frame depth map; if the key frame map includes a key frame reflectivity map, the local map includes a local reflectivity map, and the matching includes matching the local reflectivity map with the key frame reflectivity map. It can be understood that, to improve accuracy, the matching may include performing both types of matching separately.
  • the construction of the local depth map and the local reflectivity map can be similar to the construction of the key frame depth map and the key frame reflectivity map.
  • That is, the pixel values are determined from the reflectivity or depth values so as to convert the point cloud into a grayscale image; please refer to the description above for details.
  • In some cases, the current pose can be determined merely by matching the local map with densely distributed key frame maps.
  • Since the local map and the key frame map are in the form of images, any suitable image matching method can be used to match them.
  • the feature matching method can be used to match the local map with the key frame map.
  • That is, the image features of the local map and the key frame map are extracted for matching; if the matching degree is greater than a preset threshold, or the matching residual is less than a certain threshold, it is determined that the local map matches the key frame map.
  • the image features extracted from the local map and the key frame map include HOG features, SIFT features, SURF features, ORB features, LBP features, HAAR features or any other suitable image features.
  • the gray-scale matching method can also be used to achieve the matching between the local map and the key frame map.
  • In one embodiment, only when both the matching degree between the local depth map and the key frame depth map and the matching degree between the local reflectivity map and the key frame reflectivity map are greater than their respective preset thresholds is it determined that the local map matches the key frame map, so as to ensure the accuracy of the matching.
  • Alternatively, when at least one of these two matching degrees is greater than its preset threshold, it is determined that the local map matches the key frame map.
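  • For instance, a feature-based match with ORB features (one of the feature types listed above) could look like the following sketch, where min_matches stands in for the preset matching-degree threshold:

      import cv2

      def match_local_to_keyframe(local_img, keyframe_img, min_matches=30):
          orb = cv2.ORB_create(nfeatures=1000)
          kp1, des1 = orb.detectAndCompute(local_img, None)
          kp2, des2 = orb.detectAndCompute(keyframe_img, None)
          if des1 is None or des2 is None:
              return False, []
          # Hamming-distance brute-force matcher with cross-checking.
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
          return len(matches) >= min_matches, matches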
  • If the local map matches a key frame map successfully, the current pose is similar to the key frame pose, but there is usually still a gap between the two. Therefore, after the matching key frame map is found, matching feature point pairs are extracted from the local map and the matched key frame map, and an ICP (Iterative Closest Point) solution or a similar method is used to calculate the pose transformation of the local map relative to the key frame map from the three-dimensional spatial information of the feature point pairs; the current pose is then determined from the solved pose transformation and the key frame pose corresponding to the key frame map, thereby ensuring the accuracy of the pose calculation.
  • Since the key frame map and the local map in the embodiment of the present invention are generated from point cloud data, the three-dimensional spatial information of the matched feature point pairs can be obtained directly; solving the absolute pose in this way is more convenient than collecting images with a camera, and improves the accuracy of the pose calculation.
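  • The core solve inside an ICP-style refinement is the closed-form rigid alignment of the matched 3-D feature point pairs; a standard SVD-based sketch (not the patent's specific implementation) is:

      import numpy as np

      def rigid_transform_3d(src, dst):
          # Find R, t with R @ src_i + t ~= dst_i for matched point pairs
          # from the local map (src) and the key frame map (dst), N >= 3.
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)   # cross-covariance matrix
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:              # guard against a reflection
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = dst_c - R @ src_c
          return R, t

  • Composing this local-to-keyframe transform with the key frame pose stored in the basic map then yields the current pose.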
  • the pre-built basic map further includes a global point cloud map in the current scene generated according to the first point cloud data
  • the method 100 of the embodiment of the present invention may further include:
  • generating a local point cloud map under the current perspective from the second point cloud data, and matching the local point cloud map with the global point cloud map to determine the current pose.
  • The global point cloud map includes a global plane point map and a global edge point map, and the local point cloud map correspondingly includes a local plane point map and a local edge point map.
  • Matching the local point cloud map with the global point cloud map includes: matching the local plane point map with the global plane point map, and matching the local edge point map with the global edge point map.
  • the local plane point map is constructed by extracting plane points from the second point cloud data
  • the local edge point map is constructed by extracting edge points from the second point cloud data.
  • Constructing the local plane point map includes: acquiring, in time order, groups of a first predetermined number of point cloud points from the second point cloud data that meet the first preset condition to serve as plane point candidates;
  • and obtaining the final plane point extraction result of the current frame of point cloud data from the determined plane point candidates;
  • wherein the first preset condition includes: the spatial distribution of the group of point cloud points is approximately a straight line, and the group of point cloud points is approximately symmetric about its center point.
  • Constructing the local edge point map includes: extracting edge points from the second point cloud data, where the edge points include face-intersection edge points and jumping edge points; a face-intersection edge point corresponds to a point on the boundary line of intersecting faces in three-dimensional space, and a jumping edge point corresponds to a point on the edge of an isolated face in three-dimensional space.
  • For the extraction of edge points and plane points from the second point cloud data, please refer to the description above of constructing the global plane point map and the global edge point map, which will not be repeated here.
  • Both the local point cloud map and the global point cloud map are in the form of point cloud maps.
  • Matching the local point cloud map with the global point cloud map can supplement the matching between the key frame map and the local map, achieving more refined matching.
  • For example, a coarse search can be performed first to determine multiple key frame maps that initially match the local map, and on this basis a refined search is performed based on the local point cloud map and the global point cloud map, realizing a coarse-to-fine search method.
  • In one example, when no key frame map whose matching degree exceeds the preset threshold can be found by matching the local map with the key frame maps alone, this coarse-to-fine search method can be used to determine the pose. Of course, the coarse-to-fine search method can also be used on its own.
  • the coarse search step mainly includes: first, multiple candidate key frame maps are selected from multiple key frame maps.
  • Candidate key frames can be selected based on objects with prominent features in the local map and the key frame maps. For example, if there is a highly reflective object in the current local map, the key frame maps that also contain a highly reflective object are used as candidate key frame maps.
  • the evaluation methods of image similarity include, but are not limited to, histogram matching, high-dimensional feature matching, or bag-of-words methods.
  • Alternatively, the judgment based on highly reflective objects may be skipped, and the candidate key frame maps may be determined directly based on image similarity.
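  • Ranking by image similarity can be sketched with gray-level histogram matching (one of the similarity measures listed above); the histogram size and top_k value are illustrative choices:

      import cv2
      import numpy as np

      def rank_candidate_keyframes(local_img, keyframe_imgs, top_k=5):
          def hist(img):
              h = cv2.calcHist([img], [0], None, [64], [0, 256])
              return cv2.normalize(h, h).flatten()
          h_local = hist(local_img)
          scores = [cv2.compareHist(h_local, hist(k), cv2.HISTCMP_CORREL)
                    for k in keyframe_imgs]
          # The most similar key frame maps become the candidate key frame maps.
          return np.argsort(scores)[::-1][:top_k]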
  • Next, the local point cloud map is converted into the world coordinate system according to the key frame pose corresponding to each candidate key frame map. The converted local point cloud map is then matched with the global point cloud map, and the target key frame map is selected from the candidate key frame maps according to the matching result.
  • Matching the local point cloud map with the global point cloud map includes matching the local edge point map with the global edge point map and matching the local plane point map with the global plane point map. Since both maps are in the form of point cloud maps, the matching can be done by point cloud matching, such as calculating point-to-line, point-to-plane, and point-to-point distances. The sums of all feature distances computed under the key frame pose of each candidate key frame map can then be sorted, and the candidate key frame map with the smallest sum of feature distances is taken as the target key frame map.
  • The refined registration includes: converting the local point cloud map into the world coordinate system according to the key frame pose corresponding to the target key frame map, and optimizing that key frame pose according to the feature distances between the local point cloud map and the global point cloud map to determine the current pose.
  • Specifically, the edge points and plane points in the local edge point map and the local plane point map can be converted into the world coordinate system; the sum of the point-to-line distances, point-to-plane distances, or other feature distances is taken as the loss function, and the key frame pose corresponding to the target key frame map is continuously adjusted and optimized until the loss function drops below a preset threshold, thereby solving for an accurate current pose.
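  • A minimal sketch of such an optimization, assuming point-to-plane correspondences (nearest plane point and its normal in the global plane point map) have already been found by a nearest-neighbor search, and using a generic least-squares solver rather than the patent's specific optimizer:

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def refine_pose(local_pts, plane_pts, plane_normals, T_init):
          def residuals(x):
              # 6-DoF perturbation (rotation vector + translation) applied
              # on top of the target key frame pose T_init.
              T = np.eye(4)
              T[:3, :3] = Rotation.from_rotvec(x[:3]).as_matrix()
              T[:3, 3] = x[3:]
              T = T @ T_init
              world = local_pts @ T[:3, :3].T + T[:3, 3]
              # Point-to-plane feature distance for each correspondence.
              return np.einsum('ij,ij->i', world - plane_pts, plane_normals)

          sol = least_squares(residuals, np.zeros(6))
          T = np.eye(4)
          T[:3, :3] = Rotation.from_rotvec(sol.x[:3]).as_matrix()
          T[:3, 3] = sol.x[3:]
          return T @ T_init                # the optimized current pose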
  • the current pose of the second lidar can be obtained, and then the current pose of the movable platform can be obtained.
  • For example, the warehousing logistics trolley can obtain its current pose C in the warehouse.
  • If the logistics task of the trolley is a material transportation task from A to B, path planning from C to A and from A to B can be carried out according to the position of pose C and the positions of points A and B.
  • the trolley is controlled to move, and the subsequent poses of the trolley are updated on the basis of the initial pose determined above.
  • Subsequent pose determination may fuse a SLAM (Simultaneous Localization and Mapping) algorithm with the absolute pose determination method of the embodiment of the present invention to determine pose information incrementally.
  • In addition, the base map can be updated based on the second point cloud data, so as to maintain the map state, enrich the information of the base map, and correct the base map in time when the current scene changes.
  • In summary, the absolute pose determination method of the embodiment of the present invention constructs a basic map based on point cloud data in advance and matches the local map generated from point cloud data collected in real time against the basic map to determine the current absolute pose; the pose calculation accuracy is high, there is no need to rely on base station deployment, and the method has strong versatility.
  • the electronic device includes a storage device and a processor.
  • The storage device is used to store program code; the processor is used to execute the program code, and when the program code is executed, the absolute pose determination method described above is implemented.
  • the electronic device of the embodiment of the present invention may be mounted on a movable platform, or may be independent of the movable platform.
  • the electronic device is communicatively connected with the second lidar mounted on the movable platform to receive the second point cloud data.
  • the electronic device is also used to load a pre-built basic map to realize the absolute pose determination of the second lidar based on the basic map and the second point cloud data.
  • FIG. 3 shows a schematic block diagram of an electronic device 300 in an embodiment of the present invention.
  • the electronic device 300 includes one or more processors 320 and one or more storage devices 310.
  • The electronic device 300 may further include at least one of an input device (not shown), an output device (not shown), and an image sensor (not shown), and these components are interconnected through a bus system and/or other forms of connection mechanisms (not shown).
  • The components and structure of the electronic device 300 shown in FIG. 3 are only exemplary and not restrictive; as needed, the electronic device 300 may also have other components and structures, for example, a transceiver for transmitting and receiving signals.
  • The storage device 310 is a memory for storing processor-executable instructions, for example, program instructions for the corresponding steps of the absolute pose determination method according to the embodiment of the present invention. It may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
  • the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like.
  • the input device may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, and a touch screen.
  • the output device may output various information (for example, images or sounds) to the outside (for example, a user), and may include one or more of a display, a speaker, and the like.
  • the communication interface (not shown) is used for communication between the electronic device 300 and other devices, including wired or wireless communication.
  • the electronic device 300 can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof.
  • the communication interface receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication interface further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • The processor 320 may be a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another form of processing unit with data processing and/or instruction execution capabilities, and can control other components in the electronic device 300 to perform desired functions.
  • the processor can execute the instructions stored in the storage device 310 to execute the absolute pose determination method described herein.
  • the processor 320 can include one or more embedded processors, processor cores, microprocessors, logic circuits, hardware finite state machines (FSM), digital signal processors (DSP), or combinations thereof.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 320 may run the program instructions stored in the storage device 310 to implement the functions of the embodiments of the present invention described herein and/or other desired functions, for example, to perform the corresponding steps of the absolute pose determination method according to the embodiment of the present invention.
  • Various application programs and various data, such as data used and/or generated by the application programs, can also be stored in the computer-readable storage medium.
  • Another aspect of the embodiments of the present invention provides a movable platform on which a lidar 410 is mounted, that is, the second lidar described above.
  • the movable platform further includes an electronic device 420, and the electronic device 420 is communicatively connected with the second lidar.
  • the electronic device 420 may refer to the electronic device 300 described in FIG. 3, which will not be repeated here.
  • the above point cloud point may be any point cloud point in the point cloud data obtained by the lidar.
  • the lidar 410 may be a traditional mechanical lidar, a solid-state or semi-solid lidar, or any other suitable laser ranging equipment.
  • lidar is used to sense external environmental information, for example, distance information, orientation information, reflection intensity information, speed information, etc. of environmental targets.
  • a point cloud point may include at least one of the external environment information measured by the lidar.
  • The lidar can detect the distance from the detected object to the lidar by measuring the time of light propagation between the lidar and the detected object, that is, the time of flight (TOF).
  • Alternatively, the lidar can use other technologies to detect this distance, such as a ranging method based on phase shift measurement or a ranging method based on frequency shift measurement, which is not limited here.
  • lidar is only used as an example, and other suitable lidars can also be applied to this application.
  • For ease of understanding, the lidar mentioned herein is described using the lidar 500 shown in FIG. 5 as an example.
  • the lidar 500 includes a transmitting circuit 510, a receiving circuit 520, a sampling circuit 530, and an arithmetic circuit 540.
  • the transmitting circuit 510 may emit a light pulse sequence (for example, a laser pulse sequence).
  • the receiving circuit 520 can receive the light pulse sequence reflected by the object to be detected, and perform photoelectric conversion on the light pulse sequence to obtain an electrical signal. After processing the electrical signal, the electrical signal can be output to the sampling circuit 530.
  • the sampling circuit 530 may sample the electrical signal to obtain the sampling result.
  • the arithmetic circuit 540 may determine the distance between the lidar 500 and the detected object, that is, the depth, based on the sampling result of the sampling circuit 530.
  • the lidar 500 may further include a control circuit 550, which can control other circuits, for example, can control the working time of each circuit and/or set parameters for each circuit.
  • Although the lidar shown in FIG. 5 includes one transmitting circuit, one receiving circuit, one sampling circuit, and one arithmetic circuit for emitting one beam for detection, the number of any of these circuits may also be at least two, for emitting at least two light beams in the same direction or in different directions; the at least two light beams may be emitted at the same time or at different times.
  • the light-emitting chips in the at least two transmitting circuits are packaged in the same module.
  • each emitting circuit includes a laser emitting chip, and the dies in the laser emitting chips in the at least two emitting circuits are packaged together and housed in the same packaging space.
  • the lidar 500 may further include a scanning module for changing the propagation direction of at least one laser pulse sequence emitted by the transmitting circuit to scan the field of view.
  • A module including the transmitting circuit 510, the receiving circuit 520, the sampling circuit 530, and the arithmetic circuit 540, or additionally the control circuit 550, may be referred to as a ranging module; the ranging module can be independent of other modules such as the scanning module.
  • the coaxial optical path can be used in the lidar, that is, the light beam emitted by the lidar and the reflected light beam share at least part of the optical path in the lidar.
  • the lidar can also use an off-axis optical path, that is, the light beam emitted by the lidar and the reflected light beam are respectively transmitted along different optical paths in the lidar.
  • Fig. 6 shows a schematic diagram of an embodiment in which the laser radar of the present invention adopts a coaxial optical path.
  • the lidar 600 includes a ranging module 610, which includes a transmitter 603 (which may include the above-mentioned transmitting circuit), a collimating element 604, a detector 605 (which may include the above-mentioned receiving circuit, sampling circuit, and arithmetic circuit), and an optical path Change element 606.
  • the ranging module 610 is used to emit a light beam, receive the return light, and convert the return light into an electrical signal.
  • the transmitter 603 can be used to transmit a light pulse sequence.
  • the transmitter 603 can emit a sequence of laser pulses.
  • the laser beam emitted by the transmitter 603 is a narrow-bandwidth beam with a wavelength outside the visible light range.
  • The collimating element 604 is arranged on the exit light path of the emitter 603 and is used to collimate the light beam emitted from the emitter 603 into parallel light output to the scanning module.
  • The collimating element is also used to converge at least a part of the return light reflected by the detected object.
  • the collimating element 604 may be a collimating lens or other elements capable of collimating a light beam.
  • the light path changing element 606 is used to combine the transmitting light path and the receiving light path in the lidar before the collimating element 604, so that the transmitting light path and the receiving light path can share the same collimating element, making the light path more compact.
  • the emitter 603 and the detector 605 use their respective collimating elements, and the optical path changing element 606 is arranged on the optical path behind the collimating element.
  • The optical path changing element can use a small-area mirror to combine the transmitting light path and the receiving light path.
  • the light path changing element may also use a reflector with a through hole, where the through hole is used to transmit the emitted light of the emitter 603, and the reflector is used to reflect the return light to the detector 605. This can reduce the blocking of the return light by the bracket of the small mirror in the case of using a small mirror.
  • the optical path changing element is deviated from the optical axis of the collimating element 604. In some other implementation manners, the optical path changing element may also be located on the optical axis of the collimating element 604.
  • the lidar 600 also includes a scanning module 602.
  • the scanning module 602 is placed on the exit light path of the distance measuring module 610.
  • the scanning module 602 is used to change the transmission direction of the collimated beam 619 emitted by the collimating element 604 and project it to the external environment, and project the return light to the collimating element 604 .
  • the returned light is collected on the detector 605 via the collimating element 604.
  • the scanning module 602 may include at least one optical element for changing the propagation path of the light beam, wherein the optical element may change the propagation path of the light beam by reflecting, refracting, or diffracting the light beam.
  • the scanning module 602 includes a lens, a mirror, a prism, a galvanometer, a grating, a liquid crystal, an optical phased array (Optical Phased Array), or any combination of the foregoing optical elements.
  • at least part of the optical elements are moving.
  • a driving module is used to drive the at least part of the optical elements to move.
  • the moving optical elements can reflect, refract, or diffract the light beam to different directions at different times.
  • the multiple optical elements of the scanning module 602 can rotate or vibrate around a common axis 609, and each rotating or vibrating optical element is used to continuously change the propagation direction of the incident light beam.
  • the multiple optical elements of the scanning module 602 may rotate at different speeds or vibrate at different speeds.
  • at least part of the optical elements of the scanning module 602 may rotate at substantially the same rotation speed.
  • the multiple optical elements of the scanning module may also rotate around different axes.
  • the multiple optical elements of the scanning module may also rotate in the same direction or in different directions; or vibrate in the same direction, or vibrate in different directions, which is not limited herein.
  • the scanning module 602 includes a first optical element 614 and a driver 616 connected to the first optical element 614.
  • The driver 616 is used to drive the first optical element 614 to rotate around the rotation axis 609, so that the first optical element 614 changes the direction of the collimated beam 619.
  • the first optical element 614 projects the collimated beam 619 to different directions.
  • the angle between the direction of the collimated beam 619 after being changed by the first optical element and the rotation axis 609 changes with the rotation of the first optical element 614.
  • the first optical element 614 includes a pair of opposing non-parallel surfaces through which the collimated light beam 619 passes.
  • the first optical element 614 includes a prism whose thickness varies along at least one radial direction.
  • In one embodiment, the first optical element 614 includes a wedge prism that refracts the collimated beam 619.
  • the scanning module 602 further includes a second optical element 615, the second optical element 615 rotates around the rotation axis 609, and the rotation speed of the second optical element 615 is different from the rotation speed of the first optical element 614.
  • the second optical element 615 is used to change the direction of the light beam projected by the first optical element 614.
  • the second optical element 615 is connected to another driver 617, and the driver 617 drives the second optical element 615 to rotate.
  • The first optical element 614 and the second optical element 615 can be driven by the same or different drivers, so that their rotation speeds and/or rotation directions differ, thereby projecting the collimated light beam 619 to different directions in the outside space.
  • the controller 618 controls the drivers 616 and 617 to drive the first optical element 614 and the second optical element 615, respectively.
  • the rotational speeds of the first optical element 614 and the second optical element 615 can be determined according to the expected scanning area and pattern in actual applications.
  • the drivers 616 and 617 may include motors or other drivers.
  • the second optical element 615 includes a pair of opposite non-parallel surfaces through which the light beam passes. In one embodiment, the second optical element 615 includes a prism whose thickness varies along at least one radial direction. In one embodiment, the second optical element 615 includes a wedge prism.
  • the scanning module 602 further includes a third optical element (not shown) and a driver for driving the third optical element to move.
  • the third optical element includes a pair of opposite non-parallel surfaces, and the light beam passes through the pair of surfaces.
  • the third optical element includes a prism whose thickness varies in at least one radial direction.
  • the third optical element includes a wedge prism. At least two of the first, second, and third optical elements rotate at different rotation speeds and/or rotation directions.
  • FIG. 7 is a schematic diagram of a scanning pattern of the lidar 600.
  • The scanning density of the scanning module changes as the integration time accumulates on the time axis: since the scanning trajectory of the ranging device changes with time, the scanning density gradually increases as the integration time accumulates. Therefore, different frames of point cloud data collected by the above lidar cover points belonging to different parts of the target object.
  • When the light projected by the scanning module 602 hits the detection object 601, a part of the light is reflected by the detection object 601 back to the lidar 600 in a direction opposite to the projected light.
  • the return light 612 reflected by the detection object 601 is incident on the collimating element 604 after passing through the scanning module 602.
  • the detector 605 and the transmitter 603 are placed on the same side of the collimating element 604, and the detector 605 is used to convert at least part of the return light passing through the collimating element 604 into an electrical signal.
  • an anti-reflection coating is plated on each optical element.
  • the thickness of the antireflection film is equal to or close to the wavelength of the light beam emitted by the emitter 603, which can increase the intensity of the transmitted light beam.
  • A filter layer is plated on the surface of an element located on the beam propagation path in the lidar, or a filter is provided on the beam propagation path, to transmit at least the wavelength band of the beam emitted by the transmitter and reflect other bands, thereby reducing the noise caused by ambient light at the receiver.
  • the transmitter 603 may include a laser diode through which nanosecond laser pulses are emitted.
  • the laser pulse receiving time can be determined, for example, by detecting the rising edge time and/or the falling edge time of the electrical signal pulse.
  • the lidar 600 can calculate the TOF using the pulse receiving time information and the pulse sending time information, so as to determine the distance between the detection object 601 and the lidar 600.
  • the movable platform 400 further includes a movable platform body.
  • the movable platform includes at least one of an unmanned car, a remote control car, and a robot.
  • when the movable platform is an unmanned vehicle, the movable platform body is the body of the unmanned vehicle.
  • when the movable platform is a remote control car, the movable platform body is the body of the remote control car.
  • when the movable platform is a robot, the movable platform body is the robot body.
  • the embodiment of the present invention also provides a computer storage medium on which a computer program is stored.
  • the computer storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
  • the computer-readable storage medium may be any combination of one or more computer-readable storage media.
  • the absolute pose determination method, electronic device, movable platform, and computer storage medium of the embodiments of the present invention construct a base map from point cloud data in advance and match a local map generated from point cloud data collected in real time against the base map to determine the current absolute pose; pose solving is highly accurate, does not rely on base station deployment, and is highly versatile.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
  • the disclosed device and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present invention.
  • the present invention can also be implemented as a device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.

Abstract

An absolute pose determination method, an electronic device, and a movable platform. The method includes: loading a pre-built base map of the current scene, the base map comprising a plurality of key frame maps, each key frame map corresponding to a key frame pose and containing information of first point cloud data collected by a first lidar at that key frame pose; collecting, at the current pose, second point cloud data with a second lidar mounted on a movable platform, and obtaining a local map at the current pose from the second point cloud data; and matching the local map against the plurality of key frame maps to determine a key frame map that matches the local map, and determining the current pose of the second lidar from the key frame pose corresponding to that key frame map. By matching a local map generated from point cloud data collected in real time against a pre-built base map to determine the current absolute pose, the invention achieves high pose-solving accuracy and strong versatility.

Description

Absolute pose determination method, electronic device, and movable platform — Technical Field
Embodiments of the present invention relate to the technical field of distance measurement, and more specifically, to an absolute pose determination method, an electronic device, and a movable platform.
Background
Existing outdoor absolute positioning is usually implemented with satellite positioning. GPS (Global Positioning System) provides only meter-level accuracy, while RTK (real-time kinematic) carrier-phase differential positioning achieves centimeter-level accuracy but is expensive. Moreover, both provide only position information, not attitude information.
Most indoor absolute positioning techniques depend on instrumenting the environment with additional positioning base stations, which limits their versatility, while positioning methods based on Bluetooth or WiFi signals suffer from poor accuracy and weak interference resistance.
Summary of the Invention
This summary introduces a selection of concepts in simplified form that are described in further detail in the detailed description. It is not intended to identify key or essential features of the claimed subject matter, nor to delimit its scope of protection.
In view of the deficiencies of the prior art, a first aspect of the embodiments of the present invention provides a lidar-based absolute pose determination method, including:
loading a pre-built base map of the current scene, the base map including a plurality of key frame maps, each key frame map corresponding to a key frame pose and containing information of first point cloud data collected by a first lidar at the key frame pose;
collecting, at the current pose, second point cloud data with a second lidar mounted on a movable platform, and obtaining a local map at the current pose from the second point cloud data;
matching the local map against the plurality of key frame maps to determine a key frame map that matches the local map, and determining the current pose of the second lidar from the key frame pose corresponding to that key frame map.
A second aspect of the embodiments of the present invention provides an electronic device including a storage device and a processor, the storage device storing program code and the processor executing the program code, which, when executed, is used for:
loading a pre-built base map of the current scene, the base map including a plurality of key frame maps, each key frame map corresponding to a key frame pose and containing information of first point cloud data collected by a first lidar at the key frame pose;
collecting, at the current pose, second point cloud data with a second lidar mounted on a movable platform, and obtaining a local map at the current pose from the second point cloud data;
matching the local map against the plurality of key frame maps to determine a key frame map that matches the local map, and determining the current pose of the second lidar from the key frame pose corresponding to that key frame map.
A third aspect of the embodiments of the present invention provides a movable platform carrying a lidar, the movable platform further including the electronic device provided in the second aspect.
A fourth aspect of the embodiments of the present invention provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the absolute pose determination method provided in the first aspect.
The absolute pose determination method, electronic device, and movable platform of the embodiments of the present invention build a base map from point cloud data in advance and match a local map generated from point cloud data collected in real time against the base map to determine the current absolute pose, achieving high pose-solving accuracy and strong versatility without relying on base station deployment.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a lidar-based absolute pose determination method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a scene in which a base map is built according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of an electronic device provided by an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a movable platform provided by an embodiment of the present invention;
FIG. 5 is a schematic frame diagram of a distance measuring device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment in which the distance measuring device of the present invention uses a coaxial optical path;
FIG. 7 is a schematic diagram of a scanning pattern of a lidar according to an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, example embodiments of the present invention are described in detail below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present invention, and it should be understood that the invention is not limited by the example embodiments described here. All other embodiments obtained by those skilled in the art based on the embodiments described herein without creative effort shall fall within the protection scope of the present invention.
In the following description, numerous specific details are given to provide a more thorough understanding of the present invention. It will be apparent to those skilled in the art, however, that the invention can be practiced without one or more of these details. In other instances, some technical features well known in the art are not described in order to avoid obscuring the invention.
It should be understood that the present invention can be implemented in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing specific embodiments only and is not intended to limit the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "composed of" and/or "comprising", when used in this specification, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
For a thorough understanding of the present invention, detailed structures are set forth in the following description to explain the proposed technical solutions. Optional embodiments of the invention are described in detail below; however, besides these detailed descriptions, the invention may also have other implementations.
An embodiment of the present invention provides a lidar-based absolute pose determination method for determining the absolute pose of a lidar and, in turn, the absolute pose of a movable platform carrying the lidar. FIG. 1 shows a schematic flowchart of a lidar-based absolute pose determination method 100 according to an embodiment of the present application. As shown in FIG. 1, the absolute pose determination method 100 includes the following steps:
In step S110, loading a pre-built base map of the current scene, the base map including a plurality of key frame maps, each key frame map corresponding to a key frame pose and containing information of first point cloud data collected by a first lidar at the key frame pose;
In step S120, collecting, at the current pose, second point cloud data with a second lidar mounted on a movable platform, and obtaining a local map at the current pose from the second point cloud data;
In step S130, matching the local map against the plurality of key frame maps to determine a key frame map that matches the local map, and determining the current pose of the second lidar from the key frame pose corresponding to that key frame map.
In the map-building stage, the absolute pose determination method 100 of the embodiments of the present invention scans the current scene with a lidar to build the base map; in the pose-determination stage, it determines the current absolute pose directly from point cloud data collected in real time. The method offers high pose-solving accuracy, requires no base station deployment, and has strong versatility, making it especially suitable for indoor environments without GPS or with weak GPS signals, such as shopping malls, airports, warehouses, and hotels.
The method 100 can be applied to a movable platform carrying a lidar, including but not limited to unmanned vehicles and robots, and further including indoor robots operating in the above indoor scenes. The lidar mounted on the movable platform is the second lidar of step S120. After the second lidar collects the second point cloud data, the current pose of the second lidar can be determined from the second point cloud data and the pre-built base map of the current scene, thereby determining the current pose of the movable platform carrying the second lidar.
In step S110, the loaded base map is built in advance from first point cloud data collected by the first lidar in the current scene. The first lidar may be a mechanical lidar, or a solid-state or semi-solid-state lidar. In the map-building stage, a movable platform carrying the first lidar moves through the current scene, the first lidar collects the first point cloud data during the movement, and the base map is then built from the first point cloud data. The first and second lidars may be the same lidar, which improves how well the local map matches the base map; of course, they may also be different lidars. Similarly, the same or a similar movable platform may be used for base map building and for absolute pose determination, further improving the match between local and base maps.
Illustratively, the first lidar actively emits laser pulses toward a detected object, captures the laser echo signal, and computes the distance of the measured object from the time difference between laser emission and reception; from the known emission direction of the laser it obtains the angle information of the measured object; and from the pulse width and distance it obtains reflectance information. In one implementation, the first lidar can detect the distance from the measured object to the lidar by time of flight (TOF). Alternatively, the first lidar can detect this distance by other techniques, such as phase-shift-based or frequency-shift-based methods, which are not limited here.
FIG. 2 shows a schematic diagram of a scene in which a base map is built according to one embodiment of the present invention. As shown in FIG. 2, a movable platform carrying the first lidar roams through the current scene, obtains the mapping trajectory by a SLAM (Simultaneous Localization and Mapping) algorithm, and transforms the point cloud data collected during roaming into a common coordinate system to obtain the first point cloud data. In the example of FIG. 2, the movable platform may be a warehouse logistics cart, and the current scene may be the warehouse in which the cart operates. The mapping trajectory of the movable platform should ensure that the first point cloud data contains point cloud data of the current scene that is as rich as possible, including but not limited to the closed curve shown in FIG. 2.
The base map built from the above first point cloud data includes key frame maps corresponding to key frame poses at key frame moments. A key frame map takes the form of an image, i.e., objects are represented by pixel values. During map building, the smallest processing unit of the first point cloud data is called a frame; taking a first lidar with a 10 Hz output rate as an example, the first point cloud data collected every 100 ms is one frame. Using a SLAM algorithm, the position and attitude of each frame relative to a reference coordinate system can be output. Among all point cloud frames, one frame is selected at predetermined intervals as a key frame; the time interval between adjacent key frames can be set dynamically according to the size of the current scene and the movement speed of the movable platform. In one example, as shown in FIG. 2, the last frame of each time period may be selected as the key frame. In other examples, the key frame may also be the first frame or a middle frame of each time period.
As shown in FIG. 2, at a key frame moment the first lidar is at a key frame pose (i.e., position and attitude), the field of view (FOV) it covers is the field of view at that key frame pose, and the first point cloud data collected at that moment includes the point cloud covered by that field of view. After a key frame map is generated from the point cloud data at the key frame moment, the key frame map is associated with the key frame pose at that moment.
Further, since a single frame of point cloud is relatively sparse, the key frame map need not include only the point cloud data collected at the key frame moment; the first point cloud data within a period of time before and after the key frame moment may also be superimposed at the key frame moment, and the key frame map generated from the first point cloud data superimposed at the key frame moment. Superimposing point cloud data from around the key frame moment makes the point cloud at that moment denser, so the key frame map generated from the superimposed first point cloud data contains more spatial information at the key frame pose.
Because the fields of view covered by the lidar differ little over a short time interval, the first point cloud data within that period contains little difference in target objects, and the superposition can be realized by transforming the first point cloud data of that period into the point cloud coordinate system of the key frame moment. Transforming first point cloud data of other moments into the coordinate system of the key frame moment effectively makes the data collected at other moments equivalent to point cloud data collected at the key frame pose, so the superimposed first point cloud data still corresponds to the key frame pose.
Illustratively, "superimposing the first point cloud data within a period of time before and after the key frame moment at the key frame moment" may include superimposing the first point cloud data from a period before the key frame moment, from a period after the key frame moment, or from periods both before and after. In one embodiment, all point cloud frames between two adjacent key frames may be superimposed at one of the key frame moments, so that the superimposed data contains as much information as possible and improves the accuracy of subsequent matching. In other examples, only some of the point cloud frames between adjacent key frames may be superimposed, reducing the amount of computation.
Referring to FIG. 2, in a specific embodiment, the last frame of each interval may be selected as the key frame, and all first point cloud data within the interval preceding that key frame superimposed at the key frame moment. Superimposing the data from the period before the key frame avoids the situation in which no superposition is possible because there are no point cloud frames after the last key frame.
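Illustratively, the superposition described above amounts to composing SLAM poses so that neighboring frames are re-expressed in the key frame's coordinate system. The following non-limiting sketch shows this; the 4×4 homogeneous-matrix pose representation and the function names are assumptions for illustration, not part of the original disclosure.

```python
import numpy as np

def transform_points(T, pts):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    return pts @ T[:3, :3].T + T[:3, 3]

def superimpose_at_keyframe(frames, poses_world, key_idx):
    """Accumulate neighboring frames in the key frame's coordinate system.

    frames:      list of (N_i, 3) point arrays, one per lidar frame
    poses_world: list of 4x4 matrices T_world<-frame output by SLAM
    key_idx:     index of the key frame among the given frames
    """
    T_key_world = np.linalg.inv(poses_world[key_idx])   # world -> key frame
    merged = []
    for pts, T_world_frame in zip(frames, poses_world):
        # Points of frame i expressed in the key frame's coordinates:
        # T_key<-frame = T_key<-world @ T_world<-frame
        T_key_frame = T_key_world @ T_world_frame
        merged.append(transform_points(T_key_frame, pts))
    return np.vstack(merged)
```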
In some embodiments, the key frame map corresponding to each key frame pose includes at least one of a key frame reflectance map and a key frame depth map. The key frame reflectance map is built from the reflectance information of the first point cloud data superimposed at the key frame moment; the key frame depth map is built from the depth information of that data. Both reflectance and depth information characterize target objects, forming a unique key frame map corresponding to each key frame pose. Building two different kinds of key frame maps exploits multi-dimensional information and improves the effectiveness and robustness of pose determination.
As an example, the method of building a key frame reflectance map from reflectance information, or a key frame depth map from depth information, includes: transforming the point cloud points of the first point cloud data superimposed at the key frame moment into a spherical coordinate system, mapping azimuth and elevation to the X and Y directions of the map respectively, and determining pixel values (e.g., grayscale values) from the reflectance or depth values, thereby converting the point cloud into a grayscale image at a certain resolution (e.g., 0.1° × 0.1°). Once the point cloud is converted to a grayscale image, key frame map matching can be done by image processing in subsequent steps, lowering the difficulty of matching; meanwhile the key frame map still carries the position information of the point cloud data, guaranteeing the accuracy of absolute pose solving.
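A minimal sketch of this spherical projection follows, assuming points given as (x, y, z, reflectance) rows and the 0.1° grid mentioned in the text; the value scaling and array layout are illustrative assumptions.

```python
import numpy as np

def cloud_to_image(points, res_deg=0.1, use_depth=False):
    """Project (N, 4) rows of (x, y, z, reflectance) into a grayscale image.

    Azimuth maps to image X, elevation to image Y; each pixel stores a
    reflectance value (or the depth value if use_depth=True) scaled to 0..255.
    """
    x, y, z, refl = points.T
    depth = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.degrees(np.arctan2(y, x))                       # [-180, 180)
    elevation = np.degrees(np.arcsin(z / np.maximum(depth, 1e-9)))

    cols = ((azimuth + 180.0) / res_deg).astype(int)
    rows = ((elevation + 90.0) / res_deg).astype(int)
    img = np.zeros((int(180 / res_deg) + 1, int(360 / res_deg) + 1), np.uint8)

    value = depth if use_depth else refl
    v = np.clip(value / max(value.max(), 1e-9) * 255.0, 0, 255)
    img[rows, cols] = v.astype(np.uint8)
    return img
```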
In some embodiments, the pre-built base map may further include a global point cloud map of the current scene generated from the first point cloud data. The global point cloud map takes the form of a point cloud, i.e., it consists of a massive number of point cloud points. Illustratively, after a movable platform carrying the first lidar roams through the current scene and collects first point cloud data at different poses, each frame of first point cloud data is transformed into the world coordinate system and superimposed; since the superimposed first point cloud data contains the global point cloud of the current scene, the generated point cloud map is called the global point cloud map.
Further, to facilitate subsequent feature extraction and matching, the global point cloud map can be built separately as a global plane point map and a global edge point map.
The global plane point map includes the plane points in the point cloud, corresponding to points lying on a plane in the real scene; the global edge point map includes the edge points in the point cloud, corresponding to points on edges of planes, objects, thin rods, and the like in the real scene. Specifically, edge points and plane points are extracted in each frame of the first point cloud data, and finally all edge points and plane points are transformed into the world coordinate system, forming the global edge point map and the global plane point map respectively.
As noted above, the first lidar used in embodiments of the present invention is not limited to mechanical lidar; it may also be a solid-state or semi-solid-state lidar. A mechanical lidar's point cloud scanning pattern is regular, making plane and edge point extraction relatively simple, whereas the scanning pattern of a semi-solid-state or solid-state lidar is irregular, and previously used feature extraction methods extract plane and edge points with low accuracy. The embodiments of the present invention therefore extract plane and edge points as follows.
Specifically, the current frame of the first point cloud data is first preprocessed. Preprocessing may include the following steps. First, traverse the current frame to find and mark zero points, which may be points in the blind zone, points at infinity, and so on. Next, sort the current frame by depth value, take the median as a scene scale threshold, and determine the sliding window size based on this threshold. Finally, traverse the current frame to identify noise points; plane and edge points are subsequently extracted only from non-noise points. Illustratively, a noise point is identified by computing the distance from each point to its preceding and following points: if both distances exceed a certain threshold, the point is marked as noise.
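A minimal sketch of this preprocessing follows, assuming time-ordered points; the noise threshold and the mapping from scene scale to window size are illustrative assumptions.

```python
import numpy as np

def preprocess_frame(points, noise_thresh):
    """Mark zero and noise points in a time-ordered (N, 3) frame.

    Per the text, a point is noise if its distance to BOTH temporal
    neighbours exceeds noise_thresh; zero points are no-return samples.
    Returns (valid_mask, window_size).
    """
    depths = np.linalg.norm(points, axis=1)
    zero = depths <= 1e-6
    # median depth as the scene scale used to size the sliding window;
    # the concrete scale -> window mapping below is an assumed example
    scale = np.median(depths[~zero]) if (~zero).any() else 1.0
    window = 5 if scale < 20.0 else 7

    gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    noise = np.zeros(len(points), dtype=bool)
    noise[1:-1] = (gaps[:-1] > noise_thresh) & (gaps[1:] > noise_thresh)
    return ~(zero | noise), window
```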
After preprocessing, plane points are extracted from the preprocessed first point cloud data. Plane point extraction can be realized with a sliding window.
Specifically, extracting plane points includes: first, according to the sliding window size determined in the preprocessing step (e.g., a window size of 5), obtaining from the current frame of first point cloud data, in temporal order, a group of a first predetermined number of point cloud points (e.g., 5 points), and judging whether this group satisfies a first preset condition. The first preset condition includes: the spatial distribution of the group is approximately a straight line, and the group is approximately centrally symmetric about its middle point. Illustratively, principal component analysis can be used to determine whether a group satisfies the first preset condition.
If the first preset condition is satisfied, the group is determined as plane point candidates. The window then slides backward to obtain the next group of the same size for judgment, the next group sharing at least one point with the previous group. After the sliding window has traversed all point cloud points and all candidate groups satisfying the first preset condition have been extracted, the final plane point extraction result is determined among the candidates. Illustratively, to prevent feature clustering, the current frame can be divided into several regions, the plane point candidates in each region sorted by how well they satisfy the first preset condition, and a subset of candidates selected from each region based on the ranking as the final plane point extraction result.
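Illustratively, the first preset condition can be tested with principal component analysis as follows; the eigenvalue ratio and symmetry tolerance are assumed values, not part of the original disclosure.

```python
import numpy as np

def is_plane_candidate(group, line_ratio=0.01, sym_tol=0.1):
    """PCA test of the first preset condition on a sliding window of points.

    group: (k, 3) consecutive points (k odd, e.g. 5).
    Near-line: the two smallest covariance eigenvalues are tiny relative
    to the largest. Central symmetry: the mean of the group lies close
    to the middle point, relative to the window's span.
    """
    centered = group - group.mean(axis=0)
    eigvals = np.linalg.eigvalsh(centered.T @ centered / len(group))
    near_line = eigvals[1] < line_ratio * eigvals[2]    # ascending order

    mid = group[len(group) // 2]
    span = max(np.linalg.norm(group[-1] - group[0]), 1e-9)
    symmetric = np.linalg.norm(group.mean(axis=0) - mid) < sym_tol * span
    return near_line and symmetric
```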
Edge point extraction can be based on the plane point extraction results. In the feature extraction process, embodiments of the present invention divide edge points into three classes: face-face intersection edge points, jump edge points, and fine object edge points. Face-face intersection edge points correspond to points on the boundary line of intersecting surfaces in 3D space, jump edge points correspond to points on the edge of an isolated surface in 3D space, and fine object edge points correspond to points on the edges of fine objects in 3D space.
Since a face-face intersection edge point is a point on the intersection line of two intersecting planes, extracting such points includes at least the following steps:
First, judge whether the two groups of point cloud points preceding and following the current point both satisfy the plane point criterion, i.e., whether the current point lies on two planes simultaneously. If they do, further judge whether the following conditions are satisfied: the maximum distance between any two points within each of the two groups falls within a first threshold range; the angle between the direction vectors formed by the two groups falls within a second threshold range; and the angles between each group's direction vector and the emission direction of the current point fall within a third threshold range. If these conditions hold, the two groups do not lie on the same plane, and the current point can be judged to be a face-face intersection edge point.
Illustratively, extracting jump edge points includes the following steps:
First, judge whether the difference between the distances from the current point to its preceding and following points exceeds a predetermined threshold, where of the two neighboring points, the one closer to the current point is defined as the near-side point and the farther one as the far-side point. When the difference exceeds the predetermined threshold, the current point is determined as a candidate jump edge point.
Next, the final jump edge points are determined among the candidates. Specifically, the point pair formed by the candidate and the near-side point is defined as the near-side group, the pair formed by the candidate and the far-side point as the far-side group, and the following conditions are checked: the near-side group satisfies the first preset condition; the angle between the direction vectors of the near-side and far-side groups falls within a fourth threshold range; the maximum distance between any two points in the near-side group falls within a fifth threshold range; and the angle between the near-side group's direction vector and the candidate's emission direction falls within a sixth threshold range. Optionally, it can further be checked whether the far-side point is a non-zero point or a non-blind-zone zero point. When these conditions are met, the candidate is determined as a final jump edge point.
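A minimal sketch of the candidate detection step for jump edge points follows; the threshold is an assumed parameter.

```python
import numpy as np

def jump_edge_candidates(points, jump_thresh):
    """Flag candidate jump edge points in a time-ordered (N, 3) frame.

    Per the text, a point is a candidate when the distances to its two
    temporal neighbours differ by more than a predetermined threshold.
    The closer neighbour is the near-side point, the farther one the
    far-side point; near_is_prev records which side is the near side.
    """
    d_prev = np.linalg.norm(points[1:-1] - points[:-2], axis=1)
    d_next = np.linalg.norm(points[1:-1] - points[2:], axis=1)
    cand = np.zeros(len(points), dtype=bool)
    cand[1:-1] = np.abs(d_prev - d_next) > jump_thresh
    near_is_prev = np.zeros(len(points), dtype=bool)
    near_is_prev[1:-1] = d_prev < d_next
    return cand, near_is_prev
```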
When fine objects appear in the current scene, edge points are difficult to extract in the above ways, so an additional extraction method specific to fine object edge points is used.
Specifically, as distinguished from noise points, a run of a second predetermined number of consecutive isolated point cloud clusters is first determined as candidate fine object edge points, where the maximum distance between any two points within a cluster falls within a seventh threshold range and the second predetermined number (e.g., two or three) is smaller than the first predetermined number (e.g., five).
Based on the above edge point extraction results, it is then determined whether a candidate fine object edge point forms a slender edge together with edge points in previous frames; if it does, the candidate is determined as a final fine object edge point.
The edge and plane point extraction method described above adapts well to lidar scanning patterns: besides the scanning pattern of traditional mechanical lidar, it also extracts features well from the scanning patterns of solid-state and semi-solid-state lidars.
In step S120, at the current pose, second point cloud data is collected by the second lidar mounted on the movable platform, and a local map at the current pose is obtained from the second point cloud data. Then, in step S130, the local map is matched against the plurality of key frame maps to determine a key frame map that matches the local map, and the current pose of the second lidar is determined from the key frame pose corresponding to that key frame map.
Continuing with the warehouse logistics cart example: after the base map has been built, a cart entering the warehouse to perform tasks first loads the base map, powers on its second lidar, and starts point cloud collection and absolute pose determination. The second lidar remains stationary at the current pose and collects second point cloud data continuously for a period (e.g., 1 s). If the second lidar is a non-repetitive-scanning lidar, the point cloud coverage within its FOV keeps increasing over the collection period, so richer information can be collected to better determine the current pose.
Like the key frame map, the local map at the current pose also takes the form of an image, and the matching between local and key frame maps is image matching; the type of the local map should therefore correspond to the type of the key frame map described above. That is, if the key frame map includes a key frame depth map, the local map includes a local depth map, and the matching includes matching the local depth map with the key frame depth map; if the key frame map includes a key frame reflectance map, the local map includes a local reflectance map, and the matching includes matching the local reflectance map with the key frame reflectance map. Understandably, to improve matching accuracy, the matching may include performing both of the above.
The local depth and reflectance maps can be built in a manner similar to the key frame depth and reflectance maps, e.g., determining pixel values from reflectance or depth values to convert the point cloud into a grayscale image, as described above.
As one implementation, the current pose can be determined solely by matching the local map against the dense key frame maps. Since local and key frame maps take the form of images, any suitable image matching method can be used. For example, feature matching can be used: extract image features of the local map and the key frame map and match them; if the matching degree exceeds a preset threshold, or the matching residual is below a certain threshold, the local map is determined to match that key frame map. The image features extracted from the local and key frame maps may include HOG, SIFT, SURF, ORB, LBP, HAAR, or any other suitable image features. Alternatively, grayscale matching can be used to match local and key frame maps.
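As a non-limiting illustration using one of the listed features (ORB) via OpenCV; the brute-force Hamming matcher and Lowe's ratio test are illustrative choices, not the only implementation contemplated.

```python
import cv2

def match_score(local_img, keyframe_img, ratio=0.75):
    """Count ORB feature matches between a local map image and a key
    frame map image; compare the count against a preset threshold."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(local_img, None)
    kp2, des2 = orb.detectAndCompute(keyframe_img, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```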
When the local and key frame maps each include both a depth map and a reflectance map, the local map can be determined to match a key frame map when both the depth-map matching degree and the reflectance-map matching degree exceed their respective preset thresholds, ensuring matching accuracy. Alternatively, the local map may be determined to match the key frame map when at least one of the two matching degrees exceeds its preset threshold.
If the local map successfully matches a key frame map, the current pose is close to the key frame pose, but there is usually still some gap between them. Therefore, after finding the matching key frame map and local map, the method further includes extracting matched feature point pairs from the local map and its matching key frame map, solving the pose transformation of the local map relative to the key frame map from the 3D spatial information of the feature point pairs using ICP (iterative closest point) solving or a similar method, and determining the current pose from the solved pose transformation and the key frame pose corresponding to the key frame map, thereby guaranteeing pose-solving accuracy.
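Illustratively, given matched 3D feature point pairs, the pose transformation can be solved in closed form by the SVD (Kabsch) alignment step that ICP iterates; since the pairs are already matched here, a single pass suffices. This sketch is an assumption-level illustration, not the only contemplated solver.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) matched 3D feature points from the local map and
    the key frame map respectively.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    # compose (R, t) with the key frame pose to obtain the current
    # absolute pose (composition convention assumed)
    return R, t
```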
Since the key frame maps and local maps of embodiments of the present invention are generated from point cloud data, the 3D spatial information of the matched feature point pairs can be obtained directly once the pairs are found, for use in absolute pose solving. This is more convenient than capturing images with a camera and improves the accuracy of pose solving.
As noted above, in some embodiments the pre-built base map further includes a global point cloud map of the current scene generated from the first point cloud data; the method 100 of the embodiments of the present invention may then further include: generating a local point cloud map at the current viewpoint from the second point cloud data; and matching the local point cloud map with the global point cloud map to determine the current pose.
If the global point cloud map includes a global plane point map and a global edge point map, the local point cloud map correspondingly includes a local plane point map and a local edge point map, and matching the local point cloud map with the global point cloud map includes: matching the local plane point map with the global plane point map, and matching the local edge point map with the global edge point map. The local plane point map is built by extracting plane points from the second point cloud data, and the local edge point map by extracting edge points from the second point cloud data.
Specifically, building the local plane point map includes: obtaining from the second point cloud data, in temporal order, groups of a first predetermined number of point cloud points satisfying the first preset condition, as plane point candidates; and obtaining the final plane point extraction result of the current frame point cloud data based on the determined candidates; where the first preset condition includes: the spatial distribution of the group of point cloud points is approximately a straight line, and the group is approximately centrally symmetric about its middle point.
Building the local edge point map includes: extracting edge points from the second point cloud data to build the local edge point map, the edge points including face-face intersection edge points, which correspond to points on the boundary line of intersecting surfaces in 3D space, and jump edge points, which correspond to points on the edge of an isolated surface in 3D space.
For other details of extracting edge and plane points from the second point cloud data, refer to the description above of building the global plane point map and the global edge point map, which is not repeated here.
Unlike the key frame maps and local maps, the local point cloud map and global point cloud map take the form of point clouds. They can complement the key frame and local maps to achieve finer matching. Specifically, a coarse search can first determine multiple key frame maps preliminarily matching the local map, and on that basis a refined search can be performed with the local and global point cloud maps, realizing a coarse-to-fine search. In one example, the coarse-to-fine search can be used when matching the local map against the key frame maps alone fails to find a key frame map whose matching degree exceeds the preset threshold. Of course, the coarse-to-fine search can also be used on its own.
Specifically, the coarse search mainly includes: first, selecting multiple candidate key frame maps among the key frame maps.
Illustratively, as a relatively simple approach, candidate key frames can be selected according to objects with prominent features in the local and key frame maps. For example, if a highly reflective object exists in the current local map, key frame maps containing a highly reflective object are taken as candidate key frame maps.
However, since highly reflective objects or other objects with prominent features occur randomly, as an example, after extracting candidate key frame maps by highly reflective objects, the local map can further be matched for similarity against the key frame maps, selecting as candidates those whose similarity score exceeds a threshold. Image similarity evaluation methods include but are not limited to histogram matching, high-dimensional feature matching, and bag-of-words methods. In some embodiments, the judgment by highly reflective objects can be skipped, and candidate key frame maps determined directly by similarity.
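A minimal sketch of histogram-based similarity scoring, one of the evaluation methods mentioned above; the bin count and the correlation metric are illustrative assumptions.

```python
import cv2

def histogram_similarity(img_a, img_b, bins=64):
    """Correlation of grayscale histograms between two map images
    (1.0 means identical distributions)."""
    h_a = cv2.calcHist([img_a], [0], None, [bins], [0, 256])
    h_b = cv2.calcHist([img_b], [0], None, [bins], [0, 256])
    cv2.normalize(h_a, h_a)
    cv2.normalize(h_b, h_b)
    return cv2.compareHist(h_a, h_b, cv2.HISTCMP_CORREL)

# candidates = [k for k in keyframe_maps
#               if histogram_similarity(local_map, k.image) > score_thresh]
```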
Next, for each candidate key frame, the local point cloud map is transformed into the world coordinate system according to the key frame pose corresponding to that candidate key frame map. The transformed local point cloud map is then matched against the global point cloud map, and a target key frame map is selected among the candidates according to the matching results.
Matching the local point cloud map with the global point cloud map includes matching the local edge point map with the global edge point map and matching the local plane point map with the global plane point map. Since the local and global point cloud maps take the form of point clouds, point cloud matching can be used, e.g., computing point-to-line distances, point-to-plane distances, point-to-point distances, and so on. The candidates can then be ranked by the total feature distance computed after transforming with each candidate's corresponding key frame pose, and the candidate key frame map with the smallest total feature distance taken as the target key frame map.
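Illustratively, the per-candidate scoring can be sketched with a nearest-neighbour (point-to-point) feature distance, standing in for the point-to-line and point-to-plane distances; the KD-tree and the scoring function are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def coarse_score(local_pts, global_tree, global_pts, T_world_key):
    """Total feature distance of the local cloud transformed by one
    candidate key frame pose (4x4 matrix); smaller is better."""
    pts_w = local_pts @ T_world_key[:3, :3].T + T_world_key[:3, 3]
    dists, _ = global_tree.query(pts_w)
    return dists.sum()

# global_tree = cKDTree(global_pts)
# target = min(candidates,
#              key=lambda k: coarse_score(local_pts, global_tree,
#                                         global_pts, k.pose))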
After the target key frame map is found by the coarse search, further refined registration can be performed based on the target key frame map and its key frame pose information.
Specifically, the refined registration includes: transforming the local point cloud map into the world coordinate system according to the key frame pose information corresponding to the target key frame map, and optimizing that key frame pose according to the feature distances between the local and global point cloud maps to determine the current pose. Concretely, the edge and plane points of the local edge point map and local plane point map can be transformed into the world coordinate system according to the target key frame pose, the sum of point-to-line distances, point-to-plane distances, or other feature distances taken as the loss function, and the key frame pose corresponding to the target key frame map continuously adjusted and optimized until the loss drops below a preset threshold, thereby solving the precise current pose.
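A non-limiting sketch of this refinement as nonlinear least squares over a 6-DoF pose; point-to-point residuals stand in for the point-to-line and point-to-plane terms, and the rotation-vector parameterization and optimizer are assumed choices.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def refine_pose(local_pts, global_pts, T_init):
    """Refine a 6-DoF pose by minimizing nearest-neighbour residuals
    between the transformed local cloud and the global map."""
    tree = cKDTree(global_pts)

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        pts_w = local_pts @ R.T + x[3:]
        _, idx = tree.query(pts_w)       # re-associate each iteration
        return (pts_w - global_pts[idx]).ravel()

    x0 = np.concatenate([
        Rotation.from_matrix(T_init[:3, :3]).as_rotvec(), T_init[:3, 3]])
    sol = least_squares(residuals, x0)
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    T[:3, 3] = sol.x[3:]
    return T
```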
After completing the above steps S110 to S130, the current pose of the second lidar, and hence of the movable platform, is obtained. Continuing with the warehouse logistics cart performing a logistics task: the cart can obtain its current pose C in the warehouse. Assuming its task is transporting goods from A to B, then after obtaining pose C, path planning from C to A and from A to B can be performed according to pose C and the positions of points A and B; while controlling the cart's motion according to the pose information, the cart's subsequent poses are updated on the basis of the initial pose determined above.
The subsequent poses can be determined by fusing a SLAM (simultaneous localization and mapping) algorithm with the absolute pose determination method of the embodiments of the present invention, determining pose information incrementally. Further, the base map can be updated from the second point cloud data to maintain the map state, enrich the base map's information, and correct the base map promptly when the current scene changes.
The absolute pose determination method of the embodiments of the present invention builds a base map from point cloud data in advance and matches a local map generated from point cloud data collected in real time against the base map to determine the current absolute pose, with high pose-solving accuracy, no dependence on base station deployment, and strong versatility.
Another aspect of the embodiments of the present invention further provides an electronic device including a storage device and a processor, the storage device storing program code and the processor executing the program code which, when executed, implements the absolute pose determination method described above. The electronic device of the embodiments may be mounted on the movable platform or be independent of it. The electronic device is communicatively connected with the second lidar mounted on the movable platform to receive the second point cloud data. The electronic device is also used to load the pre-built base map, so as to determine the absolute pose of the second lidar from the base map and the second point cloud data. FIG. 3 shows a schematic block diagram of an electronic device 300 in one embodiment of the present invention.
As shown in FIG. 3, the electronic device 300 includes one or more processors 320 and one or more storage devices 310. Optionally, the electronic device 300 may further include at least one of an input device (not shown), an output device (not shown), and an image sensor (not shown), interconnected by a bus system and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 300 shown in FIG. 3 are exemplary, not limiting; as needed, the electronic device 300 may also have other components and structures, e.g., it may further include a transceiver for sending and receiving signals.
The storage device 310, i.e., the memory, stores processor-executable instructions, e.g., the corresponding steps and program instructions for implementing the absolute pose determination method according to embodiments of the present invention. It may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or nonvolatile memory. Volatile memory may include, e.g., random access memory (RAM) and/or cache; nonvolatile memory may include, e.g., read-only memory (ROM), hard disks, flash memory, and the like.
The input device may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touchscreen, and the like.
The output device may output various information (e.g., images or sound) to the outside (e.g., a user) and may include one or more of a display, a speaker, and the like.
A communication interface (not shown) is used for communication, wired or wireless, between the electronic device 300 and other devices. The electronic device 300 can access wireless networks based on communication standards, such as WiFi, 2G, 3G, 4G, 5G, or combinations thereof. In one exemplary embodiment, the communication interface receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication interface further includes a near field communication (NFC) module to facilitate short-range communication; e.g., the NFC module may be implemented based on radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), Bluetooth (BT), and other technologies.
The processor 320 may be a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another form of processing unit with data processing and/or instruction execution capability, and can control the other components of the electronic device 300 to perform the desired functions. The processor can execute the instructions stored in the storage device 310 to perform the absolute pose determination method described herein. For example, the processor 320 can include one or more embedded processors, processor cores, microprocessors, logic circuits, hardware finite state machines (FSMs), digital signal processors (DSPs), or combinations thereof.
One or more computer program instructions can be stored on the computer-readable storage medium, and the processor 320 can run the program instructions stored in the storage device 310 to implement the (processor-implemented) functions of the embodiments of the present invention described herein and/or other desired functions, e.g., to perform the corresponding steps of the absolute pose determination method according to embodiments of the present invention. Various applications and various data, such as the data used and/or produced by the applications, can also be stored in the computer-readable storage medium.
A further aspect of the embodiments of the present invention provides a movable platform carrying a lidar 410, i.e., the second lidar described above. The movable platform further includes an electronic device 420 communicatively connected with the second lidar.
For the electronic device 420, refer to the electronic device 300 described with reference to FIG. 3, which is not repeated here. In some examples, a point cloud point above can be any point in the point cloud data acquired by the lidar. The lidar 410 may be a traditional mechanical lidar, a solid-state or semi-solid-state lidar, or any other suitable laser ranging device. In one implementation, the lidar senses external environment information, e.g., distance, orientation, reflection intensity, and velocity information of environmental targets. A point cloud point may include at least one kind of external environment information measured by the lidar.
In one implementation, the lidar can detect the distance from a detected object to the lidar by measuring the light propagation time between them, i.e., the time of flight (TOF). Alternatively, the lidar can detect this distance by other techniques, such as ranging based on phase-shift measurement or on frequency-shift measurement, which are not limited here.
For ease of understanding, the structure of one lidar according to embodiments of the present invention is described in detail by way of example below with reference to FIG. 5 and FIG. 6. This lidar is only an example; other suitable lidars can also be applied to the present application.
First, the lidar mentioned herein is described by example with reference to the lidar 500 shown in FIG. 5.
As shown in FIG. 5, the lidar 500 includes a transmitting circuit 510, a receiving circuit 520, a sampling circuit 530, and an arithmetic circuit 540.
The transmitting circuit 510 can emit a light pulse sequence (e.g., a laser pulse sequence). The receiving circuit 520 can receive the light pulse sequence reflected by the detected object, photoelectrically convert it to obtain an electrical signal, process the electrical signal, and output it to the sampling circuit 530. The sampling circuit 530 can sample the electrical signal to obtain a sampling result. The arithmetic circuit 540 can determine the distance between the lidar 500 and the detected object, i.e., the depth, based on the sampling result of the sampling circuit 530.
Optionally, the lidar 500 may further include a control circuit 550, which can control the other circuits, e.g., controlling the working time of each circuit and/or setting parameters for each circuit.
It should be understood that although the lidar shown in FIG. 5 includes one transmitting circuit, one receiving circuit, one sampling circuit, and one arithmetic circuit to emit one light beam for detection, embodiments of the present application are not limited thereto; the number of any of these circuits may also be at least two, for emitting at least two light beams in the same direction or in different directions, and the at least two beams may be emitted simultaneously or at different times. In one example, the light-emitting chips of the at least two transmitting circuits are packaged in the same module; e.g., each transmitting circuit includes one laser emission chip, and the dies of the laser emission chips of the at least two transmitting circuits are packaged together and housed in the same packaging space.
In some implementations, besides the circuits shown in FIG. 5, the lidar 500 may further include a scanning module for changing the propagation direction of at least one laser pulse sequence emitted by the transmitting circuit so as to scan the field of view.
The module including the transmitting circuit 510, receiving circuit 520, sampling circuit 530, and arithmetic circuit 540, or the module further including the control circuit 550, can be called a ranging module, which can be independent of other modules, such as the scanning module.
A coaxial optical path can be used in the lidar, i.e., the beam emitted by the lidar and the reflected return beam share at least part of the optical path inside the lidar. For example, after at least one laser pulse sequence emitted by the transmitting circuit has its propagation direction changed by the scanning module and exits, the laser pulse sequence reflected back by the detected object passes through the scanning module and is incident on the receiving circuit. Alternatively, the lidar can use an off-axis optical path, i.e., the emitted beam and the return beam propagate along different optical paths inside the lidar. FIG. 6 shows a schematic diagram of an embodiment in which the lidar of the present invention uses a coaxial optical path.
The lidar 600 includes a ranging module 610, which includes a transmitter 603 (which may include the above transmitting circuit), a collimating element 604, a detector 605 (which may include the above receiving, sampling, and arithmetic circuits), and an optical path changing element 606. The ranging module 610 is used to emit a light beam, receive the return light, and convert the return light into an electrical signal. The transmitter 603 can be used to emit a light pulse sequence; in one embodiment, it can emit a laser pulse sequence. Optionally, the laser beam emitted by the transmitter 603 is a narrow-bandwidth beam with a wavelength outside the visible range. The collimating element 604 is arranged on the exit optical path of the transmitter to collimate the beam emitted from the transmitter 603 into parallel light output to the scanning module. The collimating element is also used to converge at least part of the return light reflected by the detected object. The collimating element 604 can be a collimating lens or another element capable of collimating a light beam.
In the embodiment shown in FIG. 6, the transmit and receive optical paths inside the lidar are merged before the collimating element 604 by the optical path changing element 606, so that both paths can share the same collimating element, making the optical path more compact. In some other implementations, the transmitter 603 and the detector 605 may each use their own collimating element, with the optical path changing element 606 arranged on the optical path after the collimating element.
In the embodiment shown in FIG. 6, since the beam aperture of the beam emitted by the transmitter 603 is small while that of the return light received by the lidar is large, the optical path changing element can use a small-area mirror to merge the transmit and receive paths. In some other implementations, the optical path changing element can also use a mirror with a through hole, the hole transmitting the outgoing light of the transmitter 603 and the mirror reflecting the return light to the detector 605, which reduces the blocking of the return light by the small mirror's mount in the small-mirror case.
In the embodiment shown in FIG. 6, the optical path changing element is offset from the optical axis of the collimating element 604. In some other implementations, it can also be located on the optical axis of the collimating element 604.
The lidar 600 further includes a scanning module 602, placed on the exit optical path of the ranging module 610, used to change the transmission direction of the collimated beam 619 exiting the collimating element 604 and project it to the external environment, and to project the return light to the collimating element 604. The return light is converged onto the detector 605 via the collimating element 604.
In one embodiment, the scanning module 602 may include at least one optical element for changing the propagation path of the beam, by reflecting, refracting, diffracting, or otherwise redirecting the beam. For example, the scanning module 602 includes lenses, mirrors, prisms, galvanometers, gratings, liquid crystals, optical phased arrays, or any combination of the above optical elements. In one example, at least some of the optical elements move, e.g., driven by a driving module; a moving optical element can reflect, refract, or diffract the beam in different directions at different times. In some embodiments, multiple optical elements of the scanning module 602 can rotate or oscillate about a common axis 609, each rotating or oscillating element continuously changing the propagation direction of the incident beam. In one embodiment, the multiple optical elements can rotate at different rotation speeds or oscillate at different speeds; in another embodiment, at least some of them can rotate at substantially the same rotation speed. In some embodiments, the multiple optical elements can also rotate about different axes, rotate in the same or different directions, or oscillate in the same or different directions, without limitation here.
In one embodiment, the scanning module 602 includes a first optical element 614 and a driver 616 connected to it; the driver 616 drives the first optical element 614 to rotate about the rotation axis 609, making the first optical element 614 change the direction of the collimated beam 619 and project it in different directions. In one embodiment, the angle between the direction of the collimated beam 619, after it is changed by the first optical element, and the rotation axis 609 varies as the first optical element 614 rotates. In one embodiment, the first optical element 614 includes a pair of opposite non-parallel surfaces through which the collimated beam 619 passes; in one embodiment, it includes a prism whose thickness varies along at least one radial direction; in one embodiment, it includes a wedge prism that refracts the collimated beam 619.
In one embodiment, the scanning module 602 further includes a second optical element 615, which rotates about the rotation axis 609 at a rotation speed different from that of the first optical element 614 and which changes the direction of the beam projected by the first optical element 614. In one embodiment, the second optical element 615 is connected to another driver 617, which drives it to rotate. The first and second optical elements can be driven by the same or different drivers, making their rotation speeds and/or directions different, so that the collimated beam 619 is projected into different directions in the external space and a larger spatial range can be scanned. In one embodiment, a controller 618 controls the drivers 616 and 617 to drive the first optical element 614 and the second optical element 615 respectively. The rotation speeds of the first and second optical elements can be determined according to the region and pattern expected to be scanned in the actual application. The drivers 616 and 617 may include motors or other drivers.
In one embodiment, the second optical element 615 includes a pair of opposite non-parallel surfaces through which the beam passes; in one embodiment, it includes a prism whose thickness varies along at least one radial direction; in one embodiment, it includes a wedge prism.
In one embodiment, the scanning module 602 further includes a third optical element (not shown) and a driver for driving the third optical element to move. Optionally, the third optical element includes a pair of opposite non-parallel surfaces through which the beam passes; in one embodiment, it includes a prism whose thickness varies along at least one radial direction; in one embodiment, it includes a wedge prism. At least two of the first, second, and third optical elements rotate at different rotation speeds and/or in different rotation directions.
The rotation of the optical elements in the scanning module 602 can project light in different directions, e.g., directions 611 and 613, thereby scanning the space around the lidar 600. FIG. 7 is a schematic diagram of one scanning pattern of the lidar 600. According to the above lidar 600 of the embodiment of the present invention, the scanning density of the scanning module varies on the time axis as the integration time accumulates. Since the scanning trajectory of the ranging device changes with time, the scanning density gradually increases as the integration time accumulates, so point cloud data of different frames collected by the above lidar will contain points belonging to different parts of the target object.
It can be understood that when the speed of the optical elements in the scanning module changes, the scanning pattern changes accordingly.
When the light projected by the scanning module 602 hits the detected object 601, part of the light is reflected by the detected object 601 to the lidar 600 in the direction opposite to the projected light. The return light 612 reflected by the detected object 601 passes through the scanning module 602 and is then incident on the collimating element 604.
The detector 605 is placed on the same side of the collimating element 604 as the transmitter 603 and is used to convert at least part of the return light passing through the collimating element 604 into an electrical signal.
In one embodiment, each optical element is coated with an antireflection film. Optionally, the thickness of the antireflection film is equal or close to the wavelength of the beam emitted by the transmitter 603, which can increase the intensity of the transmitted beam.
In one embodiment, a filter layer is coated on the surface of an element located on the beam propagation path in the lidar, or a filter is arranged on the beam propagation path, for transmitting at least the waveband of the beam emitted by the transmitter and reflecting other wavebands, to reduce the noise caused by ambient light at the receiver.
In some embodiments, the transmitter 603 may include a laser diode through which nanosecond-level laser pulses are emitted. Further, the laser pulse reception time can be determined, e.g., by detecting the rising edge time and/or the falling edge time of the electrical signal pulse. In this way, the lidar 600 can calculate the TOF from the pulse reception time information and the pulse emission time information, thereby determining the distance from the detected object 601 to the lidar 600.
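The relation underlying this calculation is d = c·Δt/2, since the pulse travels the lidar-object distance twice. A minimal sketch, with the edge-detection step reduced to given timestamps:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(t_emit_ns, t_receive_ns):
    """Distance from pulse emission/reception timestamps (nanoseconds).

    Half of the round-trip time of flight times the speed of light
    gives the one-way distance to the detected object.
    """
    tof_s = (t_receive_ns - t_emit_ns) * 1e-9
    return C * tof_s / 2.0

# e.g. a 66.7 ns round trip corresponds to roughly 10 m
```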
In one implementation, the movable platform 400 further includes a movable platform body. In some implementations, the movable platform includes at least one of an unmanned car, a remote control car, and a robot. When the movable platform is an unmanned car, the movable platform body is the body of the unmanned car; when it is a remote control car, the body of the remote control car; and when it is a robot, the robot body.
In addition, an embodiment of the present invention further provides a computer storage medium on which a computer program is stored. When the computer program is executed by a processor, the steps of the absolute pose determination method of the embodiments of the present invention can be implemented. For example, the computer storage medium may include a memory card of a smartphone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
In summary, the absolute pose determination method, electronic device, movable platform, and computer storage medium of the embodiments of the present invention build a base map from point cloud data in advance and match a local map generated from point cloud data collected in real time against the base map to determine the current absolute pose, with high pose-solving accuracy, no dependence on base station deployment, and strong versatility.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, it may be wholly or partly in the form of a computer program product, which includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; e.g., they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., digital video disc (DVD)), a semiconductor medium (e.g., solid state disk (SSD)), and the like.
Although example embodiments have been described here with reference to the drawings, it should be understood that the above example embodiments are only exemplary and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art can make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as claimed in the appended claims.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed in hardware or in software depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed device and method can be implemented in other ways. For example, the device embodiments described above are merely illustrative; e.g., the division of the units is only a logical function division, and there may be other divisions in actual implementation; e.g., multiple units or components may be combined or integrated into another device, or some features may be ignored or not executed.
Numerous specific details are set forth in the specification provided here. It can be understood, however, that embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the present invention and aid understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the invention the various features are sometimes grouped together into a single embodiment, figure, or description thereof. However, the method of the invention should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that the corresponding technical problem can be solved with fewer than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art can understand that, except where features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
Furthermore, those skilled in the art can understand that, although some embodiments described herein include some features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present invention and form different embodiments. For example, in the claims, any one of the claimed embodiments can be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to embodiments of the present invention. The present invention may also be implemented as a device program (e.g., a computer program and a computer program product) for executing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium or may have the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any order; these words may be interpreted as names.

Claims (25)

  1. A lidar-based absolute pose determination method, characterized in that the method comprises:
    loading a pre-built base map of the current scene, the base map comprising a plurality of key frame maps, each key frame map corresponding to a key frame pose, the key frame map containing information of first point cloud data collected by a first lidar at the key frame pose;
    collecting, at the current pose, second point cloud data with a second lidar mounted on a movable platform, and obtaining a local map at the current pose from the second point cloud data;
    matching the local map against the plurality of key frame maps to determine a key frame map that matches the local map, and determining the current pose of the second lidar from the key frame pose corresponding to the key frame map.
  2. The method of claim 1, characterized in that building the base map comprises:
    moving a movable platform carrying the first lidar through the current scene, the first lidar collecting first point cloud data during the movement;
    building the base map from the first point cloud data.
  3. The method of claim 2, characterized in that building the base map from the first point cloud data comprises:
    superimposing the first point cloud data within a period of time before and after a key frame moment at the key frame moment, and generating the key frame map from the first point cloud data superimposed at the key frame moment, wherein the key frame pose corresponding to the key frame map is the pose of the first lidar at the key frame moment.
  4. The method of claim 3, characterized in that superimposing the point cloud data within the period of time at the key frame moment comprises:
    transforming the first point cloud data within the period of time into the point cloud coordinate system of the key frame moment and superimposing it.
  5. The method of any one of claims 1-4, characterized in that matching the local map at the current pose against the plurality of key frame maps comprises:
    extracting image features of the local map and the key frame map for matching, and, if the matching degree is greater than a preset threshold, determining that the local map matches the key frame map.
  6. The method of claim 5, characterized in that determining the current pose of the second lidar from the key frame pose corresponding to the key frame map comprises:
    extracting matched feature point pairs from the local map and the key frame map, solving a pose transformation of the local map relative to the key frame map from the 3D spatial information of the feature point pairs, and determining the current pose from the pose transformation and the key frame pose.
  7. The method of any one of claims 1-6, characterized in that the key frame map comprises a key frame reflectance map built from reflectance information of the first point cloud data, the local map comprises a local reflectance map built from reflectance information of the second point cloud data, and matching the local map against the plurality of key frame maps comprises:
    matching the key frame reflectance map with the local reflectance map.
  8. The method of any one of claims 1-6, characterized in that the key frame map comprises a key frame depth map built from depth information of the first point cloud data, the local map comprises a local depth map built from depth information of the second point cloud data, and matching the local map against the plurality of key frame maps comprises:
    matching the key frame depth map with the local depth map.
  9. The method of any one of claims 1-8, characterized in that the base map further comprises a global point cloud map of the current scene generated from the first point cloud data, and the method further comprises:
    generating a local point cloud map at the current viewpoint from the second point cloud data;
    matching the local point cloud map with the global point cloud map to determine the current pose.
  10. The method of claim 9, characterized in that determining the current pose comprises:
    selecting a plurality of candidate key frame maps among the key frame maps;
    transforming the local point cloud map into the world coordinate system according to the key frame poses corresponding to the candidate key frame maps;
    matching the transformed local point cloud map with the global point cloud map, and selecting a target key frame map among the candidate key frame maps according to the matching results;
    transforming the local point cloud map into the world coordinate system according to the key frame pose information corresponding to the target key frame map, and optimizing the key frame pose corresponding to the target key frame map according to feature distances between the local point cloud map and the global point cloud map, to determine the current pose.
  11. The method of claim 10, characterized in that selecting a plurality of candidate key frame maps among the key frame maps comprises:
    if a highly reflective object exists in the current local map, taking key frame maps in which a highly reflective object exists as the candidate key frame maps.
  12. The method of any one of claims 9-11, characterized in that the global point cloud map comprises a global plane point map and a global edge point map, the local point cloud map comprises a local plane point map and a local edge point map, and matching the local point cloud map with the global point cloud map comprises:
    matching the local plane point map with the global plane point map, and matching the local edge point map with the global edge point map.
  13. The method of claim 12, characterized in that building the local plane point map comprises:
    obtaining from the second point cloud data, in temporal order, a group of a first predetermined number of point cloud points satisfying a first preset condition, as plane point candidates;
    obtaining a final plane point extraction result of the current frame point cloud data based on the determined plane point candidates;
    wherein the first preset condition comprises: the spatial distribution of the group of point cloud points is approximately a straight line, and the group of point cloud points is approximately centrally symmetric about its middle point.
  14. The method of claim 12, characterized in that building the local edge point map comprises: extracting edge points from the second point cloud data to build the local edge point map, the edge points comprising face-face intersection edge points and jump edge points, the face-face intersection edge points corresponding to points on the boundary line of intersecting surfaces in three-dimensional space, and the jump edge points corresponding to points on the edge of an isolated surface in three-dimensional space.
  15. The method of any one of claims 1-14, characterized in that it further comprises:
    performing path planning for the movable platform according to the current pose;
    incrementally determining pose information on the basis of the current pose using a simultaneous localization and mapping algorithm while the movable platform moves.
  16. The method of any one of claims 1-15, characterized in that it further comprises:
    updating the base map according to the second point cloud data.
  17. An electronic device, characterized in that the electronic device comprises a storage device and a processor, wherein the storage device is used to store program code and the processor is used to execute the program code, which, when executed, is used for:
    loading a pre-built base map of the current scene, the base map comprising a plurality of key frame maps, each key frame map corresponding to a key frame pose, the key frame map containing information of first point cloud data collected by a first lidar at the key frame pose;
    collecting, at the current pose, second point cloud data with a second lidar mounted on a movable platform, and obtaining a local map at the current pose from the second point cloud data;
    matching the local map against the plurality of key frame maps to determine a key frame map that matches the local map, and determining the current pose of the second lidar from the key frame pose corresponding to the key frame map.
  18. The electronic device of claim 17, characterized in that the key frame map comprises a key frame reflectance map built from reflectance information of the first point cloud data, the local map comprises a local reflectance map built from reflectance information of the second point cloud data, and matching the local map against the plurality of key frame maps comprises:
    matching the key frame reflectance map with the local reflectance map.
  19. The electronic device of claim 17, characterized in that the key frame map comprises a key frame depth map built from depth information of the first point cloud data, the local map comprises a local depth map built from depth information of the second point cloud data, and matching the local map against the plurality of key frame maps comprises:
    matching the key frame depth map with the local depth map.
  20. The electronic device of any one of claims 17-19, characterized in that the base map further comprises a global point cloud map of the current scene generated from the first point cloud data, and the electronic device is further used for:
    generating a local point cloud map at the current viewpoint from the second point cloud data;
    matching the local point cloud map with the global point cloud map to determine the current pose.
  21. The electronic device of claim 20, characterized in that determining the current pose comprises:
    selecting a plurality of candidate key frame maps among the key frame maps;
    transforming the local point cloud map into the world coordinate system according to the key frame poses corresponding to the candidate key frame maps;
    matching the transformed local point cloud map with the global point cloud map, and selecting a target key frame map among the candidate key frame maps according to the matching results;
    transforming the local point cloud map into the world coordinate system according to the key frame pose information corresponding to the target key frame map, and optimizing the key frame pose corresponding to the target key frame map according to feature distances between the local point cloud map and the global point cloud map, to determine the current pose.
  22. The electronic device of claim 21, characterized in that selecting a plurality of candidate key frame maps among the key frame maps comprises:
    if a highly reflective object exists in the current local map, taking key frame maps in which a highly reflective object exists as the candidate key frame maps.
  23. The electronic device of any one of claims 20-22, characterized in that the global point cloud map comprises a global plane point map and a global edge point map, the local point cloud map comprises a local plane point map and a local edge point map, and matching the local point cloud map with the global point cloud map comprises:
    matching the local plane point map with the global plane point map, and matching the local edge point map with the global edge point map.
  24. A movable platform, characterized in that a lidar is mounted on the movable platform, and the movable platform further comprises the electronic device of any one of claims 17-23.
  25. A computer storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program implements the steps of the absolute pose determination method of any one of claims 1 to 16.
PCT/CN2020/097198 2020-06-19 2020-06-19 Absolute pose determination method, electronic device, and movable platform WO2021253430A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/097198 WO2021253430A1 (zh) 2020-06-19 2020-06-19 Absolute pose determination method, electronic device, and movable platform
CN202080006249.7A CN114080625A (zh) 2020-06-19 2020-06-19 Absolute pose determination method, electronic device, and movable platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/097198 WO2021253430A1 (zh) 2020-06-19 2020-06-19 Absolute pose determination method, electronic device, and movable platform

Publications (1)

Publication Number Publication Date
WO2021253430A1 true WO2021253430A1 (zh) 2021-12-23

Family

ID=79269020

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097198 WO2021253430A1 (zh) 2020-06-19 2020-06-19 Absolute pose determination method, electronic device, and movable platform

Country Status (2)

Country Link
CN (1) CN114080625A (zh)
WO (1) WO2021253430A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115031718B (zh) * 2022-05-25 2023-10-31 合肥恒淏智能科技合伙企业(有限合伙) Multi-sensor fusion simultaneous localization and mapping (SLAM) method and system for unmanned ships
CN116310126B (zh) * 2023-03-23 2023-11-03 南京航空航天大学 Cooperative-target-based three-dimensional reconstruction method and system for aircraft inlets

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109407073A (zh) * 2017-08-15 2019-03-01 百度在线网络技术(北京)有限公司 Reflection value map construction method and apparatus
CN107796397A (zh) * 2017-09-14 2018-03-13 杭州迦智科技有限公司 Robot binocular vision positioning method, apparatus, and storage medium
CN110097045A (zh) * 2018-01-31 2019-08-06 株式会社理光 Positioning method, positioning apparatus, and readable storage medium
US20190301873A1 (en) * 2018-03-27 2019-10-03 Uber Technologies, Inc. Log trajectory estimation for globally consistent maps
CN111094895A (zh) * 2018-06-29 2020-05-01 百度时代网络技术(北京)有限公司 Systems and methods for robust self-relocalization in a pre-built visual map
CN109141437A (zh) * 2018-09-30 2019-01-04 中国科学院合肥物质科学研究院 Robot global relocalization method
CN109887053A (zh) * 2019-02-01 2019-06-14 广州小鹏汽车科技有限公司 SLAM map stitching method and system
CN110533722A (zh) * 2019-08-30 2019-12-03 的卢技术有限公司 Visual-dictionary-based rapid robot relocalization method and system
CN110686677A (zh) * 2019-10-10 2020-01-14 东北大学 Geometric-information-based global positioning method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115326051A (zh) * 2022-08-03 2022-11-11 广州高新兴机器人有限公司 Dynamic-scene-based positioning method and apparatus, robot, and medium
CN115937383A (zh) * 2022-09-21 2023-04-07 北京字跳网络技术有限公司 Method and apparatus for rendering images, electronic device, and storage medium
CN115937383B (zh) * 2022-09-21 2023-10-10 北京字跳网络技术有限公司 Method and apparatus for rendering images, electronic device, and storage medium
CN116449392A (zh) * 2023-06-14 2023-07-18 北京集度科技有限公司 Map construction method and apparatus, computer device, and storage medium
CN116449392B (zh) * 2023-06-14 2023-09-19 北京集度科技有限公司 Map construction method and apparatus, computer device, and storage medium
CN116939815A (zh) * 2023-09-15 2023-10-24 常熟理工学院 UWB positioning base station selection method based on laser point cloud map
CN116939815B (zh) * 2023-09-15 2023-12-05 常熟理工学院 UWB positioning base station selection method based on laser point cloud map
CN116977622A (zh) * 2023-09-22 2023-10-31 国汽(北京)智能网联汽车研究院有限公司 Initial positioning method and apparatus, device, and medium

Also Published As

Publication number Publication date
CN114080625A (zh) 2022-02-22

Similar Documents

Publication Publication Date Title
WO2021253430A1 (zh) 2021-12-23 Absolute pose determination method, electronic device, and movable platform
Liu et al. TOF lidar development in autonomous vehicle
EP3283843B1 (en) Generating 3-dimensional maps of a scene using passive and active measurements
WO2021104497A1 (zh) Lidar-based positioning method and system, storage medium, and processor
WO2019179417A1 (zh) Data fusion method and related device
US11960028B2 (en) Determining specular reflectivity characteristics using LiDAR
WO2021072710A1 (zh) Point cloud fusion method and system for moving objects, and computer storage medium
WO2020243962A1 (zh) Object detection method, electronic device, and movable platform
WO2022126427A1 (zh) Point cloud processing method, point cloud processing apparatus, movable platform, and computer storage medium
WO2021051281A1 (zh) Point cloud noise filtering method, ranging apparatus, system, storage medium, and mobile platform
WO2022198637A1 (zh) Point cloud noise filtering method, system, and movable platform
US11796653B2 (en) Detecting and tracking Lidar cross-talk
WO2021239054A1 (zh) Spatial measurement apparatus, method, device, and computer-readable storage medium
WO2019135869A1 (en) Lidar intensity calibration
WO2020241043A1 (ja) Work analysis system, work analysis apparatus, and work analysis program
US20210255289A1 (en) Light detection method, light detection device, and mobile platform
WO2020215252A1 (zh) Point cloud noise filtering method for ranging apparatus, ranging apparatus, and mobile platform
US20210333401A1 (en) Distance measuring device, point cloud data application method, sensing system, and movable platform
Steinbaeck et al. Occupancy grid fusion of low-level radar and time-of-flight sensor data
WO2021232227A1 (zh) Method for constructing point cloud frames, target detection method, ranging apparatus, movable platform, and storage medium
WO2022083529A1 (zh) Data processing method and apparatus
CN114080545A (zh) Data processing method and apparatus, lidar, and storage medium
TWI687706B (zh) Positioning system for mobile device
KR102558910B1 (ko) Apparatus and method for positioning by fusing multiple types of highly reliable sensor data considering the surrounding environment
Shojaeipour et al. Laser-pointer rangefinder between mobile robot and obstacles via webcam based

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20940569

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20940569

Country of ref document: EP

Kind code of ref document: A1