CN114080625A - Absolute pose determination method, electronic equipment and movable platform - Google Patents

Absolute pose determination method, electronic equipment and movable platform

Info

Publication number
CN114080625A
Authority
CN
China
Prior art keywords: map, point cloud, key frame, local, point
Legal status
Pending
Application number
CN202080006249.7A
Other languages
Chinese (zh)
Inventor
朱晏辰
李延召
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Application filed by SZ DJI Technology Co Ltd
Publication of CN114080625A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 — Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods


Abstract

An absolute pose determination method, an electronic device and a movable platform are provided. The method comprises the following steps: loading a pre-constructed base map of the current scene, wherein the base map comprises a plurality of key frame maps, each key frame map corresponds to a key frame pose, and each key frame map contains information of first point cloud data collected by a first laser radar at that key frame pose; collecting second point cloud data at the current pose with a second laser radar mounted on the movable platform, and obtaining a local map at the current pose from the second point cloud data; and matching the local map with the plurality of key frame maps to determine the key frame map that matches the local map, and determining the current pose of the second laser radar according to the key frame pose corresponding to that key frame map. The method matches a local map generated from point cloud data acquired in real time against a pre-constructed base map to determine the current absolute pose, achieving high pose-solving accuracy and strong universality.

Description

Absolute pose determination method, electronic equipment and movable platform
Technical Field
The embodiment of the invention relates to the technical field of distance measurement, in particular to an absolute pose determination method, electronic equipment and a movable platform.
Background
Existing outdoor absolute positioning technologies are usually implemented with satellite positioning, where GPS (Global Positioning System) can only achieve meter-level positioning accuracy, and RTK (Real-Time Kinematic) carrier-phase differential positioning can achieve centimeter-level accuracy but is expensive. Moreover, these techniques can only obtain position information and cannot obtain attitude information.
Indoor absolute positioning technologies mostly depend on modifying the environment: positioning base stations have to be additionally deployed in the indoor environment, so the universality is poor. Positioning methods based on Bluetooth or WiFi signals, in turn, suffer from poor positioning accuracy and weak anti-interference capability.
Disclosure of Invention
In this summary, concepts in a simplified form are introduced that are further described in the detailed description. This summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In view of the defects in the prior art, a first aspect of the embodiments of the present invention provides a method for determining an absolute pose based on a laser radar, including:
loading a pre-constructed basic map of the current scene, wherein the basic map comprises a plurality of key frame maps, each key frame map corresponds to a key frame pose, and each key frame map contains information of first point cloud data collected by a first laser radar at the corresponding key frame pose;
acquiring second point cloud data by a second laser radar carried on the movable platform under the current pose, and obtaining a local map under the current pose according to the second point cloud data;
and matching the local map with the plurality of key frame maps to determine a key frame map matched with the local map, and determining the current pose of the second laser radar according to the key frame pose corresponding to the key frame map.
A second aspect of the embodiments of the present invention provides an electronic device, which includes a storage device and a processor, where the storage device is configured to store program code, and the processor is configured to execute the program code and, when the program code is executed, to:
load a pre-constructed basic map of the current scene, wherein the basic map comprises a plurality of key frame maps, each key frame map corresponds to a key frame pose, and each key frame map contains information of first point cloud data collected by a first laser radar at the corresponding key frame pose;
acquiring second point cloud data by a second laser radar carried on the movable platform under the current pose, and obtaining a local map under the current pose according to the second point cloud data;
and matching the local map with the plurality of key frame maps to determine a key frame map matched with the local map, and determining the current pose of the second laser radar according to the key frame pose corresponding to the key frame map.
A third aspect of the embodiments of the present invention provides a movable platform, where a laser radar is mounted on the movable platform, and the movable platform further includes the electronic device provided in the second aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention provides a computer storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the absolute pose determination method provided by the first aspect of the embodiments of the present invention.
The absolute pose determining method, the electronic device and the movable platform of the embodiment of the invention construct the basic map in advance according to the point cloud data, and match the local map generated according to the point cloud data acquired in real time with the basic map to determine the current absolute pose, so that the pose resolving accuracy is high, the base station arrangement is not required, and the universality is strong.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a schematic flow chart of a laser radar-based absolute pose determination method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a scenario for building a base map, according to an embodiment of the invention;
FIG. 3 is a schematic block diagram of an electronic device provided by an embodiment of the invention;
FIG. 4 is a schematic block diagram of a movable platform provided by an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a ranging apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of one embodiment of a distance measuring device employing coaxial optical paths in accordance with embodiments of the present invention;
fig. 7 is a schematic diagram of a scan pattern of a lidar in accordance with an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be understood that the described embodiments are merely some, and not all, of the embodiments of the present invention, and that the invention is not limited to the example embodiments described herein. All other embodiments obtained by a person skilled in the art from the described embodiments without inventive effort shall fall within the scope of protection of the present invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present invention, a detailed structure will be set forth in the following description in order to explain the present invention. Alternative embodiments of the invention are described in detail below, however, the invention may be practiced in other embodiments that depart from these specific details.
An embodiment of the present invention provides a laser-radar-based absolute pose determination method, which is used to determine the absolute pose of a laser radar and, further, the absolute pose of a movable platform carrying the laser radar. FIG. 1 shows a schematic flow diagram of a laser-radar-based absolute pose determination method 100 according to an embodiment of the present invention. As shown in FIG. 1, the absolute pose determination method 100 includes the following steps:
in step S110, loading a pre-constructed basic map under the current scene, where the basic map includes a plurality of keyframe maps, the keyframe maps correspond to keyframe poses, and the keyframe maps include information of first point cloud data collected by a first lidar at the keyframe poses;
in step S120, in the current pose, second point cloud data is collected by a second laser radar mounted on the movable platform, and a local map in the current pose is obtained according to the second point cloud data;
in step S130, the local map is matched with the plurality of keyframe maps to determine a keyframe map matched with the local map, and the current pose of the second lidar is determined according to the keyframe pose corresponding to the keyframe map.
In the mapping stage, the absolute pose determination method 100 of the embodiment of the present invention completes construction of the base map by scanning the current scene with a laser radar; in the pose determination stage, it can determine the current absolute pose directly by an algorithm from point cloud data acquired in real time. The method features high pose-solving accuracy, no dependence on base station deployment, strong universality and the like, and is particularly suitable for indoor environments without GPS or with weak GPS signals, such as shopping malls, airports, warehouses and hotels.
The method 100 of the embodiment of the present invention may be applied to a movable platform carrying a laser radar. The movable platform includes, but is not limited to, an unmanned vehicle, a robot and the like, for example an indoor robot operating in an indoor scene. The laser radar mounted on the movable platform is the second laser radar in step S120; after the second point cloud data is collected by the second laser radar, the current pose of the second laser radar can be determined from the second point cloud data and the pre-constructed basic map of the current scene, and thus the current pose of the movable platform carrying the second laser radar is determined.
In step S110, the loaded basic map is pre-constructed according to the first point cloud data collected by the first laser radar in the current scene. The first lidar may be a mechanical lidar or may be a solid or semi-solid lidar. In the map building stage, a movable platform can carry a first laser radar to move in the current scene, the first laser radar collects first point cloud data in the moving process, and then a basic map is built according to the first point cloud data. The first laser radar and the second laser radar can be the same laser radar, so that the matching degree of the local map and the basic map is improved. Of course, the first and second lidar may also be different lidar. Similarly, the same or similar movable platform can be adopted for basic map construction and absolute pose determination, so that the matching degree of the local map and the basic map is further improved.
Illustratively, the first laser radar actively emits laser pulses toward a detected object, captures the laser echo signal, and calculates the distance to the detected object from the time difference between laser emission and reception; angle information of the detected object is obtained from the known emission direction of the laser, and reflectivity information is obtained from the pulse width and the distance. In one implementation, the first laser radar may detect the distance from the detected object to the radar by time of flight (TOF). Alternatively, the first laser radar may detect this distance by other techniques, such as methods based on phase shift or frequency shift measurement, which is not limited herein.
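For illustration only, the following minimal sketch shows the TOF relation just described; the function name and the example timestamps are hypothetical and not part of the patent.

```python
# Minimal sketch of time-of-flight ranging, assuming an ideal round trip.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(t_emit_s: float, t_receive_s: float) -> float:
    """Distance to the target from emission/reception timestamps (seconds)."""
    round_trip = t_receive_s - t_emit_s
    return 0.5 * C * round_trip  # light travels to the target and back

# Example: an echo received 200 ns after emission corresponds to roughly 30 m.
print(tof_distance(0.0, 200e-9))  # ~29.98 m
```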
FIG. 2 shows a schematic diagram of a scenario for building a base map according to one embodiment of the present invention. As shown in FIG. 2, the movable platform carries the first laser radar and roams in the current scene; a mapping trajectory is obtained through a SLAM (Simultaneous Localization and Mapping) algorithm, and the point cloud data acquired during roaming is converted into the same coordinate system to obtain the first point cloud data. In the example of FIG. 2, the movable platform may be a warehouse logistics trolley and the current scene may be the warehouse in which the trolley operates. The mapping trajectory of the movable platform should ensure that the first point cloud data contains point cloud data of the current scene that is as rich as possible; the trajectory includes, but is not limited to, the closed curve shown in FIG. 2.
The basic map constructed from the first point cloud data includes key frame maps, each corresponding to a key frame pose at a key frame time. A key frame map is in the form of an image, i.e., the target objects are characterized by pixel values. During map construction, the smallest processing unit of the first point cloud data is called a frame; taking a first laser radar with a 10 Hz output frequency as an example, the first point cloud data collected every 100 ms is called one frame. By employing the SLAM algorithm, the position and orientation of each frame with respect to the reference coordinate system can be output. Among all point cloud frames, one frame is selected at preset time intervals to serve as a key frame, and the time interval between adjacent key frames can be set dynamically according to the size of the current scene and the movement speed of the movable platform. In one example, as shown in FIG. 2, the last frame of each time period may be selected as the key frame. In other examples, the key frame may also be the first frame of each time segment or some frame in between.
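The following is a minimal sketch of this key-frame selection, under the assumption that every frame arrives with a timestamp and its SLAM pose; the Frame structure, the 2 s interval and all names are illustrative only.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Frame:
    timestamp: float      # seconds
    pose: np.ndarray      # 4x4 transform of this frame w.r.t. the reference frame (from SLAM)
    points: np.ndarray    # (N, 4) array: x, y, z, reflectivity

def select_keyframes(frames: List[Frame], interval_s: float = 2.0) -> List[Frame]:
    """Pick the last frame of every `interval_s` window as a key frame."""
    keyframes, window_start = [], frames[0].timestamp
    prev = frames[0]
    for f in frames[1:]:
        if f.timestamp - window_start >= interval_s:
            keyframes.append(prev)       # last frame of the finished window
            window_start = f.timestamp
        prev = f
    keyframes.append(prev)               # close the final window
    return keyframes
```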
As shown in FIG. 2, at a key frame time the first laser radar is in the key frame pose (i.e., a position and an attitude), the field of view (FOV) covered by the first laser radar is the field of view at that key frame pose, and the collected first point cloud data includes the point cloud covered by this field of view. After the key frame map is generated from the point cloud data of the key frame time, the key frame map is associated with the key frame pose of that time.
Furthermore, since a single-frame point cloud is sparse, the key frame map may include not only the point cloud data acquired at the key frame time: the first point cloud data within a period of time before and after the key frame time may also be superimposed at the key frame time, and the key frame map may be generated from the superimposed first point cloud data. Superimposing the point cloud data of a period around the key frame time at the key frame time makes the point cloud at the key frame time denser, so that the key frame map generated from the superimposed first point cloud data contains more spatial information at the key frame pose.
When the time interval is short, the field of view covered by the laser radar changes little, so the target objects contained in the first point cloud data over that period differ little; the point cloud superposition can therefore be realized by converting the first point cloud data of that period into the point cloud coordinate system of the key frame time. Converting the first point cloud data of other times into the coordinate system of the key frame time is equivalent to treating that data as if it had been acquired at the key frame pose, so the superimposed first point cloud data still corresponds to the key frame pose.
Exemplarily, "superimposing first point cloud data in a period of time before and after the key frame time at the key frame time" may include superimposing first point cloud data in a period of time before the key frame time at the key frame time, superimposing point cloud data in a period of time after the key frame time at the key frame time, or superimposing point cloud data in a period of time before and after the key frame time at the key frame time. In one embodiment, all the point cloud frames between two adjacent key frames can be superimposed at one of the key frame moments, so that the point cloud data superimposed at the key frame moment contains as much information as possible, and the accuracy of subsequent matching is improved. In other examples, part of the point cloud frames between adjacent key frames can be overlapped at the moment of the key frame, so that the calculation amount is reduced.
Referring to FIG. 2, in a specific embodiment, the last frame of each time period may be determined as the key frame, and all the first point cloud data within the period before that key frame may be superimposed at the key frame time. Superimposing the first point cloud data of the period before the key frame at the key frame time avoids the situation in which superposition is impossible because no further point cloud frames exist after the last key frame.
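A minimal sketch of this superposition is given below, under the assumption that each frame carries a 4x4 SLAM pose expressed in a common world frame; the Frame fields reuse the illustrative structure above and are not defined by the patent.

```python
import numpy as np

def transform_points(points_xyz: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid transform T to (N, 3) points."""
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    return (homo @ T.T)[:, :3]

def superimpose_at_keyframe(keyframe, neighbor_frames):
    """Accumulate neighbouring frames in the key frame's coordinate system."""
    T_world_key = keyframe.pose
    T_key_world = np.linalg.inv(T_world_key)
    merged = [keyframe.points[:, :3]]
    for f in neighbor_frames:
        # key <- world composed with world <- frame gives key <- frame
        T_key_frame = T_key_world @ f.pose
        merged.append(transform_points(f.points[:, :3], T_key_frame))
    return np.vstack(merged)   # denser point cloud, still expressed at the key frame pose
```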
In some embodiments, the keyframe map corresponding to each keyframe pose comprises at least one of a keyframe reflectivity map and a keyframe depth map. The key frame reflectivity map is constructed according to the reflectivity information of the first point cloud data superposed at the key frame moment; the key frame depth map is constructed according to depth information of the first point cloud data superimposed at the time of the key frame. Both the reflectivity information and the depth information can characterize the target object, forming a unique keyframe map corresponding to each keyframe pose. By constructing two different keyframe maps, the multi-dimensional information can be utilized, and the pose determination effect and robustness are optimized.
As an example, the method of constructing a key frame reflectivity map from reflectivity information, or a key frame depth map from depth information, includes: converting the point cloud points of the first point cloud data superimposed at the key frame time into a spherical coordinate system, mapping the azimuth angle and the pitch angle to the X and Y directions of the map respectively, and determining the pixel value, for example a grayscale value, from the reflectivity value or the depth value, thereby converting the point cloud into a grayscale image at a certain resolution (for example, 0.1° × 0.1°). After the point cloud is converted into a grayscale image, the key frame maps can be matched by image processing in the subsequent steps, which reduces the matching difficulty; at the same time, the key frame map still carries the position information of the point cloud data, which guarantees the accuracy of absolute pose solving.
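A minimal sketch of such a spherical projection follows, assuming an angular resolution of 0.1° and hypothetical field-of-view bounds; neither the bounds nor the normalization scheme is specified by the patent.

```python
import numpy as np

def cloud_to_gray_image(points: np.ndarray, values: np.ndarray,
                        res_deg: float = 0.1,
                        h_fov=(-30.0, 30.0), v_fov=(-15.0, 15.0)) -> np.ndarray:
    """Project (N, 3) points into an azimuth/elevation grid; pixel = normalized value."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.degrees(np.arctan2(y, x))
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))

    cols = ((azimuth - h_fov[0]) / res_deg).astype(int)
    rows = ((elevation - v_fov[0]) / res_deg).astype(int)
    width = int((h_fov[1] - h_fov[0]) / res_deg)
    height = int((v_fov[1] - v_fov[0]) / res_deg)

    img = np.zeros((height, width), dtype=np.float32)
    valid = (rows >= 0) & (rows < height) & (cols >= 0) & (cols < width)
    # Normalize reflectivity (or depth) into [0, 1] grayscale; last write wins per pixel.
    norm = (values - values.min()) / max(np.ptp(values), 1e-9)
    img[rows[valid], cols[valid]] = norm[valid]
    return img
```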
In some embodiments, the pre-constructed base map may further include a global point cloud map in the current scene generated from the first point cloud data. The global point cloud map is in the form of a point cloud map, namely, the global point cloud map is composed of massive point cloud points. Exemplarily, a first laser radar is carried by a movable platform to roam in a current scene, and after first point cloud data under different poses are collected in the roaming process, each frame of the first point cloud data is converted into a world coordinate system to be overlaid, and the overlaid first point cloud data comprise global point clouds in the current scene, so that the generated point cloud map is called a global point cloud map.
Further, in order to facilitate subsequent feature extraction and matching, the global point cloud map may be respectively constructed as a global plane point map and a global edge point map.
Wherein the global plane point map comprises plane points in the point cloud corresponding to points in the real scene that lie on a plane; the global edge point map includes edge points in a point cloud that correspond to points in a real scene that lie on the edges of planes, objects, thin rods, etc. Specifically, in each frame of point cloud of the first point cloud data, edge points and plane points are extracted respectively, and finally all the edge points and the plane points are converted into a world coordinate system, so that a global edge point map and a global plane point map are formed respectively.
As described above, the first laser radar employed in embodiments of the present invention may be not only a mechanical laser radar but also a solid-state or semi-solid-state laser radar. The point cloud scanning pattern of a mechanical laser radar is regular, so extracting plane points and edge points from it is relatively simple, whereas the scanning pattern of a semi-solid-state or solid-state laser radar is irregular and previously used feature extraction methods extract plane points and edge points from it with low accuracy. Therefore, the following methods are adopted in embodiments of the present invention to extract plane points and edge points.
Specifically, the current frame of the first point cloud data is preprocessed first. The preprocessing may include the following steps. First, the current frame point cloud data is traversed to find and mark zero points, where a zero point may be a point within the blind zone, a point at infinity, etc. Then, the current frame point cloud data is sorted by depth value, the median is selected as a scene scale threshold, and the sliding window size is determined based on this scene scale threshold. Finally, the current frame point cloud data is traversed to determine noise points; plane points and edge points are subsequently extracted only from the point cloud points other than the noise points. Illustratively, the method of determining noise points comprises: calculating the distance between each point cloud point and the points immediately before and after it, and marking the point as a noise point if the distance exceeds a certain threshold.
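A minimal sketch of this preprocessing is shown below; the threshold values, the mapping from scene scale to window size, and the interpretation of the noise rule (both neighbour distances exceeding the threshold) are illustrative assumptions.

```python
import numpy as np

def preprocess_frame(points: np.ndarray, noise_dist: float = 0.5):
    """Mark zero points and noise points and derive a scene-scale-based window size.

    points: (N, 3) array ordered by acquisition time.
    Returns (zero_mask, noise_mask, window_size).
    """
    depth = np.linalg.norm(points, axis=1)
    zero_mask = depth < 1e-6                      # blind-zone / invalid returns

    # Median depth as a rough scene scale; map it to a sliding-window size.
    scene_scale = np.median(depth[~zero_mask])
    window_size = 5 if scene_scale > 10.0 else 7  # illustrative mapping only

    # A point far from both of its temporal neighbours is treated as noise.
    noise_mask = np.zeros(len(points), dtype=bool)
    d_prev = np.linalg.norm(points[1:-1] - points[:-2], axis=1)
    d_next = np.linalg.norm(points[1:-1] - points[2:], axis=1)
    noise_mask[1:-1] = (d_prev > noise_dist) & (d_next > noise_dist)
    return zero_mask, noise_mask, window_size
```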
After preprocessing is completed, plane points are extracted from the preprocessed first point cloud data. The extraction of plane points can be realized with a sliding window method.
Specifically, the method of extracting plane points includes the following steps. First, according to the sliding window size determined in the preprocessing step (for example, 5), a first predetermined number of point cloud points (for example, 5 point cloud points) are taken from the current frame of the first point cloud data in time order, and it is judged whether this group of point cloud points satisfies a first preset condition. The first preset condition includes: the spatial distribution of the group of point cloud points is approximately a straight line, and the group is approximately centrosymmetric about its middle point. For example, principal component analysis may be used to judge whether the acquired group of point cloud points satisfies the first preset condition.
If the first preset condition is satisfied, the group of point cloud points is determined as plane point candidates. The sliding window is then moved backwards to obtain the next group of point cloud points of the same size for judgment, where the next group contains at least one point cloud point of the previous group. After the sliding window has traversed all point cloud points and all plane point candidates satisfying the first preset condition have been extracted, the final plane point extraction result can be determined among the candidates. For example, to prevent feature aggregation, the current frame point cloud data may be divided into several regions, the plane point candidates in each region sorted by how well they satisfy the first preset condition, and a portion of the candidates selected from each region based on this ranking taken as the final plane point extraction result.
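A minimal sketch of the sliding-window check is given below, using eigenvalues of the covariance matrix as the principal component analysis; the collinearity ratio and symmetry tolerance are illustrative thresholds, not values taken from the patent.

```python
import numpy as np

def is_plane_candidate(group: np.ndarray,
                       line_ratio: float = 10.0,
                       sym_tol: float = 0.2) -> bool:
    """First preset condition: the group is roughly collinear and roughly
    centrosymmetric about its middle point (thresholds are illustrative)."""
    centered = group - group.mean(axis=0)
    # Principal component analysis: eigenvalues of the 3x3 covariance matrix.
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    collinear = eigvals[0] > line_ratio * max(eigvals[1], 1e-12)
    # Centrosymmetry: the group mean should stay close to the middle point.
    mid = group[len(group) // 2]
    extent = np.linalg.norm(group[-1] - group[0]) + 1e-12
    symmetric = np.linalg.norm(group.mean(axis=0) - mid) < sym_tol * extent
    return collinear and symmetric

def extract_plane_candidates(points: np.ndarray, window: int = 5, step: int = 1):
    """Slide a window over time-ordered points and collect candidate groups."""
    candidates = []
    for start in range(0, len(points) - window + 1, step):
        group = points[start:start + window]
        if is_plane_candidate(group):
            candidates.append((start, group))
    return candidates
```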
The extraction of edge points may be carried out based on the plane point extraction result. In embodiments of the present invention, edge points are divided into three types during feature extraction: surface-intersection edge points, jump edge points and fine-object edge points. Surface-intersection edge points correspond to points on the line of intersection of intersecting surfaces in three-dimensional space, jump edge points correspond to points on the edge of an isolated surface in three-dimensional space, and fine-object edge points correspond to points on the edges of fine objects in three-dimensional space.
Since a surface-intersection edge point is a point on the line where two intersecting planes meet, the extraction of surface-intersection edge points comprises at least the following steps:
First, it is judged whether the two groups of point cloud points before and after the current point cloud point both satisfy the plane point criterion, i.e., whether the current point cloud point lies on two planes simultaneously. If both groups satisfy the plane point criterion, it is further judged whether the following conditions hold: the maximum distance between any two points within each of the two groups satisfies a first threshold range, the angle between the direction vectors formed by the two groups satisfies a second threshold range, and the angles between those direction vectors and the emission direction of the current point cloud point satisfy a third threshold range. If these conditions are met, the two groups do not lie on the same plane, so the current point cloud point can be determined as a surface-intersection edge point.
Illustratively, the extraction of jump edge points comprises the following steps:
First, it is judged whether the difference between the distances from the current point cloud point to the points immediately before and after it is greater than a preset threshold; of these two neighbouring points, the one closer to the current point cloud point is defined as the near point and the one farther away as the far point. When the difference of the distances is greater than the preset threshold, the current point cloud point is determined as a candidate jump edge point.
Next, a final jump edge point is determined among the candidate jump edge points. Specifically, the point cloud point pair formed by the candidate jump edge point and the near point is defined as the near-side group, and the pair formed by the candidate and the far point as the far-side group, and it is judged whether the following conditions hold: the near-side group satisfies the first preset condition, the angle between the direction vector formed by the near-side group and that formed by the far-side group satisfies a fourth threshold range, the maximum distance between any two points of the near-side group satisfies a fifth threshold range, and the angle between the direction vector formed by the near-side group and the emission direction of the candidate jump edge point satisfies a sixth threshold range; alternatively, it may also be judged whether the far point is a non-zero point or a non-blind-zone zero point. When these conditions are satisfied, the candidate can be determined as a final jump edge point.
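The candidate stage of this jump edge detection can be sketched as follows; the threshold value is illustrative, and the subsequent near-side/far-side checks described above would be applied on top of these candidates.

```python
import numpy as np

def jump_edge_candidates(points: np.ndarray, jump_thresh: float = 1.0):
    """Flag points whose preceding/following neighbour distances differ sharply,
    and record which neighbour is the near point and which the far point."""
    candidates = []
    for i in range(1, len(points) - 1):
        d_prev = np.linalg.norm(points[i] - points[i - 1])
        d_next = np.linalg.norm(points[i] - points[i + 1])
        if abs(d_prev - d_next) > jump_thresh:
            near = i - 1 if d_prev < d_next else i + 1
            far = i + 1 if near == i - 1 else i - 1
            candidates.append((i, near, far))   # (candidate, near point, far point)
    return candidates
```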
When fine objects appear in the current scene, their edge points are difficult to extract in the above ways, so fine-object edge points are additionally extracted using a dedicated method for fine-object edge points.
Specifically, to distinguish them from noise points, a second predetermined number of consecutive isolated point cloud points, in which the maximum distance between any two points satisfies a seventh threshold range, are first determined as fine-object edge candidate points, where the second predetermined number (e.g., two or three) is smaller than the first predetermined number (e.g., five).
It is then determined, based on the edge point extraction result, whether the candidate fine-object edge point and the edge points of the previous frame together form a long, thin edge; if so, the candidate fine-object edge point is determined as a final fine-object edge point.
The above edge point and plane point extraction methods have strong adaptability to the scanning pattern of the laser radar: besides the scanning pattern of a traditional mechanical laser radar, they also achieve good extraction results on the scanning patterns of solid-state, semi-solid-state and other laser radars.
In step S120, in the current pose, second point cloud data is collected by a second laser radar mounted on the movable platform, and a local map in the current pose is obtained according to the second point cloud data. Then, in step S130, the local map is matched with the plurality of keyframe maps to determine a keyframe map matched with the local map, and the current pose of the second lidar is determined according to the keyframe pose corresponding to the keyframe map.
Taking the warehouse logistics trolley as an example, after the base map has been constructed in advance, a trolley that needs to enter the warehouse to execute a task loads the base map, starts the second laser radar it carries, and begins point cloud data collection and absolute pose determination. The second laser radar remains stationary at the current pose and continuously acquires second point cloud data for a period of time (e.g., 1 s). If the second laser radar is of a non-repetitive scanning type, the point cloud coverage within the FOV keeps increasing over the acquisition time of the second point cloud data, so richer information can be collected and the current pose can be determined better.
Similar to the key frame map, the local map in the current pose is also in the form of an image, and the matching between the local map and the key frame map is the matching between the images, so that the type of the local map in the current pose should correspond to the type of the key frame map. That is, if the key frame map includes a key frame depth map, the local map at the current pose includes a local depth map, and the matching between the local map and the key frame map includes matching the local depth map with the key frame depth map; if the key frame map comprises a key frame reflectivity map, the local map under the current pose comprises a local reflectivity map, and the matching between the local map and the key frame map comprises matching the local reflectivity map with the key frame reflectivity map. It is understood that the matching may include performing the above two kinds of matching respectively in order to improve the accuracy of the matching.
The local depth map and the local reflectivity map may be constructed in a similar manner as the key frame depth map and the key frame reflectivity map, for example, the pixel value is determined according to the reflectivity value or the size of the depth value, so as to convert the point cloud map into a gray scale image, which may be referred to above.
As one implementation, the determination of the current pose may be achieved by matching the local map only with the dense key frame maps. Since the local map and the key frame maps are in the form of images, any suitable image matching method can be used to match them. For example, the local map and a key frame map may be matched by feature matching: image features of the local map and the key frame map are extracted and matched, and if the matching degree is greater than a preset threshold, or the matching residual is smaller than a certain threshold, the local map is determined to match that key frame map. The image features extracted from the local map and the key frame map include HOG features, SIFT features, SURF features, ORB features, LBP features, HAAR features, or any other suitable image features. Alternatively, the matching between the local map and the key frame map can also be realized by grayscale-based matching.
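A minimal sketch of such feature-based matching is shown below, using ORB features with OpenCV as an assumed dependency; the feature count, descriptor-distance threshold and score threshold are illustrative, not values prescribed by the patent, and the maps are assumed to be 8-bit grayscale images.

```python
import cv2
import numpy as np

def match_score(local_img: np.ndarray, keyframe_img: np.ndarray) -> float:
    """Crude similarity score between two 8-bit grayscale maps via ORB matching."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(local_img, None)
    kp2, des2 = orb.detectAndCompute(keyframe_img, None)
    if des1 is None or des2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]   # keep close descriptor matches
    return len(good) / max(min(len(kp1), len(kp2)), 1)

def find_matching_keyframe(local_img, keyframe_imgs, threshold: float = 0.2):
    """Return the index of the best-matching key frame map, or None if below threshold."""
    scores = [match_score(local_img, kf) for kf in keyframe_imgs]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None
```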
When the local map and the key frame map respectively comprise the depth map and the reflectivity map, the local map can be determined to be matched with the key frame map when the matching degree of the local depth map and the key frame depth map and the matching degree of the local reflectivity map and the key frame reflectivity map are both greater than respective preset thresholds, so that the matching accuracy is ensured. Or, when at least one of the matching degree between the local depth map and the key frame depth map and the matching degree between the local reflectivity map and the key frame reflectivity map is greater than a respective preset threshold, determining that the local map is matched with the key frame map.
If the local map successfully matches a certain key frame map, the current pose is close to the key frame pose of that map, but a certain difference still exists between them. Therefore, after the matching key frame map and local map are found, the method further comprises: extracting matched feature point pairs from the local map and the key frame map matched with it, solving the pose transformation of the local map relative to the key frame map from the three-dimensional spatial information of the feature point pairs, for example by ICP (Iterative Closest Point) solving, and determining the current pose from the solved pose transformation and the key frame pose corresponding to the key frame map, thereby ensuring the pose solving accuracy.
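Under the assumption that the matched feature point pairs already carry 3-D coordinates, the relative rigid transform can be solved in closed form by the standard SVD (Kabsch) method, which is one way to realize the ICP-style solving mentioned above; the sketch below is illustrative only.

```python
import numpy as np

def rigid_transform_from_pairs(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Best-fit 4x4 transform mapping src (N, 3) onto dst (N, 3) in the least-squares sense."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# The current pose then follows from composing the key frame pose with the solved
# relative transform, e.g. T_world_current = T_world_keyframe @ T_keyframe_local
# (illustrative composition, assuming the transform maps local-frame points to
# keyframe-frame points).
```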
Because the key frame map and the local map of the embodiment of the invention are generated according to the point cloud data, after the matched characteristic point pairs are obtained, the three-dimensional space information of the characteristic point pairs can be directly obtained for solving the absolute pose, the method is more convenient compared with a mode of acquiring images through a camera, and the pose solving precision can be improved.
As described above, in some embodiments, the pre-constructed base map further includes the global point cloud map in the current scene generated according to the first point cloud data, and then the method 100 of the embodiment of the present invention may further include: generating a local point cloud map under the current view angle according to the second point cloud data; and matching the local point cloud map with the global point cloud map to determine the current pose.
If the global point cloud map comprises a global plane point map and a global edge point map, correspondingly, the local point cloud map also comprises a local plane point map and a local edge point map, and matching the local point cloud map with the global point cloud map comprises the following steps: and matching the local plane point map with the global plane point map, and matching the local edge point map with the global edge point map. Wherein the local plane point map is constructed by extracting plane points in the second point cloud data, and the local edge point map is constructed by extracting edge points in the second point cloud data.
Specifically, constructing the local plane point map includes: acquiring a first predetermined number of point cloud points meeting a first preset condition from the second point cloud data according to a time sequence to serve as plane point candidate points; obtaining a final plane point extraction result of the current frame point cloud data based on the determined plane point candidate points; wherein the first preset condition comprises: the spatial distribution of the group of point cloud points is approximately a straight line, and the group of point cloud points are approximately centrosymmetric when taking the middle point as the center.
Constructing the local edge point map comprises: extracting edge points from the second point cloud data to construct the local edge point map, wherein the edge points comprise surface-surface intersecting edge points and jump edge points, the surface-surface intersecting edge points correspond to points on an intersecting line of surfaces intersecting in the three-dimensional space, and the jump edge points correspond to points on an edge of an isolated surface in the three-dimensional space.
Other specific details of extracting the edge points and the plane points from the second point cloud data may refer to the above description of constructing the global plane point map and the global edge point map, which is not described herein again.
Unlike the key frame maps and the local map, the local point cloud map and the global point cloud map are in the form of point clouds. They can serve as a supplement to the key frame maps and the local map to realize more refined matching. Specifically, a coarse search may be performed first to determine several key frame maps that preliminarily match the local map, and on this basis a fine search is performed using the local point cloud map and the global point cloud map, realizing a coarse-to-fine search. In one example, when no key frame map with a matching degree above the preset threshold can be found by matching the local map with the key frame maps alone, pose determination may be carried out with this coarse-to-fine search. Of course, the coarse-to-fine search can also be used on its own.
Specifically, the rough search step mainly includes: first, a plurality of candidate keyframe maps are selected from the plurality of keyframe maps.
Illustratively, as a relatively simple approach, candidate key frames may be selected based on objects with salient features in the local map and the key frame maps. For example, if a highly reflective object exists in the current local map, the key frame maps in which that highly reflective object also appears are taken as candidate key frame maps.
However, since highly reflective objects and other objects with salient features occur somewhat randomly, the local map may additionally be similarity-matched against the key frame maps, for example after candidate key frame maps have been extracted based on a highly reflective object, and the candidates whose similarity score exceeds a threshold are selected as candidate key frames. Image similarity can be evaluated by, among other methods, histogram matching, high-dimensional feature matching or a bag-of-words approach. In some embodiments, the candidate key frame maps may also be determined directly from the similarity, without relying on highly reflective objects.
Then, for each candidate key frame, the local point cloud map is converted into the world coordinate system according to the key frame pose corresponding to that candidate key frame map. The converted local point cloud map is then matched against the global point cloud map, and a target key frame map is selected from the candidate key frame maps according to the matching result.
The matching of the local point cloud map with the global point cloud map comprises matching the local edge point map with the global edge point map and matching the local plane point map with the global plane point map. Since the local point cloud map and the global point cloud map are in the form of point clouds, they can be matched by point cloud matching, for example by computing point-to-line, point-to-plane and point-to-point distances. The sums of all feature distances computed after conversion with the key frame pose of each candidate key frame map can then be ranked, and the candidate key frame map with the smallest sum of feature distances is taken as the target key frame map.
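A minimal sketch of this coarse search is given below; point-to-nearest-point distance is used as a stand-in for the point-to-line and point-to-plane distances described above, and SciPy's KD-tree is an assumed dependency.

```python
import numpy as np
from scipy.spatial import cKDTree

def _apply(T: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid transform to (N, 3) points."""
    return (np.hstack([pts, np.ones((len(pts), 1))]) @ T.T)[:, :3]

def coarse_search(local_edge, local_plane, global_edge, global_plane, candidate_poses):
    """Rank candidate key frame poses by total nearest-feature distance after conversion."""
    edge_tree, plane_tree = cKDTree(global_edge), cKDTree(global_plane)
    scores = []
    for T_world_keyframe in candidate_poses:
        e_dist, _ = edge_tree.query(_apply(T_world_keyframe, local_edge))
        p_dist, _ = plane_tree.query(_apply(T_world_keyframe, local_plane))
        scores.append(e_dist.sum() + p_dist.sum())
    return int(np.argmin(scores))   # index of the target key frame map
```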
After the target key frame map is found through the coarse search, further refined registration can be carried out according to the target key frame map and its key frame pose information.
Specifically, the refined registration includes: converting the local point cloud map into the world coordinate system according to the key frame pose information corresponding to the target key frame map, and optimizing the key frame pose corresponding to the target key frame map according to the feature distances between the local point cloud map and the global point cloud map, so as to determine the current pose. In particular, the edge points and plane points of the local edge point map and the local plane point map can be converted into the world coordinate system according to the key frame pose of the target key frame map, and that pose is then adjusted and optimized iteratively, using the sum of point-to-line distances, point-to-plane distances or other feature distances as the loss function, until the loss falls below a preset threshold, at which point the accurate current pose is obtained.
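The refined registration can be sketched as a small nonlinear least-squares problem; the Euler-angle parametrization, the use of point-to-nearest-point distances in place of the point-to-line and point-to-plane distances described above, and the SciPy optimizer are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def pose_matrix(params: np.ndarray) -> np.ndarray:
    """(rx, ry, rz, tx, ty, tz) -> 4x4 transform using Z-Y-X Euler angles."""
    rx, ry, rz, tx, ty, tz = params
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = Rz @ Ry @ Rx, (tx, ty, tz)
    return T

def refine_pose(local_pts: np.ndarray, global_pts: np.ndarray, T_init: np.ndarray) -> np.ndarray:
    """Refine the target key frame pose by minimizing nearest-neighbour feature distances."""
    tree = cKDTree(global_pts)

    def residuals(delta):
        T = pose_matrix(delta) @ T_init
        homo = np.hstack([local_pts, np.ones((len(local_pts), 1))])
        world = (homo @ T.T)[:, :3]
        dists, _ = tree.query(world)
        return dists

    result = least_squares(residuals, x0=np.zeros(6))
    return pose_matrix(result.x) @ T_init   # optimized current pose
```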
After steps S110 to S130 are completed, the current pose of the second laser radar is obtained, and hence the current pose of the movable platform. Taking a warehouse logistics trolley executing a logistics task as an example, the trolley obtains its current pose C in the warehouse. Assuming the logistics task is to convey material from A to B, after the current pose C is obtained, paths from C to A and from A to B can be planned according to pose C and the positions of points A and B; while the trolley is controlled to move according to the pose information, its subsequent poses are updated on the basis of the determined initial pose.
The subsequent poses can be determined incrementally by fusing a SLAM (Simultaneous Localization and Mapping) algorithm with the absolute pose determination method of the embodiments of the present invention. Furthermore, the base map can be updated according to the second point cloud data, maintaining the map state, enriching the information of the base map and correcting it in time when the current scene changes.
The absolute pose determining method of the embodiment of the invention builds the basic map in advance according to the point cloud data, matches the local map generated according to the point cloud data acquired in real time with the basic map to determine the current absolute pose, and has the advantages of high pose resolving precision, no need of depending on base station arrangement and strong universality.
In another aspect, an electronic device is further provided, where the electronic device includes a storage device and a processor, where the storage device is configured to store program codes; the processor is configured to execute the program code, and when the program code is executed, is configured to implement the absolute pose determination method described above. The electronic equipment of the embodiment of the invention can be carried on the movable platform or be independent of the movable platform. And the electronic equipment is in communication connection with a second laser radar carried on the movable platform so as to receive second point cloud data. The electronic device is further configured to load a pre-constructed base map to enable absolute pose determination for the second lidar based on the base map and the second point cloud data. Fig. 3 shows a schematic block diagram of an electronic device 300 in an embodiment of the invention.
As shown in fig. 3, electronic device 300 includes one or more processors 320 and one or more memory devices 310. Optionally, the electronic device 300 may further include at least one of an input device (not shown), an output device (not shown), and an image sensor (not shown), which are interconnected by a bus system and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 300 shown in fig. 3 are merely exemplary and not limiting, and the electronic device 300 may have other components and structures as desired, such as a transceiver for transmitting and receiving signals.
The storage device 310, i.e., the memory, is used to store processor-executable instructions, e.g., program instructions for implementing the corresponding steps of the absolute pose determination method according to an embodiment of the present invention. It may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, etc.
The input device may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device may output various information (e.g., images or sounds) to an external (e.g., user), and may include one or more of a display, a speaker, and the like.
The communication interface (not shown) is used for communication between the electronic device 300 and other devices, including wired or wireless communication. The electronic device 300 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In one exemplary embodiment, the communication interface receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication interface further comprises a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The processor 320 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 300 to perform desired functions. The processor can execute the instructions stored in the storage 310 to perform the absolute pose determination methods described herein. For example, processor 320 can include one or more embedded processors, processor cores, microprocessors, logic circuits, hardware Finite State Machines (FSMs), Digital Signal Processors (DSPs), or a combination thereof.
The storage device 310 may store one or more computer program instructions, and the processor 320 may execute the program instructions stored in the storage device 310 to implement the functions of the embodiments of the invention described herein and/or other desired functions, for example to perform the corresponding steps of the absolute pose determination method according to the embodiments of the present invention. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
In another aspect, an embodiment of the present invention provides a movable platform on which a laser radar 410, i.e., the second laser radar described above, is mounted, as shown in FIG. 4. The movable platform further includes an electronic device 420, and the electronic device 420 is communicatively connected to the second laser radar.
For the electronic device 420, reference may be made to the electronic device 300 described with respect to FIG. 3, and details are not repeated here. In some examples, the point cloud points mentioned above may be any point cloud points in the point cloud data acquired by the laser radar. Lidar 410 may be a traditional mechanical lidar, a solid-state or semi-solid-state lidar, or any other suitable lidar device. In one embodiment, the lidar is configured to sense external environmental information, such as distance information, azimuth information, reflected intensity information and velocity information of environmental targets. A point cloud point may include at least one kind of the external environmental information measured by the lidar.
In one implementation, the laser radar may detect the distance from the detected object to the laser radar by measuring the time of flight (TOF) of light propagating between the laser radar and the detected object. Alternatively, the laser radar may detect this distance by other techniques, such as a ranging method based on phase shift measurement or a ranging method based on frequency shift measurement, which is not limited herein.
For ease of understanding, the structure of a lidar according to an embodiment of the present invention is described in detail below with reference to FIGS. 5 and 6. This lidar is merely exemplary, and other suitable lidar devices are also applicable to the present application.
First, the lidar referred to herein will be described by way of example with reference to lidar 500 shown in fig. 5.
As shown in fig. 5, laser radar 500 includes a transmission circuit 510, a reception circuit 520, a sampling circuit 530, and an arithmetic circuit 540.
The transmit circuitry 510 may transmit a sequence of light pulses (e.g., a sequence of laser pulses). The receiving circuit 520 may receive the optical pulse train reflected by the detected object, perform photoelectric conversion on the optical pulse train to obtain an electrical signal, process the electrical signal, and output the electrical signal to the sampling circuit 530. The sampling circuit 530 may sample the electrical signal to obtain a sampling result. Arithmetic circuit 540 may determine the distance, i.e., the depth, between laser radar 500 and the detected object based on the sampling result of sampling circuit 530.
Optionally, the laser radar 500 may further include a control circuit 550, and the control circuit 550 may implement control on other circuits, for example, may control an operating time of each circuit and/or perform parameter setting on each circuit, and the like.
It should be understood that, although the laser radar shown in fig. 5 includes one transmitting circuit, one receiving circuit, one sampling circuit, and one arithmetic circuit, which together emit one light beam for detection, the embodiment of the present application is not limited thereto: the number of any one of the transmitting circuit, the receiving circuit, the sampling circuit, and the arithmetic circuit may also be at least two, in which case at least two light beams are emitted in the same direction or in different directions, and may be emitted simultaneously or at different times. In one example, the light emitting chips in the at least two transmitting circuits are packaged in the same module. For example, each transmitting circuit includes a laser emitting chip, and the dies of the laser emitting chips in the at least two transmitting circuits are packaged together and accommodated in the same packaging space.
In some implementations, in addition to the circuits shown in fig. 5, the laser radar 500 may further include a scanning module configured to change the propagation direction of at least one laser pulse sequence emitted from the transmitting circuit before it exits the lidar, so as to scan the field of view.
The module including the transmitting circuit 510, the receiving circuit 520, the sampling circuit 530, and the arithmetic circuit 540, or the module including the transmitting circuit 510, the receiving circuit 520, the sampling circuit 530, the arithmetic circuit 540, and the control circuit 550 may be referred to as a ranging module, which may be independent of other modules, for example, a scanning module.
The laser radar may adopt a coaxial optical path, that is, the light beam emitted by the laser radar and the reflected return light share at least part of the optical path within the laser radar. For example, at least one laser pulse sequence emitted by the transmitting circuit exits after its propagation direction is changed by the scanning module, and the laser pulse sequence reflected by the detected object passes through the scanning module before reaching the receiving circuit. Alternatively, the laser radar may adopt an off-axis optical path, that is, the emitted light beam and the return light are transmitted along different optical paths within the laser radar. Fig. 6 shows a schematic diagram of an embodiment of the lidar of the present invention employing a coaxial optical path.
Lidar 600 includes a ranging module 610. The ranging module 610 includes a transmitter 603 (which may include the transmitting circuit described above), a collimating element 604, a detector 605 (which may include the receiving circuit, sampling circuit, and arithmetic circuit described above), and an optical path changing element 606. The ranging module 610 is configured to emit a light beam, receive return light, and convert the return light into an electrical signal. The transmitter 603 may be configured to emit a sequence of light pulses; in one embodiment, the transmitter 603 may emit a sequence of laser pulses. Optionally, the laser beam emitted by the transmitter 603 is a narrow-bandwidth beam with a wavelength outside the visible range. The collimating element 604 is disposed on the exit optical path of the transmitter and is configured to collimate the light beam emitted from the transmitter 603 into parallel light directed to the scanning module. The collimating element also converges at least a portion of the return light reflected by the detected object. The collimating element 604 may be a collimating lens or another element capable of collimating a light beam.
In the embodiment shown in fig. 6, the transmit and receive optical paths within the lidar are combined by the optical path changing element 606 before the collimating element 604, so that the transmit and receive optical paths can share the same collimating element, making the optical path more compact. In other implementations, the transmitter 603 and the detector 605 may use their own collimating elements, and the optical path changing element 606 may be disposed on the optical path after the collimating elements.
In the embodiment shown in fig. 6, since the beam aperture of the light emitted from the transmitter 603 is small while the beam aperture of the return light received by the laser radar is large, the optical path changing element can be a small-area mirror that combines the transmit and receive optical paths. In other implementations, the optical path changing element may instead be a mirror with a through hole, where the through hole transmits the outgoing light from the transmitter 603 and the mirror reflects the return light to the detector 605. Compared with using a small mirror, this reduces the blocking of the return light by the mount of the small mirror.
In the embodiment shown in fig. 6, the optical path changing element is offset from the optical axis of the collimating element 604. In other implementations, the optical path changing element may also be located on the optical axis of the collimating element 604.
Lidar 600 also includes a scanning module 602. The scanning module 602 is disposed on the outgoing light path of the distance measuring module 610, and the scanning module 602 is configured to change the transmission direction of the collimated light beam 619 outgoing from the collimating element 604, project the collimated light beam to the external environment, and project the return light to the collimating element 604. The return light is focused by collimating element 604 onto detector 605.
In one embodiment, the scanning module 602 may include at least one optical element for changing the propagation path of the light beam, where the optical element may change the propagation path by reflecting, refracting, or diffracting the light beam. For example, the scanning module 602 includes a lens, a mirror, a prism, a galvanometer, a grating, a liquid crystal, an Optical Phased Array, or any combination thereof. In one example, at least a portion of the optical elements is moved, for example by a driving module, and the moving optical element reflects, refracts, or diffracts the light beam to different directions at different times. In some embodiments, multiple optical elements of the scanning module 602 may rotate or oscillate about a common axis 609, and each rotating or oscillating optical element continuously changes the propagation direction of the incident light beam. In one embodiment, the multiple optical elements of the scanning module 602 may rotate at different rotational speeds or oscillate at different speeds. In another embodiment, at least some of the optical elements of the scanning module 602 may rotate at substantially the same rotational speed. In some embodiments, the multiple optical elements of the scanning module may also rotate about different axes, and may rotate or oscillate in the same direction or in different directions, which is not limited here.
In one embodiment, the scan module 602 includes a first optical element 614 and a driver 616 coupled to the first optical element 614, the driver 616 configured to drive the first optical element 614 to rotate about a rotation axis 609, causing the first optical element 614 to change the direction of the collimated light beam 619. The first optical element 614 projects the collimated beam 619 into a different direction. In one embodiment, the angle between the direction of the collimated light beam 619 as altered by the first optical element and the axis of rotation 609 changes as the first optical element 614 rotates. In one embodiment, the first optical element 614 includes a pair of opposing non-parallel surfaces through which the collimated light beam 619 passes. In one embodiment, the first optical element 614 comprises a prism having a thickness that varies along at least one radial direction. In one embodiment, the first optical element 614 comprises a wedge prism that refracts the collimated beam 619.
In one embodiment, the scanning module 602 further includes a second optical element 615 that rotates about the rotation axis 609 at a rotational speed different from that of the first optical element 614. The second optical element 615 is used to change the direction of the light beam projected by the first optical element 614. In one embodiment, the second optical element 615 is coupled to another driver 617, and the driver 617 drives the second optical element 615 to rotate. The first optical element 614 and the second optical element 615 may be driven by the same or different drivers, so that their rotational speeds and/or rotation directions differ and the collimated light beam 619 is projected to different directions in the ambient space, allowing a larger spatial range to be scanned. In one embodiment, a controller 618 controls the drivers 616 and 617 to drive the first optical element 614 and the second optical element 615, respectively. The rotational speeds of the first optical element 614 and the second optical element 615 may be determined according to the region and pattern to be scanned in an actual application. The drivers 616 and 617 may include motors or other drives.
In one embodiment, the second optical element 615 includes a pair of opposing non-parallel surfaces through which the light beam passes. In one embodiment, the second optical element 615 includes a prism having a thickness that varies along at least one radial direction. In one embodiment, the second optical element 615 includes a wedge prism.
In one embodiment, the scanning module 602 further includes a third optical element (not shown) and a driver for driving the third optical element to move. Optionally, the third optical element includes a pair of opposing non-parallel surfaces through which the light beam passes. In one embodiment, the third optical element includes a prism having a thickness that varies along at least one radial direction. In one embodiment, the third optical element includes a wedge prism. At least two of the first, second, and third optical elements rotate at different rotational speeds and/or in different rotational directions.
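To illustrate why wedge prisms rotating at different speeds sweep the field of view in varying directions, the following sketch uses a simplified small-angle model in which each prism contributes a fixed angular deflection that rotates with the prism; the model, parameter values, and function name are assumptions made here for illustration and do not describe the actual optical design.

```python
import numpy as np

def risley_scan_directions(delta1_deg, delta2_deg, omega1_hz, omega2_hz,
                           duration_s=1.0, rate_hz=10000.0):
    """Approximate scan pattern of two wedge prisms rotating about a common axis.

    Each prism is modeled as deflecting the beam by a fixed small angle
    (delta1, delta2) in the direction given by its instantaneous rotation
    angle; the total deflection is the vector sum of the two contributions.
    Different speeds and prism angles therefore trace different patterns.
    """
    t = np.arange(0.0, duration_s, 1.0 / rate_hz)
    a1 = 2.0 * np.pi * omega1_hz * t          # rotation angle of prism 1
    a2 = 2.0 * np.pi * omega2_hz * t          # rotation angle of prism 2
    d1, d2 = np.radians(delta1_deg), np.radians(delta2_deg)
    # Beam direction expressed as a 2-D angular offset from the rotation axis.
    x = d1 * np.cos(a1) + d2 * np.cos(a2)
    y = d1 * np.sin(a1) + d2 * np.sin(a2)
    return x, y

# Prisms spinning at different (hypothetical) speeds sweep the field of view
# non-repetitively, which is the behavior described for the scanning module.
x, y = risley_scan_directions(delta1_deg=10, delta2_deg=10,
                              omega1_hz=72.0, omega2_hz=-51.3)
```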
Rotation of the optical elements in the scanning module 602 can project light to different directions, such as direction 611 and direction 613, thereby scanning the space around the laser radar 600. Fig. 7 is a schematic diagram of a scanning pattern of the laser radar 600. According to the laser radar 600 of the embodiment of the present invention, the scanning density of the scanning module varies along the time axis as the integration time accumulates: the scanning trajectory changes over time and the scanning density gradually increases, so that different parts of the target object are scanned in different frames of point cloud data acquired by the laser radar.
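As an illustration of how point cloud data accumulated over the integration time can be superimposed, the following sketch merges several frames into a common coordinate frame, assuming the pose of each frame is known; the function name and its inputs are hypothetical and are not the disclosed processing pipeline.

```python
import numpy as np

def accumulate_frames(frames, poses):
    """Superimpose several point cloud frames in a common coordinate frame.

    frames: list of (N_i, 3) arrays of points in the lidar coordinate system.
    poses:  list of (R, t) pairs, where R is a 3x3 rotation matrix and t a
            3-vector taking lidar coordinates into the common frame at that time.
    Returns a single (sum N_i, 3) array; as integration time grows, the
    accumulated cloud covers more of the target at higher density.
    """
    merged = []
    for pts, (R, t) in zip(frames, poses):
        merged.append(pts @ R.T + t)   # rigid transform applied to every point
    return np.vstack(merged)
```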
It will be appreciated that as the speed of the optical elements within the scanning module changes, the scanning pattern will also change.
When the light projected by the scanning module 602 strikes the object 601, a part of the light is reflected by the object 601 to the laser radar 600 in a direction opposite to the projected light. The return light 612 reflected by the object 601 passes through the scanning module 602 and then enters the collimating element 604.
A detector 605 is placed on the same side of the collimating element 604 as the emitter 603, the detector 605 being arranged to convert at least part of the return light passing through the collimating element 604 into an electrical signal.
In one embodiment, each optical element is coated with an antireflection coating. Optionally, the thickness of the anti-reflective coating is equal to or close to the wavelength of the light beam emitted by the emitter 603, which can increase the intensity of the transmitted light beam.
In one embodiment, a filter layer is coated on the surface of a component in the laser radar that lies on the light beam propagation path, or a filter is disposed on the light beam propagation path, to transmit at least the wavelength band of the light beam emitted by the transmitter and reflect other wavelength bands, so as to reduce the noise that ambient light introduces to the receiver.
In some embodiments, the transmitter 603 may include a laser diode that emits nanosecond-scale laser pulses. Further, the laser pulse reception time may be determined, for example, by detecting the rising edge time and/or the falling edge time of the electrical signal pulse. In this manner, the laser radar 600 may calculate the TOF using the pulse reception time and the pulse emission time, thereby determining the distance from the object 601 to the laser radar 600.
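The following sketch shows one possible way to turn rising-edge detection on a sampled echo waveform into a distance estimate; the threshold, sample rate, time reference, and function name are assumptions for illustration only and are not the actual signal processing of the laser radar.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def echo_distance(samples, sample_rate_hz, emit_time_s, threshold):
    """Estimate target distance from a sampled echo waveform.

    The reception time is taken as the first sample where the signal crosses
    the threshold on a rising edge, assuming the sample stream starts at time
    zero of the same clock used for emit_time_s. The distance then follows
    from TOF = t_rx - t_emit. Returns None if no echo exceeds the threshold.
    """
    samples = np.asarray(samples)
    above = samples >= threshold
    rising = np.where(above[1:] & ~above[:-1])[0]
    if rising.size == 0:
        return None
    receive_time_s = (rising[0] + 1) / sample_rate_hz
    tof = receive_time_s - emit_time_s
    return C * tof / 2.0
```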
In one embodiment, the movable platform 400 further comprises a movable platform body. In some embodiments, the movable platform includes at least one of an unmanned automobile, a remote control car, and a robot. When the movable platform is an unmanned automobile, the movable platform body is the body of the unmanned automobile. When the movable platform is a remote control car, the movable platform body is the body of the remote control car. When the movable platform is a robot, the movable platform body is the robot body.
In addition, an embodiment of the present invention further provides a computer storage medium on which a computer program is stored. When the computer program is executed by a processor, the steps of the absolute pose determination method of the embodiments of the present invention can be implemented. The computer storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable Compact Disc Read Only Memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
In summary, the absolute pose determination method, the electronic device, the movable platform and the computer storage medium in the embodiments of the present invention construct the base map in advance according to the point cloud data, and match the local map generated according to the point cloud data acquired in real time with the base map to determine the current absolute pose.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired connection (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless connection (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disc (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.

Claims (25)

  1. A laser radar-based absolute pose determination method is characterized by comprising the following steps:
    loading a pre-constructed basic map under a current scene, wherein the basic map comprises a plurality of key frame maps, each key frame map corresponds to a key frame pose, and the key frame map contains information of first point cloud data collected by a first laser radar under the key frame pose;
    acquiring second point cloud data by a second laser radar carried on the movable platform under the current pose, and obtaining a local map under the current pose according to the second point cloud data;
    and matching the local map with the plurality of key frame maps to determine a key frame map matched with the local map, and determining the current pose of the second laser radar according to the key frame pose corresponding to the key frame map.
  2. The method of claim 1, wherein constructing the base map comprises:
    carrying a first laser radar by a movable platform to move under the current scene, and acquiring first point cloud data by the first laser radar in the moving process;
    and constructing the basic map according to the first point cloud data.
  3. The method of claim 2, wherein said building the base map from the first point cloud data comprises:
    and superposing the first point cloud data within a period of time before and after a key frame moment at the key frame moment, and generating the key frame map according to the first point cloud data superposed at the key frame moment, wherein a key frame pose corresponding to the key frame map is the pose of the first laser radar at the key frame moment.
  4. The method of claim 3, wherein superimposing the point cloud data over the period of time at the key frame time comprises:
    and converting the first point cloud data in the period of time into a point cloud coordinate system of the key frame time, and overlapping.
  5. The method of any one of claims 1-4, wherein matching the local map at the current pose with the plurality of keyframe maps comprises:
    and extracting image features of the local map and the key frame map for matching, and if the matching degree is greater than a preset threshold value, determining that the local map is matched with the key frame map.
  6. The method of claim 5, wherein the determining the current pose of the second lidar from the keyframe pose corresponding to the keyframe map comprises:
    extracting matched feature point pairs from the local map and the key frame map, solving a pose transformation relation of the local map relative to the key frame map according to the three-dimensional space information of the feature point pairs, and determining the current pose according to the pose transformation relation and the key frame pose.
  7. The method of any one of claims 1-6, wherein the keyframe map comprises a keyframe reflectivity map constructed from reflectivity information of the first point cloud data, the local map comprises a local reflectivity map constructed from reflectivity information of the second point cloud data, and the matching the local map to the plurality of keyframe maps comprises:
    and matching the key frame reflectivity map with the local reflectivity map.
  8. The method of one of claims 1-6, wherein the keyframe map comprises a keyframe depth map constructed from depth information of the first point cloud data, the local map comprises a local depth map constructed from depth information of the second point cloud data, and the matching the local map to the plurality of keyframe maps comprises:
    matching the keyframe depth map with the local depth map.
  9. The method of one of claims 1-8, wherein the base map further comprises a global point cloud map for a current scene generated from the first point cloud data, the method further comprising:
    generating a local point cloud map under a current view angle according to the second point cloud data;
    and matching the local point cloud map with the global point cloud map to determine the current pose.
  10. The method of claim 9, wherein the determining the current pose comprises:
    selecting a plurality of candidate keyframe maps in the keyframe map;
    converting the local point cloud map into a world coordinate system according to the key frame poses corresponding to the candidate key frame map;
    matching the converted local point cloud map and the global point cloud map, and selecting a target key frame map from the candidate key frame maps according to a matching result;
    and converting the local point cloud map into a world coordinate system according to the keyframe pose information corresponding to the target keyframe map, and optimizing the keyframe pose corresponding to the target keyframe map according to the characteristic distance between the local point cloud map and the global point cloud map so as to determine the current pose.
  11. The method of claim 10, wherein said selecting a plurality of candidate keyframe maps among said keyframe maps comprises:
    if a highly reflective object exists in the current local map, taking a key frame map containing the highly reflective object as the candidate key frame map.
  12. The method of one of claims 9-11, wherein the global point cloud map comprises a global plane point map and a global edge point map, the local point cloud map comprises a local plane point map and a local edge point map, and the matching the local point cloud map with the global point cloud map comprises:
    and matching the local plane point map with the global plane point map, and matching the local edge point map with the global edge point map.
  13. The method of claim 12, wherein constructing the local plane point map comprises:
    acquiring a first predetermined number of point cloud points meeting a first preset condition from the second point cloud data according to a time sequence to serve as plane point candidate points;
    obtaining a final plane point extraction result of the current frame point cloud data based on the determined plane point candidate points;
    wherein the first preset condition comprises: the spatial distribution of the group of point cloud points is approximately a straight line, and the group of point cloud points are approximately centrosymmetric when taking the middle point as the center.
  14. The method of claim 12, wherein constructing the local edge point map comprises: extracting edge points from the second point cloud data to construct the local edge point map, wherein the edge points comprise surface-surface intersecting edge points and jump edge points, the surface-surface intersecting edge points correspond to points on an intersecting line of surfaces intersecting in the three-dimensional space, and the jump edge points correspond to points on an edge of an isolated surface in the three-dimensional space.
  15. The method of any one of claims 1-14, further comprising:
    planning a path of the movable platform according to the current pose;
    incrementally determining pose information based on the current pose by using a simultaneous localization and mapping (SLAM) algorithm during movement of the movable platform.
  16. The method of any one of claims 1-15, further comprising:
    and updating the basic map according to the second point cloud data.
  17. An electronic device, comprising a storage means and a processor, wherein the storage means is configured to store program code; the processor is configured to execute the program code and, when the program code is executed, to perform:
    loading a pre-constructed basic map under a current scene, wherein the basic map comprises a plurality of key frame maps, each key frame map corresponds to a key frame pose, and the key frame map contains information of first point cloud data collected by a first laser radar under the key frame pose;
    acquiring second point cloud data by a second laser radar carried on the movable platform under the current pose, and obtaining a local map under the current pose according to the second point cloud data;
    and matching the local map with the plurality of key frame maps to determine a key frame map matched with the local map, and determining the current pose of the second laser radar according to the key frame pose corresponding to the key frame map.
  18. The electronic device of claim 17, wherein the keyframe map comprises a keyframe reflectivity map constructed from reflectivity information of the first point cloud data, the local map comprises a local reflectivity map constructed from reflectivity information of the second point cloud data, the matching the local map to the plurality of keyframe maps comprises:
    and matching the key frame reflectivity map with the local reflectivity map.
  19. The electronic device of claim 17, wherein the keyframe map comprises a keyframe depth map constructed from depth information of the first point cloud data, the local map comprises a local depth map constructed from depth information of the second point cloud data, the matching the local map to the plurality of keyframe maps comprises:
    matching the keyframe depth map with the local depth map.
  20. The electronic device of one of claims 17-19, wherein the base map further comprises a global point cloud map under the current scene generated from the first point cloud data, and the processor is further configured to perform:
    generating a local point cloud map under a current view angle according to the second point cloud data;
    and matching the local point cloud map with the global point cloud map to determine the current pose.
  21. The electronic device of claim 20, wherein the determining the current pose comprises:
    selecting a plurality of candidate keyframe maps in the keyframe map;
    converting the local point cloud map into a world coordinate system according to the key frame poses corresponding to the candidate key frame map;
    matching the converted local point cloud map and the global point cloud map, and selecting a target key frame map from the candidate key frame maps according to a matching result;
    and converting the local point cloud map into a world coordinate system according to the keyframe pose information corresponding to the target keyframe map, and optimizing the keyframe pose corresponding to the target keyframe map according to the characteristic distance between the local point cloud map and the global point cloud map so as to determine the current pose.
  22. The electronic device of claim 21, wherein said selecting a plurality of candidate keyframe maps in the keyframe map comprises:
    if a highly reflective object exists in the current local map, taking a key frame map containing the highly reflective object as the candidate key frame map.
  23. The electronic device of one of claims 20-22, wherein the global point cloud map comprises a global plane point map and a global edge point map, the local point cloud map comprises a local plane point map and a local edge point map, and the matching the local point cloud map with the global point cloud map comprises:
    and matching the local plane point map with the global plane point map, and matching the local edge point map with the global edge point map.
  24. A movable platform carrying a lidar, the movable platform further comprising an electronic device according to any one of claims 17 to 23.
  25. A computer storage medium on which a computer program is stored, the computer program implementing the steps of the absolute pose determination method according to any one of claims 1 to 16 when executed by a processor.


