CN114299147A - Positioning method, positioning device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN114299147A
Authority
CN
China
Prior art keywords
point cloud
positioning
frame
candidate
reference frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111641489.7A
Other languages
Chinese (zh)
Inventor
廖方波
李秋成
何祎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd

Abstract

The specification discloses a positioning method, a positioning device, a storage medium and electronic equipment, applicable to the field of automatic driving. The method comprises: acquiring coarse positioning information of an object to be positioned; collecting a point cloud frame around the object to be positioned as a positioning frame; taking a plurality of candidate positions around the coarse position in the coarse positioning information as sampling positions and sampling in a point cloud map constructed in advance; comparing the positioning frame with each candidate reference frame obtained through sampling; selecting a target reference frame according to the matching degree between each candidate reference frame and the positioning frame; and positioning the object to be positioned based on the sampling pose of the target reference frame in the point cloud map. After several rounds of sampling and comparison, the registration range is narrowed to the target reference frame, and the positioning frame is then registered with the target reference frame alone, so that the pose information of the object to be positioned can be determined based on the sampling pose of the target reference frame in the point cloud map, reducing the computing resources required for registration.

Description

Positioning method, positioning device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of positioning technologies, and in particular, to a positioning method, an apparatus, a storage medium, and an electronic device.
Background
At present, unmanned devices equipped with detectors such as lidar can collect surrounding point cloud data in real time and determine their own pose by registering that data against a point cloud map established in advance.
Point cloud registration aligns the point cloud around the unmanned device, through rotation and translation, into the same coordinate system as the point cloud map, so that the pose of the unmanned device can be located according to the determined rotation-and-translation matrix.
However, the point cloud map usually contains a large amount of point cloud data, and registering against it consumes substantial computing resources, so how to narrow the registration range in the point cloud map for the unmanned device is a problem to be solved at present.
Disclosure of Invention
The present specification provides a positioning method, an apparatus, a storage medium, and an electronic device, so as to partially solve the above problems in the prior art.
The technical scheme adopted by the specification is as follows:
the present specification provides a positioning method, comprising:
acquiring coarse positioning information of an object to be positioned, wherein the coarse positioning information at least comprises a coarse position of the object to be positioned; collecting a point cloud frame around an object to be positioned as a positioning frame;
determining a plurality of candidate positions around the coarse position; for each candidate position, taking the candidate position as a sampling position in a point cloud map constructed in advance, sampling in the point cloud map to generate a plurality of point cloud frames, and taking the generated point cloud frames as candidate reference frames, wherein the candidate reference frames generated with the same candidate position as the sampling position have different sampling postures;
aiming at each candidate reference frame, comparing the point cloud contained in the candidate reference frame with the point cloud contained in the positioning frame, and determining the matching degree of the candidate reference frame and the positioning frame;
and selecting a target reference frame from the candidate reference frames according to the matching degree of the candidate reference frames and the positioning frame, and determining the pose information of the object to be positioned based on the corresponding sampling pose of the target reference frame in the point cloud map.
Optionally, taking the candidate position as a sampling position in the point cloud map, and sampling in a point cloud map constructed in advance, specifically including:
sampling in a pre-constructed point cloud map by taking the coarse position as the sampling position in the point cloud map to generate a plurality of point cloud frames, and taking the generated point cloud frames as visual angle reference frames, wherein the sampling postures of the visual angle reference frames are different from one another;
aiming at each visual angle reference frame, comparing the point cloud contained in the visual angle reference frame with the point cloud contained in the positioning frame, and determining the matching degree of the visual angle reference frame and the positioning frame;
determining at least one candidate attitude according to the matching degree of each visual angle reference frame and the positioning frame and the corresponding sampling attitude of each visual angle reference frame in the point cloud map;
and for each candidate gesture, taking the candidate position as a sampling position in the point cloud map, taking the candidate gesture as a sampling gesture in the point cloud map, and sampling in a point cloud map constructed in advance.
Optionally, the method further comprises:
determining a rough posture included in the coarse positioning information of the object to be positioned as a designated posture; and/or,
when the object to be positioned is an unmanned vehicle, determining the lane line direction of the road where the object to be positioned is located, and taking the posture in which the included angle between the heading of the object to be positioned and the lane line direction is zero as the designated posture;
determining at least one candidate attitude according to the matching degree of each visual angle reference frame and the positioning frame and the sampling attitude of each visual angle reference frame in the point cloud map, and specifically comprising the following steps:
and determining the visual angle reference frame with the highest matching degree with the positioning frame, and taking both the sampling posture of that visual angle reference frame in the point cloud map and the designated posture as candidate postures.
Optionally, selecting a target reference frame from the candidate reference frames according to the matching degree between the candidate reference frames and the positioning frame, specifically including:
judging whether each candidate reference frame comprises at least one candidate reference frame of which the matching degree with the positioning frame is greater than a preset matching degree threshold value or not according to the matching degree of each candidate reference frame and the positioning frame;
and if so, taking the candidate reference frame with the greatest matching degree with the positioning frame among the candidate reference frames as the target reference frame; if not, re-acquiring the coarse positioning information of the object to be positioned and re-collecting the point cloud frame around the object to be positioned.
Optionally, determining pose information of the object to be positioned based on a sampling pose corresponding to the target reference frame in the point cloud map, specifically including:
registering the point cloud contained in the target reference frame and the point cloud contained in the positioning frame to obtain a transformation relation between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame;
and determining the pose information of the object to be positioned according to the sampling pose of the target reference frame in the point cloud map and the transformation relation between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame.
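As a minimal sketch of the composition described above (assuming 2-D poses for brevity; the function names and the numbers in the example are hypothetical, not from the patent), the pose of the object in the map is the sampling pose of the target reference frame composed with the registration transform:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D pose matrix: rotation by theta, translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def object_pose_in_map(T_map_sample, T_ref_loc):
    """Compose the sampling pose of the target reference frame (sample frame
    -> map frame) with the registration result (positioning frame -> target
    reference frame) to obtain the object's pose in the map frame."""
    return T_map_sample @ T_ref_loc

# Hypothetical numbers: the target reference frame was sampled at (10, 5)
# with a 90-degree heading, and registration places the object 0.2 m ahead.
T = object_pose_in_map(se2(10.0, 5.0, np.pi / 2), se2(0.2, 0.0, 0.0))
x, y = T[0, 2], T[1, 2]
theta = np.arctan2(T[1, 0], T[0, 0])
```

In practice the same composition would be done with full SE(3) matrices, since the sampling pose and the registration result are both six-degree-of-freedom transforms.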
Optionally, before determining the pose information of the object to be positioned, the method further includes:
judging whether the difference between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame is larger than a preset point cloud difference threshold value or not according to the transformation relation between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame;
if so, re-acquiring the coarse positioning information of the object to be positioned and re-collecting the point cloud frame around the object to be positioned; if not, determining the pose information of the object to be positioned based on the corresponding sampling pose of the target reference frame in the point cloud map and the transformation relationship between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame.
Optionally, the method further comprises:
determining a specified number of pieces of pose information of the object to be positioned when the movement speed of the object to be positioned is lower than a preset speed threshold;
judging whether the differences among the determined pieces of pose information of the object to be positioned are greater than a preset pose difference threshold;
and if so, re-determining the specified number of pieces of pose information of the object to be positioned; if not, determining the target pose of the object to be positioned according to the determined pose information.
The present specification provides a positioning device comprising:
the coarse positioning module is used for acquiring coarse positioning information of an object to be positioned, wherein the coarse positioning information at least comprises a coarse position of the object to be positioned, and for collecting a point cloud frame around the object to be positioned as a positioning frame;
the sampling module is used for determining a plurality of candidate positions around the coarse position; for each candidate position, taking the candidate position as a sampling position in the point cloud map, sampling in the point cloud map constructed in advance to generate a plurality of point cloud frames, and taking the generated point cloud frames as candidate reference frames, wherein the candidate reference frames generated with the same candidate position as the sampling position have different sampling postures;
the matching module is used for comparing the point cloud contained in the candidate reference frame with the point cloud contained in the positioning frame aiming at each candidate reference frame and determining the matching degree of the candidate reference frame and the positioning frame;
and the pose determining module is used for selecting a target reference frame from the candidate reference frames according to the matching degree of the candidate reference frames and the positioning frame, and determining pose information of the object to be positioned on the basis of the corresponding sampling pose of the target reference frame in the point cloud map.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described positioning method.
The present specification provides an unmanned device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above positioning method when executing the program.
The technical scheme adopted by the specification can achieve the following beneficial effects:
In the positioning method provided by the present specification, coarse positioning information of an object to be positioned is obtained, and a point cloud frame around the object to be positioned is collected as a positioning frame; a plurality of candidate positions around the coarse position in the coarse positioning information are taken as sampling positions, and sampling is performed in a point cloud map constructed in advance; the positioning frame is compared with each candidate reference frame obtained by sampling, a target reference frame is selected according to the matching degree between each candidate reference frame and the positioning frame, and the object to be positioned is positioned based on the sampling pose of the target reference frame in the point cloud map.
After several rounds of sampling and comparison, the registration range is narrowed to the target reference frame, and the positioning frame is registered with the target reference frame alone, so that the pose information of the object to be positioned can be determined based on the sampling pose of the target reference frame in the point cloud map, reducing the computing resources required for registration.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and together with the description serve to explain the specification; they are not intended to limit it. In the drawings:
fig. 1 is a schematic flow chart of a positioning method in the present specification;
FIG. 2A is a schematic diagram of a candidate position determination method provided herein;
FIG. 2B is a schematic diagram of another candidate position determination method provided herein;
FIG. 3 is a schematic view of a positioning device provided herein;
fig. 4 is a schematic structural diagram of the unmanned device provided in this specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure clearer, the technical solutions of the present disclosure will be clearly and completely described below with reference to specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the described embodiments are only a few, and not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without creative effort belong to the protection scope of the present specification.
Besides matching the point cloud data collected in real time against all the point cloud data in the entire point cloud map, when the object to be positioned carries a high-precision Global Navigation Satellite System (GNSS), the pose data returned by the GNSS can be used directly as a sampling pose in the coordinate system of the point cloud map, and the point cloud data collected in real time by the object to be positioned can be registered with the point cloud data sampled from the point cloud map to locate the pose of the object to be positioned.
The point cloud map describes a scene in a positioning area in a point cloud form, so that sampling in the point cloud map with a certain sampling pose means that the scene in the point cloud map is observed in the point cloud map with the sampling pose, and the observed scene described by the point cloud is generated into a frame of point cloud data.
However, registration between two sets of point cloud data places a high requirement on their similarity; that is, the deviation between the sampling pose and the pose at which the object to be positioned collected its point cloud data cannot be too large. If the accuracy of the pose data output by the GNSS cannot meet the requirement of registration, registration often fails, and the object to be positioned cannot be positioned.
In practical applications, the positioning accuracy of the GNSS carried by an object to be positioned is usually affected by its environment. For example, when the object travels in an urban canyon or a heavily shielded environment such as under dense trees, the GNSS signal is weak, so the accuracy of the pose output by the GNSS is also low. In this case, obtaining more accurate pose data of the object to be positioned generally incurs extra operating cost, such as moving the unmanned device to an open environment where it can receive GNSS signals before positioning.
To solve the above problem, the present specification provides a positioning method in which, even when only low positioning accuracy is required of the positioning device, registration is performed within a small range of the point cloud map, and the object to be positioned can then be positioned.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a positioning method in this specification, which specifically includes the following steps:
s100: acquiring coarse positioning information of an object to be positioned, wherein the coarse positioning information at least comprises a coarse position of the object to be positioned; and collecting a point cloud frame around the object to be positioned as a positioning frame.
The positioning method provided by the specification aims to realize positioning of an object to be positioned, namely determining the pose of the object to be positioned. The method can be used for carrying out initialization positioning on the object to be positioned on the point cloud map.
The object to be positioned may be an unmanned device, including an unmanned aerial vehicle, an unmanned vehicle, and the like, where the unmanned vehicles described in this specification include autonomous vehicles and vehicles with driver-assistance functions. The unmanned vehicle may be a delivery vehicle applied to the delivery field. By way of example only, the following description takes an unmanned vehicle as the object to be positioned.
The execution body of the positioning method provided by the present specification may be the object to be positioned itself, or another control device that can exchange information with, and control, the object to be positioned. The control device may be a server or a terminal device; when it is a server, this specification does not limit whether it is a distributed or a clustered server, and when it is a terminal device, it may be any existing terminal device, such as a notebook computer or a mobile phone, which this specification does not limit. The following portions of the embodiments take the object to be positioned itself as the execution body, by way of example only.
The object to be positioned is equipped with at least a first device and a second device, where the first device is a positioning device, such as a GNSS, and the second device is a detection device, such as a radar, specifically a lidar. In the embodiment of the present disclosure, the first device is a GNSS, and the second device is a lidar.
In the positioning method provided in this specification, an object to be positioned acquires coarse positioning information through a first device, where the coarse positioning information at least includes a coarse position of the object to be positioned, and the object to be positioned can acquire point cloud data around itself in real time as a positioning frame through a second device.
In an embodiment of this specification, the coarse positioning information obtained by the object to be positioned at least includes its coarse position, and the positioning frame is data describing points on the surfaces of the three-dimensional objects around the object, obtained by processing the signals reflected back to the second device. Specifically, when the second device is a lidar mounted on the object to be positioned, the lidar generally includes a transmitter that emits a light source and a receiver that receives the reflected beams. In operation, the transmitter emits a laser beam, the surface of any object irradiated by the beam reflects it, and the receiver collects the laser points returned from those surfaces; then, from information such as the propagation time of each laser point, the three-dimensional coordinates of the surface point that returned it can be accurately calculated. Since the point cloud consists of surface points obtained from these laser returns, each point in the point cloud can be considered to carry position data.
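The time-of-flight calculation sketched above can be illustrated as follows (a simplification: real lidars also apply beam calibration and per-channel corrections, and the function name here is invented for illustration):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def lidar_point(round_trip_time_s, azimuth_rad, elevation_rad):
    """Convert one laser return into a 3-D point in the sensor frame.
    The range is half the round-trip distance travelled by the pulse."""
    r = C * round_trip_time_s / 2.0
    return np.array([
        r * np.cos(elevation_rad) * np.cos(azimuth_rad),
        r * np.cos(elevation_rad) * np.sin(azimuth_rad),
        r * np.sin(elevation_rad),
    ])

# A pulse that returns after the time needed to cover 2 x 10 m
# corresponds to a surface point 10 m straight ahead of the sensor.
p = lidar_point(2 * 10.0 / C, azimuth_rad=0.0, elevation_rad=0.0)
```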
In an embodiment of this specification, the area within a first specified distance of the object to be positioned may be taken as its surroundings (for example, within 20 meters of the object), which this specification does not limit. Of course, since the environment observable around the object is limited by the detection range of the lidar, the environment within the lidar's detection range may also be taken as the surroundings of the object to be positioned.
In an embodiment of the present specification, a pose of the object to be positioned when acquiring the coarse positioning information through the first device is the same as a pose of the object to be positioned when acquiring the surrounding point cloud data through the second device.
In an embodiment of this specification, there is a requirement on the accuracy of the coarse positioning information obtained by the first device: only when that accuracy exceeds a specified first accuracy threshold does the positioning method provided here use the data returned by the first device as the coarse positioning information. For example, when the first device is a GNSS, its accuracy may be regarded as exceeding the first accuracy threshold when its error is no greater than 10 meters.
It should be noted that when the pose returned by the positioning device can be used directly for sampling in the point cloud map, and the sampled point cloud data can be registered directly with the point cloud data collected in real time, the positioning accuracy of the positioning device is greater than the first accuracy threshold.
S102: determining a plurality of candidate positions around the rough position, taking the candidate positions as sampling positions in the point cloud map for each candidate position, sampling in the point cloud map constructed in advance to generate a plurality of point cloud frames, and taking the generated point cloud frames as candidate reference frames, wherein the sampling postures of the candidate reference frames generated by taking the candidate positions as the sampling positions are different.
In the embodiment of the description, the point cloud map describes a scene in the positioning area in a point cloud form, so that sampling in the point cloud map with a certain sampling pose means observing the scene in the point cloud map with the sampling pose in the point cloud map, and generating the observed scene described by the point cloud into a frame of point cloud data. In an embodiment of this specification, a device to be positioned is located within the positioning area.
In this embodiment of the present specification, after the coarse position output by the first device is acquired, a number of candidate positions around the coarse position are determined. The area within a second specified distance of the object to be positioned may be taken as its surroundings (for example, within 20 meters of the object to be positioned); the specific value of the second specified distance is not limited in this specification. Specifically, this specification provides the following two methods of determining candidate positions:
First, with the coarse position as the center, a plurality of concentric circles are determined within a third specified distance of the coarse position, and then a plurality of candidate positions on those concentric circles are determined. For example, fig. 2A shows that after two concentric circles centered on the coarse position of the object to be positioned are determined, eight candidate positions on the concentric circles are determined.
Second, a first specified direction and a second specified direction may be determined, and positions at every fourth specified distance from the coarse position along the first specified direction and the second specified direction are taken as candidate positions; then, for each candidate position so determined, positions at every fifth specified distance from that candidate position along a third specified direction and a fourth specified direction are also taken as candidate positions. The first and second specified directions may be two opposite directions, as may the third and fourth specified directions; the fourth and fifth specified distances may be the same or different, and this specification does not limit how the specified directions and distances are determined. Taking fig. 2B as an example, the solid dot at the center of fig. 2B is the coarse position, and the other unmarked solid dots are candidate positions determined from it; the first, second, third and fourth specified directions may be as shown in fig. 2B.
In addition, the candidate positions around the rough position may be determined by any other existing method, which is not limited in this embodiment of the present disclosure. For convenience of description, the specification will hereinafter take the second manner described above as an example of determining each candidate position.
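The two candidate-position layouts above might be generated as follows (a sketch; the helper names are invented, and the radii, step, and extent stand in for the specified distances, which this description leaves open):

```python
import numpy as np

def candidates_on_circles(coarse_xy, radii, points_per_circle):
    """First method: candidate positions placed on concentric circles
    centered on the coarse position."""
    out = []
    for r in radii:
        for k in range(points_per_circle):
            a = 2.0 * np.pi * k / points_per_circle
            out.append((coarse_xy[0] + r * np.cos(a),
                        coarse_xy[1] + r * np.sin(a)))
    return out

def candidates_on_grid(coarse_xy, step, half_extent):
    """Second method: a regular grid of positions around the coarse
    position, stepping along two pairs of opposite directions."""
    offsets = np.arange(-half_extent, half_extent + step / 2.0, step)
    return [(coarse_xy[0] + dx, coarse_xy[1] + dy)
            for dx in offsets for dy in offsets]

# Two circles of four candidates each, and a 5-by-5 grid at 5 m spacing.
circle_cands = candidates_on_circles((0.0, 0.0), radii=[5.0, 10.0],
                                     points_per_circle=4)
grid_cands = candidates_on_grid((0.0, 0.0), step=5.0, half_extent=10.0)
```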
For each determined candidate position, the candidate position is taken as a sampling position in the point cloud map, sampling is performed in the point cloud map constructed in advance to generate a plurality of point cloud frames, and the generated point cloud frames are taken as candidate reference frames. In an embodiment of the present specification, the sampling postures of the candidate reference frames generated with the same candidate position as the sampling position are different from one another.
The sampling postures of the candidate reference frames when sampling with the candidate position as the sampling position may be determined in any conventional manner, for example, several postures may be specified in advance, and sampling may be performed with the specified postures, and in this case, the postures specified for the candidate positions may be the same or different.
A number of candidate reference frames are thus obtained in any of the above ways. Sampling in the point cloud map with a sampling pose means observing, with that pose, the scene in the positioning area described by the point cloud in the point cloud map, and generating the observed scene described by the point cloud as a candidate reference frame.
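Sampling a reference frame at a given pose can be simulated, in a simplified 2-D form, by expressing the map points in the sample's local frame and keeping those within sensor range (a sketch under those assumptions; a real implementation would also model occlusion and the lidar's scan pattern):

```python
import numpy as np

def sample_reference_frame(map_points, sample_xy, sample_yaw, max_range):
    """Observe the point cloud map from a sampling pose: express every map
    point in the sample's local frame and keep those within max_range."""
    c, s = np.cos(sample_yaw), np.sin(sample_yaw)
    R = np.array([[c, -s],
                  [s,  c]])  # rotation taking the local frame into the map frame
    # Row-vector form of R^T (p - t): transforms map points into the local frame.
    local = (np.asarray(map_points) - np.asarray(sample_xy)) @ R
    keep = np.linalg.norm(local, axis=1) <= max_range
    return local[keep]

# Toy map: two nearby surface points and one beyond a 20 m sensor range.
map_pts = np.array([[1.0, 0.0], [30.0, 0.0], [0.0, 2.0]])
frame = sample_reference_frame(map_pts, (0.0, 0.0), 0.0, max_range=20.0)
```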
S104: and aiming at each candidate reference frame, comparing the point cloud contained in the candidate reference frame with the point cloud contained in the positioning frame, and determining the matching degree of the candidate reference frame and the positioning frame.
In the embodiments of the present specification, both the positioning frame and the candidate reference frames contain point clouds, and each point cloud has its own shape and structural features. The point clouds in the positioning frame can therefore be compared with those in a candidate reference frame, and the matching degree between the positioning frame and the candidate reference frame determined according to the matching degree of their point clouds.
For example, for a pair of point clouds, one in the positioning frame and one in the candidate reference frame, a transformation relationship between the two point clouds may first be found that maximizes their degree of overlap after transformation, where the transformation relationship may be the rotation-and-translation matrix of one point cloud relative to the other. Generally speaking, if a point of the other point cloud exists within the neighborhood of a given point of one point cloud, that pair of points is considered to coincide; the ratio of the number of points in one point cloud that coincide with the other point cloud to all the points in that point cloud may then be taken as the degree of overlap of the pair of point clouds.
For each candidate reference frame, the matching degree between the candidate reference frame and the positioning frame can be determined according to the overlapping degree between each point cloud in the candidate reference frame and each point cloud in the positioning frame, and the higher the overlapping degree between each point cloud in the candidate reference frame and each point cloud in the positioning frame is, the higher the matching degree between the candidate reference frame and the positioning frame is.
Of course, the matching degree between the candidate reference frame and the positioning frame may also be determined in other manners, for example, the matching degree between the candidate reference frame and the positioning frame may be determined according to the similarity between the shape of each point cloud in the candidate reference frame and the shape of each point cloud in the positioning frame. This is not limited by the present description.
S106: and selecting a target reference frame from the candidate reference frames according to the matching degree of the candidate reference frames and the positioning frame, and determining the pose information of the object to be positioned based on the corresponding sampling pose of the target reference frame in the point cloud map.
In this embodiment of the present specification, after determining the matching degree between each candidate reference frame and the positioning frame, a target reference frame may be selected from each candidate reference frame. Specifically, the candidate reference frame with the maximum matching degree with the positioning frame in the candidate reference frames may be used as the target reference frame.
In an embodiment of this specification, before determining the target reference frame, it may further be judged, according to the matching degree between each candidate reference frame and the positioning frame, whether the candidate reference frames include at least one candidate reference frame whose matching degree with the positioning frame is greater than a preset matching degree threshold. If so, the candidate reference frame with the greatest matching degree with the positioning frame is used as the target reference frame; if not, the coarse positioning information of the object to be positioned is acquired again, the point cloud frame around the object to be positioned is re-collected, and the pose of the object to be positioned is determined again in any of the above manners using the positioning method provided by this specification.
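The thresholded selection step can be sketched as follows; the helper name and the use of `None` to signal that coarse positioning should be re-acquired are illustrative assumptions, not from the specification:

```python
def select_target_frame(match_degrees, threshold):
    """match_degrees: matching degree of each candidate reference frame
    with the positioning frame. Returns the index of the target reference
    frame, or None when no candidate's matching degree exceeds the preset
    threshold, signalling that the coarse positioning information and the
    positioning frame should be acquired again."""
    best = max(range(len(match_degrees)), key=lambda i: match_degrees[i])
    if match_degrees[best] <= threshold:
        return None
    return best
```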
Of course, without performing the above determination, the target reference frame having the highest matching degree may be directly selected from the candidate reference frames.
Then, after the target reference frame is selected in any one of the above manners, the pose information of the object to be positioned may be determined based on the sampling pose of the target reference frame corresponding to the point cloud map. Specifically, the pose information of the object to be positioned may be solved according to the difference between the target reference frame and the positioning frame and based on the corresponding sampling pose of the target reference frame in the point cloud map.
Based on the method shown in fig. 1 above, the object to be positioned first acquires its own coarse positioning information and collects a point cloud frame of its surroundings as the positioning frame. Several candidate positions around the coarse position contained in the coarse positioning information are then used as sampling positions to sample in the pre-constructed point cloud map, yielding several candidate reference frames. The positioning frame is compared with each candidate reference frame, and a target reference frame is selected from the candidate reference frames according to the matching degree obtained by the comparison, so that the object to be positioned can be positioned according to the sampling pose of the target reference frame in the point cloud map. It can be seen that, after several rounds of sampling and comparison, only the positioning frame and the target reference frame need to be registered in order to determine the pose information of the object to be positioned based on the sampling pose of the target reference frame in the point cloud map; the registration range is narrowed, and the computing resources required by registration are reduced.
In an embodiment of the present specification, after the target reference frame is determined, pose information of an object to be positioned may be determined based on a sampling pose corresponding to the target reference frame in a point cloud map, and specifically, the pose information of the object to be positioned when the point cloud in the positioning frame is projected to a coordinate system of the point cloud map may be determined according to a difference between the point cloud included in the target reference frame and the point cloud included in the positioning frame.
Further, the point cloud contained in the target reference frame and the point cloud contained in the positioning frame may be registered to obtain a transformation relationship between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame; and determining the pose information of the object to be positioned according to the sampling pose of the target reference frame in the point cloud map and the transformation relation between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame, wherein the transformation relation can be a rotation and translation matrix.
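With homogeneous 4×4 matrices, the composition of the sampling pose with the registration result can be sketched as follows. This is a sketch under the assumption that both are expressed as rotation-translation transforms; the function names are illustrative:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation matrix R
    and a translation vector t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def object_pose(T_sample, T_reg):
    """T_sample: sampling pose of the target reference frame in the point
    cloud map; T_reg: transformation relationship mapping the positioning
    frame onto the target reference frame, obtained by registration.
    Their composition gives the pose of the object to be positioned in
    the map coordinate system."""
    return T_sample @ T_reg
```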
In an embodiment of this specification, before determining the pose information of the object to be positioned in the above manner, the pose information may not be determined directly from the transformation relationship. Instead, it is first judged whether the difference between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame is greater than a preset point cloud difference threshold, for example, whether a parameter in the rotation matrix and/or the translation matrix is greater than a specified parameter difference threshold. If so, the coarse positioning information of the object to be positioned may be re-acquired and the point cloud frame around the object to be positioned re-collected; if not, the pose information of the object to be positioned is determined based on the corresponding sampling pose of the target reference frame in the point cloud map and the transformation relationship between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame.
In addition, since sampling in the point cloud map to generate candidate reference frames requires a sampling pose to be determined, and the foregoing part of this specification has explained how candidate positions are determined, a manner of determining candidate attitudes is presented below by way of example.
After the rough position is obtained, the rough position can be used as a sampling position in the point cloud map, sampling is carried out in the point cloud map which is constructed in advance, a plurality of point cloud frames are generated, and the generated point cloud frames are used as visual angle reference frames, wherein the acquisition postures of the visual angle reference frames are different.
Illustratively, when the object to be positioned is an unmanned vehicle, since the unmanned vehicle travels on the road surface, it may be assumed that both the pitch angle and the roll angle of the unmanned vehicle are 0°. An initial attitude with a heading angle of 0° may then be determined and used as a sampling attitude, and, with the coarse position as the sampling position, a view angle reference frame is generated based on the point cloud map. The heading angle is then rotated by a specified angle relative to the initial heading angle to obtain another sampling attitude, and, still with the coarse position as the sampling position, another view angle reference frame is generated based on the point cloud map. The heading angle continues to be rotated by the specified angle in this way until it returns to the initial heading angle, and a plurality of view angle reference frames are obtained in the process.
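The heading-angle enumeration above can be sketched as follows; the 30° step is an assumed, illustrative value for the "specified angle":

```python
import numpy as np

def heading_attitudes(step_deg=30.0):
    """Assuming pitch = roll = 0 deg for a ground vehicle, enumerate
    sampling attitudes by rotating the heading from the initial 0 deg in
    steps of step_deg until it would wrap back to the initial heading."""
    return [np.deg2rad(h) for h in np.arange(0.0, 360.0, step_deg)]

def yaw_rotation(yaw):
    """Rotation matrix about the vertical axis for a given heading angle."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
```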
Of course, a plurality of view angle reference frames may also be generated in other manners, for example, a course angle, a pitch angle, and a roll angle may be preset to obtain a plurality of sampling postures, and the rough position is used as a sampling position, and sampling is performed in the point cloud map based on each sampling posture to obtain a plurality of view angle reference frames. The embodiments of the present specification do not limit how the view reference frame is generated.
Then, after a plurality of view reference frames are generated in any one of the above manners, for each view reference frame, the point cloud included in the view reference frame and the point cloud included in the positioning frame may be compared in any one of the above manners, and the matching degree between the view reference frame and the positioning frame is determined.
Then, at least one candidate attitude can be determined according to the matching degree of each view angle reference frame and the positioning frame and the corresponding sampling attitude of each view angle reference frame in the point cloud map. And for each candidate gesture, taking the candidate position as a sampling position in the point cloud map, taking the candidate gesture as a sampling gesture in the point cloud map, and sampling in a point cloud map constructed in advance.
That is, after determining each candidate position and each candidate pose, for each candidate position, for each candidate pose, a sampling pose using the candidate position as a sampling position and the candidate pose as a sampling pose may be determined, thereby obtaining a plurality of sampling poses. And then sampling is carried out under the coordinate system of the point cloud map by each sampling pose so as to obtain the generated candidate reference frame.
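Combining every candidate position with every candidate attitude into sampling poses, as described above, can be sketched as:

```python
from itertools import product

def sampling_poses(candidate_positions, candidate_attitudes):
    """Each (position, attitude) pair is one sampling pose at which a
    candidate reference frame is generated under the coordinate system of
    the point cloud map."""
    return list(product(candidate_positions, candidate_attitudes))
```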
In addition to selecting a candidate attitude, via the matching degree obtained by comparing the view angle reference frames with the positioning frame, from among the sampling attitudes of the view angle reference frames, other specified attitudes may also be used as candidate attitudes.
For example, when the rough positioning information of the object to be positioned includes a rough gesture, the rough gesture may be used as a designated gesture, and for example, when the object to be positioned is an unmanned vehicle, a lane line direction of a road where the object to be positioned is located may be determined, and a gesture in which an included angle between the heading of the object to be positioned and the lane line direction is zero may be used as the designated gesture, and so on.
In a case where the movement speed of the object to be positioned is lower than a preset speed threshold, for example when the object to be positioned is initially positioned, any of the above manners may be adopted multiple times, in order to improve positioning accuracy, to determine a specified number of pieces of pose information of the object to be positioned. It is then judged whether the difference between the determined pieces of pose information is greater than a preset pose difference threshold. If so, the specified number of pieces of pose information are determined again; if not, the target pose of the object to be positioned is determined according to the determined pieces of pose information.
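The repeated-estimation consistency check can be sketched as follows. Representing a pose as (x, y, yaw) and taking the mean as the target pose are illustrative assumptions; the specification leaves both the specified number and the pose difference threshold open:

```python
import numpy as np

def target_pose(estimates, diff_threshold):
    """estimates: the specified number of pose estimates, each (x, y, yaw).
    If the spread of any component exceeds diff_threshold the whole set is
    rejected (None -> estimate again); otherwise the mean is returned as
    the target pose of the object to be positioned."""
    arr = np.asarray(estimates, dtype=float)
    spread = arr.max(axis=0) - arr.min(axis=0)  # per-component difference
    if np.any(spread > diff_threshold):
        return None
    return arr.mean(axis=0)
```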
The embodiment of the present specification does not limit the specified number and the pose difference threshold.
In addition, after the object to be positioned has been positioned by the above method, positioning during motion can be realized through the detection device carried by the object to be positioned and the point cloud map. When positioning becomes abnormal — for example, since the point cloud map only describes the scene within the positioning area, pose information of the object to be positioned cannot be output through registration once the object is detected to be outside the positioning area — the positioning method provided by this specification may be executed again.
Based on the same idea, the positioning method provided for one or more embodiments of the present specification further provides a corresponding positioning device, as shown in fig. 3.
Fig. 3 is a schematic view of a positioning device provided in the present specification, the device including:
a coarse positioning module 300, configured to obtain coarse positioning information of an object to be positioned, where the coarse positioning information at least includes a coarse position of the object to be positioned; collecting a point cloud frame around an object to be positioned as a positioning frame;
a sampling module 302, configured to determine a plurality of candidate positions around the rough position, sample in a pre-constructed point cloud map by taking the candidate position as a sampling position in the point cloud map for each candidate position, generate a plurality of point cloud frames, and take the generated point cloud frames as candidate reference frames, where sampling postures of each candidate reference frame generated by taking the candidate position as a sampling position are different;
a matching module 304, configured to compare, for each candidate reference frame, a point cloud included in the candidate reference frame with a point cloud included in the positioning frame, and determine a matching degree between the candidate reference frame and the positioning frame;
and the pose determining module 306 is configured to select a target reference frame from the candidate reference frames according to the matching degree between each candidate reference frame and the positioning frame, and determine pose information of the object to be positioned based on a sampling pose of the target reference frame corresponding to the point cloud map.
Optionally, the sampling module 302 is specifically configured to sample a point cloud map constructed in advance by taking the rough position as a sampling position in the point cloud map, generate a plurality of point cloud frames, and use the generated point cloud frames as view reference frames, where an acquisition posture of each view reference frame is different; aiming at each visual angle reference frame, comparing the point cloud contained in the visual angle reference frame with the point cloud contained in the positioning frame, and determining the matching degree of the visual angle reference frame and the positioning frame; determining at least one candidate attitude according to the matching degree of each visual angle reference frame and the positioning frame and the corresponding sampling attitude of each visual angle reference frame in the point cloud map; and for each candidate gesture, taking the candidate position as a sampling position in the point cloud map, taking the candidate gesture as a sampling gesture in the point cloud map, and sampling in a point cloud map constructed in advance.
Optionally, a coarse pose included in the coarse positioning information of the object to be positioned is determined as a specified pose; and/or, when the object to be positioned is an unmanned vehicle, the lane line direction of the road where the object to be positioned is located is determined, and the pose in which the included angle between the heading of the object to be positioned and the lane line direction is zero is used as the specified pose. The sampling module 302 is specifically configured to determine the view angle reference frame with the highest matching degree with the positioning frame, and use the sampling attitude of that view angle reference frame in the point cloud map and the specified pose as candidate attitudes.
Optionally, the pose determining module 306 is specifically configured to determine, according to the matching degree between each candidate reference frame and the positioning frame, whether each candidate reference frame includes at least one candidate reference frame whose matching degree with the positioning frame is greater than a preset matching degree threshold; and if so, taking the candidate reference frame with the maximum matching degree with the positioning frame in the candidate reference frames as a target reference frame, if not, re-acquiring the rough positioning information of the object to be positioned, and re-acquiring the point cloud frame around the object to be positioned.
Optionally, the pose determination module 306 is specifically configured to register the point cloud included in the target reference frame and the point cloud included in the positioning frame, so as to obtain a transformation relationship between the point cloud included in the positioning frame and the point cloud included in the target reference frame; and determining the pose information of the object to be positioned according to the sampling pose of the target reference frame in the point cloud map and the transformation relation between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame.
Optionally, before determining the pose information of the object to be positioned, the pose determining module 306 is specifically configured to judge, according to the transformation relationship between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame, whether the difference between the two point clouds is greater than a preset point cloud difference threshold; if so, re-acquire the coarse positioning information of the object to be positioned and re-collect the point cloud frame around the object to be positioned; if not, determine the pose information of the object to be positioned based on the corresponding sampling pose of the target reference frame in the point cloud map and the transformation relationship between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame.
Optionally, the rough positioning module 300 is specifically configured to, when the movement speed of the object to be positioned is lower than a preset speed threshold, determine pose information of a specified number of objects to be positioned; judging whether the difference between the posture information of the determined object to be positioned is larger than a preset posture difference threshold value or not; and if so, re-determining the pose information of the specified number of the objects to be positioned, and if not, determining the target pose of the objects to be positioned according to the pose information of the determined objects to be positioned.
The present specification also provides a computer-readable storage medium storing a computer program, which can be used to execute the above-mentioned positioning method.
The present specification also provides a schematic structural diagram of the electronic device shown in fig. 4. As shown in fig. 4, at the hardware level, the electronic device includes a processor, an internal bus, a memory, and a non-volatile storage, and may also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it to implement the above positioning method.
Of course, besides the software implementation, the present specification does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as either an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the source code to be compiled must also be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner, for example, the controller may take the form of a microprocessor or processor and a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, and an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; the memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the same functionality can be implemented by logically programming the method steps such that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be considered a hardware component, and the means included therein for performing the various functions may also be considered as structures within the hardware component. Or even the means for performing the functions may be regarded as being both software modules for performing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. A method of positioning, comprising:
acquiring coarse positioning information of an object to be positioned, wherein the coarse positioning information at least comprises a coarse position of the object to be positioned; collecting a point cloud frame around an object to be positioned as a positioning frame;
determining a plurality of candidate positions around the coarse position; for each candidate position, taking the candidate position as a sampling position in the point cloud map, sampling in a pre-constructed point cloud map to generate a plurality of point cloud frames, and taking the generated point cloud frames as candidate reference frames, wherein the candidate reference frames generated with the same candidate position as the sampling position have different sampling postures;
for each candidate reference frame, comparing the point cloud contained in the candidate reference frame with the point cloud contained in the positioning frame, and determining the matching degree of the candidate reference frame with the positioning frame;
and selecting a target reference frame from the candidate reference frames according to the matching degree of the candidate reference frames and the positioning frame, and determining the pose information of the object to be positioned based on the corresponding sampling pose of the target reference frame in the point cloud map.
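Outside the formal claim language, the sampling-and-matching selection of claim 1 can be sketched as follows. This is a minimal illustration only: the grid spacing, the four-yaw set of sampling postures, the voxel-overlap matching degree, and the `sample_map_frame` helper (assumed to render a point cloud frame from the pre-constructed map at a given sampling pose) are assumptions, not details fixed by the claim.

```python
import numpy as np

def matching_degree(ref_frame, loc_frame, voxel=0.5):
    """Toy matching degree: the fraction of positioning-frame points that
    fall into voxels occupied by the candidate reference frame."""
    ref_vox = {tuple(v) for v in np.floor(ref_frame / voxel).astype(int)}
    hits = sum(tuple(v) in ref_vox for v in np.floor(loc_frame / voxel).astype(int))
    return hits / len(loc_frame)

def select_target_frame(coarse_pos, loc_frame, sample_map_frame,
                        radius=5.0, step=2.5):
    """Grid-sample candidate positions around the coarse position, generate
    one reference frame per position and per sampling posture (here: four
    yaw angles), and keep the best-matching frame with its sampling pose."""
    best_frame, best_score, best_pose = None, -1.0, None
    offsets = np.arange(-radius, radius + step, step)
    for dx in offsets:
        for dy in offsets:
            pos = coarse_pos + np.array([dx, dy, 0.0])
            for yaw in np.deg2rad([0.0, 90.0, 180.0, 270.0]):
                ref = sample_map_frame(pos, yaw)  # assumed map-sampling helper
                score = matching_degree(ref, loc_frame)
                if score > best_score:
                    best_frame, best_score, best_pose = ref, score, (pos, yaw)
    return best_frame, best_score, best_pose
```

Because only the single best candidate is kept, the later registration step has to compare the positioning frame against one reference frame rather than the whole map, which is where the claimed saving in computing resources comes from.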
2. The method of claim 1, wherein taking the candidate position as a sampling position in the point cloud map and sampling in the pre-constructed point cloud map specifically comprises:
sampling in the pre-constructed point cloud map by taking the coarse position as a sampling position in the point cloud map to generate a plurality of point cloud frames, and taking the generated point cloud frames as visual angle reference frames, wherein the sampling postures of the visual angle reference frames are different;
for each visual angle reference frame, comparing the point cloud contained in the visual angle reference frame with the point cloud contained in the positioning frame, and determining the matching degree of the visual angle reference frame with the positioning frame;
determining at least one candidate posture according to the matching degree of each visual angle reference frame with the positioning frame and the corresponding sampling posture of each visual angle reference frame in the point cloud map;
and for each candidate posture, taking the candidate position as a sampling position in the point cloud map, taking the candidate posture as a sampling posture in the point cloud map, and sampling in the pre-constructed point cloud map.
3. The method of claim 2, wherein the method further comprises:
determining a coarse posture included in the coarse positioning information of the object to be positioned as a designated posture; and/or,
when the object to be positioned is an unmanned vehicle, determining the lane line direction of the road where the object to be positioned is located, and taking, as the designated posture, the posture in which the included angle between the heading of the object to be positioned and the lane line direction is zero;
wherein determining at least one candidate posture according to the matching degree of each visual angle reference frame with the positioning frame and the sampling posture of each visual angle reference frame in the point cloud map specifically comprises:
determining the visual angle reference frame with the highest matching degree with the positioning frame, and taking the sampling posture of that visual angle reference frame in the point cloud map, together with the designated posture, as candidate postures.
4. The method of claim 1, wherein selecting the target reference frame from the candidate reference frames according to the matching degree between the candidate reference frames and the positioning frame comprises:
judging, according to the matching degree of each candidate reference frame with the positioning frame, whether the candidate reference frames include at least one candidate reference frame whose matching degree with the positioning frame is greater than a preset matching degree threshold;
and if so, taking the candidate reference frame with the highest matching degree with the positioning frame as the target reference frame; if not, re-acquiring the coarse positioning information of the object to be positioned and re-collecting the point cloud frame around the object to be positioned.
5. The method of claim 1, wherein determining pose information of the object to be positioned based on the corresponding sampling pose of the target reference frame in the point cloud map comprises:
registering the point cloud contained in the target reference frame and the point cloud contained in the positioning frame to obtain a transformation relation between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame;
and determining the pose information of the object to be positioned according to the sampling pose of the target reference frame in the point cloud map and the transformation relation between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame.
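As an illustration of claim 5, the registration step can be sketched with a closed-form rigid alignment (the Kabsch/SVD solution). This assumes the two clouds are already in point-to-point correspondence; a practical pipeline would typically run ICP, which solves this same sub-problem at each iteration, to establish the correspondences first. The pose composition mirrors the claim: the object's pose is the sampling pose of the target reference frame composed with the recovered transformation.

```python
import numpy as np

def register(loc_pts, ref_pts):
    """Rigid transform (R, t) mapping positioning-frame points onto the
    target reference frame, via the Kabsch/SVD solution. Assumes the two
    (N, 3) arrays are in one-to-one correspondence."""
    mu_l, mu_r = loc_pts.mean(axis=0), ref_pts.mean(axis=0)
    H = (loc_pts - mu_l).T @ (ref_pts - mu_r)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_r - R @ mu_l
    return R, t

def compose_pose(map_R, map_t, R, t):
    """Compose the registration correction with the sampling pose of the
    target reference frame in the map, yielding the object's pose in the
    map: x_map = map_R @ (R @ x_loc + t) + map_t."""
    return map_R @ R, map_R @ t + map_t
```

The design choice here is to treat the map-frame sampling pose as a coarse anchor and the registration result as a small correction, so the final pose is accurate even when the sampled candidate position is a few grid steps off.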
6. The method of claim 5, wherein prior to determining pose information for the object to be positioned, the method further comprises:
judging whether the difference between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame is larger than a preset point cloud difference threshold value or not according to the transformation relation between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame;
if so, re-acquiring the coarse positioning information of the object to be positioned and re-collecting the point cloud frame around the object to be positioned; if not, determining the pose information of the object to be positioned based on the corresponding sampling pose of the target reference frame in the point cloud map and the transformation relation between the point cloud contained in the positioning frame and the point cloud contained in the target reference frame.
7. The method of claim 1, wherein the method further comprises:
determining the pose information of the object to be positioned a specified number of times when the movement speed of the object to be positioned is lower than a preset speed threshold;
judging whether the difference among the determined pieces of pose information of the object to be positioned is greater than a preset pose difference threshold;
and if so, re-determining the pose information of the object to be positioned the specified number of times; if not, determining the target pose of the object to be positioned according to the determined pieces of pose information.
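A hedged sketch of the low-speed consistency check of claim 7: take a specified number of pose measurements while the object is (nearly) stationary, reject the batch if the measurements spread by more than a threshold, and otherwise fuse them into the target pose. The thresholds, the peak-to-peak spread measure, and simple averaging are illustrative assumptions; the claim does not fix how the difference or the target pose is computed.

```python
import numpy as np

def stable_pose(measure_pose, n=5, pos_tol=0.2, yaw_tol=np.deg2rad(2.0)):
    """Call measure_pose() n times; each call returns (xyz ndarray, yaw).
    Return None if the measurements disagree beyond the tolerances (so the
    caller can re-measure), otherwise the averaged target pose."""
    poses = [measure_pose() for _ in range(n)]
    xyz = np.array([p for p, _ in poses])
    yaw = np.array([y for _, y in poses])
    # peak-to-peak spread as a simple pairwise-difference bound
    if np.ptp(xyz, axis=0).max() > pos_tol or np.ptp(yaw) > yaw_tol:
        return None
    return xyz.mean(axis=0), float(np.unwrap(yaw).mean())
```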
8. A positioning device, characterized in that the device specifically comprises:
a coarse positioning module, configured to acquire coarse positioning information of an object to be positioned, wherein the coarse positioning information at least comprises a coarse position of the object to be positioned, and to collect a point cloud frame around the object to be positioned as a positioning frame;
a sampling module, configured to determine a plurality of candidate positions around the coarse position and, for each candidate position, take the candidate position as a sampling position in the point cloud map and sample in a pre-constructed point cloud map to generate a plurality of point cloud frames as candidate reference frames, wherein the candidate reference frames generated with the same candidate position as the sampling position have different sampling postures;
a matching module, configured to compare, for each candidate reference frame, the point cloud contained in the candidate reference frame with the point cloud contained in the positioning frame, and determine the matching degree of the candidate reference frame with the positioning frame;
and a pose determining module, configured to select a target reference frame from the candidate reference frames according to the matching degree of each candidate reference frame with the positioning frame, and determine the pose information of the object to be positioned based on the corresponding sampling pose of the target reference frame in the point cloud map.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1 to 7.
10. An unmanned device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1 to 7.
CN202111641489.7A 2021-12-29 2021-12-29 Positioning method, positioning device, storage medium and electronic equipment Pending CN114299147A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111641489.7A CN114299147A (en) 2021-12-29 2021-12-29 Positioning method, positioning device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111641489.7A CN114299147A (en) 2021-12-29 2021-12-29 Positioning method, positioning device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114299147A true CN114299147A (en) 2022-04-08

Family

ID=80971074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111641489.7A Pending CN114299147A (en) 2021-12-29 2021-12-29 Positioning method, positioning device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114299147A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116559928A (en) * 2023-07-11 2023-08-08 新石器慧通(北京)科技有限公司 Pose information determining method, device and equipment of laser radar and storage medium
CN116559928B (en) * 2023-07-11 2023-09-22 新石器慧通(北京)科技有限公司 Pose information determining method, device and equipment of laser radar and storage medium

Similar Documents

Publication Publication Date Title
CN109270545B (en) Positioning true value verification method, device, equipment and storage medium
CN112001456B (en) Vehicle positioning method and device, storage medium and electronic equipment
WO2021213432A1 (en) Data fusion
CN111077555B (en) Positioning method and device
CN111639682A (en) Ground segmentation method and device based on point cloud data
CN111288971B (en) Visual positioning method and device
CN114111774B (en) Vehicle positioning method, system, equipment and computer readable storage medium
CN116740361B (en) Point cloud segmentation method and device, storage medium and electronic equipment
CN113109851A (en) Abnormity detection method and device, storage medium and electronic equipment
CN111797711A (en) Model training method and device
CN114299147A (en) Positioning method, positioning device, storage medium and electronic equipment
CN112859131B (en) Positioning method and device of unmanned equipment
CN113674424A (en) Method and device for drawing electronic map
CN112861831A (en) Target object identification method and device, storage medium and electronic equipment
CN112712009A (en) Method and device for detecting obstacle
CN112902987A (en) Pose correction method and device
CN112462403A (en) Positioning method, positioning device, storage medium and electronic equipment
US11807262B2 (en) Control device, moving body, control method, and computer-readable storage medium
CN112712561A (en) Picture construction method and device, storage medium and electronic equipment
CN113887351A (en) Obstacle detection method and obstacle detection device for unmanned driving
CN114187355A (en) Image calibration method and device
CN112393723B (en) Positioning method, positioning device, medium and unmanned equipment
CN113888624B (en) Map construction method and device
CN114706048A (en) Calibration method and device for radar and camera combined calibration
CN115388886A (en) Vehicle positioning method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination