Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. It therefore provides a robot positioning and attitude determination method, a device, and a storage medium that require no additional GNSS positioning equipment to provide positioning information and entirely avoid the problem of weak positioning signals caused by external factors.
In a first aspect of the present invention, there is provided a positioning and attitude determination method for a robot, applied to a control tool, the robot comprising a lidar and an inertial navigation measurement unit and moving within a defined area, the method comprising the steps of:
constructing a prior map from the laser point cloud data and inertial navigation data acquired by the robot within the defined area;
performing curvature calculation on the laser point cloud data to extract laser point cloud features, and obtaining, by a repositioning method based on the extracted features and the robot's inertial navigation position attitude, the initial position attitude in the prior map corresponding to that inertial navigation position attitude;
obtaining the current position attitude of the robot in the laser point cloud coordinate system based on the initial position attitude, and constructing a local tracking point cloud map based on that current position attitude;
converting the local tracking point cloud map into a local tracking grid map, and obtaining position attitude conversion parameters between the prior map and the local tracking grid map by a similar environment detection method;
and correcting the current position attitude of the robot in the laser point cloud coordinate system based on the position attitude conversion parameters.
According to the embodiment of the invention, at least the following technical effects are achieved:
the method first acquires laser point cloud data and inertial navigation data within the defined area and constructs a prior map of that area. Second, curvature calculation is performed on the laser point cloud data to extract laser point cloud features, and, combined with the robot's inertial navigation position attitude, a repositioning method obtains the initial position attitude in the prior map corresponding to that inertial navigation position attitude. Next, based on the initial position attitude, the robot's current position attitude in the laser point cloud coordinate system is obtained, and a local tracking point cloud map is constructed from it. The local tracking point cloud map is then converted into a local tracking grid map, and similar environment matching by a similar environment detection method yields the position attitude conversion parameters between the prior map and the local tracking grid map. Finally, the robot's current position attitude in the laser point cloud coordinate system is corrected using these conversion parameters.

The method needs no additional GNSS positioning equipment to provide positioning information and no external signal, and it improves the positioning and attitude determination accuracy of the robot during autonomous movement. In addition, because curvature calculation keeps only the effective information in the laser point cloud data and discards redundant data, and because a grid map is used for the tracking calculation during positioning, the computational load and the demand on computing power are reduced while the pressure on data transmission is greatly relieved.
In a second aspect of the present invention, there is provided a positioning and attitude determination device for a robot, comprising: at least one control processor and a memory communicatively connected with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the positioning and attitude determination method for a robot according to the first aspect of the invention.
In a third aspect of the present invention, a computer-readable storage medium is provided, which stores computer-executable instructions for causing a computer to perform the method for positioning and determining the pose of a robot according to the first aspect of the present invention.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In existing positioning and attitude determination technology, a robot typically combines differential GNSS with inertial navigation to obtain position and attitude information. GNSS positioning is easily disturbed by external weather and environmental factors, while inertial navigation offers high instantaneous accuracy but accumulates error over time, reducing its accuracy. Combining the two compensates for their respective weaknesses, yet the result is still active positioning: it requires external GNSS signal input, is affected by uncontrollable external conditions, and its positioning performance is therefore uncontrollable.
To address the above shortcomings, referring to figs. 1 and 2, an embodiment of the present invention provides a positioning and attitude determination method for a robot, applied to a control tool, the robot comprising a lidar and an inertial navigation measurement unit and moving within a defined area, the method comprising the steps of:
S101, constructing a point cloud grid map from the laser point cloud data collected by the robot within the defined area and, combined with the inertial navigation data, using it as a prior map;
S102, performing curvature calculation on the laser point cloud data to extract features and, combined with the inertial navigation position attitude, obtaining by a repositioning method the initial position attitude in the prior map corresponding to the robot's current inertial navigation position attitude;
S103, obtaining the current position attitude of the robot in the laser point cloud coordinate system based on the initial position attitude, and constructing a local tracking point cloud map based on that current position attitude;
S104, processing the local tracking point cloud map to convert it into a local tracking grid map, and obtaining position attitude conversion parameters between the prior map and the local tracking grid map by a similar environment detection method;
and S105, correcting the current position attitude of the robot in the laser point cloud coordinate system based on the position attitude conversion parameters, and returning to step S103.
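To make the control flow of steps S101 to S105 concrete, the loop can be sketched as follows. The patent names no APIs, so every function body below is an illustrative stub; only the relocate-once-then-track-and-correct structure is taken from the text, and poses are assumed to be 4x4 homogeneous matrices.

```python
import numpy as np

# Stand-in stubs (illustrative, not from the patent): each returns a
# placeholder so the S101-S105 loop structure can be shown end to end.
def build_prior_map(scans, ins_poses):            # S101: prior map (stub)
    return {"frames": list(scans)}

def relocate(prior_map, scan, ins_pose):          # S102: relocation (stub)
    return np.eye(4)                              # initial pose in prior map

def track(pose, scan):                            # S103: tracking (stub)
    return pose.copy(), [scan]                    # current pose, local map

def match_similar_env(prior_map, local_map):      # S104: matching (stub)
    return np.eye(4)                              # conversion parameters

def run(prior_map, scans, ins_pose):
    scans = iter(scans)
    pose = relocate(prior_map, next(scans), ins_pose)    # S102 (once)
    for scan in scans:
        pose, local_map = track(pose, scan)              # S103
        delta = match_similar_env(prior_map, local_map)  # S104
        pose = delta @ pose                              # S105, then loop
    return pose
```

With identity stubs the loop simply returns the initial pose; in a real system each stub would be replaced by the corresponding step described below.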
This method needs no additional GNSS positioning equipment to provide positioning information and does not depend on external signals, improving the robot's positioning and attitude determination accuracy during autonomous movement. It applies curvature calculation to the laser point cloud data so that only effective information is retained and redundant data is eliminated, and it uses a grid map for the tracking calculation during positioning, which reduces the computational load, lowers the demand on computing power, and greatly relieves the pressure on data transmission.
As an optional implementation, the control tool in this embodiment may be the robot's own control system (a control system built around a processor), a server communicatively connected to the robot, or an upper computer (e.g., a PC) communicatively connected to the robot, enabling various control modes for the robot.
Based on the above embodiment, step S101 specifically includes the steps of:
to be provided with
As a data format, constructing an inertial navigation position attitude, wherein
The coordinates are represented by a coordinate system of,
the gesture is represented by a gesture that is,
indicating identification。
The point cloud grid map is constructed in a data format comprising: an identifier matching that of the inertial navigation format; the current point cloud frame data; a matrix with a fixed number of rows and columns, each position of which stores the elevation mean of the laser point cloud within the real geographic range corresponding to that position; a single-row matrix taking the mean of each column of that matrix; and a single-column matrix taking the mean of each row of that matrix.
Each point cloud frame comprises, for every laser point, the point coordinate information, the color information, the laser intensity information, and the time information.
The inertial navigation position attitude and the point cloud grid map constructed in the above formats are stored as prior map data for the preset area to generate the prior map.
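The grid portion of the prior map format can be sketched as follows. Function and parameter names (`build_grid_map`, `cell_size`, the row/column counts) are illustrative, not from the patent; the sketch assumes a simple axis-aligned quantization of x/y into cells, with each cell holding the mean elevation of its points, plus the single-row and single-column mean matrices described above.

```python
import numpy as np

def build_grid_map(points, cell_size=1.0, rows=64, cols=64):
    """Illustrative prior-map grid: each cell stores the mean elevation
    (z) of the laser points falling inside it; col_means is the 1 x cols
    row matrix of per-column means, row_means the rows x 1 column matrix
    of per-row means."""
    grid_sum = np.zeros((rows, cols))
    grid_cnt = np.zeros((rows, cols))
    for x, y, z in points:
        r, c = int(x // cell_size), int(y // cell_size)
        if 0 <= r < rows and 0 <= c < cols:
            grid_sum[r, c] += z
            grid_cnt[r, c] += 1
    # elevation mean per cell; empty cells stay at 0
    grid = np.divide(grid_sum, grid_cnt, out=np.zeros_like(grid_sum),
                     where=grid_cnt > 0)
    col_means = grid.mean(axis=0, keepdims=True)  # single-row matrix
    row_means = grid.mean(axis=1, keepdims=True)  # single-column matrix
    return grid, row_means, col_means
```

The two mean matrices are what the similar environment detection of step S104 later compares against the local tracking grid map.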
Based on the above embodiment, step S102 specifically includes the steps of:
and S1021, extracting laser point cloud characteristics from the collected laser point cloud data.
First, features are extracted according to the inter-point curvature in the laser point cloud, calculated as follows:
(1) the m points at each edge of every lidar scanning line do not have enough neighboring points on both sides, so they do not satisfy the curvature calculation condition and are excluded from the calculation;
(2) for any target laser point, n points are selected on each side of it along its scanning line, where n does not exceed m;
(3) the average difference dx between the selected points and the target laser point in the x coordinate direction is calculated: dx = (1/2n) * sum_{i=1}^{2n} (x_i - x_0), where x_0 is the x coordinate of the target point and x_i are the x coordinates of the 2n selected points;
(4) the average difference dy in the y coordinate direction is calculated analogously: dy = (1/2n) * sum_{i=1}^{2n} (y_i - y_0);
(5) the average difference dz in the z coordinate direction is calculated analogously: dz = (1/2n) * sum_{i=1}^{2n} (z_i - z_0);
(6) from dx, dy, and dz, the three-dimensional curvature is calculated as c = sqrt(dx^2 + dy^2 + dz^2).
Then, after the three-dimensional curvature c is obtained, feature points are selected against a curvature threshold t to distinguish corner feature points from plane feature points.
Corner feature point selection conditions:
(1) at most N points are selected, starting from the point with the largest three-dimensional curvature, and only points with curvature greater than t are eligible;
(2) if any of the five points around a candidate point has already been selected as a corner feature point, the candidate is skipped and a point with smaller curvature is selected instead.
Plane feature point selection conditions:
(1) at most N points are selected, starting from the point with the smallest three-dimensional curvature, and only points with curvature less than t are eligible;
(2) if any of the five points around a candidate point has already been selected as a plane feature point, the candidate is skipped and a point with larger curvature is selected instead.
This yields the laser point cloud features (i.e., the corner point and plane point features) in the laser coordinate system.
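The curvature computation and threshold-based selection above can be sketched per scanning line as follows. Parameter names are illustrative, and the neighbor-suppression rule (skipping a candidate whose surrounding five points already contain a selected feature) is omitted here for brevity; only the mean-difference curvature c = sqrt(dx^2 + dy^2 + dz^2) and the threshold test are shown.

```python
import math

def scanline_features(pts, n=5, t=0.1, N=2):
    """Illustrative curvature-based feature selection on one scanning
    line (list of (x, y, z) tuples).  Edge points without n neighbours
    per side are excluded; points with curvature > t are corner
    candidates, points with curvature < t are plane candidates, and at
    most N of each are kept."""
    feats = []
    for i in range(n, len(pts) - n):
        x0, y0, z0 = pts[i]
        nb = pts[i - n:i] + pts[i + 1:i + n + 1]   # 2n neighbours
        dx = sum(p[0] - x0 for p in nb) / (2 * n)  # mean x difference
        dy = sum(p[1] - y0 for p in nb) / (2 * n)  # mean y difference
        dz = sum(p[2] - z0 for p in nb) / (2 * n)  # mean z difference
        feats.append((i, math.sqrt(dx * dx + dy * dy + dz * dz)))
    corners = sorted((f for f in feats if f[1] > t), key=lambda f: -f[1])[:N]
    planes = sorted((f for f in feats if f[1] < t), key=lambda f: f[1])[:N]
    return corners, planes
```

On a perfectly straight, evenly spaced scanning line every interior point has curvature zero, so all selected features are plane points, which matches the intent of the selection rules.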
And S1022, extracting the current inertial navigation position attitude of the robot from the acquired inertial navigation data.
From the acquired inertial navigation data, the position and attitude of the robot in the inertial navigation coordinate system are calculated, giving the robot's current inertial navigation position attitude.
And S1023, converting the laser point cloud feature data into the inertial navigation coordinate system according to the fixed rotation and translation relations between the inertial navigation coordinate system and the lidar coordinate system, combining them with the inertial navigation position attitude data, registering them with the point cloud frame data in the prior map, and obtaining the position attitude of the inertial navigation coordinate system origin in the prior map.
Specifically, a point cloud registration algorithm takes the current inertial navigation position attitude as its initial registration value and registers the laser point cloud features against the point cloud frame data in the prior map, determining the position attitude, in the prior map coordinate system, of the inertial navigation coordinate system origin at the time of relocation.
And S1024, obtaining the initial position attitude of the robot's current inertial navigation position attitude in the prior map from the current inertial navigation position attitude and the position attitude of the inertial navigation coordinate system origin in the prior map.
The initial position attitude of the robot's current inertial navigation position attitude in the prior map coordinate system is obtained by composing the position attitude of the inertial navigation origin in the prior map with the robot's current inertial navigation position attitude; the result represents the robot's initial position attitude in the prior map.
Based on the above embodiment, step S103 specifically includes the steps of:
First, the robot's current inertial navigation position attitude is converted using the initial position attitude in the prior map together with the fixed rotation and the fixed translation between the inertial navigation coordinate system and the lidar coordinate system, yielding the robot's current position attitude in the laser point cloud coordinate system.
Then, a local tracking point cloud map is constructed from this current position attitude, the currently acquired laser point cloud, and the currently acquired inertial navigation position attitude.
Based on the above embodiment, step S104 specifically includes the steps of:
First, the local tracking point cloud map at the current moment is processed. Taking the laser center as the coordinate origin and the x direction of the laser coordinate system as the starting direction, the area is divided by a fixed distance and a fixed angle, and the laser points are arranged by their distance and angle into a matrix with a fixed number of rows and columns; the row and column indices of each laser point are obtained by quantizing its distance by a preset distance threshold and its angle by the fixed angular step.
Each position in the matrix stores the height mean H of all laser points of the local tracking point cloud map falling within that position's range, i.e., the average of the z values of those points.
The matrix is then processed by taking the mean of each column and of each row, giving the row matrix and the column matrix of the local tracking grid map.
Then, a binary tree index model is built for the matrix in the prior map, and binary tree index models are built for the row matrix and the column matrix of the local tracking grid map. An iterative search over these index models computes the environmental similarity and the matching identifier. A threshold is set on the environmental similarity: only when the similarity exceeds the threshold is the local tracking grid map judged sufficiently similar to the prior map data, so that the robot's position is determined to lie in a specific area of the prior map. The identifier is then used to register the point cloud frame data and obtain the position attitude conversion parameters.
Based on the above embodiment, step S105 specifically includes the steps of:
the current position attitude of the robot in the laser point cloud coordinate system is corrected using the position attitude conversion parameters, yielding the corrected current position attitude.
Referring to fig. 3, an embodiment of the present invention provides a robot positioning and attitude determination device, which may be any type of smart terminal, such as a mobile phone, a tablet computer, a personal computer, and the like.
Specifically, the positioning and attitude determination device of the robot comprises: one or more control processors and memory. The control processor and memory may be connected by a bus or other means.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the robot positioning and attitude determination device in the embodiments of the present invention. By running the non-transitory software programs, instructions, and modules stored in the memory, the control processor executes the robot positioning and attitude determination method of the above method embodiments.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function. Further, the memory may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory optionally includes memory located remotely from the control processor; such remote memory may be connected to the robot's positioning and attitude determination device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory and, when executed by the one or more control processors, perform the positioning and pose determination method for the robot in the above-described method embodiments.
The embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions, which are executed by one or more control processors, may cause the one or more control processors to execute the positioning and attitude determination method for a robot in the above method embodiment.
Through the above description of the embodiments, those skilled in the art will appreciate that the embodiments can be implemented by software together with a general-purpose hardware platform. All or part of the processes of the above method embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.