CN114425774B - Robot walking road recognition method, robot walking road recognition device, and storage medium - Google Patents

Robot walking road recognition method, robot walking road recognition device, and storage medium

Info

Publication number
CN114425774B
CN114425774B (application CN202210071467.XA)
Authority
CN
China
Prior art keywords
cloud data
point cloud
data
point
robot
Prior art date
Legal status
Active
Application number
CN202210071467.XA
Other languages
Chinese (zh)
Other versions
CN114425774A (en)
Inventor
刘大志
梁健
Current Assignee
Uditech Co Ltd
Original Assignee
Uditech Co Ltd
Priority date
Filing date
Publication date
Application filed by Uditech Co Ltd filed Critical Uditech Co Ltd
Priority to CN202210071467.XA priority Critical patent/CN114425774B/en
Publication of CN114425774A publication Critical patent/CN114425774A/en
Application granted granted Critical
Publication of CN114425774B publication Critical patent/CN114425774B/en


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a robot walking road recognition method, recognition device, and storage medium. The method comprises the following steps: receiving first point cloud data from a first radar sensor while the robot walks, and extracting first road edge point cloud data according to the first point cloud data; receiving second point cloud data from a second radar sensor while the robot walks, and extracting second road edge point cloud data according to the second point cloud data, wherein the density of the second road edge point cloud data is greater than that of the first road edge point cloud data; fitting the first road edge point cloud data and the second road edge point cloud data to generate target road edge point cloud data; and determining the walking road of the robot according to the target road edge point cloud data. By extracting two sets of road edge information with two different sensors and then merging them into one complete road edge, the invention improves the accuracy of capturing road edge information, so that the road edge is extracted accurately.

Description

Robot walking road recognition method, robot walking road recognition device, and storage medium
Technical Field
The application relates to the field of intelligent robots, and in particular to a method, device, and storage medium for recognizing the walking road of a robot.
Background
With the rapid development of computer technology and artificial intelligence, logistics, food delivery, and similar services are becoming increasingly automated: robots now handle deliveries, and unmanned vehicles replace human drivers in logistics, saving labor costs to a certain extent. At present, however, when an unmanned logistics vehicle operates, obstacles on the road surface can disrupt its normal workflow, and inaccurate capture of road edge information can introduce large errors into the extracted road edge.
It should be noted that the foregoing is provided only to aid understanding of the technical problem addressed by the present application, and is not an admission that it constitutes prior art.
Disclosure of Invention
The embodiment of the application provides a method for recognizing the walking road of a robot, which uses two kinds of radar sensors to improve the accuracy of road edge extraction and extract the road edge precisely.
In order to achieve the above object, the method for recognizing the walking road of a robot according to the present application includes the following steps:
receiving first point cloud data of a first radar sensor in the walking process of a robot, and extracting first road edge point cloud data according to the first point cloud data;
receiving second point cloud data of a second radar sensor in the walking process of the robot, and extracting second road edge point cloud data according to the second point cloud data, wherein the density of the second road edge point cloud data is greater than that of the first road edge point cloud data;
fitting the first road edge point cloud data and the second road edge point cloud data to generate target road edge point cloud data;
and determining the walking road of the robot according to the target road edge point cloud data.
Optionally, after the step of receiving the first point cloud data of the first radar sensor during the walking of the robot, the method further includes:
acquiring the angular velocity and the acceleration corresponding to the time stamp of each point data in the first point cloud data through an inertial sensor;
calculating the acceleration and the angular velocity by using a data registration algorithm to obtain correction point data of the corresponding time stamp of each point data;
and correcting each point data in the first point cloud data according to the correction point data and the time stamp.
Optionally, the step of extracting first road edge point cloud data according to the first point cloud data includes:
sequentially taking each point data in the first point cloud data as target point data, and acquiring a line segment formed by the target point data and the adjacent preset number of point data;
calculating the curvature value of each line segment;
and extracting target point data on a line segment whose curvature value is smaller than or equal to a first preset threshold value as the first road edge point cloud data.
Optionally, the step of calculating the curvature value of each line segment includes:
traversing point data in the first point cloud data to obtain coordinates respectively corresponding to the target point data and the preset number of point data adjacent to the target point data;
and calculating the total distance of the line segment corresponding to the target point data according to the coordinates, and representing the curvature value of the line segment corresponding to the target point data by using the total distance, wherein the total distance is positively correlated with the curvature value.
Optionally, the step of extracting second road edge point cloud data according to the second point cloud data includes:
processing the second point cloud data by adopting a random sample consensus (RANSAC) algorithm, fitting to generate a ground equation, and extracting ground points;
and extracting second road edge point cloud data according to the ground points.
Optionally, the step of processing the second point cloud data by using the RANSAC algorithm, generating a ground equation by fitting, and extracting ground points includes:
processing the second point cloud data by adopting the RANSAC algorithm, and fitting to generate a ground equation;
inputting each point data in the second point cloud data into the ground equation for calculation to obtain a distance value between each point data in the second point cloud data and a reference position of the ground equation;
and if the distance value is smaller than a second preset threshold value, extracting the point data as a ground point.
Optionally, the step of extracting the second road edge points according to the ground points includes:
extracting ground edge points according to the ground points, and fitting the ground edge points into an edge point straight line;
and if the slope of the target point data in the ground edge point and the slope of the edge point straight line meet the preset condition, extracting the target point data as a second road edge point.
Optionally, the step of fitting the first road edge point cloud data and the second road edge point cloud data to generate target road edge point cloud data includes:
fitting the first road edge point cloud data and the second road edge point cloud data with a quadratic function to generate the target road edge point cloud data.
In addition, to achieve the above object, the present invention also provides a robot walking road recognition device, including: a memory, a processor, and a robot walking road recognition program stored in the memory and executable on the processor, wherein the recognition program, when executed by the processor, implements the steps of the robot walking road recognition method described above.
In addition, in order to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a robot walking road recognition program which, when executed by a processor, implements the steps of the robot walking road recognition method described above.
The invention discloses a robot walking road recognition method, recognition device, and storage medium. The method comprises: receiving first point cloud data from a first radar sensor while the robot walks, and extracting first road edge point cloud data; receiving second point cloud data from a second radar sensor, and extracting second road edge point cloud data whose density is greater than that of the first road edge point cloud data; fitting the two sets of road edge point cloud data to generate target road edge point cloud data; and determining the walking road of the robot accordingly. Because the first point cloud data scanned by the first radar sensor are sparse, with large spacing between points and low density, the extracted first road edge point cloud data are likewise of low density. The second point cloud data scanned by the second radar sensor are dense, with small spacing between points and high density, so the extracted second road edge point cloud data are of high density, but only road edge points on the same scanning line can be extracted. After the two sets of road edge point cloud data from the first and second point cloud data are fitted together, a complete and accurate road edge is obtained: the accuracy of capturing road edge information is improved, the road edge can be extracted precisely, and the walking road of the robot can be determined.
Drawings
FIG. 1 is a schematic diagram of a device architecture of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of the method for recognizing a walking road of a robot according to the present invention;
FIG. 3 is a schematic diagram of the detailed flow of step S100 in the second embodiment of the method for recognizing a walking road of a robot according to the present invention;
FIG. 4 is a schematic diagram of the detailed flow of step S120 in the second embodiment of the method for recognizing a walking road of a robot according to the present invention;
FIG. 5 is a schematic diagram of the detailed flow of step S200 in the third embodiment of the method for recognizing a walking road of a robot according to the present invention;
FIG. 6 is a schematic diagram of the detailed flow of step S210 in the third embodiment of the method for recognizing a walking road of a robot according to the present invention;
FIG. 7 is a schematic diagram of the detailed flow of step S220 in the third embodiment of the method for recognizing a walking road of a robot according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In order that the above-described aspects may be better understood, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to better understand the above technical solutions, the following detailed description will refer to the accompanying drawings and specific embodiments.
As shown in fig. 1, fig. 1 is a schematic diagram of a terminal device structure of a hardware running environment according to an embodiment of the present application.
As shown in fig. 1, the terminal device may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the terminal device structure shown in fig. 1 is not limiting of the terminal device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is one type of computer-readable storage medium, may store a recognition program of a robot walking road.
In the terminal device shown in fig. 1, the network interface 1004 is mainly used for data communication with a background server; the user interface 1003 is mainly used for data communication with a client (user side); the processor 1001 may be configured to call an identification program of a robot walking path in the memory 1005, and perform the following operations:
receiving first point cloud data of a first radar sensor in the walking process of a robot, and extracting first road edge point cloud data according to the first point cloud data;
receiving second point cloud data of a second radar sensor in the walking process of the robot, and extracting second road edge point cloud data according to the second point cloud data, wherein the density of the second road edge point cloud data is greater than that of the first road edge point cloud data;
fitting according to the first road edge point cloud data and the second road edge point cloud data to generate target road edge point cloud data;
and determining the walking road of the robot according to the target road edge point cloud data.
The present application provides a method of recognizing the road traveled by a robot. It should be noted that although a logical sequence is shown in the flowchart, in some cases the steps shown or described may be performed in a different order than that shown or described herein.
Referring to fig. 2, fig. 2 is a flowchart of a first embodiment of a method for identifying a walking path of a robot according to the present application.
The method for identifying the walking road of the robot in the embodiment comprises the following steps:
step S100, first point cloud data of a first radar sensor in the walking process of a robot are received, and first road edge point cloud data are extracted according to the first point cloud data;
step S200, receiving second point cloud data of a second radar sensor in the walking process of the robot, and extracting second road edge point cloud data according to the second point cloud data, wherein the density of the second road edge point cloud data is greater than that of the first road edge point cloud data;
step S300, fitting and generating target road edge point cloud data according to the first road edge point cloud data and the second road edge point cloud data;
step S400, determining the walking road of the robot according to the target road edge point cloud data.
In this embodiment, the method for recognizing the walking road of a robot may be applied to an intelligent robot, for example, to extract the road edge and thereby determine the road on which the robot walks. The following description is therefore not limited to a particular scene and refers to a robot throughout; the execution subject in this embodiment is likewise the robot. Throughout the method, the extracted road edge is the core element: only after the road edge information has been extracted can the walking road of the robot be determined from it.
It should be noted here that the robot controls the radar sensors to scan the surrounding environment and extracts all point cloud data, including obstacles, then finds the road edge on the scanned point cloud map. Obstacles inside the road edge are removed, while objects outside the road edge, such as roadside greenery, walls, or other structures, must remain on the point cloud map. Removing the obstacles inside the road edge yields a clean point cloud map, which is also the road map on which the robot walks. The robot works according to this map; if an obstacle not shown on the map is scanned during actual operation, the robot scans in real time and walks around it.
This embodiment combines a 16-line laser radar sensor and a solid-state laser radar to extract road edge information and fits the results to obtain the final road edge, i.e., the road information along which the robot can walk when working. It should be noted that if the 16-line lidar were used alone, the points in its point cloud data are far apart and the density is sparse, so points on the road surface may be mistaken for road edge points because of the large spacing. If the solid-state lidar were used alone, the points in its point cloud data are very close together and dense, but once the solid-state lidar is mounted in a fixed position, it scans only point cloud data on the same horizontal line and misses points above or below that line. Either sensor alone therefore extracts the road edge inaccurately. Combining the 16-line lidar and the solid-state lidar lets the two compensate for each other's shortcomings, so an accurate section of road edge can be extracted for the robot.
In this embodiment, the robot first controls the first radar sensor to acquire first point cloud data of the surrounding environment during walking. The first point cloud data may include roadside greenery, walls, houses and office buildings beside the road, road edges, obstacles on the ground, and so on. The road edge is determined from the road edge points extracted from the first point cloud data. However, while the radar sensor scans the surroundings to acquire the first point cloud data, the robot itself is moving, so the point cloud data within one frame are distorted. The first point cloud data must therefore be distortion-corrected before the first road edge point cloud data are extracted, to ensure that the first road edge point cloud data are accurate. The first radar sensor may be a 16-line laser radar.
In a further embodiment, after the step of receiving the first point cloud data of the first radar sensor during walking of the robot, the method further comprises:
acquiring the angular velocity and the acceleration corresponding to the time stamp of each point data in the first point cloud data through an inertial sensor;
calculating the acceleration and the angular velocity by using a data registration algorithm to obtain correction point data of the corresponding time stamp of each point data;
and correcting each point data in the first point cloud data according to the correction point data and the time stamp.
Based on the above embodiments, distortion correction must be performed on the first point cloud data before the first road edge points are extracted from them. The first radar sensor corresponding to the first point cloud data is a 16-line laser radar sensor mounted on the robot. Because the robot moves while the radar sensor acquires a frame, the individual points in the point cloud data are not captured at the same moment, and distorted point data must be corrected. The application acquires the corresponding IMU (inertial measurement unit) data, comprising acceleration and angular velocity, according to the time stamp of each point data in the point cloud data, and performs motion compensation on each point in the first point cloud data using the acceleration and angular velocity acquired at the different time stamps. In some embodiments, because the IMU outputs at 100 Hz while the 16-line lidar outputs at only 10 Hz, and each laser beam of the 16-line lidar generates 2000 points per revolution, the IMU data must be uniformly interpolated to 2000 samples within each 0.1 s sweep. Motion compensation is then applied to each point: the offset angle and offset displacement of each point are calculated from the acceleration and angular velocity, and each point data in the first point cloud data is corrected by that offset angle and offset displacement. Once every point in the point cloud data has been distortion-corrected, the first road edge points can be extracted. The road edge extracted by the 16-line laser radar is long but sparse, so the second radar sensor must also be controlled to extract second road edge points.
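By way of illustration, the following is a minimal Python/NumPy sketch of the per-point motion compensation described above. The function name, the constant-rate integration, and the planar (yaw plus forward displacement) motion model are simplifying assumptions for illustration, not the exact disclosed implementation.

import numpy as np

def compensate_scan(points, timestamps, imu_times, imu_ang_vel, imu_accel):
    """Undistort one lidar sweep using IMU samples.

    points      -- (N, 3) xyz points of one sweep, sensor frame
    timestamps  -- (N,) capture time of each point, in seconds
    imu_times   -- (M,) IMU sample times covering the sweep (about 100 Hz)
    imu_ang_vel -- (M,) yaw-rate samples, rad/s
    imu_accel   -- (M,) forward-acceleration samples, m/s^2
    """
    t0 = timestamps[0]
    # Interpolate the IMU stream to every point's time stamp; this is the
    # "uniformly interpolate the IMU to 2000 samples per 0.1 s sweep" step.
    w = np.interp(timestamps, imu_times, imu_ang_vel)
    a = np.interp(timestamps, imu_times, imu_accel)
    dt = timestamps - t0
    yaw = w * dt                    # offset angle accumulated since sweep start
    disp = 0.5 * a * dt ** 2        # offset displacement since sweep start
    out = np.empty_like(points, dtype=float)
    for i, (x, y, z) in enumerate(points):
        c, s = np.cos(-yaw[i]), np.sin(-yaw[i])
        # Rotate back by the offset angle, then remove the offset displacement.
        out[i] = (c * x - s * y - disp[i], s * x + c * y, z)
    return out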
The robot controls the second radar sensor to collect second point cloud data and extracts second road edge points from them. It should be noted that the second radar sensor is a solid-state laser radar: the collected points are dense and closely spaced, but they lie on a single horizontal line, and points in other regions, whether below or above the solid-state lidar's scan line, are not captured. Scanning the road surface therefore covers only a short distance and takes a long time. For this reason the first road edge point cloud data and the second road edge point cloud data must be combined and fitted into one complete piece of road edge information. The road area in which the robot can walk during daily work is then determined from the fitted road edge information; any obstacles inside the fitted road edge must be removed to obtain a clean point cloud map.
The target road edge point cloud data are then obtained by fitting the acquired first road edge point data and second road edge point data.
Optionally, in another embodiment, the step of fitting the first road edge point cloud data and the second road edge point cloud data to generate the target road edge point cloud data includes:
fitting the first road edge point cloud data and the second road edge point cloud data with a quadratic function to generate the target road edge point cloud data.
This embodiment is based on the above embodiment: the first road edge point cloud data and the second road edge point cloud data are fitted with a quadratic function. If a linear function were used instead, then for curved road edges the fitted road edge would deviate considerably from the actual road edge, and the fitted road edge information would be inaccurate. Fitting with a quadratic function ensures that the fitted road edge information approximates the actual road edge whether it is straight or curved. Curve fitting draws a curve through a number of discrete data points: a data model is established, and a series of tiny straight line segments connect the interpolation points into a curve; as long as the interpolation points are chosen properly, a smooth curve is formed. The basic form of the quadratic function is y = ax² + bx + c. Each point data in the first and second road edge point cloud data is substituted into this quadratic function for calculation, yielding a relatively smooth curve, which is the extracted road edge information. In other words, fitting the first road edge point cloud data and the second road edge point cloud data into the target road edge point cloud data means putting the coordinates corresponding to each point data into the quadratic function for calculation, producing a smooth curve that is approximately equal to the actual road edge.
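As an illustration of this quadratic fit, the sketch below merges the two sets of road edge points and fits y = ax² + bx + c through them with NumPy; numpy.polyfit and the sampling of 200 curve points are assumptions of this sketch, since the description only specifies a quadratic function.

import numpy as np

def fit_target_curb(first_edge_xy, second_edge_xy):
    """Fit one quadratic through both sets of (x, y) road edge points."""
    pts = np.vstack([first_edge_xy, second_edge_xy])   # merge the two segments
    a, b, c = np.polyfit(pts[:, 0], pts[:, 1], deg=2)  # y = a*x^2 + b*x + c
    xs = np.linspace(pts[:, 0].min(), pts[:, 0].max(), 200)
    ys = a * xs ** 2 + b * xs + c                      # smooth fitted curb curve
    return np.column_stack([xs, ys])                   # target road edge points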
In this embodiment, the first road edge point cloud data are extracted after the first point cloud data scanned by the first radar sensor are received and processed, and the second road edge point cloud data are extracted after the second point cloud data scanned by the second radar sensor are received. Because the first point cloud data are sparse, with large spacing between points and low density, the extracted first road edge point cloud data are also of low density. The second point cloud data are dense, with small spacing between points and high density, so the extracted second road edge point cloud data are of high density, but only road edge points on the same scan line can be extracted. After the two sets of road edge point cloud data are fitted together, a complete and accurate road edge is obtained: the accuracy of capturing road edge information is improved, the road edge is extracted precisely, and the walking road of the robot can be determined.
Further, referring to fig. 3, a second embodiment of the method for identifying a walking path of a robot according to the present application is proposed based on the first embodiment described above.
Step S100, receiving first point cloud data of the first radar sensor in the walking process of the robot and extracting first road edge point cloud data according to the first point cloud data, includes:
step S110, each point data in the first point cloud data is sequentially used as target point data, and a line segment formed by the target point data and the adjacent preset number of point data is obtained;
step S120, calculating curvature values of the line segments;
step S130, extracting the target point data on the line segment with the curvature value smaller than or equal to the first preset threshold as the first waypoint cloud data.
In this embodiment, after the robot controls the first radar sensor to acquire the first point cloud data and corrects their distortion, the first road edge points must be extracted according to the first point cloud data. During extraction, the curvature value of the line segment formed by each point data and its neighboring points in the first point cloud data must be calculated. The first radar sensor is a 16-line laser radar sensor, so the collected point cloud data are scattered, and computing only the height difference between the road edge and the road surface would extract the road edge information inaccurately. Because the curvature differs between the ground points and the road edge points scanned by each beam of the 16-line laser radar, the first road edge point information can be extracted according to the curvature value. Before calculating the curvature value, it is judged whether a preset number of point data (for example, five) exist before and after the current target point data. If so, the curvature value of the line segment formed by the target point data and the five points before and after it is calculated. Each point data in the first point cloud data is traversed in turn, each is taken as the target point data, and the curvature value of the line segment formed by each target point and its five points on either side is calculated.
It is then judged whether each calculated curvature value meets the preset condition, i.e., whether it falls within the threshold range. If a curvature value is within the preset threshold range, the target point data on that line segment is extracted as first road edge point data. Extraction proceeds in this way until the first point cloud data have been fully traversed and all target point data qualifying as first road edge points have been found.
In a further embodiment, referring to fig. 4, step S120, calculating the curvature value of each of the line segments includes:
step S121, traversing point data in the first point cloud data, and obtaining coordinates corresponding to the target point data and adjacent preset number of point data respectively;
step S122, calculating a total distance of the line segment corresponding to the target point data according to the coordinates, and characterizing a curvature value of the line segment corresponding to the target point data with the total distance, wherein the total distance is positively correlated with the curvature value.
In this embodiment, each point data in the collected first point cloud data is traversed in turn, and each traversed point is taken as the target point data. It is judged whether the preset number of point data adjacent to the current target point data exists; if not, traversal continues without calculating a curvature value for this target point, and if so, the coordinates corresponding to the preset number of point data and the current target point data are acquired in turn. It should be noted that the preset number of point data may be the five points before and after the target point, or, for example, six or three points before and five points after; it may be set as needed, without limitation. The distances from the target point data to the adjacent point data are then calculated from the acquired coordinates of each point.
After the coordinates of each target point and the five points before and after it are obtained, a line segment is formed between the target point and each of those points. The distances between the target point and its five points on either side, i.e., the lengths of the formed line segments, are calculated with the point-to-point distance formula and summed to obtain a total distance. The curvature value of the line segment formed by the current target point and its five points on either side, i.e., the curvature value corresponding to the target point, is then characterized by this total distance according to the positive correlation between total distance and curvature value. That is, the longer the total distance of the line segments, the larger the curvature value; the shorter the total distance, the smaller the curvature value.
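As an illustration, the following minimal Python sketch implements this total-distance curvature test. The function name, the choice of five neighbors per side, and the 0.5 m threshold are assumptions for illustration; the first preset threshold is not given a value in the description.

import numpy as np

def extract_curb_candidates(points, k=5, threshold=0.5):
    """Keep points whose neighbor line segments have total length <= threshold.

    points    -- (N, 3) ordered points of one scan line
    k         -- neighbors taken on each side (five in the description)
    threshold -- first preset threshold on the total distance (assumed, meters)
    """
    kept = []
    for i in range(k, len(points) - k):    # skip points lacking k neighbors
        target = points[i]
        neighbors = np.vstack([points[i - k:i], points[i + 1:i + k + 1]])
        # Total distance from the target point to its neighbors; it stands in
        # for the curvature value (positively correlated with it).
        total = np.linalg.norm(neighbors - target, axis=1).sum()
        if total <= threshold:
            kept.append(target)
    return np.array(kept)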
In this embodiment, the robot controls the first radar sensor to scan the surrounding environment and extract the first point cloud data. Since the first radar sensor is a 16-line laser radar sensor and the point cloud data collected by a multi-line laser radar are sparse, the first road edge points are extracted by calculating the curvature value of the line segment formed by each point data in the first point cloud data and its adjacent point data. This extracts the first road edge points more accurately and reduces errors in the extraction process.
Further, referring to fig. 5, a third embodiment of the identification method of the robot walking path of the present application is proposed based on either one of the above-described first and second embodiments.
Step S200, receiving second point cloud data of the second radar sensor in the walking process of the robot and extracting second road edge point cloud data according to the second point cloud data, wherein the density of the second road edge point cloud data is greater than that of the first road edge point cloud data, includes:
step S210, processing the second point cloud data by adopting a random sample consensus (RANSAC) algorithm, fitting to generate a ground equation, and extracting ground points;
and step S220, extracting second road edge point cloud data according to the ground points.
In this embodiment, after the robot controls the first radar sensor to collect the first point cloud data and successfully extracts the first road edge point cloud data, it must also control the second radar sensor to collect second point cloud data and extract the second road edge point cloud data from them. Several methods could process the second point cloud data; here the RANSAC algorithm is used. It should be noted that RANSAC estimates the parameters of a mathematical model from a set of sample data and selects the valid samples. Concretely, several points are randomly selected from the sample data and presumed to be inliers, a plane model is fitted to them, and the remaining points in the sample data are then tested against the plane model to judge whether they are inliers, until all point data in the sample data have been processed. In this embodiment, point data randomly selected from the second point cloud data are put into the RANSAC computation to fit a ground equation; the other point data in the second point cloud data are then put into the ground equation for calculation, and the conforming points are extracted as ground point data, the rest as non-ground point data.
In one embodiment, referring to fig. 6, step S210, processing the second point cloud data with the RANSAC algorithm, generating a ground equation by fitting, and extracting ground points, includes:
step S211, processing the second point cloud data with the RANSAC algorithm and fitting to generate a ground equation;
step S212, inputting each point data in the second point cloud data into the ground equation for calculation to obtain a distance value between each point data in the second point cloud data and a reference position of the ground equation;
and step S213, if the distance value is smaller than a second preset threshold value, extracting the point data as a ground point.
The robot controls the second radar sensor to scan the surrounding environment and obtain the second point cloud data, which must then be processed; the RANSAC algorithm is used for this. First, several point data are randomly selected from the second point cloud data and put into the RANSAC procedure to build a plane model. In this embodiment, the plane model built is a ground equation fitted through those points, the randomly selected point data being assumed to be ground point data.
All the point data in the second point cloud data are then put into the ground equation for calculation, a distance value between each point data and the reference position represented by the ground equation is obtained, and ground points and non-ground points are distinguished by this distance value.
If the calculated distance value is smaller than the preset threshold, the point data corresponding to the current distance value is extracted as a ground point; if it is greater than or equal to the preset threshold, the point is treated as a non-ground point. In plain terms, a point very close to the ground plane is taken as a ground point, and a point far from it as a non-ground point. The preset threshold here is set to 3 cm.
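For illustration, a minimal hand-rolled RANSAC ground-fit sketch in Python follows. The iteration count and function names are assumptions, and point cloud libraries such as Open3D provide equivalent plane segmentation; the 3 cm inlier threshold comes from the description.

import numpy as np

def ransac_ground(points, threshold=0.03, iters=200, seed=0):
    """Fit a ground plane a*x + b*y + c*z + d = 0 and split the cloud.

    Returns (ground_points, non_ground_points).
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        # Randomly pick three points and presume they are ground inliers.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                        # degenerate (collinear) sample
        normal /= norm
        d = -normal.dot(p1)
        # Distance value of every point to the fitted ground equation.
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold          # smaller than the preset threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers], points[~best_inliers]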
After the second point cloud data have been processed by the RANSAC algorithm and the ground points and non-ground points extracted, the second road edge point cloud data are further extracted from them. The second radar sensor is a solid-state laser radar whose points are very closely spaced; although its position is fixed, it can scan both ground points and non-ground points, and the road edge point cloud data, i.e., the second road edge point cloud data, can be identified from the determined ground and non-ground points.
In one embodiment, referring to fig. 7, step S220, extracting second road edge point cloud data according to the ground point includes:
step S221, extracting ground edge points according to the ground points, and fitting the ground edge points into an edge point straight line;
step S222, if the slope at the target point data among the ground edge points and the slope of the edge point straight line meet the preset condition, extracting the target point data as a second road edge point.
The road edge points are extracted from the ground points; they are essentially the same as the ground edge points. The ground points comprise interior ground points and ground edge points, so the edge points must be separated out according to the ground points. First, the normal vector of each point data among the ground points is calculated: a plane equation is fitted to the point data within about 1 meter of the current point, the point data is substituted into that plane equation, and the normal vector of the current point data is obtained by calculation. Normal vectors corresponding to all other point data are calculated in the same way. Taking the vertical normal vector as the reference, it is judged in turn whether the angle between each point's normal vector and the reference normal vector is larger than a preset angle, and ground points exceeding the preset angle are extracted as ground edge points. After the ground edge points are determined, they are processed with the RANSAC algorithm to obtain a ground edge point equation, and an edge point straight line is fitted. The preset angle here is 45 degrees.
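A minimal Python sketch of this normal-vector test follows; the least-squares plane estimate via the neighborhood covariance and all names are illustrative assumptions, while the 1 m neighborhood and the 45 degree reference angle come from the description.

import numpy as np

def ground_edge_points(ground, radius=1.0, max_angle_deg=45.0):
    """Keep ground points whose local normal tilts away from vertical."""
    up = np.array([0.0, 0.0, 1.0])            # reference (vertical) normal
    edges = []
    for p in ground:
        nbr = ground[np.linalg.norm(ground - p, axis=1) < radius]
        if len(nbr) < 3:
            continue                          # too few neighbors for a plane
        # The local plane normal is the eigenvector of the neighborhood
        # covariance with the smallest eigenvalue.
        cov = np.cov((nbr - nbr.mean(axis=0)).T)
        normal = np.linalg.eigh(cov)[1][:, 0]
        angle = np.degrees(np.arccos(abs(normal @ up)))
        if angle > max_angle_deg:             # tilted normal: ground edge point
            edges.append(p)
    return np.array(edges)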
The slope of the fitted straight line is calculated through a linear function, whose basic form is y = kx + b: the coordinates of any two points on the line are substituted into this form to solve for the slope k, i.e., the slope of the edge point straight line. A Euclidean clustering algorithm is used to pick target point data from the ground edge points: a target ground edge point is selected arbitrarily, and a nearest-neighbor search finds the other ground edge points close to it, where the neighborhood distance can be set to 5 cm or 3 cm according to actual needs. These nearby ground edge points are fitted to a straight line and its slope is calculated. If the slope of the line fitted through the target ground edge point data and the slope of the edge point straight line meet the preset condition, the target point data is extracted as a second road edge point. The preset condition here is that the difference between the two slopes is smaller than a preset difference, for example 5 degrees.
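The slope comparison can be sketched as below; the radius-based neighbor search stands in for the Euclidean clustering step and the helper names are assumptions, while the 5 cm neighborhood and the 5 degree tolerance follow the description.

import numpy as np

def select_curb_points(edge_xy, edge_line_slope, radius=0.05, tol_deg=5.0):
    """Keep edge points whose local line slope matches the edge point line."""
    ref_angle = np.degrees(np.arctan(edge_line_slope))
    curb = []
    for p in edge_xy:
        # Nearest neighbors within the clustering distance (5 cm assumed).
        nbr = edge_xy[np.linalg.norm(edge_xy - p, axis=1) < radius]
        if len(nbr) < 2:
            continue
        k, _ = np.polyfit(nbr[:, 0], nbr[:, 1], 1)   # local line y = k*x + b
        if abs(np.degrees(np.arctan(k)) - ref_angle) < tol_deg:
            curb.append(p)                           # slopes agree: curb point
    return np.array(curb)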
In this embodiment, the ground edge points are obtained from the ground points via normal vector calculation and fitted into an edge point straight line with the RANSAC algorithm; the target point data are then extracted with the Euclidean clustering algorithm, and the slope at each target point is compared with the slope of the edge point straight line. If the preset condition is met, the current target point data is judged to be a road edge point. This improves the accuracy of road edge information extraction.
In addition, an embodiment of the present invention also provides a computer-readable storage medium. The computer-readable storage medium stores a robot walking road recognition program which, when executed by a processor, implements the steps of the robot walking road recognition method according to any one of the embodiments described above.
The specific implementation manner of the computer readable storage medium of the present invention is basically the same as the embodiments of the method for identifying a walking path of a robot, and will not be described herein.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (9)

1. A method for recognizing the walking road of a robot, characterized by comprising the following steps:
receiving first point cloud data of a first radar sensor in the walking process of a robot, and extracting first road edge point cloud data according to the first point cloud data, wherein the first radar sensor is a multi-line laser radar, and the first point cloud data comprises point cloud data about a road edge and the ground;
receiving second point cloud data of a second radar sensor in the walking process of the robot, and extracting second road edge point cloud data according to the second point cloud data, wherein the second radar sensor is a solid-state laser radar, the second point cloud data comprises point cloud data about a road edge and the ground, and the density of the second road edge point cloud data is greater than that of the first road edge point cloud data;
putting the coordinates corresponding to each point data in the first road edge point cloud data and the second road edge point cloud data into a quadratic function for calculation, and generating target road edge point cloud data by fitting;
and determining the walking road of the robot according to the target road edge point cloud data.
2. The method for recognizing the walking road of a robot according to claim 1, wherein, after the step of receiving first point cloud data of the first radar sensor in the walking process of the robot, the method further comprises:
acquiring the angular velocity and the acceleration corresponding to the time stamp of each point data in the first point cloud data through an inertial sensor;
calculating the acceleration and the angular velocity by using a data registration algorithm to obtain correction point data of the corresponding time stamp of each point data;
and correcting each point data in the first point cloud data according to the correction point data and the time stamp.
3. The method for recognizing the walking road of a robot according to claim 1, wherein the step of extracting first road edge point cloud data according to the first point cloud data comprises:
sequentially taking each point data in the first point cloud data as target point data, and acquiring a line segment formed by the target point data and the adjacent preset number of point data;
calculating the curvature value of each line segment;
and extracting target point data on a line segment whose curvature value is smaller than or equal to a first preset threshold value as the first road edge point cloud data.
4. The method for recognizing the walking road of a robot according to claim 3, wherein the step of calculating the curvature value of each of the line segments comprises:
traversing point data in the first point cloud data to obtain coordinates respectively corresponding to the target point data and the adjacent preset number of point data;
and calculating the total distance of the line segment corresponding to the target point data according to the coordinates, and representing the curvature value of the line segment corresponding to the target point data by using the total distance, wherein the total distance is positively correlated with the curvature value, and the positive correlation means that the longer the total distance is, the larger the curvature value is, and the shorter the total distance is, the smaller the curvature value is.
5. The method for recognizing the walking road of a robot according to claim 1, wherein the step of extracting second road edge point cloud data according to the second point cloud data comprises:
processing the second point cloud data by adopting a random sample consensus (RANSAC) algorithm, fitting to generate a ground equation, and extracting ground points;
and extracting second road edge point cloud data according to the ground points.
6. The method for recognizing the walking road of a robot according to claim 5, wherein the steps of processing the second point cloud data using the random sample consensus algorithm, fitting to generate a ground equation, and extracting ground points include:
processing the second point cloud data by adopting the random sample consensus algorithm, and fitting to generate a ground equation;
inputting each point data in the second point cloud data into the ground equation for calculation to obtain a distance value between each point data in the second point cloud data and a reference position of the ground equation;
and if the distance value is smaller than a second preset threshold value, extracting the point data as a ground point.
7. The method for recognizing the walking road of a robot according to claim 5, wherein the step of extracting the second road edge points according to the ground points comprises:
extracting ground edge points according to the ground points, and fitting the ground edge points into an edge point straight line;
and if the slope of the target point data in the ground edge point and the slope of the edge point straight line meet the preset condition, extracting the target point data as a second road edge point.
8. A robot walking road recognition device, characterized in that the recognition device comprises: a memory, a processor, and a robot walking road recognition program stored on the memory and operable on the processor, wherein the recognition program, when executed by the processor, implements the steps of the robot walking road recognition method according to any one of claims 1 to 7.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a robot walking road recognition program which, when executed by a processor, implements the steps of the robot walking road recognition method according to any one of claims 1 to 7.
CN202210071467.XA 2022-01-21 2022-01-21 Robot walking road recognition method, robot walking road recognition device, and storage medium Active CN114425774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210071467.XA CN114425774B (en) 2022-01-21 2022-01-21 Robot walking road recognition method, robot walking road recognition device, and storage medium


Publications (2)

Publication Number Publication Date
CN114425774A CN114425774A (en) 2022-05-03
CN114425774B 2023-11-03

Family

ID=81314184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210071467.XA Active CN114425774B (en) 2022-01-21 2022-01-21 Robot walking road recognition method, robot walking road recognition device, and storage medium

Country Status (1)

Country Link
CN (1) CN114425774B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114779794B (en) * 2022-06-21 2022-10-11 东风悦享科技有限公司 Street obstacle identification method based on unmanned patrol vehicle system in typhoon scene


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106405555B * 2016-09-23 2019-01-01 百度在线网络技术(北京)有限公司 Obstacle detection method and device for vehicular radar system
US10663590B2 * 2017-05-01 2020-05-26 Symbol Technologies, LLC Device and method for merging lidar data
CN116129376A (en) * 2018-05-02 2023-05-16 北京图森未来科技有限公司 Road edge detection method and device

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018205119A1 (en) * 2017-05-09 2018-11-15 深圳市速腾聚创科技有限公司 Roadside detection method and system based on laser radar scanning
CN110033482A (en) * 2018-01-11 2019-07-19 沈阳美行科技有限公司 Curb recognition method and device based on laser point cloud
CN108519605A (en) * 2018-04-09 2018-09-11 重庆邮电大学 Curb detection method based on laser radar and video camera
CN108828621A (en) * 2018-04-20 2018-11-16 武汉理工大学 Obstacle detection and road surface partitioning algorithm based on three-dimensional laser radar
CN109613543A (en) * 2018-12-06 2019-04-12 深圳前海达闼云端智能科技有限公司 Method and device for correcting laser point cloud data, storage medium and electronic equipment
CN109752701A (en) * 2019-01-18 2019-05-14 中南大学 Road edge detection method based on laser point cloud
CN112014856A (en) * 2019-05-30 2020-12-01 初速度(苏州)科技有限公司 Road edge extraction method and device suitable for cross road section
WO2020248614A1 (en) * 2019-06-10 2020-12-17 商汤集团有限公司 Map generation method, drive control method and apparatus, electronic equipment and system
CN112147602A (en) * 2019-06-26 2020-12-29 陕西汽车集团有限责任公司 Road edge identification method and system based on laser point cloud
CN110376604A (en) * 2019-08-09 2019-10-25 北京智行者科技有限公司 Curb detection method based on single line laser radar
WO2021052121A1 (en) * 2019-09-20 2021-03-25 于毅欣 Object identification method and apparatus based on laser radar and camera
CN110781827A (en) * 2019-10-25 2020-02-11 中山大学 Road edge detection system and method based on laser radar and fan-shaped space division
CN110928320A (en) * 2020-02-10 2020-03-27 上海高仙自动化科技发展有限公司 Path generation method and generation device, intelligent robot and storage medium
CN113740874A (en) * 2020-05-14 2021-12-03 深圳市镭神智能系统有限公司 Road edge detection method, electronic equipment and vehicle
CN112149572A (en) * 2020-09-24 2020-12-29 知行汽车科技(苏州)有限公司 Road edge detection method, device and storage medium
CN112699734A (en) * 2020-12-11 2021-04-23 深圳市银星智能科技股份有限公司 Threshold detection method, mobile robot and storage medium
CN112674646A (en) * 2020-12-15 2021-04-20 广东盈峰智能环卫科技有限公司 Self-adaptive welting operation method based on multi-algorithm fusion and robot
CN113570665A (en) * 2021-07-30 2021-10-29 苏州挚途科技有限公司 Road edge extraction method and device and electronic equipment
CN113658256A (en) * 2021-08-16 2021-11-16 智道网联科技(北京)有限公司 Target detection method and device based on laser radar and electronic equipment
CN113759391A (en) * 2021-08-27 2021-12-07 武汉大学 Passable area detection method based on laser radar
CN113888649A (en) * 2021-10-18 2022-01-04 上海振华重工(集团)股份有限公司 Multi-laser-radar external parameter calibration method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
AGV road recognition system based on intelligent warehousing; Lu Jiayi et al.; Light Industry Machinery (轻工机械); Vol. 38, No. 3; pp. 74-77 *

Also Published As

Publication number Publication date
CN114425774A (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN108256446B (en) Method, device and equipment for determining lane line in road
CN110531759B (en) Robot exploration path generation method and device, computer equipment and storage medium
CN110674705B (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN109074085B (en) Autonomous positioning and map building method and device and robot
CN109635816B (en) Lane line generation method, apparatus, device, and storage medium
CN111912416B (en) Method, device and equipment for positioning equipment
WO2022188663A1 (en) Target detection method and apparatus
CN104517275A (en) Object detection method and system
CN112949366B (en) Obstacle identification method and device
CN112270272B (en) Method and system for extracting road intersections in high-precision map making
JP2014523572A (en) Generating map data
Konrad et al. Localization in digital maps for road course estimation using grid maps
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
CN111007534A (en) Obstacle detection method and system using sixteen-line laser radar
CN111739099B (en) Falling prevention method and device and electronic equipment
CN114425774B (en) Robot walking road recognition method, robot walking road recognition device, and storage medium
CN115880953B (en) Unmanned aerial vehicle management and control method and intelligent street lamp system
CN116734828A (en) Determination of road topology information, electronic map data processing method and electronic equipment
CN113724387A (en) Laser and camera fused map construction method
CN116592872A (en) Method and device for updating occupied grid map and related equipment
CN113593035A (en) Motion control decision generation method and device, electronic equipment and storage medium
CN110110678B (en) Method and apparatus for determining road boundary, storage medium, and electronic apparatus
CN114937177A (en) Automatic marking and detection model training and target recognition method and electronic equipment
JP2020118575A (en) Inter-vehicle distance measurement device, error model generation device, learning model generation device, and method and program thereof
CN111553342B (en) Visual positioning method, visual positioning device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant