CN110928320A - Path generation method and generation device, intelligent robot and storage medium - Google Patents

Path generation method and generation device, intelligent robot and storage medium

Info

Publication number
CN110928320A
CN110928320A
Authority
CN
China
Prior art keywords
point cloud
cloud data
road edge
current frame
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010084825.1A
Other languages
Chinese (zh)
Other versions
CN110928320B (en)
Inventor
胡焉为
沈孝通
秦宝星
程昊天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Gaoxian Automation Technology Co Ltd
Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee
Shanghai Gaoxian Automation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Gaoxian Automation Technology Co Ltd filed Critical Shanghai Gaoxian Automation Technology Co Ltd
Priority to CN202010084825.1A
Publication of CN110928320A
Application granted
Publication of CN110928320B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0287 Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
    • G05D1/0289 Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling with means for avoiding collisions between vehicles
    • G05D1/0291 Fleet control

Abstract

The application discloses a path generation method for an intelligent robot. The path generation method comprises the following steps: acquiring continuous multi-frame point cloud data; identifying road edge point cloud data in the multi-frame point cloud data, wherein the road edge point cloud data are point cloud data corresponding to points on road edges of roads; and generating a moving path according to the road edge point cloud data and controlling the intelligent robot to move according to the moving path. Because the moving path is generated from the road edge point cloud data, it can closely follow the road edge, so that the cleaning effect is good when the intelligent robot is controlled to move along this edge-hugging moving path to clean the road. The application also discloses a path generation device, an intelligent robot, and a computer-readable storage medium.

Description

Path generation method and generation device, intelligent robot and storage medium
Technical Field
The present disclosure relates to the field of automatic cleaning technologies, and in particular, to a path generation method, a path generation apparatus, an intelligent robot, and a computer-readable storage medium.
Background
When an existing intelligent robot executes a cleaning task, the road edge is an area that requires focused cleaning, yet the existing intelligent robot usually determines its cleaning path from a pre-stored map.
Disclosure of Invention
In view of the above, the present invention is directed to solving, at least to some extent, one of the problems in the related art. To this end, embodiments of the present application provide a path generation method, a path generation apparatus, an intelligent robot, and a computer-readable storage medium.
The path generation method of the embodiment of the application is applied to the intelligent robot, and comprises the following steps: acquiring continuous multi-frame point cloud data; identifying road edge point cloud data in the multi-frame point cloud data, wherein the road edge point cloud data are point cloud data corresponding to points on road edges of roads; and generating a moving path according to the road edge point cloud data and controlling the intelligent robot to move according to the moving path.
According to the path generation method, continuous multi-frame point cloud data are first acquired, then the road edge point cloud data in the point cloud data are identified, and finally a moving path is generated according to the road edge point cloud data. Because the moving path is generated from the road edge point cloud data, it can closely follow the road edge, and the cleaning effect is good when the intelligent robot is controlled to move along this edge-hugging path to clean the road.
In some embodiments, the acquiring consecutive multi-frame point cloud data comprises: acquiring three-dimensional coordinate data of multi-frame data in a first coordinate system; determining a transformation matrix of the first coordinate system and the second coordinate system based on the pose of the intelligent robot; and converting the three-dimensional coordinate data into the point cloud data under the second coordinate system according to the conversion matrix to obtain the multi-frame point cloud data.
In the embodiment, the three-dimensional coordinate data can be accurately converted into the point cloud data based on the three-dimensional coordinate data acquired by the intelligent robot and the conversion matrix determined based on the pose of the intelligent robot.
In some embodiments, the identifying the road edge point cloud data in the plurality of frames of point cloud data comprises: calculating a ground normal vector according to current frame point cloud data and point cloud data of a preset frame number before the current frame point cloud data; calculating a normal vector of the current frame, the curvature of the current frame, the height of the current frame and a second derivative of a neighborhood of the current frame according to the point cloud data of the current frame; and when the inner product of the normal vector of the current frame and the ground normal vector is smaller than a preset inner product threshold value, the height of the current frame is in a preset height range, the curvature of the current frame is larger than a preset curvature threshold value, and the neighborhood second derivative corresponding to the point cloud data of the current frame is larger than a preset second derivative threshold value, determining the point cloud data of the current frame as the road edge point cloud data.
In this embodiment, when the current frame normal vector, the current frame curvature, the current frame height, and the current frame neighborhood second derivative respectively satisfy the corresponding relationships above, the point corresponding to the current frame point cloud data is determined to lie on a vertical surface (i.e., a surface perpendicular to the ground), to have a low height (smaller than the height of a typical road edge), and to have a height that changes abruptly compared with the surrounding ground (for example, points on the road all have substantially the same height, so the height difference between road points changes little, whereas the height difference between a road edge point and a road point becomes larger, i.e., the height changes abruptly). A point satisfying these conditions is generally a road edge point, so the current frame point cloud data can be accurately identified as road edge point cloud data.
In some embodiments, the calculating a ground normal vector from the current frame point cloud data and a predetermined number of frames of point cloud data preceding the current frame point cloud data comprises: storing the current frame point cloud data and the point cloud data of the predetermined number of frames into a queue with a predetermined length, wherein the predetermined length is determined according to the predetermined number of frames; accumulating all point cloud data in the queue into continuous frame point cloud data; and calculating the ground normal vector according to the continuous frame point cloud data and a first function prestored in a point cloud library.
In this embodiment, the point cloud data are stored in a queue of a predetermined length, all point cloud data in the queue are accumulated into continuous frame point cloud data, and the ground normal vector can be calculated from the continuous frame point cloud data and a first function prestored in a point cloud library. Whenever a new frame of point cloud data is acquired, the current frame point cloud data enters at the queue tail and, because the queue length is limited, the point cloud data at the queue head leaves the queue. Each calculation of the ground normal vector is therefore based on the current frame point cloud data and the point cloud data of the preceding predetermined number of frames, so the obtained ground normal vector has high accuracy.
In some embodiments, the path generation method further comprises: and when the inner product of the normal vector of the current frame and the ground normal vector is larger than a preset inner product threshold value, the height of the current frame is out of a preset height range, the curvature of the current frame is smaller than a preset curvature threshold value, or the neighborhood second derivative corresponding to the point cloud data of the current frame is smaller than a preset second derivative threshold value, determining that the point cloud data of the current frame is noise point cloud data.
In this embodiment, when any one of the normal vector of the current frame, the curvature of the current frame, the height of the current frame, and the second derivative of the neighborhood of the current frame does not satisfy the above corresponding relationship, it may be determined that the current frame point cloud data is not the road edge point cloud data but noise point cloud data corresponding to noise (e.g., raised stones and obstacles on the road) in the scene, so as to accurately identify the road edge point cloud data and the noise point cloud data.
In some embodiments, the path generation method further comprises: calculating a fitting straight line from multiple frames of the road edge point cloud data based on a random sampling consistency (RANSAC) algorithm; when the distance between the road edge point cloud data and the fitting straight line is smaller than a first predetermined distance, determining that the road edge point cloud data is interior point cloud data; when the ratio of the number of interior point cloud data to the multiple frames of road edge point cloud data is greater than a predetermined ratio, determining the projection point of each interior point cloud data on the fitting straight line and calculating the inter-point distance between adjacent projection points; and when the maximum inter-point distance is smaller than a second predetermined distance, saving the fitting straight line as a road edge section, wherein the road edge section comprises a start point coordinate, an end point coordinate, and direction data indicating the side of the two sides of the road edge section that is closer to the road.
In this embodiment, after multiple frames of road edge point cloud data are identified, a fitting straight line is computed based on the random sampling consistency algorithm, whether a piece of road edge point cloud data is interior point cloud data is determined from its distance to the fitting straight line, and the projection points of the interior point cloud data on the fitting straight line are determined. When the maximum inter-point distance between adjacent projection points is smaller than the second predetermined distance, the fitting straight line is saved as a road edge section, so that the identified road edge can be represented accurately by road edge sections.
In some embodiments, the path generation method further comprises: judging whether the two adjacent road edge sections are disconnected or not according to the starting point coordinates and the end point coordinates of the two adjacent road edge sections; and when the two adjacent road edge sections are not disconnected, connecting the two adjacent road edge sections into an integrated road edge section.
In this embodiment, the distance between the end point of the previous road edge section and the start point of the next road edge section can be determined from the start point coordinates and the end point coordinates of the two adjacent road edge sections, so whether the two road edge sections are disconnected can be judged accurately. When the two road edge sections are not disconnected, they can be connected into an integrated road edge section, so that road edge sections are joined wherever possible and the continuity of the moving path of the intelligent robot is ensured.
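As a minimal illustration of this disconnection check (all names and the 0.3 m break threshold are assumptions, not values taken from the patent), the following sketch compares the gap between the end point of one road edge section and the start point of the next against a threshold:

```python
import numpy as np

def segments_disconnected(seg_a_end, seg_b_start, break_threshold=0.3):
    """Return True if the gap between the end of section A and the start of
    section B exceeds the break threshold, i.e., the sections are disconnected."""
    gap = np.linalg.norm(np.asarray(seg_b_start, dtype=float) - np.asarray(seg_a_end, dtype=float))
    return gap > break_threshold

# End of the previous road edge section vs. start of the next one.
print(segments_disconnected([1.0, 0.0], [1.1, 0.05]))  # small gap -> False (not disconnected)
```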
In some embodiments, two adjacent road edge segments are a first road edge segment and a second road edge segment, respectively, and the connecting the two adjacent road edge segments into one integrated road edge segment includes: determining a first drop foot of the end point of the first road edge section on the second road edge section and a second drop foot of the start point of the second road edge section on the first road edge section; when the first drop foot is positioned on the extension line of the second road edge section and the second drop foot is positioned on the extension line of the first road edge section, connecting the end point of the first road edge section and the starting point of the second road edge section to generate the integrated road edge section; and when the first drop foot is positioned on the second road edge section and the second drop foot is positioned on the first road edge section, deleting a line segment between the first drop foot and the starting point of the second road edge section in the second road edge section and deleting a line segment between the second drop foot and the ending point of the first road edge section in the first road edge section, and connecting the first drop foot and the second drop foot to generate the integrated road edge section.
In this embodiment, the connection is made according to the relative positional relationship of the two road edge sections so that the join is smooth. When the first drop foot lies on the extension line of the second road edge section and the second drop foot lies on the extension line of the first road edge section, the end point of the first road edge section and the start point of the second road edge section are connected directly. When the first drop foot lies on the second road edge section and the second drop foot lies on the first road edge section, the line segment between the first drop foot and the start point of the second road edge section and the line segment between the second drop foot and the end point of the first road edge section are deleted, and the first drop foot and the second drop foot are connected, thereby achieving a smooth connection of the two road edge sections.
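The following sketch illustrates the two connection cases described above for 2D road edge sections given as (start, end) point pairs. It interprets "on the extension line" as the projection parameter falling outside the section, and all function names are hypothetical:

```python
import numpy as np

def foot_of_perpendicular(point, seg_start, seg_end):
    """Project `point` onto the line through the section. Returns the foot point
    and the parameter t (0 <= t <= 1 means the foot lies on the section itself)."""
    p, a, b = (np.asarray(v, dtype=float) for v in (point, seg_start, seg_end))
    d = b - a
    t = np.dot(p - a, d) / np.dot(d, d)
    return a + t * d, t

def merge_sections(first, second):
    """Connect two adjacent road edge sections (each a (start, end) pair)
    into one integrated road edge section, following the two cases in the text."""
    foot1, t1 = foot_of_perpendicular(first[1], second[0], second[1])  # drop foot of first's end on second
    foot2, t2 = foot_of_perpendicular(second[0], first[0], first[1])   # drop foot of second's start on first
    if t1 < 0 and t2 > 1:
        # Both drop feet fall on the extensions: join end of first to start of second directly.
        pts = [first[0], first[1], second[0], second[1]]
    elif 0 <= t1 <= 1 and 0 <= t2 <= 1:
        # Both drop feet fall on the sections: drop the overlapping pieces and join the two feet.
        pts = [first[0], foot2, foot1, second[1]]
    else:
        pts = [first[0], first[1], second[0], second[1]]  # fallback: direct join
    return [np.asarray(p, dtype=float) for p in pts]
```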
In some embodiments, the generating a moving path according to the road edge point cloud data and controlling the intelligent robot to move according to the moving path includes: offsetting the integrated road edge section by a predetermined width according to the direction data; connecting two adjacent offset integrated road edge sections to generate the moving path; and controlling the intelligent robot to move according to the moving path.
In this embodiment, after a plurality of integrated road edge sections are determined, the integrated road edge sections are offset toward the road side by a predetermined width, so that the intelligent robot can move along them without getting so close to the road edge that it collides with it. Because the direction data of the integrated road edge sections differ, adjacent integrated road edge sections may become disconnected after the offset, or may change from a disconnected state to a connected state. In that case, two integrated road edge sections that are not disconnected are connected into a new integrated road edge section. Finally, the moving path is generated from all the integrated road edge sections and the intelligent robot is controlled to move according to the moving path, so that the robot moves along the road edge while avoiding collisions with it.
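A rough sketch of the offset-and-connect step, assuming each integrated road edge section is a 2D polyline and that its direction data has already been turned into a unit normal pointing from the road edge toward the road; the 0.5 m offset width, the 0.3 m break threshold, and all names are illustrative assumptions:

```python
import numpy as np

def offset_section(points, toward_road_normal, width=0.5):
    """Shift every vertex of an integrated road edge section toward the road
    side by `width` metres, so the robot follows the edge without hitting it."""
    pts = np.asarray(points, dtype=float)
    n = np.asarray(toward_road_normal, dtype=float)
    n = n / np.linalg.norm(n)            # ensure the offset direction is unit length
    return pts + width * n

def build_path(offset_sections, break_threshold=0.3):
    """Chain the offset sections into a moving path, joining neighbours whose gap
    is below the break threshold; disconnected stretches stay as separate sub-paths."""
    sub_paths = [list(offset_sections[0])]
    for sec in offset_sections[1:]:
        gap = np.linalg.norm(np.asarray(sec[0]) - np.asarray(sub_paths[-1][-1]))
        if gap <= break_threshold:
            sub_paths[-1].extend(sec)    # not disconnected: merge into the current stretch
        else:
            sub_paths.append(list(sec))  # disconnected: start a new stretch
    return sub_paths
```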
The path generation device of the embodiment of the application is used for the intelligent robot and comprises an acquisition module, an identification module and a processing module. The acquisition module is used for acquiring continuous multi-frame point cloud data; the identification module is used for identifying road edge point cloud data in the multi-frame point cloud data, and the road edge point cloud data is point cloud data corresponding to points on road edges of roads; the processing module is used for generating a moving path according to the road edge point cloud data and controlling the intelligent robot to move according to the moving path.
In the path generation device of the embodiment of the application, continuous multi-frame point cloud data are first acquired, then the road edge point cloud data in the point cloud data are identified, and finally the moving path is generated according to the road edge point cloud data. Because the moving path is generated from the road edge point cloud data, it can closely follow the road edge, and the cleaning effect is good when the intelligent robot is controlled to move along this edge-hugging path to clean the road.
In some embodiments, the obtaining module may be further configured to obtain three-dimensional coordinate data of the multiple frames of data in the first coordinate system; determining a transformation matrix of the first coordinate system and the second coordinate system based on the pose of the intelligent robot; and converting the three-dimensional coordinate data into the point cloud data under the second coordinate system according to the conversion matrix to obtain the multi-frame point cloud data.
In some embodiments, the identification module may be configured to calculate a ground normal vector from current frame point cloud data and a predetermined number of frames of point cloud data preceding the current frame point cloud data; calculating a normal vector of the current frame, the curvature of the current frame, the height of the current frame and a second derivative of a neighborhood of the current frame according to the point cloud data of the current frame; and when the inner product of the normal vector of the current frame and the ground normal vector is smaller than a preset inner product threshold value, the height of the current frame is in a preset height range, the curvature of the current frame is larger than a preset curvature threshold value, and the neighborhood second derivative corresponding to the point cloud data of the current frame is larger than a preset second derivative threshold value, determining the point cloud data of the current frame as the road edge point cloud data.
In some embodiments, the path generating apparatus further includes a first determining module, where the first determining module determines that the current frame point cloud data is noise point cloud data when an inner product of the current frame normal vector and the ground normal vector is greater than a predetermined inner product threshold, the current frame height is outside a predetermined height range, the current frame curvature is smaller than a predetermined curvature threshold, or a neighborhood second derivative corresponding to the current frame point cloud data is smaller than a predetermined second derivative threshold.
In some embodiments, the identification module may be configured to store the current frame point cloud data and the point cloud data of the predetermined number of frames in a queue with a predetermined length, where the predetermined length is determined according to the predetermined number of frames; accumulate all point cloud data in the queue into continuous frame point cloud data; and calculate the ground normal vector according to the continuous frame point cloud data and a first function prestored in a point cloud library.
In some embodiments, the path generation apparatus further comprises a first calculation module, a determination module, a second calculation module, and a storage module. The first calculation module is used for calculating a fitting straight line from multiple frames of the road edge point cloud data based on a random sampling consistency algorithm; the determination module is used for determining that the road edge point cloud data is interior point cloud data when the distance between the road edge point cloud data and the fitting straight line is smaller than a first predetermined distance; the second calculation module is used for determining the projection point of each interior point cloud data on the fitting straight line and calculating the inter-point distance between adjacent projection points when the ratio of the number of interior point cloud data to the multiple frames of road edge point cloud data is greater than a predetermined ratio; and the storage module is used for saving the fitting straight line as a road edge section when the maximum inter-point distance is smaller than a second predetermined distance, wherein the road edge section comprises a start point coordinate, an end point coordinate, and direction data indicating the side of the two sides of the road edge section that is closer to the road.
In some embodiments, the path generating device further comprises a judging module and a connecting module. The judging module can be used for judging whether the two adjacent road edge sections are disconnected according to the starting point coordinates and the end point coordinates of the two adjacent road edge sections; the connecting module is used for connecting the two adjacent road edge sections into an integrated road edge section when the two adjacent road edge sections are not disconnected.
In some embodiments, the connection module is configured to determine a first drop foot of the end point of the first edge segment on the second edge segment and a second drop foot of the start point of the second edge segment on the first edge segment; when the first drop foot is positioned on the extension line of the second road edge section and the second drop foot is positioned on the extension line of the first road edge section, connecting the end point of the first road edge section and the starting point of the second road edge section to generate the integrated road edge section; and when the first drop foot is positioned on the second road edge section and the second drop foot is positioned on the first road edge section, deleting a line segment between the first drop foot and the starting point of the second road edge section in the second road edge section and deleting a line segment between the second drop foot and the ending point of the first road edge section in the first road edge section, and connecting the first drop foot and the second drop foot to generate the integrated road edge section.
In certain embodiments, the processing module may be operable to offset the integrated road edge section by a predetermined width according to the direction data; connect two adjacent offset integrated road edge sections to generate the moving path; and control the intelligent robot to move according to the moving path.
The intelligent robot of the embodiment of the application comprises one or more processors and a memory; and one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the path generation method of any of the embodiments described above.
In the intelligent robot of the embodiment of the application, continuous multi-frame point cloud data are first acquired, then the road edge point cloud data in the point cloud data are identified, and finally the moving path is generated according to the road edge point cloud data. Because the moving path is generated from the road edge point cloud data, it can closely follow the road edge, and the cleaning effect is good when the intelligent robot is controlled to move along this edge-hugging path to clean the road.
The embodiments of the present application also provide a non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the path generation method of any of the embodiments described above.
With the computer-readable storage medium, continuous multi-frame point cloud data are first acquired, then the road edge point cloud data in the point cloud data are identified, and finally a moving path is generated according to the road edge point cloud data. Because the moving path is generated from the road edge point cloud data, it can closely follow the road edge, and the cleaning effect is good when the intelligent robot is controlled to move along this edge-hugging path to clean the road.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of a path generation method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of a smart robot module according to some embodiments of the present application;
FIG. 3 is a block diagram of a path generation apparatus according to some embodiments of the present application;
FIG. 4 is a schematic flow chart diagram of a path generation method in accordance with certain embodiments of the present application;
FIG. 5 is a schematic flow chart diagram of a path generation method according to some embodiments of the present application;
FIG. 6 is a schematic flow chart diagram of a path generation method in accordance with certain embodiments of the present application;
FIG. 7 is a block diagram of a path generation apparatus according to some embodiments of the present application;
FIG. 8 is a schematic flow chart diagram of a path generation method in accordance with certain embodiments of the present application;
FIG. 9 is a schematic flow chart diagram of a path generation method in accordance with certain embodiments of the present application;
FIG. 10 is a block diagram of a path generation apparatus according to some embodiments of the present application;
FIG. 11 is a schematic flow chart diagram of a path generation method in accordance with certain embodiments of the present application;
FIG. 12 is a block diagram of a path generation apparatus according to some embodiments of the present application;
FIG. 13 is a schematic flow chart diagram of a path generation method in accordance with certain embodiments of the present application;
FIG. 14 is a schematic diagram of a scenario of a path generation method according to some embodiments of the present application;
FIG. 15 is a schematic diagram of a scenario of a path generation method according to some embodiments of the present application;
FIG. 16 is a schematic flow chart diagram of a path generation method according to some embodiments of the present application;
FIG. 17 is a schematic diagram of a connection between a computer-readable storage medium and a processor according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1 and 2, a path generating method according to an embodiment of the present disclosure may be applied to an intelligent robot 100, and the path generating method includes the steps of:
011: acquiring continuous multi-frame point cloud data;
012: identifying road edge point cloud data in the multi-frame point cloud data, wherein the road edge point cloud data is point cloud data corresponding to points on road edges of roads;
013: and generating a moving path according to the road edge point cloud data and controlling the intelligent robot 100 to move according to the moving path.
The intelligent robot 100 of the present embodiment includes one or more processors 10, a memory 20, and one or more programs, where the one or more programs are stored in the memory 20 and executed by the one or more processors 10, and the programs include instructions for executing the path generation method of the present embodiment. When the processor 10 executes the program, the processor 10 may be configured to implement the path generation method according to any embodiment of the present application. When the processor 10 executes the program, the processor 10 is configured to perform step 011, step 012, and step 013. That is, the processor 10 may be configured to obtain continuous multi-frame point cloud data; identifying road edge point cloud data in the multi-frame point cloud data, wherein the road edge point cloud data is point cloud data corresponding to points on road edges of roads; and generating a moving path according to the road edge point cloud data and controlling the intelligent robot 100 to move according to the moving path.
Referring to fig. 3, the path generating apparatus 200 according to the embodiment of the present disclosure includes an obtaining module 211, an identifying module 212, and a processing module 213. The obtaining module 211, the identifying module 212 and the processing module 213 can be used to implement step 011, step 012 and step 013, respectively. Namely, the obtaining module 211 is configured to obtain continuous multi-frame point cloud data; the identification module 212 is configured to identify road edge point cloud data in the multi-frame point cloud data, where the road edge point cloud data is point cloud data corresponding to points on a road edge of a road; the processing module 213 is configured to generate a moving path according to the road edge point cloud data and control the intelligent robot 100 to move according to the moving path.
The intelligent robot 100 may specifically be a sweeper, a scrubber, a vacuum cleaner, or the like. The intelligent robot 100 may also include elements such as a communication interface 30 and a cleaning implement. The intelligent robot 100 may be used to clean surfaces such as floors, floor tiles, pavements, or cement ground.
Specifically, the intelligent robot 100 may be provided with a positioning device (e.g., a laser ranging and imaging device). The positioning device can localize itself based on Simultaneous Localization and Mapping (SLAM) technology and can obtain the three-dimensional coordinates of each position in the current scene in real time. The processor 10 can obtain point cloud data of each frame by processing the three-dimensional coordinates obtained in that frame, and can build a map of the current scene from the multi-frame point cloud data. The multi-frame point cloud data include point cloud data for each position of the scene. To achieve edge-hugging cleaning by the intelligent robot 100, the processor 10 can identify the road edge point cloud data in the multi-frame point cloud data (that is, the point cloud data corresponding to points located on the road edge in the current scene). After identifying the road edge point cloud data, the processor 10 can determine the position of the road edge in the constructed map, generate a moving path accordingly, and then control the intelligent robot 100 to move along the moving path to perform a cleaning task (for example, a task that requires focused cleaning along the road edge).
In the path generation method, the path generation device 200, and the intelligent robot 100 according to the embodiment of the present application, continuous multi-frame point cloud data are first obtained, then the road edge point cloud data in the multi-frame point cloud data are identified, and finally a moving path is generated according to the road edge point cloud data. Because the moving path is generated from the road edge point cloud data, it can closely follow the road edge, and the cleaning effect is good when the intelligent robot 100 is controlled to move along this edge-hugging path to clean the road.
Referring to FIG. 4, in some embodiments, step 011 includes the steps of:
0111: acquiring three-dimensional coordinate data of multi-frame data in a first coordinate system;
0112: determining a transformation matrix of the first coordinate system and the second coordinate system based on the pose of the intelligent robot 100; and
0113: and converting the three-dimensional coordinate data into point cloud data under a second coordinate system according to the conversion matrix to obtain multi-frame point cloud data.
Referring to fig. 2, in some embodiments, the processor 10 may be further configured to implement step 0111, step 0112, and step 0113. That is, the processor 10 may also be configured to obtain three-dimensional coordinate data of multiple frames of data in the first coordinate system; determining a transformation matrix of the first coordinate system and the second coordinate system based on the pose of the intelligent robot 100; and converting the three-dimensional coordinate data into point cloud data under a second coordinate system according to the conversion matrix to obtain multi-frame point cloud data.
Referring to fig. 3, in some embodiments, the obtaining module 211 may further be configured to perform step 0111, step 0112, and step 0113. That is, the obtaining module 211 may further be configured to obtain three-dimensional coordinate data of multiple frames of data in the first coordinate system; determine a transformation matrix of the first coordinate system and the second coordinate system based on the pose of the intelligent robot 100; and convert the three-dimensional coordinate data into point cloud data under the second coordinate system according to the transformation matrix to obtain multi-frame point cloud data.
Specifically, the positioning device on the intelligent robot 100 may obtain, in real time, three-dimensional coordinate data of the current scene. The three-dimensional coordinate data are coordinates in a first coordinate system, which may be the laser coordinate system of the intelligent robot 100. Since the laser coordinate system changes constantly as the intelligent robot 100 moves, for convenience of data processing the three-dimensional coordinate data need to be converted into a single static coordinate system (i.e., a second coordinate system), which may be the world coordinate system.
The processor 10 can then acquire the pose of the intelligent robot 100 from a pose detection device (such as a gyroscope) arranged on the intelligent robot 100. When three-dimensional coordinate data in the first coordinate system are converted into point cloud data in the second coordinate system, different poses correspond to different transformation matrices. The processor 10 therefore first determines the transformation matrix corresponding to the pose and then converts the three-dimensional coordinate data in the first coordinate system into point cloud data in the second coordinate system based on the transformation matrix, thereby obtaining the multi-frame point cloud data. In this embodiment, the three-dimensional coordinate data can be accurately converted into point cloud data based on the three-dimensional coordinate data acquired by the intelligent robot 100 and the transformation matrix determined from the pose of the intelligent robot 100.
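A minimal sketch of this conversion, assuming the pose yields a 4x4 homogeneous transformation matrix from the laser (first) coordinate system to the world (second) coordinate system; the planar pose (x, y, yaw) construction and all names are illustrative only:

```python
import numpy as np

def pose_to_transform(x, y, yaw):
    """Build a 4x4 homogeneous transform from a planar robot pose (illustrative only)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0, x],
                     [s,  c, 0.0, y],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def to_world_frame(points_laser, transform):
    """Convert Nx3 points from the laser (first) frame into the world (second) frame."""
    pts = np.asarray(points_laser, dtype=float)
    homogeneous = np.hstack([pts, np.ones((pts.shape[0], 1))])  # Nx4 homogeneous coordinates
    return (transform @ homogeneous.T).T[:, :3]

# One frame of three-dimensional coordinate data expressed in the laser frame.
frame = np.array([[1.0, 0.2, 0.05], [1.1, 0.25, 0.06]])
world_points = to_world_frame(frame, pose_to_transform(2.0, 3.0, np.pi / 2))
```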
Referring to fig. 5, in some embodiments, step 012 includes the steps of:
0121: calculating a ground normal vector according to the current frame point cloud data and the point cloud data of a preset frame number before the current frame point cloud data;
0122: calculating a normal vector of the current frame, the curvature of the current frame, the height of the current frame and a second derivative of a neighborhood of the current frame according to the point cloud data of the current frame; and
0123: and when the inner product of the normal vector of the current frame and the ground normal vector is smaller than a preset inner product threshold value, the height of the current frame is in a preset height range, the curvature of the current frame is larger than a preset curvature threshold value, and the neighborhood second derivative corresponding to the point cloud data of the current frame is larger than a preset second derivative threshold value, determining the point cloud data of the current frame as the road edge point cloud data.
Referring to fig. 2, in some embodiments, processor 10 may be configured to perform steps 0121, 0122, and 0123. That is, the processor 10 may be configured to calculate a ground normal vector according to the current frame point cloud data and point cloud data of a predetermined number of frames before the current frame point cloud data; calculating a normal vector of the current frame, the curvature of the current frame, the height of the current frame and a second derivative of a neighborhood of the current frame according to the point cloud data of the current frame; and when the inner product of the normal vector of the current frame and the ground normal vector is smaller than a preset inner product threshold value, the height of the current frame is in a preset height range, the curvature of the current frame is larger than a preset curvature threshold value, and the neighborhood second derivative corresponding to the point cloud data of the current frame is larger than a preset second derivative threshold value, determining the point cloud data of the current frame as the road edge point cloud data.
Referring to fig. 3, in some embodiments, the identification module 212 may be used to perform steps 0121, 0122, and 0123. That is, the identification module 212 may be configured to calculate a ground normal vector according to the current frame point cloud data and the point cloud data of a predetermined frame number before the current frame point cloud data; calculating a normal vector of the current frame, the curvature of the current frame, the height of the current frame and a second derivative of a neighborhood of the current frame according to the point cloud data of the current frame; and when the inner product of the normal vector of the current frame and the ground normal vector is smaller than a preset inner product threshold value, the height of the current frame is in a preset height range, the curvature of the current frame is larger than a preset curvature threshold value, and the neighborhood second derivative corresponding to the point cloud data of the current frame is larger than a preset second derivative threshold value, determining the point cloud data of the current frame as the road edge point cloud data.
Specifically, in order to accurately determine whether the current frame point cloud data is road edge point cloud data, the processor 10 may first calculate a ground normal vector from the current frame point cloud data and the point cloud data of a predetermined number of frames before it, where the predetermined number of frames is a positive integer greater than 0, such as 1, 3, 7, or 15 (concretely, the ground normal vector may be calculated from the current frame point cloud data, the point cloud data of the predetermined number of preceding frames, and a first function prestored in the point cloud library). The ground normal vector calculated in this way is highly correlated with the current frame point cloud data, which prevents the errors that would arise if the inner product of the current frame normal vector and the ground normal vector were computed with an out-of-date ground normal when the ground normal changes, for example when the intelligent robot 100 goes up or down a slope, and thus improves the accuracy of the inner product calculation.
The processor 10 calculates the current frame normal vector, the current frame curvature, the current frame height, and the current frame neighborhood second derivative from the point cloud data of the current frame. Specifically, the processor 10 can calculate the current frame normal vector from the current frame point cloud data and a second function prestored in the point cloud library, the current frame curvature from the current frame point cloud data and a third function prestored in the point cloud library, and the current frame height from the current frame point cloud data and a fourth function prestored in the point cloud library; the current frame neighborhood second derivative is calculated from the current frame point cloud data, the point cloud data of a preset number of frames before and after the current frame point cloud data (the preset number is a preset value such as 1, 3, 4, or 5), and a fifth function prestored in the point cloud library. The first function, the second function, the third function, the fourth function, and the fifth function are all functions prestored in the point cloud library. For brevity, the following description only takes as an example the calculation of the current frame neighborhood second derivative from the point cloud data of a preset number of frames (for example, 5) before and after the current frame point cloud data and the fifth function prestored in the point cloud library.
For example, the fifth function contains the following formulas: d1 = range[i-5] + range[i-4] + range[i-3]; d2 = range[i+5] + range[i+4] + range[i+3]; d3 = range[i-2] + range[i-1] + range[i]; d4 = range[i+2] + range[i+1] + range[i]; D = d1 + d2 - d3 - d4; where i is the index of the current point corresponding to the current frame point cloud data, range[i] is the distance value of the i-th point, range[i-k] and range[i+k] are the distance values of the k-th point before and the k-th point after the current point respectively, k is a positive integer greater than 0, d1 to d4 are intermediate quantities, and D is the current frame neighborhood second derivative. In this way, the processor 10 can accurately calculate the ground normal vector, the current frame normal vector, the current frame curvature, the current frame height, and the current frame neighborhood second derivative.
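The formulas of the fifth function translate directly into code. The sketch below assumes `ranges` holds the ordered per-point distance values around the current frame point cloud data, with `i` the index of the current point; the function name is hypothetical:

```python
import numpy as np

def neighborhood_second_derivative(ranges, i):
    """Compute D = d1 + d2 - d3 - d4 for point i, as in the formulas above
    (requires at least 5 valid distance values on each side of i)."""
    r = np.asarray(ranges, dtype=float)
    d1 = r[i - 5] + r[i - 4] + r[i - 3]
    d2 = r[i + 5] + r[i + 4] + r[i + 3]
    d3 = r[i - 2] + r[i - 1] + r[i]
    d4 = r[i + 2] + r[i + 1] + r[i]
    return d1 + d2 - d3 - d4

# Constant ranges give D = 0; an abrupt step around point i makes |D| large.
print(neighborhood_second_derivative(np.full(21, 1.0), 10))
```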
After the ground normal vector, the current frame normal vector, the current frame curvature, the current frame height, and the current frame neighborhood second derivative have been calculated, the conditions that a road edge point must satisfy are considered: it lies on a vertical surface (i.e., a surface perpendicular to the ground), its height is low (lower than the height of a typical road edge, e.g., 20 cm), and its height changes abruptly compared with the surrounding ground (for example, points on the road all have substantially the same height, so the height difference between road points changes little, whereas the height difference between a road edge point and a road point becomes larger, i.e., the height changes abruptly). According to these conditions, the processor 10 may determine whether the point corresponding to the current frame point cloud data is a road edge point by judging whether the inner product of the current frame normal vector and the ground normal vector is less than or equal to a predetermined inner product threshold (hereinafter condition a), whether the current frame height is within a predetermined height range (hereinafter condition b), whether the current frame curvature is greater than or equal to a predetermined curvature threshold (hereinafter condition c), and whether the neighborhood second derivative corresponding to the current frame point cloud data is greater than or equal to a predetermined second derivative threshold (hereinafter condition d). Condition a determines whether the point lies on a vertical surface, condition b determines whether the point is low, and conditions c and d together determine whether the height of the point changes abruptly compared with the surrounding ground. The predetermined inner product threshold may be, for example, 0.2, 0.3, or 0.5; the predetermined height range may be [0.05 meters, 0.1 meters], [0.07 meters, 0.11 meters], [0.08 meters, 0.12 meters], or the like; the predetermined curvature threshold may be 0.02, 0.031, 0.035, 0.04, or the like; and the predetermined second derivative threshold may be 0.2, 0.3, 0.5, or the like.
When the condition a, the condition b, the condition c and the condition d are all satisfied (that is, when the inner product of the normal vector of the current frame and the normal vector of the ground is smaller than a predetermined inner product threshold, the height of the current frame is within a predetermined height range, the curvature of the current frame is greater than a predetermined curvature threshold, and the neighborhood second derivative corresponding to the point cloud data of the current frame is greater than a predetermined second derivative threshold), the point can be determined to be a road edge point, so that whether the point cloud data of the current frame is road edge point cloud data or not can be accurately judged.
In other embodiments, the current frame point cloud data may be determined to be the road edge point cloud data only when any three of the condition a, the condition b, the condition c, and the condition d are satisfied, or may be determined to be the road edge point cloud data when any two of the condition a, the condition b, the condition c, and the condition d are satisfied.
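As a sketch of the classification rule in the embodiment where all four conditions must hold, the following function applies conditions a to d, using example threshold values from the ranges quoted above; the feature values (normal vectors, curvature, height, neighborhood second derivative) are assumed to have been computed already, and all names are illustrative:

```python
import numpy as np

def classify_point(frame_normal, ground_normal, height, curvature, second_derivative,
                   inner_product_threshold=0.3,       # example value from the text
                   height_range=(0.05, 0.10),         # metres, example range from the text
                   curvature_threshold=0.035,         # example value from the text
                   second_derivative_threshold=0.3):  # example value from the text
    """Return 'road_edge' if conditions a-d all hold, otherwise 'noise'.
    Assumes unit, consistently oriented normal vectors."""
    cond_a = np.dot(frame_normal, ground_normal) <= inner_product_threshold  # on a vertical surface
    cond_b = height_range[0] <= height <= height_range[1]                    # low height
    cond_c = curvature >= curvature_threshold                                # abrupt height change...
    cond_d = second_derivative >= second_derivative_threshold                # ...confirmed by the neighborhood
    return "road_edge" if (cond_a and cond_b and cond_c and cond_d) else "noise"
```

A point that fails any condition is treated as noise point cloud data, matching the noise determination described in the following embodiment.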
Referring to fig. 6, in some embodiments, the path generation method further includes the steps of:
014: and when the inner product of the normal vector of the current frame and the ground normal vector is larger than a preset inner product threshold value, the height of the current frame is out of a preset height range, the curvature of the current frame is smaller than a preset curvature threshold value, or the neighborhood second derivative corresponding to the point cloud data of the current frame is smaller than a preset second derivative threshold value, determining the point cloud data of the current frame as noise point cloud data.
Referring to fig. 2, in some embodiments, processor 10 is further configured to perform step 014. That is, the processor 10 is further configured to determine that the current frame point cloud data is noise point cloud data when an inner product of the current frame normal vector and the ground normal vector is greater than a predetermined inner product threshold, the current frame height is outside a predetermined height range, the current frame curvature is less than a predetermined curvature threshold, or a neighborhood second derivative corresponding to the current frame point cloud data is less than a predetermined second derivative threshold.
Referring to fig. 7, in some embodiments, the path generation apparatus 200 further includes a first determining module 214. The first determination module 214 may be used to implement step 014. That is, the first determining module 214 may be configured to determine that the current frame point cloud data is noise point cloud data when an inner product of the current frame normal vector and the ground normal vector is greater than a predetermined inner product threshold, a current frame height is outside a predetermined height range, a current frame curvature is smaller than a predetermined curvature threshold, or a neighborhood second derivative corresponding to the current frame point cloud data is smaller than a predetermined second derivative threshold.
It can be understood that when the current frame point cloud data does not satisfy any one of conditions a, b, c, and d (i.e., the inner product of the current frame normal vector and the ground normal vector is greater than the predetermined inner product threshold, the current frame height is outside the predetermined height range, the current frame curvature is less than the predetermined curvature threshold, or the neighborhood second derivative corresponding to the current frame point cloud data is less than the predetermined second derivative threshold), the point corresponding to the current frame point cloud data does not have the features of a road edge point (it is not located on a vertical surface, its height is too high, or its height does not change abruptly compared with the surrounding ground). It can therefore be determined that the current frame point cloud data is not road edge point cloud data but may be noise point cloud data formed by noise in the scene (e.g., raised stones and obstacles on the road). In this way, the road edge point cloud data and the noise point cloud data can be accurately distinguished.
Referring to fig. 8, in certain embodiments, step 0121 comprises the steps of:
01211: storing the current frame point cloud data and the point cloud data of a preset frame number into a queue with a preset length, wherein the preset length is determined according to the preset frame number;
01212: accumulating all point cloud data in the queue into continuous frame point cloud data; and
01213: and calculating a ground normal vector according to the continuous frame point cloud data and a first function prestored in the point cloud library.
Referring to fig. 2, in some embodiments, processor 10 may be configured to perform steps 01211, 01212, and 01213. That is, the processor 10 may be configured to store the current frame point cloud data and the point cloud data of the predetermined number of frames into a queue of a predetermined length, where the predetermined length is determined according to the predetermined number of frames; accumulate all point cloud data in the queue into continuous frame point cloud data; and calculate a ground normal vector according to the continuous frame point cloud data and a first function prestored in the point cloud library.
Referring to fig. 3, in some embodiments, the identification module 212 may be used to perform steps 01211, 01212, and 01213. That is, the identification module 212 may be configured to store the current frame point cloud data and the point cloud data of the predetermined number of frames into a queue with a predetermined length, where the predetermined length is determined according to the predetermined number of frames; accumulate all point cloud data in the queue into continuous frame point cloud data; and calculate a ground normal vector according to the continuous frame point cloud data and a first function prestored in the point cloud library.
Specifically, when the current frame point cloud data is acquired, the processor 10 stores the current frame point cloud data and the point cloud data of the predetermined number of frames preceding it in a queue of a predetermined length. The queue has a head and a tail and follows the first-in-first-out principle: the current frame point cloud data enters at the queue tail, and once the queue is full, each time a current frame of point cloud data is stored at the tail, the point cloud data at the head leaves the queue. The predetermined length is determined according to the predetermined number of frames; for example, if the predetermined number of frames to be stored in the queue is K, the predetermined length may be K + 1, where K is a positive integer greater than 0. The processor 10 then accumulates all the point cloud data in the queue into continuous frame point cloud data and calculates the ground normal vector from the continuous frame point cloud data and a first function prestored in the point cloud library. Each time a new frame of point cloud data is acquired the queue changes, so the ground normal vector is recalculated from the continuous frame point cloud data accumulated from the updated queue. Every calculation of the ground normal vector is thus based on the current frame point cloud data and the point cloud data of the predetermined number of preceding frames, so the obtained ground normal vector has high accuracy.
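A sketch of this sliding-window accumulation, using a fixed-length deque for the queue and a least-squares plane fit via SVD in place of the unspecified first function from the point cloud library; the window size K = 7 is one of the example frame numbers mentioned above, and the plane-fit choice and all names are assumptions:

```python
from collections import deque
import numpy as np

K = 7                                # predetermined number of preceding frames (example value)
frame_queue = deque(maxlen=K + 1)    # predetermined length K + 1; the oldest frame drops out automatically

def ground_normal(frames):
    """Accumulate the queued frames into one point set and fit a plane to it;
    the plane normal (unit length, oriented upward) approximates the ground normal vector."""
    points = np.vstack(list(frames))                 # continuous-frame point cloud data
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                                  # direction of least variance = plane normal
    return normal if normal[2] >= 0 else -normal     # orient consistently upward (assumes z-up world frame)

def on_new_frame(frame_points):
    """Push a new Nx3 frame to the queue tail and recompute the ground normal."""
    frame_queue.append(np.asarray(frame_points, dtype=float))
    return ground_normal(frame_queue)
```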
Referring to fig. 9, in some embodiments, the path generation method further includes:
015: calculating a fitting straight line according to multi-frame road edge point cloud data based on a random sampling consistency algorithm;
016: when the distance between the road edge point cloud data and the fitting straight line is smaller than a first preset distance, determining that the road edge point cloud data is interior point cloud data;
017: when the ratio of the number of interior point cloud data to the multiple frames of road edge point cloud data is greater than a predetermined ratio, determining the projection point of each interior point cloud data on the fitting straight line, and calculating the inter-point distance between adjacent projection points; and
018: when the maximum inter-point distance is smaller than a second predetermined distance, saving the fitting straight line as a road edge section, wherein the road edge section comprises a start point coordinate, an end point coordinate, and direction data indicating the side of the two sides of the road edge section that is closer to the road.
Referring to fig. 2, in some embodiments, the processor 10 may be configured to perform steps 015, 016, 017 and 018. That is, the processor 10 may be configured to: calculate a fitting straight line from the multiple frames of road edge point cloud data based on a random sample consensus (RANSAC) algorithm; when the distance between the road edge point cloud data and the fitting straight line is smaller than a first predetermined distance, determine that the road edge point cloud data is interior point cloud data; when the ratio of the interior point cloud data to the multiple frames of road edge point cloud data is greater than a predetermined ratio, determine a projection point of each piece of interior point cloud data on the fitting straight line and calculate the inter-point distance between adjacent projection points; and when the maximum inter-point distance is smaller than a second predetermined distance, store the fitted straight line as a road edge section, wherein the road edge section includes a starting point coordinate, an end point coordinate, and direction data indicating which of its two sides is closer to the road.
Referring to fig. 10, in some embodiments, the path generating apparatus 200 further includes a first calculating module 215, a second determining module 216, a second calculating module 217, and a storing module 218, which are used to implement step 015, step 016, step 017 and step 018, respectively. That is, the first calculating module 215 is configured to calculate a fitting straight line from the multiple frames of road edge point cloud data based on a random sample consensus (RANSAC) algorithm; the second determining module 216 is configured to determine that the road edge point cloud data is interior point cloud data when the distance between the road edge point cloud data and the fitted straight line is smaller than a first predetermined distance; the second calculating module 217 is configured to determine a projection point of each piece of interior point cloud data on the fitting straight line and calculate the inter-point distance between adjacent projection points when the ratio of the interior point cloud data to the multiple frames of road edge point cloud data is greater than a predetermined ratio; and the storing module 218 is configured to store the fitted straight line as a road edge section when the maximum inter-point distance is smaller than a second predetermined distance, where the road edge section includes a starting point coordinate, an end point coordinate, and direction data indicating which of its two sides is closer to the road.
Specifically, after multiple frames of road edge point cloud data are identified, the processor 10 may fit them to a straight line based on a random sample consensus (RANSAC) algorithm, and then judge whether each piece of road edge point cloud data is interior point cloud data from its distance to the fitted straight line. It can be understood that the smaller the distance, the higher the correlation between the road edge point cloud data and the fitted straight line. When the distance from the road edge point cloud data to the fitted straight line is smaller than a first predetermined distance, the road edge point cloud data may be determined to be interior point cloud data, where the first predetermined distance is a preset value (for example, 0.08 m, 0.09 m, 0.1 m, 0.2 m, and the like). When the interior point cloud data accounts for a large proportion of the multiple frames of road edge point cloud data (for example, the proportion is greater than or equal to a predetermined ratio such as 80%, 85% or 90%), at least that proportion of the point cloud data is highly correlated with the fitting straight line, which indicates that the fitted straight line is accurate.
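For illustration, the following Python sketch shows one way such a RANSAC line fit and interior-point test could look, assuming the curb points have been projected onto the ground plane as 2D coordinates. The 0.1 m distance threshold and 80% ratio mirror the example values above; the function name fit_line_ransac and the iteration count are assumptions, not the patent's implementation.

```python
# Minimal RANSAC line fit for 2D curb points; fit_line_ransac is a hypothetical
# helper, and the thresholds follow the example values in the text.
import numpy as np


def fit_line_ransac(points: np.ndarray, dist_thresh: float = 0.1,
                    iterations: int = 200, seed: int = 0):
    """Return (point_on_line, unit_direction, inlier_mask) of the best line."""
    rng = np.random.default_rng(seed)
    best_p, best_d = points[0], np.array([1.0, 0.0])
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        diff = points - points[i]
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs(diff[:, 0] * d[1] - diff[:, 1] * d[0])
        mask = dist < dist_thresh                 # "interior point cloud data"
        if mask.sum() > best_mask.sum():
            best_p, best_d, best_mask = points[i], d, mask
    return best_p, best_d, best_mask


# A noisy curb lying roughly along the x-axis.
pts = np.column_stack([np.linspace(0.0, 5.0, 60), 0.02 * np.random.randn(60)])
p0, d0, inliers = fit_line_ransac(pts)
if inliers.mean() >= 0.8:                         # predetermined ratio, e.g. 80%
    print("fitted line accepted, inlier ratio:", inliers.mean())
```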
At this time, the processor 10 may determine the projection point of each piece of interior point cloud data on the fitting straight line and calculate the inter-point distance between adjacent projection points. Since the road edge points are continuous, the inter-point distance between adjacent projection points should not be too large. When the maximum inter-point distance between adjacent projection points is less than or equal to the second predetermined distance (for example, 0.1 meter, 0.2 meter, 0.3 meter, 0.6 meter, and the like), the road edge point cloud data corresponding to the fitting straight line is continuous, and the processor 10 may store the fitting straight line as a structure of a road edge section. In the structure, the starting point of the continuous multi-frame point cloud data corresponding to the fitting straight line (the continuous multi-frame point cloud data is continuous in time, and the first obtained frame provides the starting point) is used as the starting point coordinate, the end point of the continuous multi-frame point cloud data (provided by the last obtained frame) is used as the end point coordinate, and the indication of which of the two sides of the road edge section is on the side of the road is used as the direction data. When the inter-point distance between two adjacent projection points is too large (for example, greater than the second predetermined distance), the road edge points corresponding to those two projection points are no longer continuous and the road edge should be broken there; in this case, the fitted straight line may be stored as two road edge sections. Of the two road edge sections, the starting point of the first road edge section is the starting point of the continuous multi-frame point cloud data corresponding to the fitted straight line, the end point of the first road edge section is the point cloud data corresponding to the projection point, of the two projection points at the break, located on the first road edge section, the starting point of the second road edge section is the point cloud data corresponding to the other projection point at the break, and the end point of the second road edge section is the end point of the continuous multi-frame point cloud data corresponding to the fitted straight line. In this manner, the processor 10 is able to accurately determine the road edge sections and store their structures.
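A sketch of this segment-splitting step follows, assuming the accepted interior points and the fitted line from the previous sketch. The structure RoadEdgeSection and the 0.3 m gap threshold are hypothetical, and for simplicity the sketch uses the projection points themselves as the start and end coordinates, whereas the text uses the first and last frames of the corresponding point cloud data.

```python
# Sketch of turning accepted interior points into road edge sections by checking
# the spacing of their projections on the fitted line; RoadEdgeSection and the
# 0.3 m gap threshold are hypothetical.
from dataclasses import dataclass

import numpy as np


@dataclass
class RoadEdgeSection:
    start: np.ndarray      # starting point coordinate
    end: np.ndarray        # end point coordinate
    road_side: np.ndarray  # direction data: unit normal pointing toward the road


def split_into_sections(inlier_pts, line_point, line_dir,
                        max_gap=0.3, road_side=np.array([0.0, 1.0])):
    # Scalar projection of every interior point onto the fitted line.
    t = (inlier_pts - line_point) @ line_dir
    t_sorted = np.sort(t)
    proj = line_point + np.outer(t_sorted, line_dir)   # projection points
    sections, first = [], 0
    for k in range(1, len(proj)):
        # A gap larger than the second predetermined distance breaks the curb.
        if t_sorted[k] - t_sorted[k - 1] > max_gap:
            sections.append(RoadEdgeSection(proj[first], proj[k - 1], road_side))
            first = k
    sections.append(RoadEdgeSection(proj[first], proj[-1], road_side))
    return sections


pts = np.column_stack([np.linspace(0.0, 6.0, 40), np.zeros(40)])
pts = np.vstack([pts[:20], pts[25:]])              # leave a gap to force a break
sections = split_into_sections(pts, np.zeros(2), np.array([1.0, 0.0]))
print(len(sections))                               # expected: 2 sections
```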
Referring to fig. 11, in some embodiments, the path generation method further includes the steps of:
019: judging whether the two adjacent road edge sections are disconnected or not according to the starting point coordinates and the end point coordinates of the two adjacent road edge sections; and
020: and when the two adjacent road edge sections are not disconnected, connecting the two adjacent road edge sections into an integrated road edge section.
Referring to fig. 2, in some embodiments, the processor 10 may be configured to perform the steps 019 and 020. That is, the processor 10 may be configured to determine whether two adjacent road edge segments are disconnected according to the start point coordinates and the end point coordinates of the two adjacent road edge segments; and when the two adjacent road edge sections are not disconnected, connecting the two adjacent road edge sections into an integrated road edge section.
Referring to fig. 12, in some embodiments, the path generating apparatus 200 further includes a determining module 219 and a connecting module 220. The determining module 219 and the connecting module 220 may be configured to perform step 019 and step 020, respectively. That is, the determining module 219 may be configured to determine whether two adjacent road edge segments are disconnected according to the start point coordinates and the end point coordinates of the two adjacent road edge segments; the connection module 220 is used for connecting two adjacent road edge sections into an integrated road edge section when the two adjacent road edge sections are not disconnected.
Specifically, the continuous multiple frames of road edge point cloud data used to generate one fitted straight line may cover only part of the road edge point cloud data corresponding to a road edge, and one complete road edge may therefore be obtained by connecting several road edge sections in sequence. The processor 10 may judge whether the distance between the end point of the first road edge section and the starting point of the second road edge section in two adjacent road edge sections (a first road edge section and a second road edge section generated one after another) is greater than a third predetermined distance (for example, 0.2 m, 0.3 m, 0.5 m, and the like). When the distance is greater than the third predetermined distance, the end point of the first road edge section and the starting point of the second road edge section are discontinuous, and the two road edge sections are disconnected and located on different road edges. When the distance is smaller than or equal to the third predetermined distance, the end point of the first road edge section and the starting point of the second road edge section are continuous, and the two road edge sections are not disconnected and are located on the same road edge. The processor 10 may connect two adjacent road edge sections that are not disconnected into an integrated road edge section; when connecting them, the end point of the first road edge section and the starting point of the second road edge section may be joined by a Bézier curve so that the transition between the adjacent road edge sections is smooth. For each subsequently generated road edge section, the processor 10 again judges whether the integrated road edge section and the adjacent road edge section are disconnected, and when they are not disconnected, connects them into a new integrated road edge section. In this way, all road edge sections corresponding to one complete road edge are connected, and the finally generated integrated road edge section corresponds to a complete road edge on the road, which ensures the continuity of the moving path of the intelligent robot 100.
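The disconnection test and the Bézier join could be sketched as follows; the 0.3 m third predetermined distance, the use of a quadratic (rather than higher-order) Bézier curve, and the placement of the control point along the first section's direction are all assumptions made for illustration.

```python
# Sketch of the "disconnected or not" test and a Bézier join between adjacent
# road edge sections; the 0.3 m threshold, the quadratic Bézier and the control
# point placement are assumptions.
import numpy as np


def is_disconnected(end_of_first, start_of_second, third_distance=0.3):
    return np.linalg.norm(start_of_second - end_of_first) > third_distance


def bezier_join(p_end, p_start, first_direction, n=20):
    """Quadratic Bézier from the end of section A to the start of section B.

    The control point sits a short way along section A's direction so the
    transition leaves A tangentially (one possible smoothing choice)."""
    control = p_end + 0.5 * np.linalg.norm(p_start - p_end) * first_direction
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p_end + 2 * (1 - t) * t * control + t ** 2 * p_start


end_a = np.array([4.9, 0.05])        # end point of the first road edge section
start_b = np.array([5.1, 0.00])      # starting point of the second section
if not is_disconnected(end_a, start_b):
    # Not disconnected: join the two sections with a smooth curve.
    curve = bezier_join(end_a, start_b, first_direction=np.array([1.0, 0.0]))
    print(curve[0], curve[-1])       # starts at end_a, ends at start_b
```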
Referring to fig. 13, in some embodiments, step 020 includes the steps of:
0201: determining a first perpendicular foot of the end point of the first road edge section on the second road edge section, and a second perpendicular foot of the starting point of the second road edge section on the first road edge section;
0202: when the first perpendicular foot is located on the extension line of the second road edge section and the second perpendicular foot is located on the extension line of the first road edge section, connecting the end point of the first road edge section and the starting point of the second road edge section to generate an integrated road edge section; and
0203: when the first perpendicular foot is located on the second road edge section and the second perpendicular foot is located on the first road edge section, deleting the line segment between the first perpendicular foot and the starting point of the second road edge section from the second road edge section, deleting the line segment between the second perpendicular foot and the end point of the first road edge section from the first road edge section, and connecting the first perpendicular foot and the second perpendicular foot to generate an integrated road edge section.
Referring to FIG. 2, in some embodiments, the processor 10 is further configured to perform steps 0201, 0202, and 0203. That is, the processor 10 is configured to determine a first perpendicular foot of the end point of the first road edge section on the second road edge section, and a second perpendicular foot of the starting point of the second road edge section on the first road edge section; when the first perpendicular foot is located on the extension line of the second road edge section and the second perpendicular foot is located on the extension line of the first road edge section, connect the end point of the first road edge section and the starting point of the second road edge section to generate an integrated road edge section; and when the first perpendicular foot is located on the second road edge section and the second perpendicular foot is located on the first road edge section, delete the line segment between the first perpendicular foot and the starting point of the second road edge section from the second road edge section, delete the line segment between the second perpendicular foot and the end point of the first road edge section from the first road edge section, and connect the first perpendicular foot and the second perpendicular foot to generate an integrated road edge section.
Referring to fig. 12, in some embodiments, the connection module 220 is further configured to perform steps 0201, 0202, and 0203. That is, the connection module 220 is configured to determine a first perpendicular foot of the end point of the first road edge section on the second road edge section, and a second perpendicular foot of the starting point of the second road edge section on the first road edge section; when the first perpendicular foot is located on the extension line of the second road edge section and the second perpendicular foot is located on the extension line of the first road edge section, connect the end point of the first road edge section and the starting point of the second road edge section to generate an integrated road edge section; and when the first perpendicular foot is located on the second road edge section and the second perpendicular foot is located on the first road edge section, delete the line segment between the first perpendicular foot and the starting point of the second road edge section from the second road edge section, delete the line segment between the second perpendicular foot and the end point of the first road edge section from the first road edge section, and connect the first perpendicular foot and the second perpendicular foot to generate an integrated road edge section.
Specifically, when two adjacent road edge sections that are not disconnected (a first road edge section and a second road edge section generated one after another) are connected into an integrated road edge section, as shown in FIG. 14, the processor 10 may first determine a first perpendicular foot P2 of the end point P1 of the first road edge section A on the second road edge section B, and a second perpendicular foot P4 of the starting point P3 of the second road edge section B on the first road edge section A. If the first road edge section A and the second road edge section B are not staggered (that is, the first perpendicular foot P2 is located on the extension line of the second road edge section B, and the second perpendicular foot P4 is located on the extension line of the first road edge section A), the end point P1 of the first road edge section A and the starting point P3 of the second road edge section B may be connected together directly, or connected together by a Bézier curve (for example, the arc-shaped dashed line in FIG. 14), to generate an integrated road edge section.
In order to prevent the connection between the road edge sections from being too abrupt, as shown in FIG. 15, when the two road edge sections are staggered (that is, when the first perpendicular foot P2 is located on the second road edge section B and the second perpendicular foot P4 is located on the first road edge section A), directly connecting the end point P1 of the first road edge section A and the starting point P3 of the second road edge section B would produce a "Z"-shaped transition, whereas a road edge normally transitions smoothly and such a "Z"-shaped transition should not occur. In this case, the processor 10 connects the first perpendicular foot P2 and the second perpendicular foot P4 together directly, or connects them by a Bézier curve (for example, the arc-shaped chain line in FIG. 15), and then deletes the line segment between the first perpendicular foot P2 and the starting point P3 of the second road edge section B (i.e., P2P3) from the second road edge section B, and the line segment between the second perpendicular foot P4 and the end point P1 of the first road edge section A (i.e., P1P4) from the first road edge section A, so that the connection of the first road edge section A and the second road edge section B is smoother.
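The perpendicular-foot construction of FIGS. 14 and 15 can be sketched as below. The parameter t returned by foot_of_perpendicular tells whether the foot lies on the section (0 ≤ t ≤ 1) or on its extension; the function names and the handling of mixed cases are assumptions made for illustration only.

```python
# Sketch of the perpendicular-foot test from FIGS. 14 and 15; the helper names
# and the treatment of mixed cases are assumptions made for illustration.
import numpy as np


def foot_of_perpendicular(point, seg_start, seg_end):
    """Return (foot, t); t < 0 or t > 1 means the foot lies on the extension."""
    d = seg_end - seg_start
    t = float(np.dot(point - seg_start, d) / np.dot(d, d))
    return seg_start + t * d, t


def join_sections(a_start, a_end, b_start, b_end):
    foot1, t1 = foot_of_perpendicular(a_end, b_start, b_end)    # P2 on B
    foot2, t2 = foot_of_perpendicular(b_start, a_start, a_end)  # P4 on A
    if (t1 < 0 or t1 > 1) and (t2 < 0 or t2 > 1):
        # Not staggered (FIG. 14): connect the end of A to the start of B.
        return [a_start, a_end, b_start, b_end]
    # Staggered (FIG. 15): trim P4..P1 on A and P2..P3 on B, then connect the
    # two feet so that no "Z"-shaped transition appears. Mixed cases are folded
    # into this branch for simplicity in this sketch.
    return [a_start, foot2, foot1, b_end]


# Section A along y = 0, section B slightly above it and overlapping its end.
polyline = join_sections(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
                         np.array([4.5, 0.3]), np.array([9.0, 0.3]))
print(polyline)   # [(0, 0), (4.5, 0) = P4, (5, 0.3) = P2, (9, 0.3)]
```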
Referring to fig. 16, in some embodiments, step 013 includes:
0131: offsetting the integrated road edge section by a predetermined width according to the direction data;
0132: connecting two adjacent integrated road edge sections that are disconnected after the offset to generate a moving path; and
0133: controlling the intelligent robot 100 to move according to the moving path.
Referring to fig. 2, in some embodiments, the processor 10 is further configured to implement steps 0131, 0132 and 0133. That is, the processor 10 may be configured to offset the integrated road edge section by a predetermined width according to the direction data; connect two adjacent integrated road edge sections that are disconnected after the offset to generate a moving path; and control the intelligent robot 100 to move according to the moving path.
Referring to fig. 3, in some embodiments, the processing module 213 is further configured to implement steps 0131, 0132 and 0133. That is, the processing module 213 may be configured to offset the integrated road edge section by a predetermined width according to the direction data; connect two adjacent integrated road edge sections that are disconnected after the offset to generate a moving path; and control the intelligent robot 100 to move according to the moving path.
Specifically, after a plurality of integrated road edge sections are determined, in order to enable the intelligent robot 100 to move along them without colliding with the road edge by being too close to it, each integrated road edge section needs to be offset by a predetermined width toward the side of the road, and the side toward which an integrated road edge section is offset can be determined according to the direction data included in the structure of the road edge section. Because the direction data of different integrated road edge sections may differ, adjacent integrated road edge sections may become disconnected after the offset, or may change from disconnected to not disconnected because of the offset, where the direction data of an integrated road edge section may be determined according to the direction data of the plurality of road edge sections corresponding to it. After the offset, whether two adjacent integrated road edge sections are disconnected is judged again; two integrated road edge sections that are not disconnected are connected into a new integrated road edge section (please refer to steps 0201 to 0203), and two disconnected integrated road edge sections are also connected by a Bézier curve, so that all integrated road edge sections are connected together and the integrity of the generated moving path is ensured. After the moving path is obtained, the processor 10 can control the intelligent robot 100 to move along the moving path, so that the intelligent robot 100 can move along the road edge to perform a cleaning task while being prevented from colliding with the road edge by being too close to it.
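A minimal sketch of the offset step, assuming the direction data of an integrated road edge section is stored as a unit normal pointing toward the road and the predetermined width is 0.5 m (both assumptions):

```python
# Sketch of offsetting an integrated road edge section toward the road before
# reconnecting neighbours; the 0.5 m width and the unit-normal representation
# of the direction data are assumptions.
import numpy as np


def offset_section(start, end, road_side_normal, width=0.5):
    """Shift both endpoints by `width` along the stored direction data."""
    shift = width * road_side_normal / np.linalg.norm(road_side_normal)
    return start + shift, end + shift


# A curb along the x-axis whose direction data says the road lies toward +y.
new_start, new_end = offset_section(np.array([0.0, 0.0]), np.array([8.0, 0.0]),
                                    road_side_normal=np.array([0.0, 1.0]))
# After every integrated section is offset, adjacent sections are re-tested for
# disconnection and rejoined (directly or with a Bézier curve) as described above.
print(new_start, new_end)   # [0. 0.5] [8. 0.5]
```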
Referring to fig. 2, in some embodiments, after obtaining a plurality of road edge sections, the processor 10 may offset each road edge section according to its own direction data, then integrate two adjacent road edge sections that are not disconnected to generate an integrated road edge section, and finally connect two adjacent integrated road edge sections by a Bézier curve.
In this way, the direction data of the integrated road edge sections does not need to be calculated, which saves computation. Moreover, compared with determining the direction data of an integrated road edge section from the direction data of the plurality of road edge sections corresponding to it, the direction data of each road edge section before integration is more accurate, so the accuracy of the offset can be improved.
Referring to fig. 2 again, the memory 20 is used for storing a computer program that can run on the processor 10, and the processor 10 executes the computer program to implement the path generation method in any of the above embodiments.
The memory 20 may comprise a high-speed RAM and may also include a non-volatile memory, such as at least one magnetic disk storage. Further, the intelligent robot 100 may further include a communication interface 30, and the communication interface 30 is used for communication between the memory 20 and the processor 10.
If the memory 20, the processor 10 and the communication interface 30 are implemented independently, the communication interface 30, the memory 20 and the processor 10 may be connected to each other through a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 2, but this does not mean that there is only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 20, the processor 10, and the communication interface 30 are integrated on a chip, the memory 20, the processor 10, and the communication interface 30 may complete communication with each other through an internal interface.
The processor 10 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
Referring to fig. 17, a non-transitory computer-readable storage medium 300 according to an embodiment of the present application includes computer-executable instructions 301, which when executed by one or more processors 400, cause the processors 400 to perform a path generation method according to any embodiment of the present application.
For example, referring to fig. 1 and 2, when the computer executable instructions 301 are executed by the processor 400, the processor 400 is configured to perform the steps of:
011: acquiring continuous multi-frame point cloud data;
012: identifying road edge point cloud data in the multi-frame point cloud data, wherein the road edge point cloud data is point cloud data corresponding to points on road edges of roads;
013: and generating a moving path according to the road edge point cloud data and controlling the intelligent robot 100 to move according to the moving path.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a specified logical function or process. The scope of the preferred embodiments of the present application includes alternative implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art to which the embodiments of the present application pertain.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a system including the processor 10, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium could even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for example via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one of the following techniques known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
In the description herein, reference to the description of the terms "certain embodiments," "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means at least two, for example two or three, unless specifically limited otherwise.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations of the above embodiments may be made by those of ordinary skill in the art within the scope of the present application, which is defined by the claims and their equivalents.

Claims (11)

1. A path generation method is applied to an intelligent robot and is characterized by comprising the following steps:
acquiring continuous multi-frame point cloud data;
identifying road edge point cloud data in the multi-frame point cloud data, wherein the road edge point cloud data are point cloud data corresponding to points on road edges of roads; and
generating a moving path according to the road edge point cloud data and controlling the intelligent robot to move according to the moving path;
the identifying the road edge point cloud data in the multi-frame point cloud data comprises the following steps:
calculating a ground normal vector according to current frame point cloud data and point cloud data of a preset frame number before the current frame point cloud data;
calculating a normal vector of the current frame, the curvature of the current frame, the height of the current frame and a second derivative of a neighborhood of the current frame according to the point cloud data of the current frame; and
when the inner product of the normal vector of the current frame and the ground normal vector is smaller than a preset inner product threshold value, the height of the current frame is in a preset height range, the curvature of the current frame is larger than a preset curvature threshold value, and the neighborhood second derivative corresponding to the point cloud data of the current frame is larger than a preset second derivative threshold value, determining the point cloud data of the current frame as the road edge point cloud data.
2. The path generation method according to claim 1, wherein the acquiring of the continuous multi-frame point cloud data includes:
acquiring three-dimensional coordinate data of multi-frame data in a first coordinate system;
determining a transformation matrix between the first coordinate system and a second coordinate system based on the pose of the intelligent robot; and
converting the three-dimensional coordinate data into the point cloud data under the second coordinate system according to the transformation matrix to obtain the multi-frame point cloud data.
3. The path generation method according to claim 1, wherein the calculating a ground normal vector from the current frame point cloud data and a predetermined number of frames of point cloud data preceding the current frame point cloud data includes:
storing the current frame point cloud data and the point cloud data of the preset frame number into a queue with a preset length, wherein the preset length is determined according to the preset frame number;
accumulating all point cloud data in the queue into continuous frame point cloud data; and
calculating the ground normal vector according to the continuous frame point cloud data and a first function prestored in a point cloud library.
4. The path generation method according to claim 1, characterized by further comprising:
when the inner product of the normal vector of the current frame and the ground normal vector is larger than a preset inner product threshold value, the height of the current frame is out of a preset height range, the curvature of the current frame is smaller than a preset curvature threshold value, or the neighborhood second derivative corresponding to the point cloud data of the current frame is smaller than a preset second derivative threshold value, determining that the point cloud data of the current frame is noise point cloud data.
5. The path generation method according to claim 1, characterized by further comprising:
calculating a fitting straight line according to the multiple frames of road edge point cloud data based on a random sample consensus (RANSAC) algorithm;
when the distance between the road edge point cloud data and the fitting straight line is smaller than a first preset distance, determining that the road edge point cloud data is interior point cloud data;
when the ratio of the number of the interior point cloud data to the multiple frames of road edge point cloud data is greater than a preset ratio, determining a projection point of each piece of the interior point cloud data on the fitting straight line, and calculating the inter-point distance between adjacent projection points; and
when the maximum inter-point distance is smaller than a second preset distance, storing the fitted straight line as a road edge section, wherein the road edge section comprises a starting point coordinate, an end point coordinate, and direction data indicating which of the two sides of the road edge section is closer to the road.
6. The path generation method according to claim 5, characterized by further comprising:
judging whether the two adjacent road edge sections are disconnected or not according to the starting point coordinates and the end point coordinates of the two adjacent road edge sections; and
when the two adjacent road edge sections are not disconnected, connecting the two adjacent road edge sections into an integrated road edge section.
7. The path generation method according to claim 6, wherein the two adjacent road edge sections are a first road edge section and a second road edge section, respectively, and the connecting the two adjacent road edge sections into one integrated road edge section comprises:
determining a first perpendicular foot of the end point of the first road edge section on the second road edge section, and a second perpendicular foot of the starting point of the second road edge section on the first road edge section;
when the first perpendicular foot is located on the extension line of the second road edge section and the second perpendicular foot is located on the extension line of the first road edge section, connecting the end point of the first road edge section and the starting point of the second road edge section to generate the integrated road edge section; and
when the first perpendicular foot is located on the second road edge section and the second perpendicular foot is located on the first road edge section, deleting the line segment between the first perpendicular foot and the starting point of the second road edge section from the second road edge section, deleting the line segment between the second perpendicular foot and the end point of the first road edge section from the first road edge section, and connecting the first perpendicular foot and the second perpendicular foot to generate the integrated road edge section.
8. The path generation method according to claim 6, wherein the generating a moving path according to the road edge point cloud data and controlling the intelligent robot to move according to the moving path comprises:
offsetting the integrated road edge section by a preset width according to the direction data;
connecting two adjacent integrated road edge sections that are disconnected after the offsetting to generate the moving path; and
controlling the intelligent robot to move according to the moving path.
9. A path generation apparatus for an intelligent robot, the path generation apparatus comprising:
the acquisition module is used for acquiring continuous multi-frame point cloud data;
the identification module is used for identifying road edge point cloud data in the multi-frame point cloud data, wherein the road edge point cloud data is point cloud data corresponding to points on road edges of roads;
the processing module is used for generating a moving path according to the road edge point cloud data and controlling the intelligent robot to move according to the moving path;
the identification module is further configured to:
calculate a ground normal vector according to current frame point cloud data and point cloud data of a preset frame number before the current frame point cloud data;
calculate a normal vector of the current frame, the curvature of the current frame, the height of the current frame and a second derivative of a neighborhood of the current frame according to the point cloud data of the current frame; and
when the inner product of the normal vector of the current frame and the ground normal vector is smaller than a preset inner product threshold value, the height of the current frame is in a preset height range, the curvature of the current frame is larger than a preset curvature threshold value, and the neighborhood second derivative corresponding to the point cloud data of the current frame is larger than a preset second derivative threshold value, determine the point cloud data of the current frame as the road edge point cloud data.
10. An intelligent robot, comprising:
one or more processors, memory; and
one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the path generation method of any of claims 1 to 8.
11. A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the path generation method of any one of claims 1 to 8.
CN202010084825.1A 2020-02-10 2020-02-10 Path generation method and generation device, intelligent robot and storage medium Active CN110928320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010084825.1A CN110928320B (en) 2020-02-10 2020-02-10 Path generation method and generation device, intelligent robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010084825.1A CN110928320B (en) 2020-02-10 2020-02-10 Path generation method and generation device, intelligent robot and storage medium

Publications (2)

Publication Number Publication Date
CN110928320A true CN110928320A (en) 2020-03-27
CN110928320B CN110928320B (en) 2020-08-04

Family

ID=69854724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010084825.1A Active CN110928320B (en) 2020-02-10 2020-02-10 Path generation method and generation device, intelligent robot and storage medium

Country Status (1)

Country Link
CN (1) CN110928320B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111474946A (en) * 2020-05-27 2020-07-31 苏州高之仙自动化科技有限公司 Edge detection method and device and control method and device for robot edge cleaning
CN111552289A (en) * 2020-04-28 2020-08-18 苏州高之仙自动化科技有限公司 Detection method, virtual radar device, electronic apparatus, and storage medium
CN111665845A (en) * 2020-06-24 2020-09-15 北京百度网讯科技有限公司 Method, device, equipment and storage medium for planning path
CN112050805A (en) * 2020-09-02 2020-12-08 上海高仙自动化科技发展有限公司 Path planning method and device, electronic equipment and storage medium
CN112162559A (en) * 2020-09-30 2021-01-01 杭州海康机器人技术有限公司 Method, device and storage medium for multi-robot mixing
CN113534823A (en) * 2021-09-16 2021-10-22 季华实验室 Planting robot path planning method and device, electronic equipment and storage medium
CN113762011A (en) * 2020-11-25 2021-12-07 北京京东乾石科技有限公司 Road tooth detection method, device, equipment and storage medium
CN114425774A (en) * 2022-01-21 2022-05-03 深圳优地科技有限公司 Method and apparatus for recognizing walking path of robot, and storage medium
WO2023231640A1 (en) * 2022-05-31 2023-12-07 深圳市普渡科技有限公司 Dynamic edge path generation method and apparatus, computer device, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106842231A (en) * 2016-11-08 2017-06-13 长安大学 A kind of road edge identification and tracking
CN110376604A (en) * 2019-08-09 2019-10-25 北京智行者科技有限公司 Curb detection method based on single line laser radar

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106842231A (en) * 2016-11-08 2017-06-13 长安大学 A kind of road edge identification and tracking
CN110376604A (en) * 2019-08-09 2019-10-25 北京智行者科技有限公司 Curb detection method based on single line laser radar

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
GANGQIANG ZHAO et al.: "CURB DETECTION AND TRACKING USING 3D-LIDAR SCANNER", IEEE *
LIU He: "Digital Image Processing and Applications", 31 January 2006 *
ZHANG Haixiao et al.: "Extrinsic parameter calibration method for vehicle-borne laser scanning systems considering planar features", Acta Geodaetica et Cartographica Sinica *
GAN Neng: "Research on LiDAR road curb recognition algorithm based on overall road information", China Master's Theses Full-text Database, Engineering Science and Technology II *
ZHAO Haipeng et al.: "Automatic extraction of urban roads from vehicle-borne laser scanning data", Journal of University of Chinese Academy of Sciences *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552289B (en) * 2020-04-28 2021-07-06 苏州高之仙自动化科技有限公司 Detection method, virtual radar device, electronic apparatus, and storage medium
CN111552289A (en) * 2020-04-28 2020-08-18 苏州高之仙自动化科技有限公司 Detection method, virtual radar device, electronic apparatus, and storage medium
CN111474946A (en) * 2020-05-27 2020-07-31 苏州高之仙自动化科技有限公司 Edge detection method and device and control method and device for robot edge cleaning
CN111474946B (en) * 2020-05-27 2021-04-23 苏州高之仙自动化科技有限公司 Edge detection method and device and control method and device for robot edge cleaning
CN111665845A (en) * 2020-06-24 2020-09-15 北京百度网讯科技有限公司 Method, device, equipment and storage medium for planning path
CN111665845B (en) * 2020-06-24 2023-09-22 阿波罗智能技术(北京)有限公司 Method, apparatus, device and storage medium for planning path
CN112050805A (en) * 2020-09-02 2020-12-08 上海高仙自动化科技发展有限公司 Path planning method and device, electronic equipment and storage medium
CN112162559B (en) * 2020-09-30 2021-10-15 杭州海康机器人技术有限公司 Method, device and storage medium for multi-robot mixing
CN112162559A (en) * 2020-09-30 2021-01-01 杭州海康机器人技术有限公司 Method, device and storage medium for multi-robot mixing
CN113762011A (en) * 2020-11-25 2021-12-07 北京京东乾石科技有限公司 Road tooth detection method, device, equipment and storage medium
CN113534823A (en) * 2021-09-16 2021-10-22 季华实验室 Planting robot path planning method and device, electronic equipment and storage medium
CN114425774A (en) * 2022-01-21 2022-05-03 深圳优地科技有限公司 Method and apparatus for recognizing walking path of robot, and storage medium
CN114425774B (en) * 2022-01-21 2023-11-03 深圳优地科技有限公司 Robot walking road recognition method, robot walking road recognition device, and storage medium
WO2023231640A1 (en) * 2022-05-31 2023-12-07 深圳市普渡科技有限公司 Dynamic edge path generation method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN110928320B (en) 2020-08-04

Similar Documents

Publication Publication Date Title
CN110928320B (en) Path generation method and generation device, intelligent robot and storage medium
CN110553652B (en) Robot multi-sensor fusion positioning method and application thereof
CN109829351B (en) Method and device for detecting lane information and computer readable storage medium
CN109001757B (en) Parking space intelligent detection method based on 2D laser radar
JP4573977B2 (en) Distance correction device for monitoring system and vanishing point correction device for monitoring system
CN111582566B (en) Path planning method and device, intelligent robot and storage medium
JP2009041972A (en) Image processing device and method therefor
CN112051844B (en) Self-moving robot and control method thereof
CN110412619B (en) Region traversing method of laser robot and laser main control chip
JP2000357233A (en) Body recognition device
KR20210141668A (en) Detection, 3D reconstruction and tracking of multiple orthopedic objects moving relative to each other
US20200193193A1 (en) Image processing device and image processing method
JP2011022995A (en) Vanishing point estimation device and program
US20200193184A1 (en) Image processing device and image processing method
CN112004732B (en) Railway track recognition device
KR20130023433A (en) Apparatus for managing map of mobile robot based on slam and method thereof
JP2008027046A (en) Lane recognition device
CN111168685A (en) Robot control method, robot, and readable storage medium
KR20130095967A (en) Lane detecting method and apparatus thereof
CN112902911B (en) Ranging method, device, equipment and storage medium based on monocular camera
CN112426105A (en) Charging seat, calibration method and device for position of charging seat and sweeping system
JP7142468B2 (en) mobile tracking device
JP6014867B2 (en) Image processing device
JP5903901B2 (en) Vehicle position calculation device
CN111780744B (en) Mobile robot hybrid navigation method, equipment and storage device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant