CN117315183B - Method for constructing three-dimensional map and analyzing operation based on laser radar - Google Patents

Method for constructing three-dimensional map and analyzing operation based on laser radar

Info

Publication number
CN117315183B
CN117315183B (application CN202311617726.5A)
Authority
CN
China
Prior art keywords
point cloud
operation object
laser radar
converted
vehicle body
Prior art date
Legal status
Active
Application number
CN202311617726.5A
Other languages
Chinese (zh)
Other versions
CN117315183A (en)
Inventor
胡勋
黄蛟
谭浩天
张平
Current Assignee
Sichuan Dinghong Zhidian Equipment Technology Co ltd
Original Assignee
Sichuan Dinghong Zhidian Equipment Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Dinghong Zhidian Equipment Technology Co ltd filed Critical Sichuan Dinghong Zhidian Equipment Technology Co ltd
Priority to CN202311617726.5A
Publication of CN117315183A
Application granted
Publication of CN117315183B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements using pattern recognition or machine learning
    • G06V 10/762 Arrangements using clustering, e.g. of similar faces in social networks
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention provides a method for constructing a three-dimensional map and analyzing operation based on a laser radar, which comprises the following steps: filtering the point cloud obtained by laser radar scanning to obtain an original point cloud; cutting the point cloud of the engineering machinery part out of the original point cloud to obtain a cut point cloud; determining a coordinate system origin, and converting the coordinates of the cut point cloud based on the coordinate system origin to obtain a converted point cloud; determining the three-dimensional size range of the point cloud according to the coordinates of the converted point cloud; calculating the total number of voxel cell grids; performing voxel gridding downsampling on the converted point cloud, and placing the downsampled point cloud into a grid map; the laser radar repeats the scanning process and updates the new point cloud in the grid map; and performing operation analysis on the operation object based on the grid map. A three-dimensional high-precision map is thereby constructed in real time, so that the terrain, the operation target, the obstacles and the like in the operation scene of the engineering machinery can be analyzed, realizing real-time three-dimensional map construction of the operation scene.

Description

Method for constructing three-dimensional map and analyzing operation based on laser radar
Technical Field
The invention relates to the technical field of laser radar operation, in particular to a method for constructing a three-dimensional map and analyzing operation based on a laser radar.
Background
The laser radar can be used for constructing a three-dimensional map of a working scene, which is of great significance for applications such as automatic driving, robot navigation and industrial production. However, existing laser radar scene construction relies on a pre-acquired high-precision map. Since the operation scenes of engineering machinery are typically sand and gravel plants, mines, tunnels and the like, such pre-acquired high-precision maps are lacking, so that guiding engineering machinery operation by laser radar suffers from inaccurate positioning and low operation completion.
In view of the above, the invention provides a method for constructing a three-dimensional map and analyzing operations based on a laser radar, so as to construct a three-dimensional high-precision map in real time and analyze terrains, operation targets, obstacles and the like in an engineering machinery operation scene.
Disclosure of Invention
The invention aims to provide a method for constructing a three-dimensional map and analyzing operation based on a laser radar, which comprises the following steps: filtering the point cloud obtained by laser radar scanning to obtain an original point cloud; cutting the point cloud of the engineering machinery part in the original point cloud to obtain a cut point cloud; determining an origin of a coordinate system, and converting coordinates of the point cloud after cutting based on the origin of the coordinate system to obtain a converted point cloud; determining a three-dimensional size range of the point cloud according to the coordinates of the converted point cloud; calculating the total number of each voxel cell grid; performing voxel gridding downsampling on the converted point cloud, and placing the downsampled point cloud into a grid map; the laser radar repeats the scanning process and updates the new point cloud in the grid map; and performing operation analysis on the operation object based on the grid map to determine the characteristics of the operation object and the positions of the obstacles, and performing operation based on the characteristics of the operation object and the positions of the obstacles.
Further, determining a coordinate system origin, and converting the coordinates of the cut point cloud based on the coordinate system origin to obtain a converted point cloud, includes: taking a first position on the engineering machinery vehicle body as the origin, with a right-handed coordinate system; the first position comprises a vehicle body hinge point or the center of the vehicle body rear axle; acquiring the mounting pose of the laser radar mounting position and the vehicle body attitude of the engineering machinery; the mounting pose comprises the translation and rotation of the laser radar mounting; the vehicle body attitude comprises the rotation of the engineering machinery vehicle body; and converting the coordinates of the cut point cloud through rotation and translation into the coordinates of the converted point cloud with the vehicle body as the coordinate system.
Further, determining a coordinate system origin, and converting coordinates of the cut point cloud based on the coordinate system origin to obtain a converted point cloud, including: acquiring global positioning and body posture of a engineering machinery body, and calculating the body posture of the body through rotation and translation based on the global positioning and the body posture of the body; the vehicle body pose comprises a translation amount and a rotation amount of the vehicle body; acquiring the mounting pose of the laser radar mounting position; the mounting pose comprises a translation amount and a rotation amount for mounting the laser radar; and according to the vehicle body pose and the mounting pose, converting the coordinates of the cut point cloud into the coordinates of the converted point cloud by taking the global absolute position as a coordinate system through rotation and translation.
Further, according to the coordinates of the converted point cloud, the three-dimensional size range of the point cloud is determined as the extreme values of the coordinates of the converted point cloud on each corresponding axis; the calculation formula of the three-dimensional size range of the point cloud is as follows:

X_max = max(x_i), X_min = min(x_i); Y_max = max(y_i), Y_min = min(y_i); Z_max = max(z_i), Z_min = min(z_i)

wherein X_max represents the maximum value of the converted point cloud on the x axis; X_min represents the minimum value of the converted point cloud on the x axis; Y_max represents the maximum value of the converted point cloud on the y axis; Y_min represents the minimum value of the converted point cloud on the y axis; Z_max represents the maximum value of the converted point cloud on the z axis; Z_min represents the minimum value of the converted point cloud on the z axis.
Further, the method comprises performing a job on the operation object based on the grid map, including the following steps: extracting the upper and lower edge contours of the operation object; extracting the front edge contour of the operation object; determining the position and volume of the work object based on the upper and lower edge profiles and the front edge profile; and performing the job on the work object through the engineering machinery based on the position and volume of the work object.
Further, extracting the upper and lower edge contours of the operation object includes: dividing the part higher than the ground plane from the grid map to obtain the point cloud above the ground plane; obtaining point cloud clusters of different objects through a clustering algorithm according to the point cloud information, the point cloud information comprising the characteristics and size of the point cloud; extracting the point clouds of the obstacle and the operation object according to the characteristics of the point cloud clusters; in the xy plane, projecting the point cloud of the operation object onto a first straight line to obtain a projected point cloud, wherein the first straight line is formed by a preset angle and the point obtained by projecting the highest point of the point cloud of the operation object onto the xy plane, the x and y values of the projected point cloud all lie on the first straight line, and the preset angle is the orientation of the laser radar or the vehicle body; rotating the projected point cloud by the negative value of the preset angle to obtain a rotated point cloud; traversing the rotated point cloud along the x axis and taking the three-dimensional coordinates of the point with the maximum z value to obtain an upper edge point cloud; filtering the upper edge point cloud to obtain an upper edge contour line; and rotating the upper edge contour line by the preset angle to obtain the upper edge contour line under that angle.
Further, extracting the point clouds of the obstacle and the operation object includes aiming the preset angle at the material pile, extracting the point cloud clusters at that angle, taking the cluster containing the most points as the operation object, and taking the point cloud clusters other than the operation object as obstacles.
Further, extracting the point cloud of the obstacle and the operation object is to fuse the point cloud cluster with the image recognition, recognize the operation object by a deep learning method, and take the point cloud cluster except the operation object as the obstacle.
Further, extracting the front edge contour of the operation object includes: in the xy plane, determining a second straight line; the second straight line is a straight line formed by the minimum value and the maximum value of the x-axis of the upper edge profile; cutting off the point cloud positioned at the rear side of the second straight line, and reserving the point cloud positioned at the front side of the second straight line to obtain a front point cloud; rotating the negative value of the preset angle for the two endpoints of the second straight line, and obtaining a median value based on the rotated endpoints; rotating the front point cloud by a negative value of a preset angle to obtain a rotated front point cloud, traversing the rotated front point cloud along an x-axis, and subtracting the median from a y value of the rotated front point cloud to obtain a front point cloud difference value; taking the largest difference value in the front point cloud difference values as a front edge contour point, and taking a curve formed by the front edge contour point set as a front edge contour line; the preset angle is the direction of a laser radar or a vehicle body; and rotating the front edge contour line by the preset angle to obtain the front edge contour line under the angle.
Further, performing, by the work machine, a work on the work object based on the position and the volume of the work object, including: positioning the center position of the operation object through the front edge contour and the upper edge contour; estimating the volume of the operation object in the visual field range through the point cloud of the operation object; and planning a working position through the front edge contour or the central position.
The technical scheme of the embodiment of the invention has at least the following advantages and beneficial effects:
and constructing a three-dimensional map of the operation scene in real time.
The work object can be analyzed in real time, including contours, volumes, obstructions, center points, and the like.
The two-dimensional vector is adopted for storing the point cloud map, so that the memory space is saved, and the storage speed and the extraction speed are increased.
A front view and an actual angle view of both edge profiles may be obtained.
In the extraction of the front edge contour, the working object is divided by creatively utilizing a straight line formed by the end points of the upper edge, so that the calculation amount is reduced by at least half.
Drawings
FIG. 1 is an exemplary schematic diagram of a radar mounting location provided by the present invention;
FIG. 2 is an exemplary schematic diagram of a laser radar scan area in the XY plane of the present invention;
FIG. 3 is an exemplary schematic diagram of voxel gridding downsampling of the present invention;
FIG. 4 is an exemplary schematic diagram of a grid map of the present invention;
fig. 5 is an exemplary schematic illustration of the contour rotation of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
The invention provides a method for constructing a three-dimensional map and analyzing operation based on a laser radar, which comprises the following steps:
the laser radar continuously scans the surrounding environment to obtain a point cloud of the surrounding environment; and filtering the point cloud obtained by laser radar scanning to obtain an original point cloud P1.
The point cloud of the engineering machinery part in the original point cloud is cut to obtain a cut point cloud P2. Since the working device enters the visible range of the laser radar during operation of the engineering machinery, the point cloud of the vehicle body part needs to be cut off; the cut part differs according to the type and size of the engineering machinery. For example: if the working device occupies the range X1 -> X2 on the X axis, Y1 -> Y2 on the Y axis and Z1 -> Z2 on the Z axis during movement, then the portion to be cut is all the point clouds within the range (X1 -> X2, Y1 -> Y2, Z1 -> Z2), resulting in the cut point cloud P2.
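As an illustrative sketch (not the patented implementation; the box limits and the function name are assumptions), the body-cut step can be written as a simple bounding-box filter:

```python
import numpy as np

def cut_vehicle_body(points, box_min, box_max):
    """Drop all points inside the vehicle-body box (X1->X2, Y1->Y2, Z1->Z2)."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[~inside]  # P2: everything outside the box is kept

# illustrative values only
p1 = np.array([[0.1, 0.2, 0.3], [5.0, 5.0, 5.0], [-1.0, 0.0, 0.5]])
p2 = cut_vehicle_body(p1, np.array([-2.0, -1.0, 0.0]), np.array([2.0, 1.0, 1.0]))
```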
A coordinate system origin is determined, and the coordinates of the cut point cloud are converted based on the coordinate system origin to obtain the converted point cloud. Since the cut point cloud P2 uses the radar itself as the coordinate system, all point cloud coordinates need to be converted into the coordinates required in actual use. They can be converted into a vehicle-body relative coordinate system with the engineering machinery vehicle body as the coordinate system, or converted by global positioning into a global absolute position coordinate system, such as the East-North-Up (ENU) coordinate system.
Determining the coordinate system origin and converting the coordinates of the cut point cloud based on it to obtain the converted point cloud includes the following steps. A first position on the engineering machinery vehicle body is taken as the origin, with a right-handed coordinate system; the first position includes a vehicle body hinge point or the center of the vehicle body rear axle. The mounting pose of the laser radar mounting position and the vehicle body attitude of the engineering machinery are acquired; the mounting pose comprises the translation and rotation of the laser radar mounting; the vehicle body attitude includes the rotation of the engineering machinery vehicle body. The coordinates of the cut point cloud are converted through rotation and translation into the coordinates P4 of the converted point cloud with the vehicle body as the coordinate system. The mounting positions of the laser radar are shown in fig. 1: it may be mounted on the two sides of the front body of the engineering machine close to the rear body, namely laser radar mounting position 1 and laser radar mounting position 2.
In some embodiments, determining a coordinate system origin, and converting coordinates of the cut point cloud based on the coordinate system origin to obtain a converted point cloud, including: acquiring global positioning and body posture of a engineering machinery body, and calculating the body posture of the body through rotation and translation based on the global positioning and the body posture of the body; the vehicle body pose comprises a translation amount and a rotation amount of the vehicle body. Acquiring the mounting pose of the laser radar mounting position; the mounting pose comprises a translation amount and a rotation amount of the laser radar. And according to the vehicle body pose and the installation pose, converting the coordinates of the cut point cloud into the coordinates P3 of the converted point cloud taking the global absolute position as a coordinate system through rotation and translation.
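Both conversions above reduce to a rigid rotation plus translation; a minimal sketch (the function name, angle and offset are illustrative assumptions, not values from the patent):

```python
import numpy as np

def transform_points(points, rotation, translation):
    """Apply a rigid transform (rotate, then translate) to an N x 3 point cloud."""
    return points @ rotation.T + translation

# 90-degree yaw about z plus a mounting offset (illustrative)
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.5])
p4 = transform_points(np.array([[1.0, 0.0, 0.0]]), R, t)
```

Composing the radar-to-body transform with the body-to-global transform gives the global absolute coordinates P3 in the same way.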
And determining the three-dimensional size range of the point cloud according to the coordinates of the converted point cloud.
According to the coordinates of the converted point cloud, the three-dimensional size range of the point cloud is determined as the extreme values of the coordinates of the converted point cloud on each corresponding axis; the calculation formula of the three-dimensional size range of the point cloud is as follows:

X_max = max(x_i), X_min = min(x_i); Y_max = max(y_i), Y_min = min(y_i); Z_max = max(z_i), Z_min = min(z_i)

wherein X_max represents the maximum value of the converted point cloud on the x axis; X_min represents the minimum value of the converted point cloud on the x axis; Y_max represents the maximum value of the converted point cloud on the y axis; Y_min represents the minimum value of the converted point cloud on the y axis; Z_max represents the maximum value of the converted point cloud on the z axis; Z_min represents the minimum value of the converted point cloud on the z axis.
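The per-axis extrema amount to column-wise min/max over the converted points; a one-line sketch with illustrative data:

```python
import numpy as np

converted = np.array([[0.0, 1.0, 2.0], [3.0, -1.0, 0.5]])
mins = converted.min(axis=0)  # [X_min, Y_min, Z_min]
maxs = converted.max(axis=0)  # [X_max, Y_max, Z_max]
```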
Since the scanning of the laser radar is not completed in a single pass, the scanning and the determination of the three-dimensional size range of the point cloud are repeated cyclically. However, the three-dimensional size range determined by the first scan may not completely contain the point cloud data updated in the second cycle. Therefore, if the newly acquired point cloud exceeds the three-dimensional size range determined in the previous cycle, the map of the previous cycle's three-dimensional size range is first shifted by the amount the second-cycle point cloud exceeds it, so that the map covers the new three-dimensional size range while the point cloud map of areas no longer scanned is retained. Taking fig. 2 as an example (XY plane): let the area scanned by the laser radar be a 4 x 4 grid; the area scanned the first time is only the densely dashed area; the radar then scans in the negative X direction, and the area scanned the second time is the thinly dashed area. The composition is then calculated:

The range of the new map becomes X'_min to X_max; wherein X'_min is the minimum value on the x axis of the point cloud updated in the second cycle; X_min represents the minimum value on the x axis of the point cloud updated in the first cycle; and X_max is the maximum value of the three-dimensional size range of the point cloud in the x-axis direction.
The total number of grids per voxel unit is calculated.
For example, assume that the voxel cell grid has a size of (m, m, m), where m = 0.05. The number of grids on each axis is then:

a = ceil((X_max - X_min) / m); b = ceil((Y_max - Y_min) / m); c = ceil((Z_max - Z_min) / m)

wherein X_max represents the maximum value of the three-dimensional size range of the point cloud in the x-axis direction; X_min the minimum value in the x-axis direction; Y_max the maximum value in the y-axis direction; Y_min the minimum value in the y-axis direction; Z_max the maximum value in the z-axis direction; Z_min the minimum value in the z-axis direction; and m represents the size of the voxel cell grid. The range of the grid is set as (a, b, c).
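The grid counts can be sketched directly from this formula (the sample extents are assumptions; the patent gives no numeric bounds):

```python
import math

m = 0.05  # voxel cell size from the text
x_min, x_max = -2.0, 2.0   # illustrative extents
y_min, y_max = -1.0, 1.0
z_min, z_max = 0.0, 0.5

a = math.ceil((x_max - x_min) / m)
b = math.ceil((y_max - y_min) / m)
c = math.ceil((z_max - z_min) / m)
```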
And performing voxel gridding downsampling on the converted point cloud, wherein the size of the downsampled grid is (m, m, m), as shown in fig. 3, and putting the downsampled point cloud into a grid map.
Wherein placing the downsampled point cloud into the grid map comprises: a two-dimensional vector is established, wherein the first dimension index of the vector is (X-axis coordinate/m), the second dimension index of the vector is (Y-axis coordinate/m), and the value stored in the two-dimensional vector is the value of the Z axis of the coordinate.
For example, the downsampled point cloud is P5(x, y, z); the first dimension index is i = x / m; the second dimension index is j = y / m; the two-dimensional vector satisfies Vector[i][j] = z. Finally, as shown in fig. 4, the constructed three-dimensional map is stored in a two-dimensional Vector.
The laser radar repeats the scanning process, updates a new point cloud in the grid map, and stores the grid map in a two-dimensional Vector.
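A minimal sketch of the voxel downsampling and two-dimensional storage, using a dict keyed by (i, j) in place of the two-dimensional vector; keeping the highest z per cell is an assumption, since the text does not say how colliding points are merged:

```python
import numpy as np

def build_grid_map(points, m=0.05):
    """Bin each point into cell (i, j) = (x/m, y/m) and store one z value per cell."""
    grid = {}
    for x, y, z in points:
        i, j = int(x // m), int(y // m)
        if (i, j) not in grid or z > grid[(i, j)]:
            grid[(i, j)] = z  # assumption: keep the highest point per cell
    return grid

g = build_grid_map(np.array([[0.01, 0.01, 0.3], [0.02, 0.02, 0.5], [0.21, 0.21, 1.0]]))
```

Storing only (i, j) -> z in this way is what saves memory relative to a full 3D voxel array, matching the two-dimensional Vector design described above.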
Performing a job analysis on a job object based on the grid map to determine a feature of the job object and a position of an obstacle, and performing a job based on the feature of the job object and the position of the obstacle, comprising:
and extracting the upper and lower edge contours of the operation object.
Wherein extracting the upper and lower edge contours of the work object comprises: and dividing a part higher than the ground plane from the grid map to obtain a point cloud P7 above the ground plane, wherein the height (namely the z value) of the ground plane is set according to the operation scene.
Obtaining point cloud clusters of different objects through a clustering algorithm according to the point cloud information; the point cloud information includes characteristics and sizes of the point cloud.
And extracting the point clouds (such as material piles, ore piles and the like) P8 of the barriers and the operation objects according to the characteristics of the point cloud clusters.
Extracting the point cloud P8 of the obstacle and the operation object means aiming the preset angle at the material pile, extracting the point cloud clusters at that angle, taking the cluster containing the most points as the operation object, and taking the point cloud clusters other than the operation object as obstacles. The preset angle is the radar orientation or the vehicle body orientation.
In some embodiments, the point cloud P8 of the obstacle and the operation object is extracted, the point cloud clusters are fused with the image recognition, the operation object is recognized by a deep learning method, and the point cloud clusters outside the operation object are used as the obstacle.
After obtaining the point cloud P8 of a work object, the point cloud P8 is projected onto a first straight line L1 in the xy plane to obtain a projected point cloud P8'; the first straight line L1 is the straight line at a preset angle θ passing through the point obtained by projecting the highest point (the point with the maximum z value) of the point cloud P8 onto the xy plane; the x and y values of the projected point cloud P8' all lie on the first straight line L1; the preset angle θ is the orientation of the laser radar or the vehicle body; wherein L1 can be written as y - y0 = tan(θ) * (x - x0), with (x0, y0) being the xy-plane projection of the highest point.
As shown in fig. 5, assume that A, B and C are three points of the work-object point cloud P8 mapped onto the first straight line L1, with coordinates A(x_A, y_A, z_A), B(x_B, y_B, z_B) and C(x_C, y_C, z_C) respectively.

After rotating by the negative value -θ of the preset angle about the z axis, the coordinates of A', B' and C' are given, for each point (x, y, z), by (x', y', z') = (x·cos θ + y·sin θ, -x·sin θ + y·cos θ, z).
rotating the projected point cloud by a negative value of the preset angleA rotated point cloud is obtained (as shown in fig. 5).
Traversing all z values of the rotated point cloud along the x axis, and taking out the three-dimensional coordinates of the point with the maximum z value to obtain the upper edge point cloud.
And filtering the upper edge point cloud to obtain an upper edge contour line. As shown in FIG. 5, the upper edge contour lines are curves formed by A ', B ', and C '.
Rotating the upper edge contour line by the preset angle θ, the upper edge profile at this angle (the laser radar / vehicle body angle) is obtained.
In some embodiments, the projection method may be omitted: the point cloud P8 of the operation object may be rotated directly, all points traversed, and the highest point (maximum z value) extracted per x position to obtain the upper edge contour.
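A sketch of the rotate-and-take-max-z step described above (the 1 cm x binning and the function name are assumptions):

```python
import numpy as np

def upper_edge(points, theta):
    """Rotate the work-object points by -theta about z, then keep the
    highest-z point per x bin, as in the upper-edge extraction above."""
    c, s = np.cos(-theta), np.sin(-theta)
    rot = points @ np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]).T
    edge = {}
    for x, y, z in rot:
        k = round(float(x), 2)  # assumption: bin x to 1 cm
        if k not in edge or z > edge[k][2]:
            edge[k] = (x, y, z)
    return [edge[k] for k in sorted(edge)]

ue = upper_edge(np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0], [0.5, 0.0, 3.0]]), 0.0)
```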
And extracting the front edge outline of the operation object.
Extracting the front edge contour of the operation object comprises the following steps:
in the xy plane, a second straight line L2 is determined; the second straight line is a straight line formed by the minimum value and the maximum value of the x-axis of the upper edge profile; that is, the straight line of the two side end points of the upper edge contour line formed in the xy plane, the minimum end point is recorded asThe maximum endpoint is marked->
And calculating whether all points of the point cloud P8 of the operation object are positioned on the front side or the rear side of the second straight line L2, cutting off the point cloud positioned on the rear side of the second straight line L2, and reserving the point cloud positioned on the front side of the second straight line L2 to obtain a front point cloud P9.
The two endpoints of the second straight line are rotated by the negative value -θ of the preset angle, and a median value is obtained from the rotated endpoints. For example, the median is y_mid = (y'_1 + y'_2) / 2, wherein y'_1 and y'_2 are the y values of the two rotated endpoints.
Rotating the front point cloud P9 by the negative value -θ of the preset angle, the rotated front point cloud is obtained; traversing the rotated front point cloud along the x axis, the median y_mid is subtracted from the y value of the rotated front point cloud to obtain the front point cloud difference Δy; the largest value among the front point cloud differences Δy at each x position is taken as a front edge contour point, and the curve formed by the set of front edge contour points is taken as the front edge contour line; the preset angle is the orientation of the laser radar or the vehicle body.
In some embodiments, the front point cloud P9 may instead be rotated so that its front face lies parallel to the xy plane; a similar front edge profile can then be obtained by taking the minimum value of |z|, i.e. the point closest to the xy plane (ground). However, if there is a protrusion in the middle of the object, the point cloud of the protrusion will not be included in the front edge profile.
Rotating the front edge contour line by the preset angle θ, the front edge contour line at this angle is obtained.
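The front-edge step can be sketched similarly (binning and names are assumptions; y_mid is the median of the rotated endpoints described above):

```python
import numpy as np

def front_edge(points, theta, y_mid):
    """Rotate by -theta, then per x bin keep the point whose y deviates most
    from y_mid, i.e. the largest front point cloud difference delta-y."""
    c, s = np.cos(-theta), np.sin(-theta)
    rot = points @ np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]).T
    best = {}
    for x, y, z in rot:
        k = round(float(x), 2)
        dy = y - y_mid
        if k not in best or dy > best[k][0]:
            best[k] = (dy, (x, y, z))
    return [best[k][1] for k in sorted(best)]

fe = front_edge(np.array([[0.0, 1.0, 0.0], [0.0, 2.0, 0.0], [1.0, 0.5, 0.0]]), 0.0, 0.0)
```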
The position and volume of the work object are determined based on the upper and lower edge profiles and the front edge profile.
Wherein, based on the position and the volume of the operation object, the operation object is operated by the engineering machine, comprising:
and positioning the center position of the operation object through the front edge contour and the upper edge contour. For example, the centers of the front and top edge contours are calculated separately, and 2 center points are averaged.
The volume of the operation object within the field of view is estimated through the point cloud of the operation object. For example, all points are traversed: each point contributes approximately the grid cell footprint (m × m) multiplied by the height (z value) of the point, and the contributions of all points are summed to obtain the estimated volume.
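A sketch of this volume estimate over the stored grid cells (assuming each occupied cell contributes its m × m footprint times its stored height; names and values are illustrative):

```python
def estimate_volume(grid, m=0.05):
    """Sum cell footprint (m * m) times stored height z over all occupied cells."""
    return sum(m * m * z for z in grid.values())

v = estimate_volume({(0, 0): 2.0, (0, 1): 4.0}, m=0.5)
```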
And planning a working position through the front edge contour or the central position. For example, in actual operation, a portion with a protruding front edge is generally operated first.
And performing operation on the operation object through the engineering machinery based on the position and the volume of the operation object.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A method for constructing a three-dimensional map and job analysis based on lidar, comprising:
filtering the point cloud obtained by laser radar scanning to obtain an original point cloud;
cutting the point cloud of the engineering machinery part in the original point cloud to obtain a cut point cloud;
determining an origin of a coordinate system, and converting coordinates of the point cloud after cutting based on the origin of the coordinate system to obtain a converted point cloud;
determining a three-dimensional size range of the point cloud according to the coordinates of the converted point cloud;
calculating the total number of voxel grid cells;
performing voxel gridding downsampling on the converted point cloud, and placing the downsampled point cloud into a grid map;
the laser radar repeats the scanning process and updates the new point cloud in the grid map;
performing operation analysis on an operation object based on the grid map to determine the characteristics of the operation object and the positions of the obstacles, and performing operation based on the characteristics of the operation object and the positions of the obstacles;
wherein performing job analysis on the job object based on the grid map includes:
extracting the upper and lower edge contours of the operation object;
extracting the front edge outline of the operation object; comprising the following steps: in the xy plane, determining a second straight line; the second straight line is a straight line formed by the minimum value and the maximum value of the x-axis of the upper edge profile; cutting off the point cloud positioned at the rear side of the second straight line, and reserving the point cloud positioned at the front side of the second straight line to obtain a front point cloud; rotating the two endpoints of the second straight line by a negative value of a preset angle, and obtaining a median value based on the rotated endpoints; rotating the front point cloud by a negative value of a preset angle to obtain a rotated front point cloud, traversing the rotated front point cloud along an x-axis, and subtracting the median from a y value of the rotated front point cloud to obtain a front point cloud difference value; taking the largest difference value in the front point cloud difference values as a front edge contour point, and taking a curve formed by the front edge contour point set as a front edge contour line; the preset angle is the direction of a laser radar or a vehicle body; rotating the front edge contour line by the preset angle to obtain a front edge contour line under the angle;
determining a position and a volume of the work object based on the upper and lower edge profiles and the front edge profile;
performing an operation on the operation object through the engineering machinery based on the position and the volume of the operation object; comprising the following steps: positioning the center position of the operation object through the front edge contour and the upper edge contour; estimating the volume of the operation object in the visual field range through the point cloud of the operation object; and planning a working position through the front edge contour or the central position.
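The front edge extraction in claim 1 — rotate the front point cloud by the negative of the preset angle, traverse along x, and keep per position the point whose y value exceeds the baseline by the largest margin — can be sketched as follows. This assumes numpy; `bin_size` and all names are illustrative, and the median baseline derived from the rotated endpoints of the second straight line is precomputed and passed in as `baseline_y`:

```python
import numpy as np

def front_edge_contour(front_pts, baseline_y, angle_rad, bin_size=0.1):
    """Rotate the front point cloud by -angle so the second straight line is
    axis-aligned, then per x-bin keep the point whose y value exceeds the
    baseline by the largest margin; these points form the front edge."""
    c, s = np.cos(-angle_rad), np.sin(-angle_rad)
    rotated = front_pts[:, :2] @ np.array([[c, -s], [s, c]]).T
    diffs = rotated[:, 1] - baseline_y          # front point cloud difference values
    bins = np.floor(rotated[:, 0] / bin_size).astype(int)
    edge = []
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        edge.append(rotated[idx[np.argmax(diffs[idx])]])
    # Rotate the contour back by +angle to express it at the original heading.
    c2, s2 = np.cos(angle_rad), np.sin(angle_rad)
    return np.array(edge) @ np.array([[c2, -s2], [s2, c2]]).T
```

The per-bin maximum matches the claim's "taking the largest difference value in the front point cloud difference values as a front edge contour point".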
2. The method for constructing a three-dimensional map and performing job analysis based on a laser radar according to claim 1, wherein determining a coordinate system origin and converting coordinates of the cut point cloud based on the coordinate system origin to obtain a converted point cloud comprises:
taking a first position on the engineering machinery vehicle body as the origin, with a right-handed coordinate system; the first position comprises a vehicle body hinge point or the vehicle body rear axle center;
acquiring the mounting pose of the laser radar mounting position and the vehicle body pose of the engineering machinery; the mounting pose comprises a translation amount and a rotation amount for mounting the laser radar; the vehicle body posture comprises the rotation amount of the engineering machinery vehicle body;
and converting the coordinates of the cut point cloud into the coordinates of the converted point cloud in the vehicle body coordinate system by means of rotation and translation.
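The rotation-and-translation conversion of claim 2 amounts to applying the lidar's mounting pose to each point. A minimal sketch, assuming numpy; the 90-degree yaw and the 1.5 m / 2.0 m mounting offsets are made-up example values, not from the patent:

```python
import numpy as np

def lidar_to_body(points, mount_rotation, mount_translation):
    """Transform Nx3 lidar-frame points into the vehicle body frame
    (origin at the hinge point or rear-axle center, right-handed)."""
    return points @ mount_rotation.T + mount_translation

# Example: lidar mounted 1.5 m forward and 2.0 m up, yawed 90 degrees.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([1.5, 0.0, 2.0])
body_pts = lidar_to_body(np.array([[1.0, 0.0, 0.0]]), R, t)
```

For claim 3's global variant, the same operation would be applied a second time with the vehicle body pose to reach the global absolute coordinate system.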
3. The method for constructing a three-dimensional map and performing job analysis based on a laser radar according to claim 1, wherein determining a coordinate system origin and converting coordinates of the cut point cloud based on the coordinate system origin to obtain a converted point cloud comprises:
acquiring the global positioning and the attitude of the engineering machinery vehicle body, and calculating the vehicle body pose through rotation and translation based on the global positioning and the attitude; the vehicle body pose comprises a translation amount and a rotation amount of the vehicle body;
acquiring the mounting pose of the laser radar mounting position; the mounting pose comprises a translation amount and a rotation amount for mounting the laser radar;
and according to the vehicle body pose and the mounting pose, converting the coordinates of the cut point cloud into the coordinates of the converted point cloud in the global absolute coordinate system through rotation and translation.
4. The method for constructing a three-dimensional map and performing job analysis based on a laser radar according to claim 1, wherein the three-dimensional size range of the point cloud is determined by taking the maximum and minimum values of the converted point cloud's coordinates on each axis as the bounds of the range; the calculation formula of the three-dimensional size range of the point cloud is:

range = [x_min, x_max] × [y_min, y_max] × [z_min, z_max]

wherein x_max represents the maximum value of the converted point cloud on the x-axis; x_min represents the minimum value of the converted point cloud on the x-axis; y_max represents the maximum value of the converted point cloud on the y-axis; y_min represents the minimum value of the converted point cloud on the y-axis; z_max represents the maximum value of the converted point cloud on the z-axis; z_min represents the minimum value of the converted point cloud on the z-axis.
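The size range of claim 4 is the per-axis min/max of the converted point cloud, and dividing the resulting extent by the voxel size yields the grid dimensions used for voxel downsampling (claim 1). A sketch, assuming numpy; function names are illustrative:

```python
import numpy as np

def size_range(points):
    """Axis-aligned extent of the converted point cloud: per-axis max minus min."""
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    return maxs - mins, mins, maxs

def grid_dimensions(extent, voxel):
    """Number of voxel cells needed along each axis for the given extent."""
    return np.ceil(extent / voxel).astype(int)
```

The product of the three grid dimensions gives the total number of voxel grid cells referred to in claim 1.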
5. The method for constructing a three-dimensional map and job analysis based on a lidar according to claim 1, wherein extracting the upper and lower edge contours of the job object comprises:
dividing a part higher than the ground plane from the grid map to obtain point clouds above the ground plane;
obtaining point cloud clusters of different objects through a clustering algorithm according to the point cloud information; the point cloud information comprises characteristics and size of point cloud;
extracting the point clouds of the obstacle and the operation object according to the characteristics of the point cloud cluster;
in the xy plane, projecting the point cloud of the operation object on a first straight line to obtain the projected point cloud; the first straight line is formed by a preset angle and a point projected on an xy plane by the highest point of the point cloud of the operation object; the x and y values of the projected point cloud are on the first straight line; the preset angle is the direction of a laser radar or a vehicle body;
rotating the projected point cloud by a negative value of the preset angle to obtain a rotated point cloud;
traversing the rotated point cloud along the x-axis, and taking out the three-dimensional coordinates of the point with the maximum z value to obtain an upper edge point cloud;
filtering the upper edge point cloud to obtain an upper edge contour line;
and rotating the upper edge contour line by the preset angle to obtain the upper edge contour line at this angle.
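The upper edge extraction of claim 5 — keep the highest point per x position, then filter the result — can be sketched as follows, assuming numpy; the bin width and the moving-average filter are illustrative choices, since the claim does not specify which filter is used:

```python
import numpy as np

def upper_edge_contour(points, bin_size=0.1, smooth=3):
    """Per x-bin keep the highest point (max z), then smooth z with a
    simple moving average to obtain the upper edge contour line."""
    bins = np.floor(points[:, 0] / bin_size).astype(int)
    edge = []
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        edge.append(points[idx[np.argmax(points[idx, 2])]])
    edge = np.array(edge)
    if len(edge) >= smooth:
        kernel = np.ones(smooth) / smooth
        edge[:, 2] = np.convolve(edge[:, 2], kernel, mode="same")
    return edge
```

This operates on the point cloud already rotated by the negative of the preset angle; the result is rotated back by the preset angle as the claim describes.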
6. The method for constructing a three-dimensional map and performing job analysis based on a laser radar according to claim 5, wherein extracting the point clouds of the obstacle and the operation object comprises: for an operation object in the form of a material pile, extracting the point cloud clusters at the preset angle, taking the point cloud cluster with the largest number of points as the operation object, and taking the point cloud clusters other than the operation object as obstacles.
7. The method for constructing a three-dimensional map and performing job analysis based on a laser radar according to claim 5, wherein extracting the point clouds of the obstacle and the operation object comprises: fusing the point cloud clusters with image recognition, recognizing the operation object by a deep learning method, and taking the point cloud clusters other than the operation object as obstacles.
CN202311617726.5A 2023-11-30 2023-11-30 Method for constructing three-dimensional map and analyzing operation based on laser radar Active CN117315183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311617726.5A CN117315183B (en) 2023-11-30 2023-11-30 Method for constructing three-dimensional map and analyzing operation based on laser radar


Publications (2)

Publication Number Publication Date
CN117315183A CN117315183A (en) 2023-12-29
CN117315183B true CN117315183B (en) 2024-02-23

Family

ID=89274175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311617726.5A Active CN117315183B (en) 2023-11-30 2023-11-30 Method for constructing three-dimensional map and analyzing operation based on laser radar

Country Status (1)

Country Link
CN (1) CN117315183B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102661736A (en) * 2012-05-17 2012-09-12 天津市星际空间地理信息工程有限公司 Highway reorganization and expansion surveying method
EP2625845A2 (en) * 2010-10-04 2013-08-14 Gerard Dirk Smits System and method for 3-d projection and enhancements for interactivity
CN105448184A (en) * 2015-11-13 2016-03-30 北京百度网讯科技有限公司 Map road drawing method and map road drawing device
EP3086283A1 (en) * 2015-04-21 2016-10-26 Hexagon Technology Center GmbH Providing a point cloud using a surveying instrument and a camera device
EP3483554A1 (en) * 2017-11-13 2019-05-15 Topcon Corporation Surveying device, and calibration checking method and calibration checking program for surveying device
CN111508023A (en) * 2020-04-23 2020-08-07 畅加风行(苏州)智能科技有限公司 Laser radar assisted container alignment method for port unmanned container truck
CN111551958A (en) * 2020-04-28 2020-08-18 北京踏歌智行科技有限公司 Mining area unmanned high-precision map manufacturing method
CN111709981A (en) * 2020-06-22 2020-09-25 高小翎 Registration method of laser point cloud and analog image with characteristic line fusion
CN111713245A (en) * 2020-05-11 2020-09-29 江苏大学 Mountain orchard profiling autonomous obstacle avoidance mower and control method thereof
CN114089377A (en) * 2021-10-21 2022-02-25 江苏大学 Point cloud processing and object identification system and method based on laser radar
CN115032648A (en) * 2022-06-06 2022-09-09 上海大学 Three-dimensional target identification and positioning method based on laser radar dense point cloud
CN115290097A (en) * 2022-09-30 2022-11-04 安徽建筑大学 BIM-based real-time accurate map construction method, terminal and storage medium
CN115902839A (en) * 2022-10-24 2023-04-04 福建中科云杉信息技术有限公司 Port laser radar calibration method and device, storage medium and electronic equipment
CN115980783A (en) * 2022-11-29 2023-04-18 上海友道智途科技有限公司 Elevated side wall extraction method based on laser radar point cloud

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10282914B1 (en) * 2015-07-17 2019-05-07 Bao Tran Systems and methods for computer assisted operation
US20190073827A1 (en) * 2017-09-06 2019-03-07 Josen Premium LLC Method and System for Converting 3-D Scan Displays with Optional Telemetrics, Temporal and Component Data into an Augmented or Virtual Reality BIM


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Navigation Assistance Module for an Inspection Robot Based on Multi-Sensor Fusion; Kang Hao; China Master's Theses Full-text Database, Engineering Science and Technology II; 2021-01-15 (No. 1); C042-1589 *
Design and Implementation of a Target Detection System for a Steel Coil Logistics Warehouse Based on Point Cloud Data; Yu Xiaolin; China Master's Theses Full-text Database, Engineering Science and Technology I; 2022-04-15 (No. 4); B023-10 *

Also Published As

Publication number Publication date
CN117315183A (en) 2023-12-29

Similar Documents

Publication Publication Date Title
US9133600B2 (en) Method for selecting an attack pose for a working machine having a bucket
WO2021237667A1 (en) Dense height map construction method suitable for legged robot planning
CN113781582B (en) Synchronous positioning and map creation method based on laser radar and inertial navigation combined calibration
CN111596665B (en) Dense height map construction method suitable for leg-foot robot planning
CN111862214B (en) Computer equipment positioning method, device, computer equipment and storage medium
CN111469127B (en) Cost map updating method and device, robot and storage medium
CN115372989A (en) Laser radar-based long-distance real-time positioning system and method for cross-country automatic trolley
CN114004869A (en) Positioning method based on 3D point cloud registration
CN111721279A (en) Tail end path navigation method suitable for power transmission inspection work
CN112161622A (en) Robot footprint planning method and device, readable storage medium and robot
JP2021001436A (en) Work machine
CN117315183B (en) Method for constructing three-dimensional map and analyzing operation based on laser radar
CN116256752A (en) Automatic generation method of port lifting tool template point cloud
CN116679307A (en) Urban rail transit inspection robot positioning method based on three-dimensional laser radar
CN114419046B (en) Method and device for recognizing weld of H-shaped steel, electronic equipment and storage medium
CN116380039A (en) Mobile robot navigation system based on solid-state laser radar and point cloud map
CN112127417B (en) Device for generating environmental data around construction machine and construction machine comprising same
CN114648571A (en) Method for filtering obstacles in driving area in high-precision mapping of robot
CN113703438A (en) AUV autonomous navigation path planning method for water delivery tunnel inspection
CN113610910A (en) Obstacle avoidance method for mobile robot
CN117419704A (en) Map area updating method and system for surface mine loading area
Kwak et al. Vision-based Payload Volume Estimation for Automatic Loading
CN114897967B (en) Material form identification method for autonomous operation of excavating equipment
EP4006588A1 (en) Method and a processing unit for reconstructing the surface topology of a ground surface in an environment of a motor vehicle and motor vehicle comprising such a processing unit
Dong et al. Point cloud data segmentation method of work surface based on RANSAC and octree voxel region growth

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant