CN114897967A - Material form recognition method for autonomous operation of excavating equipment - Google Patents

Material form recognition method for autonomous operation of excavating equipment

Info

Publication number
CN114897967A
CN114897967A (application CN202210378526.8A)
Authority
CN
China
Prior art keywords
point cloud
point
data
filtering
points
Prior art date
Legal status
Granted
Application number
CN202210378526.8A
Other languages
Chinese (zh)
Other versions
CN114897967B (en)
Inventor
冯晨龙
王继新
沈雨鹰
索喆
李想
Current Assignee
Yancheng Jiyan Intelligent Technology Co ltd
Original Assignee
Yancheng Jiyan Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Yancheng Jiyan Intelligent Technology Co ltd
Priority to CN202210378526.8A
Publication of CN114897967A
Application granted
Publication of CN114897967B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a material form recognition method for the autonomous operation of excavating equipment, aiming at the problems of noisy acquired data and difficult material recognition caused by the surrounding environment and the equipment's own vibration during autonomous operation. The method mainly comprises the following steps. Step (1): jointly calibrate the sensors. Step (2): collect material point cloud data and images, and perform multi-sensor fusion combined with attitude information to obtain point cloud data in the world coordinate system. Step (3): apply combined filtering to the point cloud data. Step (4): resample the point cloud obtained in step (3). Step (5): fit a curved surface model to the point cloud by the least squares method. The method provided by the invention offers high running speed and strong robustness.

Description

Material form recognition method for autonomous operation of excavating equipment
Technical Field
The invention relates to the fields of excavating equipment and multi-sensor fusion, and in particular to a material form recognition method for the autonomous operation of excavating equipment.
Background
Material recognition refers to the process of scanning a stockpile with a laser radar to obtain point cloud information and reconstructing the stockpile surface to obtain a mathematical model of the pile. This mathematical model provides the environmental parameters required for the trajectory planning of autonomously operating excavating equipment, and is therefore a prerequisite for making excavating equipment intelligent and unmanned; the accuracy of material form recognition also determines the efficiency and quality of the equipment's autonomous operation.
The prior art does not solve the problems of inaccurate material form recognition, low algorithm robustness and long running time caused by environmental factors and the equipment's own vibration during data acquisition.
Aiming at the harsh environment, poor visibility, and complex and changeable material forms of surface mines, the invention provides a material form recognition method for the autonomous operation of excavating equipment.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in an open-air environment, when autonomously operating excavating equipment recognizes material, the collected data contains many noise points and the recognition result is inaccurate. A material form recognition method for the autonomous operation of excavating equipment is therefore provided.
The technical scheme is as follows: the material form recognition method for the autonomous operation of excavating equipment according to the invention comprises the following steps:
Step (1): install a camera, a laser radar and an inertial sensor on the vehicle body, and jointly calibrate the sensors to obtain the transformation matrices.
Step (2): collect material point cloud data with the laser radar and images with the camera, and perform multi-sensor fusion combined with the vehicle body attitude information to obtain point cloud data in the world coordinate system.
Step (3): apply combined filtering to the point cloud data.
Step (4): resample the point cloud obtained in step (3).
Step (5): fit a curved surface model to the point cloud by the least squares method.
Further, the specific content of step (1) includes:
(1.1) mounting the camera, the laser radar and the inertial sensor at positions on the vehicle body that are free of occlusion, have a wide field of view and move little relative to the equipment;
(1.2) measuring the distances between the camera, the laser radar and the inertial sensor;
(1.3) obtaining the position transformation matrix from the laser radar to the inertial sensor from the measured distance between them;
(1.4) calibrating the camera and the laser radar with calibration paper, and determining the transformation matrix from the point cloud collected by the laser radar to the image data collected by the camera.
Further, the specific content of step (2) includes:
(2.1) collecting material point cloud data with the laser radar, image data with the camera, and vehicle body attitude information with the inertial sensor;
(2.2) projecting the collected point cloud into the collected image according to the transformation matrix obtained in step (1.4), and performing semantic segmentation on the point cloud with a convolutional neural network to segment out the point cloud information of the material;
(2.3) transforming the material point cloud into the world coordinate system according to the vehicle body attitude information and the calibration of the laser radar and the inertial sensor.
Further, the specific content of step (3) includes:
(3.1) Perform iteration-based statistical filtering on the point cloud.
Let the collected point cloud contain $n$ points. Set the number of points to query around each target point, i.e. the neighborhood size $k$, and compute the distance $md_j$ from each query point in the neighborhood to the target point, obtaining the average distance $S_i$ from the $k$ neighborhood points to the target point:

$$S_i = \frac{1}{k} \sum_{j=1}^{k} md_j$$

Compute the average distance $u$ over the $k$-neighborhoods of all points in the point cloud:

$$u = \frac{1}{n} \sum_{i=1}^{n} S_i$$

The neighborhood average distances $S_i$ follow a Gaussian distribution. Set the multiple $\alpha$ of the standard deviation $\sigma$ of the Gaussian distribution and the standard deviation threshold $a$. If the average distance of a target point's $k$-neighborhood satisfies $S_i > u + \alpha\sigma$, the target point is a noise point and is removed from the environmental point cloud data. After one pass of filtering, recompute the standard deviation $\sigma$ of the Gaussian distribution: if $\sigma < a$, the statistical filtering is complete; if $\sigma > a$, adjust the values of $k$ and $\alpha$ and perform statistical filtering again.
(3.2) Perform radius filtering on the point cloud.
The point cloud data produced by the statistical filtering in step (3.1) is further processed by radius filtering. Set the filtering radius R and the threshold Ks on the number of points within a target point's query range as required. A sphere of radius R is drawn around each target point; if the number of points contained in the sphere is less than the threshold Ks, the target point is judged to be a noise point and is deleted from the point cloud data.
Further, the specific content of step (4) includes:
(4.1) Downsample the point cloud data by the voxel method: the environmental point cloud data is divided into n three-dimensional voxel grids of size m × m × m, and the centroid of the points in each voxel grid represents the point cloud data within that grid.
Further, the specific content of step (5) includes:
(5.1) Detect and remove invalid points in the point cloud coordinate data after combined filtering, extract the coordinate data of the point cloud, and organize it into three column vectors whose elements represent the positions along the x, y and z axes of the world coordinate system.
(5.2) Build a high-order polynomial surface mathematical model and solve it by the least squares method.
(5.3) Extract the coefficients of the polynomial surface and draw the fitted surface model.
The invention has the following innovation points:
(1) The invention adopts multi-sensor fusion with a camera, so that material point cloud information can be accurately extracted from a large amount of environmental point cloud data.
(2) The invention optimizes the statistical filtering algorithm: the Gaussian threshold is adjusted according to the filtering effect, achieving a better point cloud noise reduction effect.
(3) The invention optimizes the surface reconstruction algorithm: the surface is reconstructed by polynomial response surface fitting, which improves the running speed of the algorithm.
Compared with existing methods that build maps by sensing unstructured environments, the invention has the following advantages:
(1) Wide application range: most existing environment perception and reconstruction algorithms focus on structured roads and suit the intelligent development of passenger vehicles, but perform poorly in the complex working environments of construction machinery. The invention solves the problem of reconstructing unstructured working environments for intelligent construction machinery, effectively reduces the influence of harsh environments on environment perception, improves working efficiency, and is more practical.
(2) Obvious noise reduction effect: the invention applies combined noise reduction to the environmental point cloud with a statistical filter and a radius filter with set thresholds, achieving a better noise reduction effect, improving the robustness of the whole scheme and the accuracy of environment reconstruction.
(3) Fast operation and good robustness: the invention gives priority to the running speed of the algorithm, reducing the amount of data as far as possible and lowering the difficulty of data processing while preserving the environmental features.
(4) Fast and accurate surface fitting: existing point cloud surface reconstruction methods mainly use triangulation or interpolation. Triangulation-based reconstruction is strongly affected by the acquired data and is not very stable; interpolation-based reconstruction is accurate but slow, and cannot respond in time to changes in the working environment of construction machinery. The polynomial surface fitting adopted by the invention is less affected by noise points in the point cloud, fits surfaces accurately and quickly, and yields a mathematical model of the surface for subsequent planning.
Drawings
Fig. 1 shows the material form recognition method for the autonomous operation of excavating equipment.
Fig. 2 is a schematic view of the camera, lidar and inertial sensor mounting locations.
Fig. 3 is a flow chart of point cloud noise reduction.
Detailed Description
The present invention will be described in further detail below through specific embodiments with reference to the accompanying drawings. The exemplary embodiments of the present invention and their description are intended only to explain the present invention and do not limit it.
As shown in fig. 1, the material form recognition method for the autonomous operation of excavating equipment in this embodiment comprises the following steps:
step (1): and (5) jointly calibrating the sensors.
(1.1) Mount the camera, the laser radar and the inertial sensor at positions on the vehicle body that are free of occlusion, have a wide field of view and move little relative to the equipment. In this embodiment, the relative installation positions of the camera 2, the laser radar 1 and the inertial sensor 3 are shown in fig. 2.
(1.2) Measure the distances between the camera, the laser radar and the inertial sensor. In this embodiment, a Cartesian coordinate system for distance calibration is established with the center of gravity of the inertial sensor as the origin O, the direction from the inertial sensor to the camera as the positive X axis, the direction from the inertial sensor to the laser radar as the positive Z axis, and the direction perpendicular to the XOZ plane as the Y axis. The measured coordinates are G(0, 0, 0) for the inertial sensor, J(10, 100) for the camera and $X_j$(60, 0, 10) for the laser radar. The coordinate transformation matrix $A_1$ of the laser radar and the inertial sensor is obtained from

$$X_j A_1 = G$$

Solving this equation yields the coordinate transformation matrix $A_1$ of the inertial sensor and the laser radar; the coordinate transformation matrix $A_2$ of the camera and the inertial sensor and the coordinate transformation matrix $A_3$ of the laser radar and the camera are obtained in the same way. (The numerical values of $A_1$, $A_2$ and $A_3$ are given as equation images in the original document.)
(1.3) Using the coordinate transformation matrix $A_2$ between the camera and the inertial sensor, the transformation matrix $M_1$ from the laser radar position to the inertial sensor is obtained by formula (1-1), where Q denotes the vehicle body attitude (the inclination angle of the vehicle body) acquired by the inertial sensor:

$$M_1 = A_2 Q \qquad (1\text{-}1)$$

(1.4) Calibrate the camera and the laser radar with calibration paper, and determine the transformation matrix from the point cloud collected by the laser radar to the image data collected by the camera. The calibration paper used in this embodiment is shown in fig. 3. The image and point cloud data of the calibration paper are collected, and the coordinate transformation matrix between the camera and the laser radar is computed by formula (1-2). The optical center of the camera is taken as the origin; the z axis coincides with the optical axis, i.e. points ahead of the camera; the positive x and y axes are parallel to the object coordinate system, with the horizontal direction as the x axis and the vertical direction as the y axis. Here $(P_x, P_y, 1)$ is the homogeneous form of a point's coordinates in the pixel coordinate system, $f_x$ and $f_y$ denote the focal lengths of the camera in the x and y directions, $(P_{x0}, P_{y0})$ are the principal point coordinates of the camera, $R$ and $t$ denote the rotation matrix and translation matrix required by the joint calibration, and $(L_x, L_y, L_z)$ are coordinate values in the laser radar coordinate system. The transformation matrix $M_2$ from the laser radar to the camera is finally obtained:

$$Z_c \begin{bmatrix} P_x \\ P_y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & P_{x0} \\ 0 & f_y & P_{y0} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} L_x \\ L_y \\ L_z \\ 1 \end{bmatrix} \qquad (1\text{-}2)$$
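For illustration, a minimal numpy sketch of the projection in formula (1-2) follows; the intrinsic and extrinsic values below are hypothetical placeholders, not the calibrated values of this embodiment.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Apply formula (1-2): map laser radar points (Lx, Ly, Lz) to pixel
    coordinates (Px, Py) via the intrinsic matrix K and extrinsics R, t."""
    pts_cam = points_lidar @ R.T + t          # laser radar frame -> camera frame
    in_front = pts_cam[:, 2] > 0              # keep points ahead of the camera
    uvw = pts_cam[in_front] @ K.T             # apply camera intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]             # perspective division to (Px, Py)
    return uv, in_front

# Hypothetical calibration values, for illustration only.
K = np.array([[800.0, 0.0, 320.0],            # fx, 0,  Px0
              [0.0, 800.0, 240.0],            # 0,  fy, Py0
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.06, 0.0, 0.01])
uv, mask = project_lidar_to_image(np.random.rand(100, 3) + [0, 0, 2.0], K, R, t)
```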
Step (2): collect material point cloud data and images, and perform multi-sensor fusion combined with attitude information to obtain point cloud data in the world coordinate system.
(2.1) Collect point cloud data, image data and vehicle body attitude information at the same time.
(2.2) Using the transformation matrix $M_2$ obtained in step (1.4), project the collected point cloud information into the collected image to obtain a picture with depth information; feed the picture into a trained convolutional neural network, perform semantic segmentation on the point cloud data, and segment out the material point cloud to be recognized.
(2.3) Obtain the material point cloud in the world coordinate system (with the camera as the origin) according to the vehicle body attitude information (the inclination angle of the vehicle body) and the calibration of the laser radar and the inertial sensor.
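A sketch of step (2.3), under the assumption that the calibration result and the attitude are expressed as 4 × 4 homogeneous transforms (the T matrix names are illustrative and condense the patent's $A_1$ and Q notation):

```python
import numpy as np

def lidar_points_to_world(points_lidar, T_imu_lidar, T_world_imu):
    """Chain homogeneous transforms: laser radar frame -> inertial sensor
    frame -> world frame, as in step (2.3)."""
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])   # N x 4 homogeneous points
    world = homo @ (T_world_imu @ T_imu_lidar).T        # compose, then transform
    return world[:, :3]
```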
Step (3): apply combined filtering to the point cloud data.
(3.1) Perform iteration-based statistical filtering on the point cloud data.
Let the collected point cloud contain $n$ points. Set the number of points to query around each target point, i.e. the neighborhood size $k$, and compute the distance $md_j$ from each query point in the neighborhood to the target point, obtaining the average distance $S_i$ from the $k$ neighborhood points to the target point:

$$S_i = \frac{1}{k} \sum_{j=1}^{k} md_j$$

Compute the average distance $u$ over the $k$-neighborhoods of all points in the point cloud:

$$u = \frac{1}{n} \sum_{i=1}^{n} S_i$$

The neighborhood average distances $S_i$ follow a Gaussian distribution. Set the multiple $\alpha$ of the standard deviation $\sigma$ of the Gaussian distribution and the standard deviation threshold $a$. If the average distance of a target point's $k$-neighborhood satisfies $S_i > u + \alpha\sigma$, the target point is a noise point and is removed from the environmental point cloud data. After one pass of filtering, recompute the standard deviation $\sigma$ of the Gaussian distribution: if $\sigma < a$, the statistical filtering is complete; if $\sigma > a$, adjust the values of $k$ and $\alpha$ and perform statistical filtering again.
In this embodiment, the initial value of $k$ is 5 and the initial value of $\alpha$ is 0.2; if the threshold condition is not satisfied, $k$ is increased by 5 and $\alpha$ by 0.1.
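A minimal sketch of this iteration-based statistical filtering, assuming scipy's k-d tree for the neighbor search; the convergence threshold a is not specified in the embodiment, so its value here is a placeholder.

```python
import numpy as np
from scipy.spatial import cKDTree

def iterative_statistical_filter(points, k=5, alpha=0.2, a=0.05,
                                 k_step=5, alpha_step=0.1, max_iter=10):
    """Iteration-based statistical filtering (step 3.1). k and alpha start at
    the embodiment's values (5 and 0.2) and grow by +5 / +0.1 while the
    standard deviation has not dropped below the threshold a (placeholder)."""
    for _ in range(max_iter):
        dists, _ = cKDTree(points).query(points, k=k + 1)  # k neighbors + self
        S = dists[:, 1:].mean(axis=1)            # mean neighbor distance S_i
        u, sigma = S.mean(), S.std()             # global mean u, std sigma
        points = points[S <= u + alpha * sigma]  # drop noise beyond u + alpha*sigma
        if len(points) <= k + 1:
            break
        # Recompute sigma on the filtered cloud and test the threshold a.
        dists, _ = cKDTree(points).query(points, k=k + 1)
        sigma = dists[:, 1:].mean(axis=1).std()
        if sigma < a:
            break
        k, alpha = k + k_step, alpha + alpha_step
    return points
```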
(3.2) Perform radius filtering on the point cloud. The point cloud data produced by the statistical filtering in step (3.1) is further processed by radius filtering; the filtering radius R and the threshold Ks on the number of points within a target point's query range are set as required. A sphere of radius R is drawn around each target point; if the number of points contained in the sphere is less than the threshold Ks, the target point is judged to be a noise point and is deleted from the point cloud data. In this embodiment the radius R is set to 0.1 cm and the point count threshold Ks to 5.
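The radius filtering can be sketched the same way, with the embodiment's R = 0.1 and Ks = 5:

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_filter(points, radius=0.1, min_points=5):
    """Radius filtering (step 3.2): keep a point only if at least min_points
    other points fall inside the sphere of the given radius around it."""
    tree = cKDTree(points)
    neighbors = tree.query_ball_point(points, r=radius)
    counts = np.array([len(idx) - 1 for idx in neighbors])  # exclude the point itself
    return points[counts >= min_points]
```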
Step (4): resample the point cloud obtained in step (3).
(4.1) Downsample the point cloud data by the voxel method: the environmental point cloud data is divided into n three-dimensional voxel grids of size m × m × m, and the centroid of the points in each voxel grid represents the point cloud data within that grid. In this embodiment the voxel grid size is 1 × 1 × 1 cm, as shown in fig. 3.
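A numpy sketch of this voxel downsampling; the grid size of 0.01 assumes point coordinates in meters, matching the embodiment's 1 cm cell.

```python
import numpy as np

def voxel_downsample(points, voxel=0.01):
    """Voxel downsampling (step 4.1): all points falling in one
    voxel x voxel x voxel cell are replaced by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)   # integer voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    np.add.at(sums, inverse, points)                   # sum the points in each voxel
    counts = np.bincount(inverse).astype(float)
    return sums / counts[:, None]                      # centroid of each voxel
```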
Step (5): fit a curved surface model to the point cloud by the least squares method to complete the recognition of the material form.
(5.1) Detect and remove invalid points in the point cloud coordinate data after combined filtering, and organize the point cloud into three column vectors whose elements represent the positions along the x, y and z axes of the world coordinate system. The point cloud is represented here as

$$P = \begin{bmatrix} x & y & z \end{bmatrix}$$

where $N$ denotes the number of points, $x = [x_1, x_2, \ldots, x_N]^T$, $y = [y_1, y_2, \ldots, y_N]^T$ and $z = [z_1, z_2, \ldots, z_N]^T$.
(5.2) Build a high-order polynomial surface mathematical model and solve it by the least squares method. The surface equation selected in this embodiment is a 5th-order polynomial surface whose expression is given by formula (5-1), where the $k_{ij}$ are the polynomial coefficients to be constructed:

$$z = \sum_{i=0}^{5} \sum_{j=0}^{5-i} k_{ij}\, x^{i} y^{j} \qquad (5\text{-}1)$$

Written in matrix form, with one row of monomials per point, this is formula (5-2):

$$\begin{bmatrix} 1 & x_1 & y_1 & \cdots & x_1^{i} y_1^{j} & \cdots \\ \vdots & \vdots & \vdots & & \vdots & \\ 1 & x_N & y_N & \cdots & x_N^{i} y_N^{j} & \cdots \end{bmatrix} \begin{bmatrix} k_{00} \\ k_{10} \\ k_{01} \\ \vdots \end{bmatrix} = \begin{bmatrix} z_1 \\ \vdots \\ z_N \end{bmatrix} \qquad (5\text{-}2)$$

that is:

$$M K = Z \qquad (5\text{-}3)$$
The coefficient matrix $K$ in formula (5-3) is unknown; estimating it yields the matrix $\hat{K}$. According to the least squares principle, the estimated values $\hat{z}_i$ obtained from the $\hat{K}$ matrix are differenced with the actual values $z_i$ to obtain the sum of squares $\varepsilon$:

$$\varepsilon = \sum_{i=1}^{N} (\hat{z}_i - z_i)^2$$

The value of the $\hat{K}$ matrix is computed when $\varepsilon$ reaches its minimum $\min(\varepsilon)$. Using the principle that the first derivative of a function is zero at an extreme point, $\varepsilon$ is differentiated with respect to $\hat{K}$:

$$\frac{\partial \varepsilon}{\partial \hat{K}} = 2 M^{T} (M \hat{K} - Z) = 0$$

from which the value of the $\hat{K}$ matrix is found:

$$\hat{K} = (M^{T} M)^{-1} M^{T} Z$$

From the coefficient matrix $\hat{K}$, the fitted 5th-order polynomial surface model is obtained.
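Step (5) thus reduces to building the design matrix M of formula (5-2) and solving the least squares problem; the following is a minimal numpy sketch (np.linalg.lstsq is used instead of forming $(M^T M)^{-1}$ explicitly, a numerically safer route to the same solution).

```python
import numpy as np

def fit_poly_surface(x, y, z, order=5):
    """Fit z = sum_{i+j<=order} k_ij * x**i * y**j by least squares (step 5.2).
    x, y, z are the three column vectors of the point cloud from step (5.1)."""
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    M = np.stack([x**i * y**j for (i, j) in terms], axis=1)  # design matrix of (5-2)
    K, *_ = np.linalg.lstsq(M, z, rcond=None)                # solves min ||MK - Z||^2
    return dict(zip(terms, K))                               # coefficient k_ij per (i, j)

def eval_poly_surface(coeffs, x, y):
    """Evaluate the fitted surface at query coordinates (step 5.3)."""
    return sum(k * x**i * y**j for (i, j), k in coeffs.items())
```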

Claims (3)

1. A material form recognition method for the autonomous operation of excavating equipment, characterized in that the method applies combined filtering to heterogeneously fused data to filter noise quickly and effectively, the combined filtering iteratively optimizes the parameters of traditional filtering, and a high-order response surface with higher processing speed is finally used to fit the mathematical model of the material.
Wherein the heterogeneous data mainly comprises: point cloud data collected by a laser radar and image data collected by a camera;
the method for performing fusion processing by using heterogeneous data comprises the following steps:
1) installing a camera, a laser radar and an inertial sensor on the vehicle body, and jointly calibrating the sensors to obtain the transformation matrices;
2) collecting material point cloud data with the laser radar and image data with the camera, and performing multi-sensor fusion combined with attitude information to obtain point cloud data in the world coordinate system.
The method of performing iteratively optimized combined filtering on the fused heterogeneous data and fitting the mathematical model of the material with a high-order response surface comprises the following steps:
1) performing combined filtering on the point cloud data;
first, iteration-based statistical filtering of the point cloud;
secondly, radius filtering of the point cloud;
2) resampling the point cloud obtained in the previous step;
3) fitting a curved surface model to the point cloud by the least squares method to complete the recognition of the material form.
2. The material form recognition method for the autonomous operation of excavating equipment according to claim 1, characterized in that the data is filtered by iteratively optimized combined filtering, the specific content of the filtering process comprising:
1) performing iteration-based statistical filtering on the fused point cloud:
let the collected point cloud contain $n$ points; set the number of points to query around a target point, i.e. the neighborhood $k$; compute the distance $md_j$ from each query point in the neighborhood $k$ to the target point, obtaining the average distance $S_i$ from the $k$ neighborhood points to the target point;
compute the average distance $u$ over the $k$-neighborhoods of all points in the fused point cloud;
the neighborhood average distances $S_i$ follow a Gaussian distribution; set the multiple $\alpha$ of the standard deviation $\sigma$ of the Gaussian distribution and the standard deviation threshold $a$; if the average distance of a target point's $k$-neighborhood satisfies $S_i > u + \alpha\sigma$, the target point is a noise point and is removed from the environmental point cloud data; after one pass of filtering, recompute the standard deviation $\sigma$ of the Gaussian distribution; if $\sigma < a$, the statistical filtering is complete; if $\sigma > a$, adjust the values of $k$ and $\alpha$ and perform statistical filtering again;
2) performing radius filtering on the point cloud:
the point cloud data produced by the statistical filtering in step 1) is further processed by radius filtering; the filtering radius R and the threshold Ks on the number of points within a target point's query range are set as required; a sphere of radius R is drawn around each target point, and if the number of points contained in the sphere is less than the threshold Ks, the target point is judged to be a noise point and deleted from the point cloud data.
3. The material form recognition method for the autonomous operation of excavating equipment according to claim 1, characterized in that a polynomial estimation modeling method is used to fit the mathematical model of the material with a high-order response surface, the specific content comprising:
1) detecting whether invalid points exist in the point cloud coordinate data after combined filtering processing, eliminating the invalid points, and extracting the coordinate data of the point cloud;
2) constructing a high-order response surface mathematical model by the polynomial estimation modeling method, and solving the mathematical model by the least squares method.
CN202210378526.8A 2022-04-12 2022-04-12 Material form identification method for autonomous operation of excavating equipment Active CN114897967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210378526.8A CN114897967B (en) 2022-04-12 2022-04-12 Material form identification method for autonomous operation of excavating equipment


Publications (2)

Publication Number Publication Date
CN114897967A (en) 2022-08-12
CN114897967B CN114897967B (en) 2024-04-26

Family

ID=82717495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210378526.8A Active CN114897967B (en) 2022-04-12 2022-04-12 Material form identification method for autonomous operation of excavating equipment

Country Status (1)

Country Link
CN (1) CN114897967B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019242174A1 (en) * 2018-06-21 2019-12-26 华南理工大学 Method for automatically detecting building structure and generating 3d model based on laser radar
CN110415342A (en) * 2019-08-02 2019-11-05 深圳市唯特视科技有限公司 A kind of three-dimensional point cloud reconstructing device and method based on more merge sensors
CN112150388A (en) * 2020-09-30 2020-12-29 大连华锐重工集团股份有限公司 Continuous ship unloader ship and material identification sensing method
US11282291B1 (en) * 2021-02-09 2022-03-22 URC Ventures, Inc. Determining object structure using fixed-location cameras with only partial view of object
CN113971692A (en) * 2021-10-28 2022-01-25 中冶赛迪上海工程技术有限公司 Three-dimensional reconstruction method and system for scrap steel stock yard

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
林万誉; 周思跃; 袁政鹏: "Research on point cloud data denoising method based on least trimmed squares", 计量与测试技术 [Metrology & Measurement Technique], no. 12, 30 December 2016 (2016-12-30) *
薛连杰; 齐臣坤; 张彪; 张霄远; 吴长征: "Object size and orientation recognition based on 3D point cloud Euclidean clustering and RANSAC boundary fitting", 机械设计与研究 [Machine Design & Research], no. 05, 20 October 2018 (2018-10-20) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116663761A (en) * 2023-06-25 2023-08-29 昆明理工大学 Pseudo-ginseng chinese-medicinal material low-loss excavation system
CN116663761B (en) * 2023-06-25 2024-04-23 昆明理工大学 Pseudo-ginseng chinese-medicinal material low-loss excavation system

Also Published As

Publication number Publication date
CN114897967B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN111929699B (en) Laser radar inertial navigation odometer considering dynamic obstacle and map building method and system
CN113436260B (en) Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN110264567B (en) Real-time three-dimensional modeling method based on mark points
CN110796728B (en) Non-cooperative spacecraft three-dimensional reconstruction method based on scanning laser radar
CN109752701A (en) A kind of road edge detection method based on laser point cloud
CN112162297B (en) Method for eliminating dynamic obstacle artifacts in laser point cloud map
CN115372989A (en) Laser radar-based long-distance real-time positioning system and method for cross-country automatic trolley
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN114332348B (en) Track three-dimensional reconstruction method integrating laser radar and image data
CN111932614B (en) Laser radar instant positioning and mapping method based on clustering center characteristics
CN112465849B (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
CN113205604A (en) Feasible region detection method based on camera and laser radar
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN111829514B (en) Road surface working condition pre-aiming method suitable for vehicle chassis integrated control
CN115479598A (en) Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN114897967B (en) Material form identification method for autonomous operation of excavating equipment
CN116758234A (en) Mountain terrain modeling method based on multipoint cloud data fusion
CN113487631B (en) LEGO-LOAM-based adjustable large-angle detection sensing and control method
CN116543032B (en) Impact object ranging method, device, ranging equipment and storage medium
CN116878542A (en) Laser SLAM method for inhibiting height drift of odometer
CN117029870A (en) Laser odometer based on road surface point cloud
CN116893425A (en) Ultrahigh-precision positioning method for orchard picking robot
CN114648571A (en) Method for filtering obstacles in driving area in high-precision mapping of robot
CN114581519A (en) Laser autonomous positioning mapping method for quadruped robot in cross-country environment
CN115046543A (en) Multi-sensor-based integrated navigation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant