CN115690203A - Bucket pose sensing method and system and storable medium - Google Patents

Bucket pose sensing method and system and storable medium

Info

Publication number
CN115690203A
CN115690203A (application number CN202211186802.7A)
Authority
CN
China
Prior art keywords
bucket
point cloud
cloud data
perceived
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211186802.7A
Other languages
Chinese (zh)
Inventor
毕林
徐梓菁
周蕴卓
张玉昊
赵子瑜
谭笑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN202211186802.7A priority Critical patent/CN115690203A/en
Publication of CN115690203A publication Critical patent/CN115690203A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a bucket pose sensing method, a bucket pose sensing system, and a storable medium, and relates to the technical field of point cloud processing. The method comprises the following steps: S1: acquiring feature points of a bucket to be perceived, and constructing a template point cloud library of the bucket from the feature points; S2: acquiring scene point cloud data of the bucket to be perceived, and obtaining the bucket point cloud data from the scene point cloud data; S3: acquiring the bucket point cloud data in real time and performing tracking processing to obtain a tracking result; S4: obtaining pose estimation information of the bucket to be perceived based on the tracking result of S3. With this method, the spatial pose of the bucket is acquired accurately: an operator can precisely perceive the bucket's spatial pose, carry out accurate shoveling operations, and ensure that shoveling is performed according to design requirements.

Description

Bucket pose sensing method and system and storage medium
Technical Field
The invention relates to the technical field of point cloud processing, in particular to a bucket pose sensing method and system and a storable medium.
Background
At present, accurate positioning of the working mechanism of shoveling equipment is important for precise shoveling in fields such as mining and infrastructure construction. On the one hand, pose estimation of the working mechanism can assist an operator in performing accurate shoveling operations and improve shoveling efficiency; on the other hand, real-time acquisition of the bucket pose is an important basis for remote control and for accurately guiding the shoveling position.
However, existing methods for acquiring the bucket pose rely on installing various sensors. The operating mechanism of shoveling equipment generally has multiple degrees of freedom and a complex structure, so accurately capturing its changing pose while shoveling material requires installing several sensors of different types (such as angle sensors and displacement sensors). These sensors are difficult to install and maintain, easily damaged, and complex to calibrate and compute with, and loose or worn structural parts easily introduce errors.
Therefore, how to provide a bucket pose sensing method with high sensing precision is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of this, the invention provides a method and a system for sensing the position and orientation of a bucket, and a storable medium, so that the spatial position and orientation of the bucket can be accurately obtained, an operator can realize accurate sensing of the spatial position and orientation of the bucket through the method, accurate shoveling and loading operation can be realized, and shoveling and loading operation can be performed according to design requirements.
In order to achieve the purpose, the invention adopts the following technical scheme:
a bucket posture sensing method comprises the following steps:
s1: acquiring feature points of a bucket to be perceived, and constructing a template point cloud base of the bucket to be perceived by using the feature points;
s2: acquiring scene point cloud data of the bucket to be perceived, and obtaining bucket point cloud data of the bucket to be perceived according to the scene point cloud data;
s3: acquiring point cloud data of the bucket in real time and carrying out tracking processing to obtain a tracking processing result;
s4: and realizing the pose estimation information of the bucket to be perceived based on the tracking processing result obtained in the step S3.
Preferably, S1 specifically includes:
s11: acquiring a three-dimensional model of the bucket to be sensed, and acquiring corresponding bucket template point cloud data according to the three-dimensional model;
s12: and downsampling the bucket template point cloud data to construct a template point cloud base of the bucket to be sensed.
Preferably, the S2 specifically includes:
s21: acquiring scene point cloud data of the bucket to be sensed;
s22: converting the scene point cloud data into depth image data, and extracting bucket boundary data of the bucket to be sensed according to the depth image data;
s23: and extracting characteristic point pairs corresponding to the bucket boundary data and the bucket template point cloud data to form bucket point cloud data.
Preferably, the S3 specifically includes:
s31: establishing a Kalman filtering tracker by utilizing a Kalman filtering method;
s32: tracking the position information of the bucket point cloud data identified in the first frame as an initial value, and sequentially tracking and detecting the bucket point cloud data of the rest frames to obtain bucket position information;
s33: and updating parameters of the Kalman filtering tracker according to the bucket position information to obtain a tracking result.
Preferably, S4 specifically includes:
s41: acquiring a rotation matrix and a displacement matrix of the bucket point cloud data;
s42: and obtaining the posture change of the bucket according to the rotation matrix and the displacement matrix, and further obtaining the posture estimation information of the bucket to be perceived.
Further, the present invention provides a sensing system using any one of the above methods for sensing a posture of a bucket, including:
the acquisition module is used for acquiring the characteristic points of the bucket to be perceived and constructing a template point cloud base of the bucket to be perceived by using the characteristic points;
the computing module is used for acquiring scene point cloud data of the bucket to be perceived and obtaining bucket point cloud data of the bucket to be perceived according to the scene point cloud data;
the tracking module is used for acquiring the point cloud data of the bucket in real time and carrying out tracking processing to obtain a tracking result;
and the perception module is used for realizing the pose estimation information of the bucket to be perceived based on the tracking processing result.
Further, the present invention also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the bucket posture sensing method according to any one of the above.
According to the above technical scheme, compared with the prior art, the invention provides a bucket pose sensing method, system, and storage medium in which the bucket point cloud data is tracked by a Kalman filtering tracker. This improves the speed of pose estimation, is better suited to identifying a continuously moving working mechanism during shoveling operations, and effectively solves the problem of losing the target due to occlusion while it moves. Compared with acquiring the working mechanism pose from conventional sensors, the method has advantages such as small error, high reliability, and resistance to damage. Compared with target recognition based on deep learning, the method requires no large amount of sample data, only the working mechanism model and its point cloud template library, which improves computation speed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is an overall flowchart of a bucket pose sensing method provided by the invention;
fig. 2 is a schematic structural diagram of a bucket posture sensing system provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to the attached drawing 1, the embodiment of the invention discloses a bucket pose sensing method, which comprises the following steps:
s1: acquiring feature points of the bucket to be perceived, and constructing a template point cloud base of the bucket to be perceived by using the feature points;
s2: acquiring scene point cloud data of a bucket to be perceived, and acquiring bucket point cloud data of the bucket to be perceived according to the scene point cloud data;
s3: acquiring point cloud data of the bucket in real time and carrying out tracking processing to obtain a tracking processing result;
s4: and realizing the pose estimation information of the bucket to be perceived based on the tracking processing result obtained in the step S3.
In a specific embodiment, S1 specifically includes:
s11: acquiring a three-dimensional model of the bucket to be sensed, and acquiring corresponding bucket template point cloud data according to the three-dimensional model;
S12: downsampling the bucket template point cloud data to construct a template point cloud library of the bucket to be sensed.
Specifically, a standard three-dimensional model of the bucket is obtained, and functions in the pcl library are used to read the model and generate a complete bucket template point cloud. The template point cloud is downsampled through a spatial voxel grid: the centroid of the points within each voxel represents a feature point F for that voxel, and these feature points constitute the template point cloud library of the bucket. The specific expression is:
$$F = \left(\frac{1}{n}\sum_{j=1}^{n} q_{xj},\ \frac{1}{n}\sum_{j=1}^{n} q_{yj},\ \frac{1}{n}\sum_{j=1}^{n} q_{zj}\right)$$

where $n$ is the number of template point cloud points in the voxel, and $q_{xj}$, $q_{yj}$, $q_{zj}$ are the coordinates of the $j$-th point along the $x$, $y$ and $z$ axes.
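As an illustrative sketch (not the pcl-based implementation the description refers to), the per-voxel centroid computation of S12 can be written in Python/NumPy as follows; the function name and voxel size are assumptions:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Replace all points that fall into one voxel by their centroid.

    points: (N, 3) array of x/y/z coordinates; returns an (M, 3) array of
    feature points F, one per occupied voxel of the spatial voxel grid.
    """
    # Integer voxel index of every point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel index.
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_voxels = int(inverse.max()) + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]  # centroid (feature point F) per voxel
```

For example, two points falling in the same unit voxel are replaced by their midpoint, shrinking the template cloud while preserving its shape.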
In a specific embodiment, S2 specifically includes:
s21: scanning the working environment of the bucket to be sensed through a laser radar, and acquiring scene point cloud data of the bucket to be sensed;
S22: converting the scene point cloud data into depth image data; comparing, point by point, the distance change between each sampling point and its neighbours in the depth image; and extracting the bucket boundary data of the bucket to be perceived based on the discontinuous change in distance from the object to the background;
s23: and extracting characteristic point pairs corresponding to the bucket boundary data and the bucket template point cloud data to form bucket point cloud data.
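A minimal sketch of the depth-discontinuity idea in S22 — flagging pixels whose range value jumps relative to a neighbour — might look as follows in Python/NumPy; the 4-neighbour comparison and the jump threshold are assumptions, not the patent's exact criterion:

```python
import numpy as np

def depth_boundary(depth: np.ndarray, jump: float) -> np.ndarray:
    """Mark pixels whose depth differs from a horizontal or vertical
    neighbour by more than `jump` (an object-to-background discontinuity).

    depth: (H, W) range image; returns a boolean (H, W) boundary mask.
    """
    mask = np.zeros(depth.shape, dtype=bool)
    # Depth differences between horizontally / vertically adjacent pixels.
    dh = np.abs(np.diff(depth, axis=1)) > jump
    dv = np.abs(np.diff(depth, axis=0)) > jump
    # Both pixels of a discontinuous pair belong to the boundary.
    mask[:, :-1] |= dh
    mask[:, 1:] |= dh
    mask[:-1, :] |= dv
    mask[1:, :] |= dv
    return mask
```

Applied to the bucket's range image, the mask isolates the silhouette where the bucket stands out from the background.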
Specifically, on the basis of the bucket boundary data, the same feature point extraction method used for the template point cloud data is applied to the scene point cloud data; local feature descriptors of the feature points are calculated, and a local coordinate system is constructed for each scene feature point. Establishing a local coordinate system for each feature gives it rotation and translation invariance, and the local coordinate system of the same feature point is unique within a scene or model. A structure search algorithm can therefore be used to find descriptors in the scene that are similar to those of the template under a matching criterion such as Euclidean distance, yielding three-dimensional correspondences between scene and template point cloud data. An algorithm such as Hough voting is then applied: peaks in the voting space indicate correctly corresponding feature point pairs, pseudo-correspondences are removed, and the point cloud data in the scene matching the template point cloud data is obtained, completing the identification of the bucket point cloud.
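The descriptor-matching step can be illustrated with a simple nearest-neighbour search under Euclidean distance. This sketch uses a plain distance threshold in place of the Hough-voting pseudo-correspondence removal described above, and all names are assumptions:

```python
import numpy as np

def match_descriptors(scene_desc: np.ndarray,
                      templ_desc: np.ndarray,
                      max_dist: float) -> list:
    """Pair each scene descriptor with its nearest template descriptor.

    Returns (scene_index, template_index) pairs whose Euclidean distance
    is below max_dist; larger distances are treated as
    pseudo-correspondences and dropped.
    """
    pairs = []
    for i, d in enumerate(scene_desc):
        dists = np.linalg.norm(templ_desc - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            pairs.append((i, j))
    return pairs
```

In practice a k-d tree search would replace the brute-force loop, and a geometric-consistency stage (such as the Hough voting above) would filter the surviving pairs.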
In a specific embodiment, S3 specifically includes:
s31: establishing a Kalman filtering tracker by utilizing a Kalman filtering method, wherein a Kalman filtering algorithm state prediction equation is as follows:
$$x_k = F x_{k-1} + B u_k$$

where $x_k$ and $x_{k-1}$ are the state variables at times $k$ and $k-1$, $u_k$ is the input variable at time $k$, $F$ is the state transition matrix, and $B$ is the input gain matrix.
S32: tracking the position information of the bucket point cloud data identified in the first frame as an initial value, and sequentially tracking and detecting the bucket point cloud data of the rest frames to obtain bucket position information;
s33: and updating parameters of the Kalman filtering tracker according to the position information of the bucket, namely updating a state transition matrix and an input gain matrix to obtain a tracking result.
Specifically, S32-S33 are repeated, and the detected target bucket is continuously used to update the Kalman filtering tracker. When the bucket is occluded, the Kalman filtering method predicts the current bucket position from the bucket state in the previous frame, which effectively solves the problem of the target being lost due to occlusion while the working mechanism moves. The tracking method responds within milliseconds, capturing the bucket point cloud in real time and quickly acquiring the bucket information in each frame of the scene point cloud.
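A minimal constant-velocity Kalman tracker for one coordinate of the bucket position is sketched below in Python/NumPy. The state layout, noise levels, and time step are assumptions — the patent does not specify them:

```python
import numpy as np

class KalmanTracker:
    """Constant-velocity Kalman filter for one coordinate of the bucket.

    State x = [position, velocity]; with no control input the prediction
    reduces to x_k = F x_{k-1} (the B u_k term is zero).
    """
    def __init__(self, dt: float = 0.1, q: float = 1e-3, r: float = 1e-2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.H = np.array([[1.0, 0.0]])              # only position is measured
        self.Q = q * np.eye(2)                       # process noise
        self.R = np.array([[r]])                     # measurement noise
        self.x = np.zeros(2)
        self.P = np.eye(2)

    def predict(self) -> float:
        # Usable as the bucket position estimate when the bucket is occluded.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return float(self.x[0])

    def update(self, z: float) -> float:
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return float(self.x[0])
```

Each frame, `predict()` is called, and `update()` is called only when the bucket is actually detected, so occluded frames fall back on the motion model.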
In a specific embodiment, S4 specifically includes:
s41: acquiring a rotation matrix and a displacement matrix of bucket point cloud data;
s42: and obtaining the position and attitude change of the bucket according to the rotation matrix and the displacement matrix, and further obtaining position and attitude estimation information of the bucket to be sensed.
Specifically, feature points are extracted from the bucket point cloud captured in each frame, using the template-scene feature point pair method of S1. Correctly corresponding feature point pairs between adjacent frames are then found by the Hough voting algorithm; coarse registration is performed with a random sample consensus algorithm, followed by fine registration with the ICP algorithm, and the rotation matrix R and displacement matrix T between adjacent frame point clouds are calculated. The rotation matrix R is:
$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}$$
the displacement matrix T is:
$$T = (t_1,\ t_2,\ t_3)^{T}$$
Taking the bucket point cloud extracted from the first frame after the radar starts scanning as the initial pose of the bucket, the pose change of the bucket between adjacent frames is calculated from the rotation and displacement matrices, yielding the pose estimation of the bucket scanned by the laser radar at the current moment.
The six-degree-of-freedom pose in the scene point cloud coordinate system is estimated as:

$$\mathrm{pose} = (x,\ y,\ z,\ \alpha,\ \beta,\ \gamma)^{T}$$

where $(x, y, z) = (t_1, t_2, t_3)$, and the rotation angles are recovered from the rotation matrix $R$, e.g. in the ZYX convention:

$$\alpha = \operatorname{atan2}(r_{32}, r_{33}),\quad \beta = \operatorname{atan2}\!\left(-r_{31}, \sqrt{r_{32}^{2}+r_{33}^{2}}\right),\quad \gamma = \operatorname{atan2}(r_{21}, r_{11})$$
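The R/T computation between corresponding point sets, and the recovery of the rotation angles, can be sketched as below. The closed-form SVD (Kabsch) solution stands in for the RANSAC + ICP pipeline of the description, the ZYX Euler-angle formulas are one common convention rather than necessarily the patent's, and all function names are assumptions:

```python
import numpy as np

def rigid_transform(P: np.ndarray, Q: np.ndarray):
    """Least-squares R, T such that Q ~= R @ p + T for corresponding rows
    of P and Q (Kabsch/SVD), a common closed-form step inside ICP-style
    registration."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cq - R @ cp
    return R, T

def euler_zyx(R: np.ndarray):
    """(alpha, beta, gamma) rotation angles from a rotation matrix."""
    alpha = np.arctan2(R[2, 1], R[2, 2])
    beta = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    gamma = np.arctan2(R[1, 0], R[0, 0])
    return alpha, beta, gamma
```

Applying `rigid_transform` to matched feature points of adjacent frames yields the per-frame pose increment, which is then accumulated onto the initial pose.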
Referring to fig. 2, an embodiment of the present invention further provides a sensing system using a bucket pose sensing method according to any one of the above embodiments, including:
the acquisition module is used for acquiring the feature points of the bucket to be perceived and constructing a template point cloud base of the bucket to be perceived by using the feature points;
the computing module is used for acquiring scene point cloud data of the bucket to be perceived and obtaining bucket point cloud data of the bucket to be perceived according to the scene point cloud data;
the tracking module is used for acquiring point cloud data of the bucket in real time and performing tracking processing to obtain a tracking result;
and the perception module is used for realizing the pose estimation information of the bucket to be perceived based on the tracking processing result.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the bucket pose sensing method according to any one of the above embodiments is implemented.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A bucket posture sensing method is characterized by comprising the following steps:
s1: acquiring feature points of a bucket to be sensed, and constructing a template point cloud base of the bucket to be sensed by using the feature points;
s2: acquiring scene point cloud data of the bucket to be perceived, and obtaining bucket point cloud data of the bucket to be perceived according to the scene point cloud data;
s3: acquiring point cloud data of the bucket in real time and carrying out tracking processing to obtain a tracking processing result;
s4: and realizing the pose estimation information of the bucket to be perceived based on the tracking processing result obtained in the step S3.
2. The bucket pose sensing method according to claim 1, wherein the S1 specifically comprises:
s11: acquiring a three-dimensional model of the bucket to be sensed, and acquiring corresponding bucket template point cloud data according to the three-dimensional model;
s12: and downsampling the bucket template point cloud data to construct a template point cloud base of the bucket to be sensed.
3. The bucket pose sensing method according to claim 2, wherein the S2 specifically comprises:
s21: acquiring scene point cloud data of the bucket to be sensed;
s22: converting the scene point cloud data into depth image data, and extracting bucket boundary data of the bucket to be perceived according to the depth image data;
s23: and extracting characteristic point pairs corresponding to the bucket boundary data and the bucket template point cloud data to form bucket point cloud data.
4. The bucket pose sensing method according to claim 2, wherein the S3 specifically comprises:
s31: establishing a Kalman filtering tracker by using a Kalman filtering method;
s32: tracking the position information of the bucket point cloud data identified in the first frame as an initial value, and sequentially tracking and detecting the bucket point cloud data of the rest frames to obtain bucket position information;
s33: and updating parameters of the Kalman filtering tracker according to the bucket position information to obtain a tracking result.
5. The bucket pose sensing method according to claim 3, wherein the S4 specifically comprises:
s41: acquiring a rotation matrix and a displacement matrix of the bucket point cloud data;
s42: and obtaining the posture change of the bucket according to the rotation matrix and the displacement matrix, and further obtaining the posture estimation information of the bucket to be perceived.
6. A sensing system using the bucket pose sensing method according to any one of claims 1 to 5, comprising:
the acquisition module is used for acquiring the feature points of the bucket to be perceived and constructing a template point cloud base of the bucket to be perceived by using the feature points;
the computing module is used for acquiring scene point cloud data of the bucket to be perceived and obtaining bucket point cloud data of the bucket to be perceived according to the scene point cloud data;
the tracking module is used for acquiring point cloud data of the bucket in real time and performing tracking processing to obtain a tracking result;
and the perception module is used for realizing the pose estimation information of the bucket to be perceived based on the tracking processing result.
7. A computer-readable storage medium, characterized in that a computer program is stored thereon, which when executed by a processor implements the bucket pose sensing method according to any one of claims 1 to 5.
CN202211186802.7A 2022-09-28 2022-09-28 Bucket pose sensing method and system and storable medium Pending CN115690203A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211186802.7A CN115690203A (en) 2022-09-28 2022-09-28 Bucket pose sensing method and system and storable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211186802.7A CN115690203A (en) 2022-09-28 2022-09-28 Bucket pose sensing method and system and storable medium

Publications (1)

Publication Number Publication Date
CN115690203A true CN115690203A (en) 2023-02-03

Family

ID=85064927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211186802.7A Pending CN115690203A (en) 2022-09-28 2022-09-28 Bucket pose sensing method and system and storable medium

Country Status (1)

Country Link
CN (1) CN115690203A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903337A (en) * 2019-02-28 2019-06-18 北京百度网讯科技有限公司 Method and apparatus for determining the pose of the scraper bowl of excavator
CN112669385A (en) * 2020-12-31 2021-04-16 华南理工大学 Industrial robot workpiece identification and pose estimation method based on three-dimensional point cloud characteristics
CN113506326A (en) * 2021-07-15 2021-10-15 上海三一重机股份有限公司 Bucket three-dimensional pose tracking method, device and system and excavator
CN114463384A (en) * 2022-01-07 2022-05-10 武汉理工大学 Laser radar tracking method and device, electronic equipment and storage medium
CN114743259A (en) * 2022-02-28 2022-07-12 华中科技大学 Pose estimation method, pose estimation system, terminal, storage medium and application


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lü Qiang; Wang Xiaolong; Liu Feng; Xia Fan: "6-DOF pose estimation for indoor mobile robots based on point cloud registration", Journal of the Academy of Armored Force Engineering, no. 04, pages 1 - 6 *
Zhang Kailin; Zhang Liang: "3D object recognition and pose estimation based on C-SHOT features in complex scenes", Journal of Computer-Aided Design & Computer Graphics, no. 05, pages 1 - 8 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination