CN114004899B - Pallet pose recognition method, storage medium and equipment


Info

Publication number
CN114004899B
CN114004899B (application number CN202111339700.XA)
Authority
CN
China
Prior art keywords
pallet
point cloud
point
template
center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111339700.XA
Other languages
Chinese (zh)
Other versions
CN114004899A (en)
Inventor
曾斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Jaten Robot and Automation Co Ltd
Original Assignee
Guangdong Jaten Robot and Automation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Jaten Robot and Automation Co Ltd filed Critical Guangdong Jaten Robot and Automation Co Ltd
Priority to CN202111339700.XA
Publication of CN114004899A
Application granted
Publication of CN114004899B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30161 Wood; Lumber

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

According to the pallet pose recognition method, storage medium and equipment, a point cloud projection image is generated by projecting the point cloud onto a unit normal vector plane, and a point cloud index image is generated by storing each point's index value as the pixel value at its pixel coordinate on the projection image. The projection image is template-matched against the prepared pallet template, the three-dimensional coordinates of the pallet left point, center point and right point on the front end face of the pallet are obtained with the help of the point cloud index image, and accurate three-dimensional coordinates are obtained by neighborhood averaging, from which the pallet pose is calculated. Because a point cloud projection image is generated, pallet pose recognition remains stable under the angle-inclination and distance conditions encountered in actual use; because the point cloud index image directly yields the three-dimensional coordinates of the pallet left, center and right points, the pallet pose can be calculated quickly and recognized with high accuracy.

Description

Pallet pose recognition method, storage medium and equipment
Technical Field
The invention belongs to the field of computer three-dimensional vision, and particularly relates to a pallet pose recognition method, a storage medium and equipment.
Background
With the increasing intelligence of modern industry, there is a growing demand for unmanned factories and intelligent warehousing, and pose detection of the pallets on which goods are placed is one of the core problems of this technology. Existing pallet pose recognition techniques can be divided into two categories. The first is based on artificial features: several fiducial-point labels or two-dimensional code labels are manually attached to the pallet, and the fiducial points or two-dimensional codes are then recognized by a vision method to locate the pallet. This approach has the drawback of a large manual workload and is troublesome to operate.
Disclosure of Invention
The invention aims to overcome the defects of existing pallet pose recognition methods and provides a pallet pose recognition method, a storage medium and equipment that are convenient to use.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a pallet pose recognition method comprises the following steps:
a. acquiring a point cloud set p1 by using a point cloud acquisition device.
b. for p1, extracting a point cloud set p2 within the range where a pallet may exist, according to the actual working condition.
c. performing Euclidean clustering on p2 to obtain one or more clustering results.
d. screening the clustering results of step c according to the pallet width: traversing all clustering results and keeping those whose width lies between the minimum and maximum thresholds; when a clustering result meeting the condition is found, proceeding to step e; if the number of qualifying clustering results is 0 after traversing all of them, returning to step a.
e. obtaining the unit normal vector of the clustering result through the PCL point cloud library, so as to obtain the unit normal vector plane.
f. projecting the clustering result obtained in step d onto the unit normal vector plane obtained in step e to generate a point cloud projection image, which is used for matching against the pallet template; at the same time, generating from the clustering result of step d a point cloud index image of the same size as the projection image, whose pixel values are the point cloud index values.
g. manufacturing a pallet template according to the actual working condition, wherein the image size of the pallet template is (template_cols, template_rows), the pixel coordinate of the pallet center point in the pallet template image is (template_cols/2, template_rows/2), the pixel coordinate of the pallet left point in the template image is (template_cols × template_left, template_rows/2), and the pixel coordinate of the pallet right point in the template image is (template_cols × template_right, template_rows/2), where template_left is set to a value A and template_right to a value B, with B = 1 - A and 0 < A < 1; performing template matching between the point cloud projection image of step f and the manufactured pallet template to obtain the pixel coordinates of the pallet center point and the pallet left and right points in the point cloud projection image.
h. for the pixel coordinates of the pallet center point and the pallet left and right points in the point cloud projection image obtained in step g, respectively recording the pixel coordinates of the 3×3 region centered on the pallet center point pixel coordinate to obtain 9 pixel coordinates, recording the pixel coordinates of the 3×3 region centered on the pallet left point pixel coordinate to obtain 9 pixel coordinates, and recording the pixel coordinates of the 3×3 region centered on the pallet right point pixel coordinate to obtain 9 pixel coordinates.
i. for the 9 pixel coordinates centered on the pallet center point pixel coordinate, directly obtaining the point cloud index values of the 9 points by means of the point cloud index image generated in step f, thereby obtaining the three-dimensional coordinates of the 9 points; averaging the three-dimensional coordinates of the 9 points gives the three-dimensional coordinates (x_center, y_center, z_center) of the pallet center point, and the three-dimensional coordinates (x_left, y_left, z_left) of the pallet left point and (x_right, y_right, z_right) of the pallet right point are obtained in the same way.
j. for the three-dimensional coordinates of the pallet center point and the pallet left and right points obtained in step i, calculating the pitch angle of the pallet as pitch = -arctan((z_right - z_left)/(x_right - x_left)), and finally obtaining the position t = (x_center, y_center, z_center) and the orientation r = (0, pitch, 0) of the pallet coordinate system in the coordinate system of the point cloud acquisition equipment.
k. according to the known pose of the point cloud acquisition equipment coordinate system relative to the forklift coordinate system, combining it with the result of step j to obtain the pose of the pallet coordinate system relative to the forklift coordinate system.
Compared with the prior art, the pallet pose recognition method of the invention projects the point cloud onto the unit normal vector plane to generate a point cloud projection image; according to the pixel coordinates of each point on the projection image, the point cloud index values are stored as the pixel values at the corresponding pixel coordinates to generate a point cloud index image; the projection image is template-matched against the prepared pallet template, the three-dimensional coordinates of the pallet left point, pallet center point and pallet right point on the front end face of the pallet are obtained with the help of the point cloud index image, and accurate three-dimensional coordinates are obtained by neighborhood averaging, from which the pallet pose is calculated. Because a point cloud projection image is generated, pallet pose recognition remains stable under the angle-inclination and distance conditions encountered in actual use; because the point cloud index image directly yields the three-dimensional coordinates of the pallet left, center and right points, the pallet pose can be calculated quickly and recognized with high accuracy.
Further, the method for calculating the unit normal vector plane in step e comprises the following sub-steps: step e1: obtaining the point cloud normal vectors of the result of step d by using the PCL point cloud library; step e2: carrying out cluster analysis on the point cloud normal vectors from step e1, and averaging and normalizing the cluster containing the most normal vectors to obtain the unit normal vector (A, B, C); step e3: from the unit normal vector obtained in step e2, the unit normal vector plane is Ax + By + Cz = 0. With this arrangement, the unit normal vector plane is calculated in a simple way and with high accuracy.
Further, the method for calculating the pose of the pallet coordinate system relative to the forklift coordinate system in step k is as follows: the pose of the point cloud acquisition equipment coordinate system relative to the forklift coordinate system is known to be (T, R); combining (T, R) with the result (t, r) of step j, the position in the pose of the pallet coordinate system relative to the forklift coordinate system is T' = T + R × t and the orientation is R' = R × r. With this arrangement, the position of the pallet relative to the forklift is readily obtained, which makes it convenient for the forklift to fork and pick up the pallet.
A storage medium storing a computer program configured to implement the steps of the pallet pose recognition method when called by a processor.
A pallet pose recognition device, comprising: the point cloud acquisition device, the forklift and the storage medium.
Drawings
FIG. 1 is a flow chart of the pallet pose recognition method.
Detailed Description
The technical scheme of the invention is described below with reference to the accompanying drawings.
Embodiment one:
referring to fig. 1, the pallet pose recognition method of the invention comprises the following steps:
a. acquiring point cloud data with point cloud acquisition equipment, and performing statistical filtering on the acquired point cloud data to remove noise, thereby obtaining the point cloud set p1.
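As an illustration of the statistical filtering in step a, the following Python sketch removes points whose mean distance to their k nearest neighbours is abnormally large, which is the idea behind PCL's StatisticalOutlierRemoval filter; the parameter values k and std_ratio are assumptions for illustration, not values prescribed by the patent.

    # Statistical filtering sketch for step a (illustrative, not the patent's code).
    import numpy as np
    from scipy.spatial import cKDTree

    def statistical_filter(points, k=20, std_ratio=1.0):
        """points: (N, 3) array of raw measurements; returns the denoised set p1."""
        tree = cKDTree(points)
        # distances to the k nearest neighbours (column 0 is the point itself)
        dists, _ = tree.query(points, k=k + 1)
        mean_dists = dists[:, 1:].mean(axis=1)
        threshold = mean_dists.mean() + std_ratio * mean_dists.std()
        return points[mean_dists <= threshold]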
b. for p1, extracting a point cloud set p2 within the range where a pallet may exist, according to the actual working condition.
c. performing Euclidean clustering on p2 to obtain one or more clustering results.
d. screening the clustering results of step c according to the pallet width: traversing all clustering results and keeping those whose width lies between the minimum and maximum thresholds; when a clustering result meeting the condition is found, proceeding to step e; if the number of qualifying clustering results is 0 after traversing all of them, returning to step a and re-acquiring the point cloud data.
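The Euclidean clustering of step c and the width screening of step d might be realized as in the sketch below; scikit-learn's DBSCAN with min_samples=1 is used as a stand-in for PCL's Euclidean cluster extraction, and the clustering tolerance, the width thresholds and the choice of the x axis as the width direction are assumptions.

    # Sketch of steps c and d: Euclidean clustering of p2 and screening by pallet width.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_and_screen(p2, eps=0.03, width_min=0.9, width_max=1.3):
        labels = DBSCAN(eps=eps, min_samples=1).fit_predict(p2)
        candidates = []
        for label in np.unique(labels):
            cluster = p2[labels == label]
            width = cluster[:, 0].max() - cluster[:, 0].min()  # extent along x (assumed width axis)
            if width_min <= width <= width_max:
                candidates.append(cluster)
        return candidates  # an empty list means: return to step a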
e. obtaining the unit normal vector of the clustering result by means of the PCL point cloud library, so as to obtain the unit normal vector plane.
f. projecting the clustering result obtained in step d onto the unit normal vector plane obtained in step e to generate a point cloud projection image; the point cloud projection image is a binary image and is used for matching against the pallet template; at the same time, generating from the clustering result of step d a point cloud index image of the same size as the projection image, whose pixel values are the point cloud index values.
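A minimal sketch of step f, under stated assumptions, follows: the cluster is expressed in two orthonormal axes spanning the plane Ax + By + Cz = 0, rasterized into a binary projection image, and a parallel index image stores, for each occupied pixel, the index of the point that fell on it, so a pixel coordinate leads straight back to a 3D point. The pixel size and the construction of the in-plane axes are illustrative choices, not values from the patent.

    # Sketch of step f: point cloud projection image + point cloud index image.
    import numpy as np

    def make_projection_and_index_images(cluster, normal, pixel_size=0.005):
        n = normal / np.linalg.norm(normal)
        # two orthonormal axes spanning the unit normal vector plane
        helper = np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
        u = np.cross(n, helper)
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        # in-plane coordinates of every point of the cluster
        uv = np.stack([cluster @ u, cluster @ v], axis=1)
        uv -= uv.min(axis=0)
        cols, rows = np.ceil(uv.max(axis=0) / pixel_size).astype(int) + 1
        proj = np.zeros((rows, cols), dtype=np.uint8)      # binary projection image
        index = np.full((rows, cols), -1, dtype=np.int32)  # -1 means no point at this pixel
        px = (uv / pixel_size).astype(int)
        for i, (c, r) in enumerate(px):
            proj[r, c] = 255
            index[r, c] = i  # the pixel value stores the point cloud index
        return proj, index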
g. manufacturing a pallet template according to the actual working condition, wherein the image size of the pallet template is (template_cols, template_rows), the pixel coordinate of the pallet center point in the pallet template image is (template_cols/2, template_rows/2), the pixel coordinate of the pallet left point in the template image is (template_cols × template_left, template_rows/2), and the pixel coordinate of the pallet right point in the template image is (template_cols × template_right, template_rows/2), where template_left is set to a value A and template_right to a value B, with B = 1 - A and 0 < A < 1; for example, template_left = A = 0.1 and template_right = B = 1 - 0.1 = 0.9; performing template matching between the point cloud projection image of step f and the manufactured pallet template to obtain the pixel coordinates of the pallet center point and the pallet left and right points in the point cloud projection image.
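Step g can be pictured with the OpenCV sketch below: a simplified pallet template (a rectangular front face with two fork-pocket openings) is matched against the binarized projection image with cv2.matchTemplate, and the center, left and right pixel coordinates are read off the best-match location. The template geometry and the fractions 0.1/0.9 used for the left and right points are assumptions taken from the example values above, not an authoritative rendering of the patented template.

    # Sketch of step g: template matching on the point cloud projection image.
    import cv2
    import numpy as np

    def match_pallet_template(proj, template_cols=240, template_rows=30):
        templ = np.ones((template_rows, template_cols), dtype=np.float32)
        pocket_w = template_cols // 4
        left_pocket = slice(template_cols // 8, template_cols // 8 + pocket_w)
        right_pocket = slice(template_cols - template_cols // 8 - pocket_w,
                             template_cols - template_cols // 8)
        templ[:, left_pocket] = 0.0   # fork-pocket openings appear empty in the projection
        templ[:, right_pocket] = 0.0
        proj_f = (proj > 0).astype(np.float32)
        result = cv2.matchTemplate(proj_f, templ, cv2.TM_CCOEFF_NORMED)
        _, _, _, top_left = cv2.minMaxLoc(result)  # (x, y) of the best match
        cy = top_left[1] + template_rows // 2
        center = (top_left[0] + template_cols // 2, cy)
        left = (top_left[0] + int(0.1 * template_cols), cy)   # template_left = 0.1 (assumed)
        right = (top_left[0] + int(0.9 * template_cols), cy)  # template_right = 0.9 (assumed)
        return center, left, right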
h. for the pixel coordinates of the pallet center point and the pallet left and right points in the point cloud projection image obtained in step g, respectively recording the pixel coordinates of the 3×3 region centered on the pallet center point pixel coordinate to obtain 9 pixel coordinates, recording the pixel coordinates of the 3×3 region centered on the pallet left point pixel coordinate to obtain 9 pixel coordinates, and recording the pixel coordinates of the 3×3 region centered on the pallet right point pixel coordinate to obtain 9 pixel coordinates.
i. for the 9 pixel coordinates centered on the pallet center point pixel coordinate, directly obtaining the point cloud index values of the 9 points by means of the point cloud index image generated in step f, thereby obtaining the three-dimensional coordinates of the 9 points; averaging the three-dimensional coordinates of the 9 points gives the three-dimensional coordinates (x_center, y_center, z_center) of the pallet center point, and the three-dimensional coordinates (x_left, y_left, z_left) of the pallet left point and (x_right, y_right, z_right) of the pallet right point are obtained in the same way.
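Steps h and i amount to a 3×3 neighbourhood average looked up through the point cloud index image, as in the sketch below; pixels that map to no point (index -1 in the sketch of step f) are simply skipped, which is one possible handling that the patent does not spell out.

    # Sketch of steps h and i: neighbourhood averaging through the index image.
    import numpy as np

    def average_3d_from_neighbourhood(pixel, index_image, cluster):
        cx, cy = pixel  # (column, row) of the matched pallet point
        coords = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                r, c = cy + dy, cx + dx
                if 0 <= r < index_image.shape[0] and 0 <= c < index_image.shape[1]:
                    idx = index_image[r, c]
                    if idx >= 0:
                        coords.append(cluster[idx])
        return np.mean(coords, axis=0)  # e.g. (x_center, y_center, z_center)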
j. for the three-dimensional coordinates of the pallet center point and the pallet left and right points obtained in step i, calculating the pitch angle of the pallet as pitch = -arctan((z_right - z_left)/(x_right - x_left)), and finally obtaining the position t = (x_center, y_center, z_center) and the orientation r = (0, pitch, 0) of the pallet coordinate system in the coordinate system of the point cloud acquisition equipment.
k. according to the known pose of the point cloud acquisition equipment coordinate system relative to the forklift coordinate system, combining it with the result of step j to obtain the pose of the pallet coordinate system relative to the forklift coordinate system.
Compared with the prior art, the pallet pose recognition method of the invention projects the point cloud onto the unit normal vector plane to generate a point cloud projection image; according to the pixel coordinates of each point on the projection image, the point cloud index values are stored as the pixel values at the corresponding pixel coordinates to generate a point cloud index image; the projection image is template-matched against the prepared pallet template, the three-dimensional coordinates of the pallet left point, pallet center point and pallet right point on the front end face of the pallet are obtained with the help of the point cloud index image, and accurate three-dimensional coordinates are obtained by neighborhood averaging, from which the pallet pose is calculated. Because a point cloud projection image is generated, pallet pose recognition remains stable under the angle-inclination and distance conditions encountered in actual use; because the point cloud index image directly yields the three-dimensional coordinates of the pallet left, center and right points, the pallet pose can be calculated quickly and recognized with high accuracy.
In one embodiment, the method for calculating the unit normal vector plane in step e is as follows: step e1: obtaining the point cloud normal vectors of the result of step d by using the PCL point cloud library; step e2: carrying out cluster analysis on the point cloud normal vectors from step e1, and averaging and normalizing the cluster containing the most normal vectors to obtain the unit normal vector (A, B, C); step e3: from the unit normal vector obtained in step e2, the unit normal vector plane is Ax + By + Cz = 0. With this arrangement, the unit normal vector plane is calculated in a simple way and with high accuracy.
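One possible realization of sub-steps e1 to e3 is sketched below: per-point normals are estimated by local PCA (the computation performed by PCL's normal estimation), the normals are clustered, and the largest cluster is averaged and normalized to give the unit normal vector (A, B, C). DBSCAN, the neighbourhood size k and the normal orientation convention are stand-ins chosen for the sketch; the patent does not prescribe a particular clustering of the normals.

    # Sketch of sub-steps e1-e3: unit normal vector of the pallet cluster.
    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.cluster import DBSCAN

    def unit_normal_vector(cluster, k=20):
        tree = cKDTree(cluster)
        _, nbr_idx = tree.query(cluster, k=k)
        normals = np.empty_like(cluster)
        for i, idx in enumerate(nbr_idx):
            nbrs = cluster[idx] - cluster[idx].mean(axis=0)
            # eigenvector of the smallest eigenvalue of the local covariance = surface normal
            _, vecs = np.linalg.eigh(nbrs.T @ nbrs)
            n = vecs[:, 0]
            normals[i] = n if n[2] <= 0 else -n  # flip for a consistent (assumed) orientation
        labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(normals)
        main = np.bincount(labels[labels >= 0]).argmax()  # cluster with the most normals
        mean_normal = normals[labels == main].mean(axis=0)
        return mean_normal / np.linalg.norm(mean_normal)  # (A, B, C); plane: A*x + B*y + C*z = 0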
In one embodiment, the method for calculating the pose of the pallet coordinate system relative to the forklift coordinate system in step k is as follows: the pose of the point cloud acquisition equipment coordinate system relative to the forklift coordinate system is known to be (T, R); combining (T, R) with the result (t, r) of step j, the position in the pose of the pallet coordinate system relative to the forklift coordinate system is T' = T + R × t and the orientation is R' = R × r. With this arrangement, the position of the pallet relative to the forklift is readily obtained, which makes it convenient for the forklift to fork and pick up the pallet.
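The pitch computation of step j and the pose composition T' = T + R × t, R' = R × r of step k and of this embodiment can be written as the sketch below, with R a 3×3 rotation matrix and T a translation vector. np.arctan2 replaces the plain arctan of the quotient for numerical robustness (the two agree when x_right > x_left), and reading r = (0, pitch, 0) as a rotation about the y axis is an assumption about the Euler convention.

    # Sketch of steps j and k: pallet pitch and pose in the forklift coordinate system.
    import numpy as np

    def rotation_y(pitch):
        c, s = np.cos(pitch), np.sin(pitch)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

    def pallet_pose_in_forklift_frame(center, left, right, T, R):
        pitch = -np.arctan2(right[2] - left[2], right[0] - left[0])
        t = np.asarray(center)   # pallet position in the camera frame
        r = rotation_y(pitch)    # orientation r = (0, pitch, 0) as a rotation matrix
        T_prime = T + R @ t      # pallet position in the forklift frame
        R_prime = R @ r          # pallet orientation in the forklift frame
        return T_prime, R_prime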
Embodiment two:
The main object of this embodiment is to provide a storage medium applying the pallet pose recognition method of the first embodiment; the storage medium stores a computer program configured to implement the steps of the pallet pose recognition method when called by a processor.
Compared with the prior art, the storage medium of the invention projects the point cloud onto the unit normal vector plane to generate a point cloud projection image; according to the pixel coordinates of each point on the projection image, the point cloud index values are stored as the pixel values at the corresponding pixel coordinates to generate a point cloud index image; the projection image is template-matched against the prepared pallet template, the three-dimensional coordinates of the pallet left point, pallet center point and pallet right point on the front end face of the pallet are obtained with the help of the point cloud index image, and accurate three-dimensional coordinates are obtained by neighborhood averaging, from which the pallet pose is calculated. Because a point cloud projection image is generated, pallet pose recognition remains stable under the angle-inclination and distance conditions encountered in actual use; because the point cloud index image directly yields the three-dimensional coordinates of the pallet left, center and right points, the pallet pose can be calculated quickly and recognized with high accuracy.
Embodiment III:
The main object of the present embodiment is to provide a pallet pose recognition apparatus applying the storage medium of the second embodiment, including: the point cloud acquisition device, the forklift and the storage medium.
Compared with the prior art, the pallet pose recognition device of the invention projects the point cloud onto the unit normal vector plane to generate a point cloud projection image; according to the pixel coordinates of each point on the projection image, the point cloud index values are stored as the pixel values at the corresponding pixel coordinates to generate a point cloud index image; the projection image is template-matched against the prepared pallet template, the three-dimensional coordinates of the pallet left point, pallet center point and pallet right point on the front end face of the pallet are obtained with the help of the point cloud index image, and accurate three-dimensional coordinates are obtained by neighborhood averaging, from which the pallet pose is calculated. Because a point cloud projection image is generated, pallet pose recognition remains stable under the angle-inclination and distance conditions encountered in actual use; because the point cloud index image directly yields the three-dimensional coordinates of the pallet left, center and right points, the pallet pose can be calculated quickly and recognized with high accuracy.
Variations and modifications of the above will be obvious to persons skilled in the art from the foregoing description and teachings. Therefore, the invention is not limited to the specific embodiments disclosed and described above, and certain modifications and changes to the invention also fall within the scope of the claims of the invention. In addition, although specific terms are used in this specification, they are for convenience of description only and do not limit the invention in any way.

Claims (5)

1. A pallet pose recognition method, characterized by comprising the following steps:
a. Acquiring a point cloud set p1 by using point cloud acquisition equipment;
b. for p1, extracting a point cloud set p2 within the range where a pallet may exist, according to the actual working condition;
c. performing Euclidean clustering on p2 to obtain one or more clustering results;
d. screening the clustering results of step c according to the pallet width: traversing all clustering results and keeping those whose width lies between the minimum and maximum thresholds; when a clustering result meeting the condition is found, proceeding to step e; if the number of qualifying clustering results is 0 after traversing all of them, returning to step a;
e. Obtaining a unit normal vector of the clustering result through a PCL point cloud library to obtain a unit normal vector plane;
f. projecting the clustering result obtained in step d onto the unit normal vector plane obtained in step e to generate a point cloud projection image, which is used for matching against the pallet template, and at the same time generating, from the clustering result of step d, a point cloud index image of the same size as the projection image whose pixel values are the point cloud index values;
g. manufacturing a pallet template according to the actual working condition, wherein the image size of the pallet template is (template_cols, template_rows), the pixel coordinate of the pallet center point in the pallet template image is (template_cols/2, template_rows/2), the pixel coordinate of the pallet left point in the template image is (template_cols × template_left, template_rows/2), and the pixel coordinate of the pallet right point in the template image is (template_cols × template_right, template_rows/2), where template_left is set to a value A and template_right to a value B, with B = 1 - A and 0 < A < 1; performing template matching between the point cloud projection image of step f and the manufactured pallet template to obtain the pixel coordinates of the pallet center point and the pallet left and right points in the point cloud projection image;
h. for the pixel coordinates of the pallet center point and the pallet left and right points in the point cloud projection image obtained in step g, respectively recording the pixel coordinates of the 3×3 region centered on the pallet center point pixel coordinate to obtain 9 pixel coordinates, recording the pixel coordinates of the 3×3 region centered on the pallet left point pixel coordinate to obtain 9 pixel coordinates, and recording the pixel coordinates of the 3×3 region centered on the pallet right point pixel coordinate to obtain 9 pixel coordinates;
i. for the 9 pixel coordinates centered on the pallet center point pixel coordinate obtained in step h, directly obtaining the point cloud index values of the 9 points by means of the point cloud index image generated in step f, thereby obtaining the three-dimensional coordinates of the 9 points; averaging the three-dimensional coordinates of the 9 points gives the three-dimensional coordinates (x_center, y_center, z_center) of the pallet center point, and the three-dimensional coordinates (x_left, y_left, z_left) of the pallet left point and (x_right, y_right, z_right) of the pallet right point are obtained in the same way;
j. for the three-dimensional coordinates of the pallet center point and the pallet left and right points obtained in step i, calculating the pitch angle of the pallet as pitch = -arctan((z_right - z_left)/(x_right - x_left)), and finally obtaining the position t = (x_center, y_center, z_center) and the orientation r = (0, pitch, 0) of the pallet coordinate system in the coordinate system of the point cloud acquisition equipment;
k. according to the known pose of the point cloud acquisition equipment coordinate system relative to the forklift coordinate system, combining it with the result of step j to obtain the pose of the pallet coordinate system relative to the forklift coordinate system.
2. The pallet pose recognition method according to claim 1, wherein the method for calculating the unit normal vector plane in the step e is as follows:
step e1: obtaining a point cloud normal vector of the result of the step d by using a PCL point cloud library;
step e2: carrying out cluster analysis on the point cloud normal vectors from step e1, and averaging and normalizing the cluster containing the most normal vectors to obtain the unit normal vector (A, B, C);
step e3: from the unit normal vector (A, B, C) obtained in step e2, the unit normal vector plane is obtained as Ax + By + Cz = 0.
3. The pallet pose recognition method according to claim 1, wherein the method for calculating the pose of the pallet coordinate system relative to the forklift coordinate system in step k is as follows: the pose of the point cloud acquisition equipment coordinate system relative to the forklift coordinate system is known to be (T, R); combining (T, R) with the result (t, r) of step j, the position in the pose of the pallet coordinate system relative to the forklift coordinate system is T' = T + R × t and the orientation is R' = R × r.
4. A storage medium, characterized in that the storage medium stores a computer program configured to implement the steps of the pallet pose recognition method according to any of claims 1-3 when called by a processor.
5. A pallet pose recognition device, comprising: a point cloud acquisition device, a forklift, and the storage medium of claim 4.
CN202111339700.XA 2021-11-12 2021-11-12 Pallet pose recognition method, storage medium and equipment Active CN114004899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111339700.XA CN114004899B (en) 2021-11-12 2021-11-12 Pallet pose recognition method, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111339700.XA CN114004899B (en) 2021-11-12 2021-11-12 Pallet pose recognition method, storage medium and equipment

Publications (2)

Publication Number Publication Date
CN114004899A CN114004899A (en) 2022-02-01
CN114004899B (en) 2024-05-14

Family

ID=79929039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111339700.XA Active CN114004899B (en) 2021-11-12 2021-11-12 Pallet pose recognition method, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN114004899B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758163B (en) * 2022-06-15 2022-09-16 福勤智能科技(昆山)有限公司 Forklift movement control method and device, electronic equipment and storage medium
CN116342695B (en) * 2023-05-29 2023-08-25 未来机器人(深圳)有限公司 Unmanned forklift truck goods placing detection method and device, unmanned forklift truck and storage medium


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610176A (en) * 2017-09-15 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of pallet Dynamic Recognition based on Kinect and localization method, system and medium
CN109087345A (en) * 2018-09-06 2018-12-25 上海仙知机器人科技有限公司 Pallet recognition methods and automated guided vehicle based on ToF imaging system
CN110889828A (en) * 2019-11-07 2020-03-17 浙江大华技术股份有限公司 Pallet identification method in predetermined scene, terminal device and computer storage medium
WO2021181647A1 (en) * 2020-03-13 2021-09-16 日本電気株式会社 Image processing device, image processing method, and computer-readable medium
CN111598946A (en) * 2020-03-16 2020-08-28 华南理工大学 Object pose measuring method and device and storage medium
US11099275B1 (en) * 2020-04-29 2021-08-24 Tsinghua University LiDAR point cloud reflection intensity complementation method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Obstacle recognition applying fusion of lidar and camera information; 黄兴; 应群伟; Computer Measurement & Control; 2020-01-25 (Issue 01); full text *
Fast recognition and localization of CAD models in point cloud scenes of random workpieces; 赵刚; 郭晓康; 刘德政; 王中任; Laser & Infrared; 2019-12-20 (Issue 12); full text *

Also Published As

Publication number Publication date
CN114004899A (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN107610176B (en) Pallet dynamic identification and positioning method, system and medium based on Kinect
CN109658398B (en) Part surface defect identification and evaluation method based on three-dimensional measurement point cloud
CN114004899B (en) Pallet pose recognition method, storage medium and equipment
CN110335234B (en) Three-dimensional change detection method based on antique LiDAR point cloud
CN111462120B (en) Defect detection method, device, medium and equipment based on semantic segmentation model
Xu et al. Reconstruction of scaffolds from a photogrammetric point cloud of construction sites using a novel 3D local feature descriptor
JP2016161569A (en) Method and system for obtaining 3d pose of object and 3d location of landmark point of object
US20100259537A1 (en) Computer vision cad models
GB2528669A (en) Image Analysis Method
CN110047036B (en) Polar grid-based ground laser scanning data building facade extraction method
CN108447124A (en) A kind of ancient building contour extraction method and system based on cloud
CN108088444A (en) Indoor point cloud map generation system and method based on three-dimensional laser
Yogeswaran et al. 3d surface analysis for automated detection of deformations on automotive body panels
CN113362385A (en) Cargo volume measuring method and device based on depth image
CN112528781A (en) Obstacle detection method, device, equipment and computer readable storage medium
CN116188544A (en) Point cloud registration method combining edge features
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN115984759A (en) Substation switch state identification method and device, computer equipment and storage medium
CN116309817A (en) Tray detection and positioning method based on RGB-D camera
Hebel et al. Automatic registration of laser point clouds of urban areas
Böhm et al. Curvature-based range image classification for object recognition
CN117115228A (en) SOP chip pin coplanarity detection method and device
CN116481460B (en) Apparent pit defect size detection method based on three-dimensional reconstruction model
CN115880442B (en) Three-dimensional model reconstruction method and system based on laser scanning
CN115984396A (en) Automatic cargo drawing generation method based on 3D structured light depth camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant