CN116128841A - Tray pose detection method and device, unmanned forklift and storage medium - Google Patents

Tray pose detection method and device, unmanned forklift and storage medium

Info

Publication number
CN116128841A
Authority
CN
China
Prior art keywords
point cloud
cloud data
tray
target
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310075321.7A
Other languages
Chinese (zh)
Inventor
杨秉川
方牧
鲁豫杰
李陆洋
吴庭威
方晓曼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionnav Robotics Shenzhen Co Ltd
Original Assignee
Visionnav Robotics Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionnav Robotics Shenzhen Co Ltd filed Critical Visionnav Robotics Shenzhen Co Ltd
Priority to CN202310075321.7A priority Critical patent/CN116128841A/en
Publication of CN116128841A publication Critical patent/CN116128841A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Forklifts And Lifting Vehicles (AREA)

Abstract

The application provides a tray pose detection method and device, an unmanned forklift, and a storage medium. The method is applied to an unmanned forklift and includes: when the unmanned forklift is at a target detection position, collecting initial point cloud data that includes a tray at the target detection position; separating ground point cloud data from the initial point cloud data to obtain candidate tray point cloud data; removing offset points that do not conform to the normal angle range from the candidate tray point cloud data, and searching the candidate tray point cloud data, after the offset points are removed, to obtain target tray surface point cloud data; and registering the target tray surface point cloud data to determine the pose of the tray at the target detection position. Implementing the embodiments of the application improves the accuracy with which the unmanned forklift detects the position and posture of the tray, thereby effectively improving the transport efficiency of the unmanned forklift.

Description

Tray pose detection method and device, unmanned forklift and storage medium
Technical Field
The application relates to the technical field of unmanned forklifts, and in particular to a tray pose detection method and device, an unmanned forklift, and a storage medium.
Background
Currently, in warehouse logistics scenarios, transporting goods on a tray with a forklift is a common requirement. However, practice shows that when an unmanned forklift is used for intelligent transportation, it is often difficult for the forklift to accurately acquire the specific data required during transportation, such as the position and posture of the tray; that is, the accuracy with which the unmanned forklift detects the tray pose is low, which reduces its transport efficiency.
Disclosure of Invention
The embodiments of the present application disclose a tray pose detection method and device, an unmanned forklift, and a storage medium, which can improve the accuracy with which the unmanned forklift detects the tray pose and thereby effectively improve the transport efficiency of the unmanned forklift.
The first aspect of the embodiment of the application discloses a tray pose detection method, which is applied to an unmanned forklift and comprises the following steps:
acquiring initial point cloud data comprising a tray in a target detection position under the condition that the unmanned forklift is in the target detection position;
separating the ground point cloud data from the initial point cloud data to obtain candidate tray point cloud data;
removing offset points which do not accord with the normal angle range from the candidate tray point cloud data, and searching from the candidate tray point cloud data after removing the offset points to obtain target tray surface point cloud data;
Registering the target tray surface point cloud data to determine the pose corresponding to the tray in the target detection position.
The second aspect of the embodiments of the present application discloses a tray pose detection device, applied to an unmanned forklift, the tray pose detection device including:
the point cloud acquisition unit is used for acquiring initial point cloud data comprising a tray in a target detection position under the condition that the unmanned forklift is positioned at the target detection position;
the ground separation unit is used for separating ground point cloud data from the initial point cloud data to obtain candidate tray point cloud data;
the plane searching unit is used for eliminating offset points which do not conform to the normal angle range from the candidate tray point cloud data, and searching the candidate tray point cloud data, after the offset points are eliminated, to obtain target tray surface point cloud data;
and the registration unit is used for registering the target tray surface point cloud data so as to determine the pose corresponding to the tray in the target detection position.
A third aspect of the embodiments of the present application discloses an unmanned forklift, including:
a memory storing executable program code;
a processor coupled to the memory;
The processor invokes the executable program code stored in the memory to execute all or part of the steps in any one of the tray pose detection methods disclosed in the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute all or part of the steps in any one of the tray pose detection methods disclosed in the first aspect of the embodiments of the present application.
Compared with the related art, the embodiment of the application has the following beneficial effects:
according to the embodiment of the application, the unmanned forklift applying the tray pose detection method can acquire initial point cloud data of the tray in the target detection position under the condition that the unmanned forklift is in the target detection position, and separate ground point cloud data from the initial point cloud data to obtain candidate tray point cloud data. The unmanned forklift can further reject offset points which do not accord with the normal angle range in the candidate pallet point cloud data, and search the candidate pallet point cloud data after the offset points are rejected to obtain target pallet surface point cloud data. On the basis, the unmanned forklift can register the target tray surface point cloud data so as to determine the pose corresponding to the tray in the target detection position. Therefore, by implementing the embodiment of the application, the tray used for transporting the goods can be detected by the unmanned forklift in the working scene of warehouse logistics, and the position, the posture and the like of the tray can be accurately acquired. In the related art, geometric features of the three-dimensional point cloud are often calculated through algorithms such as region growing segmentation, RANSAC model segmentation and clustering, so that tray detection is realized, but the algorithms based on the geometric features are easily affected by noise points, and can not accurately detect the tray under the interference of film winding and the like. 
Compared with such traditional geometric-feature-based algorithms, the tray pose detection method performs initial positioning followed by point cloud registration, which effectively reduces noise interference during detection. The method is particularly suitable for detecting irregularly shaped trays or trays subject to interference, improves the accuracy with which the unmanned forklift detects the tray pose, reduces detection errors, and effectively improves the automatic transport efficiency of the unmanned forklift.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly explain the drawings needed in the embodiments, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of an unmanned forklift disclosed in an embodiment of the present application;
fig. 2 is a schematic flow chart of a tray pose detection method disclosed in an embodiment of the present application;
FIG. 3 is a schematic diagram of separating ground point cloud data from initial point cloud data according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of removing offset points from candidate tray point cloud data according to an embodiment of the present disclosure;
fig. 5 is a schematic flow chart of another tray pose detection method disclosed in an embodiment of the present application;
FIG. 6 is a flow chart of yet another tray pose detection method disclosed in embodiments of the present application;
FIG. 7 is a modular schematic diagram of an unmanned forklift as disclosed in an embodiment of the present application;
fig. 8 is a modular schematic view of yet another unmanned forklift disclosed in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings of the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the terms "comprises" and "comprising," along with any variations thereof, in the embodiments of the present application are intended to cover non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed.
The embodiments of the present application disclose a tray pose detection method and device, an unmanned forklift, and a storage medium, which can improve the accuracy with which the unmanned forklift detects the tray pose and thereby effectively improve the transport efficiency of the unmanned forklift.
The embodiments are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of the tray pose detection method disclosed in an embodiment of the present application, which may include an unmanned forklift 10 (also referred to as an automatic transfer robot or Automated Guided Vehicle, AGV) and a tray 20. In a warehouse logistics scenario, the unmanned forklift 10 can detect and identify the tray 20 to acquire the pose data required during transportation, such as the position and posture of the tray 20, so as to accurately transport the tray 20 and the goods on it.
As shown in fig. 1, the unmanned forklift 10 may include a perception module 11, and the perception module 11 may include a perception element and a processing element (not specifically shown). The sensing element may include a 3D laser radar, etc. for acquiring point cloud data near the unmanned forklift 10, especially in front of the unmanned forklift 10, so as to acquire initial point cloud data including a tray to be detected. The processing element may be configured to process the initial point cloud data to finally determine a pose of the tray.
The unmanned forklift 10 shown in fig. 1 takes the form of a vehicle, but this is merely an example; in other embodiments, the unmanned forklift 10 may take other forms, such as a rail-mounted robot or a trackless robot in a non-vehicle form, which the embodiments of the present application do not specifically limit. The sensing module 11 mounted on the unmanned forklift 10 may include various devices or systems containing sensing elements and processing elements, such as a vehicle-mounted computer or a point cloud scanning and processing system based on an SoC (System on a Chip), which the embodiments of the present application likewise do not specifically limit.
In the related art, geometric features of the three-dimensional point cloud often need to be calculated through algorithms such as region-growing segmentation, RANSAC model segmentation, and clustering to realize tray detection. Such geometric-feature-based algorithms are susceptible to noise and often fail to accurately detect the tray 20 under interference such as film wrapping.
In the embodiments of the present application, in order to transport the goods on the tray 20 in a warehouse logistics scenario, and to overcome the difficulty in the related art that the unmanned forklift 10 often cannot accurately determine the pose of the tray 20 (in particular because detection is easily affected by noise points), the pose of the tray 20 can be obtained by scanning a point cloud, thereby positioning the tray 20 and the goods on it. For example, when the unmanned forklift 10 is at the target detection position, it may collect initial point cloud data that includes the tray 20 at that position, and separate ground point cloud data from the initial point cloud data to obtain candidate tray point cloud data. Further, the unmanned forklift 10 may remove offset points that do not conform to the normal angle range from the candidate tray point cloud data, and search the candidate tray point cloud data, after the offset points are removed, for target tray surface point cloud data. The normal angle range is used to judge whether the angle between the point cloud normal of a point in the candidate tray point cloud data and a specified axis falls within a preset range, and thus whether the corresponding point is an offset point. On this basis, the unmanned forklift 10 may register the target tray surface point cloud data, with the offset points removed, to determine the pose of the tray 20 at the target detection position.
Therefore, by implementing the tray pose detection method of this embodiment, in a warehouse logistics scenario the unmanned forklift 10 (via its mounted sensing module 11) can detect the tray 20 used for transporting goods and accurately obtain pose data such as the position and posture of the tray 20. Compared with traditional geometric-feature-based algorithms, the tray pose detection method performs initial positioning followed by point cloud registration, effectively reducing noise interference during detection; it is particularly suitable for detecting irregularly shaped trays 20 or trays 20 subject to interference (such as film wrapping), thereby improving the accuracy with which the unmanned forklift 10 detects the pose of the tray 20, reducing detection errors, and effectively improving the automatic transport efficiency of the unmanned forklift 10.
Referring to fig. 2, fig. 2 is a schematic flow chart of a tray pose detection method disclosed in an embodiment of the present application, and the method may be applied to the above-mentioned unmanned forklift. As shown in fig. 2, the tray pose detection method may include the steps of:
202. and under the condition that the unmanned forklift is positioned at the target detection position, acquiring initial point cloud data comprising the tray in the target detection position.
In the embodiments of the present application, the unmanned forklift can detect the tray at the target detection position through its mounted sensing module, so as to determine the corresponding pose data of the tray in subsequent steps. Illustratively, the sensing module mounted on the unmanned forklift may include sensing elements such as LiDAR (Light Detection and Ranging) sensors (e.g., 3D LiDAR) or ultrasonic radar, for collecting initial point cloud data that includes the tray at the target detection position.
For example, the sensing module includes a LiDAR sensor, which may be disposed at a midpoint between the fork arm roots of the unmanned forklift (for example, disposed on the vehicle body or on the fork arm structure, as shown in fig. 1), or may be disposed at other positions according to actual conditions (for example, disposed at different adjustment positions based on the form and the warehouse space environment of the unmanned forklift). In some embodiments, an unmanned forklift may keep its LiDAR sensor on during operation and continuously collect point cloud data of the space in front of it. When the unmanned forklift moves to a target detection position where the tray is placed, the point cloud data acquired at the moment can be used as initial point cloud data for the subsequent step of determining the pose of the tray. In other embodiments, the unmanned forklift can trigger the LiDAR sensor to collect the point cloud data in the front space when the unmanned forklift runs to the target detection position, so that the initial point cloud data can be obtained.
204. And separating the ground point cloud data from the initial point cloud data to obtain candidate tray point cloud data.
In the embodiment of the application, initial point cloud data collected by the unmanned forklift often includes ground in the target detection position, that is, includes certain ground point cloud data. In order to remove the influence of the ground point cloud data, the position and the posture of the tray are determined more accurately, and the unmanned forklift can separate the ground point cloud data from the initial point cloud data to obtain corresponding candidate tray point cloud data.
For example, when the unmanned forklift has traveled to the target detection position (for example, 1-3 meters from the tray), it may obtain the initial pose issued by the processing element of its mounted sensing module. The initial pose may include the three-axis position coordinates (i.e., x-axis, y-axis, and z-axis coordinates), rotation angle, etc. of the tray at the target detection position. The initial pose may be determined by the processing element from the collected initial point cloud data, or may be a default pose (obtained by the sensing module through communication with other devices, or stored in the sensing module in advance). Based on the initial pose, the unmanned forklift can determine an ROI (Region of Interest) for the initial point cloud data and extract the initial point cloud data within the ROI for preprocessing, from which the portion belonging to the ground point cloud data can be separated. In some embodiments, the unmanned forklift can separate the plane corresponding to the ground at the target detection position by means of plane search, thereby achieving separation of the ground point cloud data. In other embodiments, the unmanned forklift may separate the ground point cloud data in other manners, which the embodiments of the present application do not specifically limit.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating separation of ground point cloud data from initial point cloud data according to an embodiment of the present application. As shown in fig. 3, the unmanned forklift can separate out points matched with the ground of the target detection position according to the initial point cloud data in the ROI area, and obtain candidate tray point cloud data. Optionally, the unmanned forklift may further refine the point cloud of the separated ground point cloud data, for example, the point cloud of the separated ground point cloud data may be used as a target point cloud candidate cluster, and based on specification data (such as a tray thickness, a number of columns, a height of columns, etc.) corresponding to a pre-acquired tray, the point cloud corresponding to the tray specification data may be further determined from the target point cloud candidate cluster, and used as candidate tray point cloud data.
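The plane-search style of ground separation described above can be sketched with a minimal RANSAC plane fit. The following numpy sketch is purely illustrative (the distance threshold and iteration count are assumed values, not parameters from the patent):

```python
import numpy as np

def ransac_ground_plane(points, dist_thresh=0.02, iters=200, seed=None):
    """Fit a dominant plane with a minimal RANSAC loop and split the cloud
    into ground inliers and remaining (candidate tray) points."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        # Sample 3 distinct points and derive the plane normal.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n /= norm
        dist = np.abs((points - p0) @ n)  # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers], points[~best_inliers]
```

In practice the fitted plane would additionally be checked against the expected ground height and normal direction before being treated as ground.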
206. And eliminating offset points which do not accord with the normal angle range in the candidate tray point cloud data, and searching the candidate tray point cloud data with the offset points eliminated to obtain target tray surface point cloud data.
The candidate tray point cloud data may still include points that do not belong to the tray to be detected; these can be further identified based on the point cloud normal corresponding to each point in the candidate tray point cloud data. Here, the point cloud normal refers to the normal determined by the normal vector of the tangent plane at a point in the point cloud. In the embodiments of the present application, by setting a certain normal angle range, i.e., the angle range within which the angle between a point cloud normal and a specified reference axis (for example, the x-axis) should fall, offset points that do not conform to the normal angle range can be removed from the candidate tray point cloud data, so that the target tray surface point cloud data corresponding to the tray can be further acquired.
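Point cloud normals of this kind are commonly estimated from each point's local neighbourhood. A small illustrative sketch follows (brute-force k-nearest neighbours and local PCA; a real system would use a KD-tree and a tuned neighbourhood size — both are assumptions here):

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a unit normal per point as the eigenvector of the
    smallest eigenvalue of its k-nearest-neighbour covariance (local PCA)."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # Brute-force kNN; a KD-tree would replace this in practice.
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)            # 3x3 covariance of the neighbourhood
        w, v = np.linalg.eigh(cov)      # eigenvalues in ascending order
        normals[i] = v[:, 0]            # direction of least variance
    return normals
```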
Referring to fig. 4, fig. 4 is a schematic diagram of removing offset points from candidate tray point cloud data according to an embodiment of the present disclosure. As shown in fig. 4, the angle between the point cloud normal m corresponding to point A and the x-axis is α (as shown in the enlarged P region), and the angle between the point cloud normal n corresponding to point B and the x-axis is β (as shown in the enlarged Q region). If α belongs to X (where X denotes the preset normal angle range), point A is not an offset point and can be retained; if β does not belong to X, point B is an offset point and should be eliminated.
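The offset-point rejection illustrated in fig. 4 amounts to thresholding the angle between each point's normal and a reference axis. A sketch in numpy, where the reference axis and the angle limit are assumed example values:

```python
import numpy as np

def remove_offset_points(points, normals, axis=(1.0, 0.0, 0.0),
                         max_angle_deg=30.0):
    """Keep only points whose (unoriented) normal lies within
    max_angle_deg of the reference axis; the rest are offset points."""
    axis = np.asarray(axis) / np.linalg.norm(axis)
    # Unoriented normals: use |cos| so n and -n are treated alike.
    cos = np.abs(normals @ axis)
    angles = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    keep = angles <= max_angle_deg
    return points[keep], points[~keep]
```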
On the basis, the unmanned forklift can further perform plane search on candidate pallet point cloud data with offset points removed to obtain point clouds which correspond to target detection positions and are parallel to the ground, and the point clouds are used as target pallet surface point cloud data corresponding to pallets.
208. Registering the target tray surface point cloud data to determine the pose corresponding to the tray in the target detection position.
In the embodiments of the present application, after the unmanned forklift acquires the target tray surface point cloud data, it can further determine the position, posture, and other data corresponding to the tray (specifically, for example, the point cloud center and plane angle corresponding to the tray) by means of registration. For example, the registration process may include coarse registration and fine registration: the unmanned forklift may first perform coarse registration on the target tray surface point cloud data to obtain corresponding initial registration point cloud data, then iterate on the initial registration point cloud data and determine the point cloud center and plane angle that meet preset conditions, obtained through fine registration, as the pose corresponding to the tray.
In some embodiments, the above-described registration process may be implemented based on templates. For example, in the case of performing coarse registration on target tray surface point cloud data, registration iteration may be performed based on pre-designated template point cloud data, so as to obtain a first rigid body transformation matrix between the target tray surface point cloud data and the template point cloud data. Under the condition of accurately registering the initial registration point cloud data, a second rigid body transformation matrix can be iteratively determined based on a certain template, and then the point cloud center and the plane angle corresponding to the tray are determined.
In other embodiments, the accurate registration of the initial registration point cloud data may be implemented through other types of iterative computation instead of the template, and the corresponding second rigid transformation matrix may be determined to further obtain the point cloud center and the plane angle corresponding to the tray.
Alternatively, the iterative process in the above registration may be implemented using the SAC-IA (Sample Consensus Initial Alignment) algorithm, the ICP (Iterative Closest Point) algorithm, etc., as further described in the following embodiments.
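As a concrete illustration of the fine-registration iteration, the following is a compact point-to-point ICP sketch (brute-force correspondences, Kabsch/SVD rigid fit). It only indicates the general idea; the patent does not specify this exact formulation, and the iteration count is an assumed value:

```python
import numpy as np

def icp(source, target, iters=20):
    """Point-to-point ICP: iteratively match nearest neighbours and solve
    the best rigid transform via the Kabsch/SVD method."""
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences.
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # Kabsch: optimal rotation between the centred point sets.
        sc, tc = src.mean(0), matched.mean(0)
        H = (src - sc).T @ (matched - tc)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Ri = Vt.T @ D @ U.T            # proper rotation (det = +1)
        ti = tc - Ri @ sc
        src = src @ Ri.T + ti          # apply the incremental transform
        R, t = Ri @ R, Ri @ t + ti     # accumulate the total transform
    return R, t
```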
Therefore, by implementing the tray pose detection method described in this embodiment, in a warehouse logistics scenario the unmanned forklift can detect the trays used for transporting goods and accurately acquire pose data such as their positions and postures. The tray pose detection method performs initial positioning followed by point cloud registration, effectively reducing noise interference during detection; it is particularly suitable for detecting irregularly shaped trays or trays subject to interference, thereby improving the accuracy of tray pose detection by the unmanned forklift, reducing detection errors, and effectively improving the automatic transport efficiency of the unmanned forklift.
Referring to fig. 5, fig. 5 is a schematic flow chart of another tray pose detection method disclosed in an embodiment of the present application, and the method may be applied to the above-mentioned unmanned forklift. As shown in fig. 5, the tray pose detection method may include the steps of:
502. and under the condition that the unmanned forklift is positioned at the target detection position, acquiring initial point cloud data comprising the tray in the target detection position.
Step 502 is similar to step 202, and will not be described herein.
504. And determining an interested area aiming at the initial point cloud data, and preprocessing the initial point cloud data in the interested area to obtain preprocessed point cloud data.
In the embodiments of the present application, when the unmanned forklift is at the target detection position, it can determine a region of interest (ROI) for the tray to be detected in the initial point cloud data based on the designated initial pose. The initial pose may include the three-axis position coordinates (i.e., x-axis, y-axis, and z-axis coordinates), rotation angle, etc. of the tray at the target detection position. For example, the unmanned forklift may extract the ROI from the initial point cloud data according to the initial pose and a preset ROI threshold, and determine the corresponding ROI region.
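ROI extraction around the initial pose can be as simple as an axis-aligned box crop; in this sketch the box half-extents stand in for the "preset ROI threshold" and are assumed values:

```python
import numpy as np

def crop_roi(points, center, half_extent):
    """Keep points inside an axis-aligned box around the initial pose
    estimate (a simple stand-in for an ROI threshold)."""
    lo = np.asarray(center) - np.asarray(half_extent)
    hi = np.asarray(center) + np.asarray(half_extent)
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]
```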
On this basis, the unmanned forklift can further preprocess the initial point cloud data within the ROI region to obtain corresponding preprocessed point cloud data. Illustratively, the preprocessing may include noise filtering, downsampling, and the like. Noise filtering is implemented through various outlier filtering algorithms (such as point cloud radius filtering and RANSAC filtering), which removes outlier noise within the ROI and keeps the point cloud in the ROI well clustered. Downsampling is implemented through various voxel downsampling algorithms (such as mean downsampling and random downsampling), which reduces the point cloud density, thereby reducing the computational load of subsequent steps while affecting accuracy as little as possible and improving the efficiency of tray pose detection by the unmanned forklift.
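The two preprocessing operations mentioned (voxel mean downsampling and radius outlier filtering) can be sketched as follows; the voxel size, search radius, and neighbour count are assumed example values:

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Mean downsampling: average all points falling in each voxel cell."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    out = np.zeros((inv.max() + 1, 3))
    cnt = np.bincount(inv).astype(float)
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / cnt
    return out

def radius_outlier_removal(points, radius=0.1, min_neighbors=3):
    """Drop points with fewer than min_neighbors within radius."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbor_counts = (d < radius).sum(axis=1) - 1  # exclude self
    return points[neighbor_counts >= min_neighbors]
```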
506. And separating ground point cloud data from the preprocessed point cloud data to obtain a target point cloud candidate cluster corresponding to the target detection position, wherein the ground point cloud data comprises points matched with the ground corresponding to the target detection position in the preprocessed point cloud data.
In the embodiment of the application, the unmanned forklift can separate the part belonging to the ground point cloud data from the preprocessed point cloud data preprocessed by noise filtering, downsampling and the like so as to acquire the target point cloud candidate cluster corresponding to the target detection position.
For example, the unmanned forklift can perform a plane search on the preprocessed point cloud data to obtain plane point clouds parallel to the ground. A plane point cloud being parallel to the ground indicates that its plane normal vector is parallel to the z-axis. In some embodiments, the plane search may be implemented based on a RANSAC (Random Sample Consensus) plane search algorithm: based on the RANSAC algorithm, plane fitting is performed on the points in the preprocessed point cloud data, and each plane whose normal vector is parallel to the z-axis is determined, so that the corresponding plane point clouds can be obtained.
On this basis, the unmanned forklift can determine the plane point cloud matched with the ground corresponding to the target detection position as the ground point cloud data, and remove the ground point cloud data from the preprocessed point cloud data. Thereafter, the unmanned forklift can cluster the remaining preprocessed point cloud data according to the target detection position based on the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm, so as to obtain the target point cloud candidate cluster corresponding to the target detection position.
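The ground-separation step above can be sketched as a toy RANSAC search for a plane whose normal is parallel to the z-axis. All thresholds (inlier distance, normal tolerance, iteration count) are assumed illustrative values, not parameters from this application.

```python
import numpy as np

def segment_ground(points, dist_thresh=0.02, normal_tol=0.95, iters=200, seed=0):
    """Toy RANSAC search for a plane whose normal is (near) parallel to
    the z-axis; returns a boolean mask of ground inliers."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n /= norm
        if abs(n[2]) < normal_tol:
            continue  # plane normal not parallel enough to the z-axis
        dist = np.abs((points - p0) @ n)
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask  # keep the plane with the most inliers
    return best_mask
```

The points flagged by the mask play the role of the ground point cloud data; the remaining points would then be passed to the clustering step.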
508. And determining candidate tray point cloud data from the target point cloud candidate clusters according to the specification data corresponding to the trays.
In this embodiment of the present application, after the unmanned forklift acquires the target point cloud candidate cluster from which the ground point cloud data has been separated, it may further refine the target point cloud candidate cluster to determine more specific candidate tray point cloud data. In some embodiments, the unmanned forklift may determine, based on the pre-acquired specification data corresponding to the tray, the point cloud matching the specification data of the tray from the target point cloud candidate cluster, as the candidate tray point cloud data.
Illustratively, the tray specification data may include the tray thickness (or tray height), the number of columns (or "piers"), the column height (or pier height), and the like. The unmanned forklift can determine a corresponding retained point cloud thickness according to the specification data of the tray. The retained point cloud thickness may be larger than the tray thickness, or larger than the sum of the tray thickness and the column height, so that a point cloud of sufficient thickness is retained to cope with possible error sources such as point cloud smear. On this basis, the unmanned forklift can intercept the corresponding planar point cloud from the target point cloud candidate cluster according to the retained point cloud thickness, as the candidate tray point cloud data.
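The interception by retained point cloud thickness can be sketched as a simple height slice of the candidate cluster. The assumptions that z is the height axis and that the tray lies at the bottom of the cluster are illustrative, not taken from this application.

```python
import numpy as np

def slice_by_retained_thickness(cluster: np.ndarray, retained_thickness: float) -> np.ndarray:
    """Keep only the points within `retained_thickness` of the lowest
    point of the cluster (assumes z is the height axis and the tray
    rests at the bottom of the cluster)."""
    z_min = cluster[:, 2].min()
    return cluster[cluster[:, 2] <= z_min + retained_thickness]
```

Choosing `retained_thickness` slightly larger than the tray (or tray plus column) height, as the text suggests, keeps smeared points near the tray surface instead of cutting them off.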
510. And determining offset points with normal angles larger than or equal to the reference normal angles according to normal angles between point cloud normals of all points in the candidate tray point cloud data and the reference axes, and eliminating the offset points from the candidate tray point cloud data.
In this embodiment of the application, considering that the candidate tray point cloud data may still contain points that do not belong to the tray to be detected, offset points that do not belong to the target tray surface point cloud can be removed through a further normal filtering step. The offset points may include points whose angle between the point cloud normal (the normal determined by the normal vector of the tangent plane at a point in the point cloud) and a specified reference axis (for example, the x-axis) exceeds a specified reference normal angle (i.e., does not conform to the normal angle range).
For example, for each point in the candidate tray point cloud data, the unmanned forklift can judge whether the normal angle between the point's point cloud normal and the reference axis exceeds the pre-specified reference normal angle. If the normal angle is greater than or equal to the reference normal angle, the corresponding point is determined as an offset point and removed from the candidate tray point cloud data (point B shown in fig. 4); if the normal angle is smaller than the reference normal angle, the corresponding point may be determined as a non-offset point and retained in the candidate tray point cloud data (point A shown in fig. 4).
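Step 510 can be sketched as follows, assuming the point cloud normals have already been estimated (e.g., by local tangent-plane fitting); the angle threshold and reference axis below are illustrative parameters.

```python
import numpy as np

def filter_by_normal_angle(points, normals, ref_axis, max_angle_deg):
    """Remove points whose point cloud normal deviates from the reference
    axis by at least the reference normal angle; returns the kept points
    and the boolean keep-mask."""
    ref = np.asarray(ref_axis, dtype=float)
    ref /= np.linalg.norm(ref)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    # Use |cos| so a normal and its flipped counterpart are treated alike.
    angles = np.degrees(np.arccos(np.clip(np.abs(n @ ref), 0.0, 1.0)))
    keep = angles < max_angle_deg
    return points[keep], keep
```

A point whose normal is orthogonal to the reference axis (like point B in the example above) would be rejected, while one whose normal is aligned with it (like point A) would be retained.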
512. And carrying out plane search on the candidate tray point cloud data from which the offset points are removed, and obtaining target tray surface point cloud data which corresponds to the target detection position and is parallel to the ground.
After the candidate tray point cloud data with the offset points removed is obtained, the unmanned forklift can further perform plane search on the rest candidate tray point cloud data so as to obtain target tray surface point cloud data corresponding to the tray. In some embodiments, the above-mentioned plane search may be implemented based on the RANSAC plane search algorithm, which is not specifically limited in the embodiments of the present application.
It should be noted that, the point cloud data of the target tray surface obtained through the above-mentioned plane search, the point cloud center and the plane angle corresponding to the point cloud data, may be used as the initial pose of the tray to be registered, and applied to the registration in the subsequent step, so as to determine the actual pose corresponding to the tray in the target detection position.
514. Registering the target tray surface point cloud data to determine the pose corresponding to the tray in the target detection position.
Step 514 is similar to step 208, and will not be described here.
Therefore, by implementing the tray pose detection method described in this embodiment, in a warehouse-logistics working scenario, the unmanned forklift detects the tray used for transporting goods and performs initial positioning and point cloud registration based on geometric features. This effectively reduces noise interference in the detection process, is particularly suitable for detecting irregularly shaped trays or trays with interference, and enables pose data describing the position and attitude of the tray to be acquired as accurately as possible, so that the accuracy with which the unmanned forklift detects the tray pose is improved, detection errors are reduced, and the automatic transportation efficiency of the unmanned forklift is effectively improved. In addition, non-target point clouds are gradually removed through geometric means such as plane search, which can improve the accuracy of the determined target tray surface point cloud data, reduce erroneous judgment as much as possible, and improve the refinement and reliability of the unmanned forklift's warehouse-logistics transportation.
Referring to fig. 6, fig. 6 is a schematic flow chart of another tray pose detection method disclosed in an embodiment of the present application, and the method may be applied to the above-mentioned unmanned forklift. As shown in fig. 6, the tray pose detection method may include the steps of:
602. and under the condition that the unmanned forklift is positioned at the target detection position, acquiring initial point cloud data comprising the tray in the target detection position.
Step 602 is similar to step 202, and will not be described herein.
604. And determining an interested area aiming at the initial point cloud data, and preprocessing the initial point cloud data in the interested area to obtain preprocessed point cloud data.
606. And separating ground point cloud data from the preprocessed point cloud data to obtain a target point cloud candidate cluster corresponding to the target detection position, wherein the ground point cloud data comprises points matched with the ground corresponding to the target detection position in the preprocessed point cloud data.
608. And determining candidate tray point cloud data from the target point cloud candidate clusters according to the specification data corresponding to the trays.
Step 604, step 606 and step 608 are similar to step 504, step 506 and step 508 described above, and will not be repeated here.
610. And determining offset points with normal angles larger than or equal to the reference normal angles according to normal angles between point cloud normals of all points in the candidate tray point cloud data and the reference axes, and eliminating the offset points from the candidate tray point cloud data.
612. And carrying out plane search on the candidate tray point cloud data from which the offset points are removed, and obtaining target tray surface point cloud data which corresponds to the target detection position and is parallel to the ground.
Step 610 and step 612 are similar to step 510 and step 512 described above, and are not repeated here.
614. And performing coarse registration on the target tray surface point cloud data based on the template point cloud data to obtain initial registration point cloud data.
In this embodiment of the application, in order to accurately determine the pose (such as the point cloud center, the plane angle, etc.) of the tray included in the target detection position, the target tray surface point cloud data may be registered. The registration process may include coarse registration and accurate registration; that is, the unmanned forklift may perform coarse registration and then further accurate registration on the target tray surface point cloud data, and obtain the point cloud center and plane angle that best fit the conditions, as the pose corresponding to the tray.
In some embodiments, the coarse registration of the target tray surface point cloud data may be performed based on pre-specified template point cloud data. The template point cloud data may be determined according to tray information input in advance to a sensing module mounted on the unmanned forklift. For example, the tray information may include the tray thickness (tray height), pier width (column width), pier height (column height), pier distance (column spacing), upper and lower deck board heights, and the like of the tray serving as the template, and the sensing module may generate the corresponding template point cloud data according to the input tray information and apply it to the subsequent registration process.
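Purely for illustration, a template point cloud for the front face of a tray could be generated from such dimension data along the following lines; the parameter names and the sampling scheme are assumptions, not the application's actual tray-information fields.

```python
import numpy as np

def template_front_face(pier_width, pier_count, pier_gap, height, step):
    """Illustrative template generator: sample the front face of a tray
    as a planar grid over each pier (column), with pockets of width
    `pier_gap` left empty between piers."""
    xs, zs = [], []
    for i in range(pier_count):
        x0 = i * (pier_width + pier_gap)
        x = np.arange(x0, x0 + pier_width + 1e-9, step)
        z = np.arange(0.0, height + 1e-9, step)
        gx, gz = np.meshgrid(x, z)
        xs.append(gx.ravel())
        zs.append(gz.ravel())
    x = np.concatenate(xs)
    z = np.concatenate(zs)
    return np.c_[x, np.zeros_like(x), z]  # face lies in the y = 0 plane
```

A real template would also cover the deck boards and possibly the top face, but even this flat sketch is enough to drive the feature-based coarse registration described next.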
For example, the unmanned forklift may select N (N is a positive integer) template sampling points and N target sampling points from the template point cloud data and the target tray surface point cloud data, respectively, and calculate first feature histogram data corresponding to the N template sampling points and second feature histogram data corresponding to the N target sampling points. The N template sampling points and the N target sampling points should have FPFH (Fast Point Feature Histogram) features that differ from one another as much as possible, and the distance between sampling points should meet a pre-specified minimum distance threshold, so as to ensure the validity and reliability of the registration as much as possible. Optionally, before the sampling points are selected, the unmanned forklift can also downsample (for example, voxel downsample) the template point cloud data and the target tray surface point cloud data, which can further reduce the calculation amount and improve the registration efficiency.
Based on the above, the unmanned forklift can obtain matching point pairs with similar FPFH features from the template point cloud data and the target tray surface point cloud data according to the first feature histogram data and the second feature histogram data, and can then iteratively calculate a first rigid body transformation matrix between the target tray surface point cloud data and the template point cloud data according to the matching point pairs. In each iteration, the registration accuracy of the first rigid body transformation matrix obtained by the current iteration can be judged by calculating the distance error after applying the transformation. The iterative process may be exited once an optimal first rigid body transformation matrix is obtained, or once the upper limit on the number of iterations is reached.
Further, according to the first rigid body transformation matrix after the iterative computation, the unmanned forklift can determine the initial registration point cloud data obtained by performing coarse registration on the target tray surface point cloud data, and apply the initial registration point cloud data to the subsequent accurate registration.
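The core of computing a rigid body transformation matrix from matching point pairs, used here in the coarse registration and again in the fine registration below, is the least-squares (Kabsch/SVD) estimate. The following is a minimal NumPy sketch, with no claim to match the application's exact procedure.

```python
import numpy as np

def rigid_transform_from_pairs(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares rigid transform (Kabsch/SVD) mapping matched source
    points onto their target counterparts; returns a 4x4 homogeneous matrix."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

Given correct correspondences this recovers the exact rotation and translation; with outlier pairs (as in RANSAC-style coarse registration) it is applied per sample and scored by the resulting distance error.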
616. And according to the initial registration point cloud data, iteratively determining a point cloud center and a plane angle that satisfy an iterative closest point condition, as the pose corresponding to the tray in the target detection position.
Illustratively, the iterative closest point condition may include an error threshold condition and/or an iteration number condition that are satisfied by performing closest point iterative computation on the initial registration point cloud data.
In this embodiment of the application, the unmanned forklift can further accurately register the initial registration point cloud data by adopting an ICP (Iterative Closest Point) algorithm. For example, the unmanned forklift may take the initial registration point cloud data and the corresponding template point cloud data as the initial point sets to be registered, where the template point cloud data may be the same as or different from the template point cloud data in step 614, which is not specifically limited in this embodiment of the present application.
The unmanned forklift searches for the closest point pairs between the initial registration point cloud data and the template point cloud data as matching point pairs; after mismatched point pairs with unreasonable directions are removed, a corresponding second rigid body transformation matrix can be calculated according to the matching point pairs. On this basis, the steps of finding the closest points and calculating the second rigid body transformation matrix are iterated, and the iterative process is exited once the iterative closest point condition is satisfied. According to the initial registration point cloud data and the second rigid body transformation matrix obtained after the iterative computation, the transformed point cloud center and plane angle can be determined and used as the pose corresponding to the tray in the target detection position.
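The accurate-registration loop described above can be sketched as a minimal point-to-point ICP. Brute-force nearest-neighbour matching, the convergence tolerance, and the iteration cap are illustrative simplifications; the application's rejection of mismatched pairs by direction is omitted here.

```python
import numpy as np

def icp(src, dst, iters=20, tol=1e-6):
    """Minimal point-to-point ICP: repeatedly match each source point to
    its nearest target point, estimate the best rigid transform, and stop
    on the error-threshold or iteration-count condition."""
    T = np.eye(4)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small illustrative clouds).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:
            break  # error threshold condition met
        prev_err = err
        # Kabsch/SVD rigid transform for the current correspondences.
        sc, nc = cur - cur.mean(0), nn - nn.mean(0)
        U, _, Vt = np.linalg.svd(sc.T @ nc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1, 1, d]) @ U.T
        t = nn.mean(0) - R @ cur.mean(0)
        cur = cur @ R.T + t
        Ti = np.eye(4)
        Ti[:3, :3], Ti[:3, 3] = R, t
        T = Ti @ T  # accumulate the second rigid body transformation
    return T, cur
```

Because ICP only refines a nearby solution, the coarse FPFH-based registration above is what makes this step converge to the correct tray pose.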
Therefore, by implementing the tray pose detection method described in this embodiment, in a warehouse-logistics working scenario, the unmanned forklift detects the tray used for transporting goods and performs initial positioning and point cloud registration based on geometric features. This effectively reduces noise interference in the detection process, is particularly suitable for detecting irregularly shaped trays or trays with interference, and enables pose data describing the position and attitude of the tray to be acquired as accurately as possible, so that the accuracy with which the unmanned forklift detects the tray pose is improved, detection errors are reduced, and the automatic transportation efficiency of the unmanned forklift is effectively improved. In addition, performing multiple rounds of iterative registration on the target tray surface point cloud data helps to further improve the accuracy with which the unmanned forklift determines the tray pose, realize precise detection, and ensure stable and reliable warehouse logistics.
Referring to fig. 7, fig. 7 is a schematic modularized view of a tray pose detection device disclosed in an embodiment of the present application, where the tray pose detection device may be applied to the above-mentioned unmanned forklift. As shown in fig. 7, the tray pose detection apparatus may include a point cloud acquisition unit 701, a ground separation unit 702, a plane search unit 703, and a registration unit 704, wherein:
The point cloud acquisition unit 701 is configured to acquire initial point cloud data including a tray in a target detection position in a case where the unmanned forklift is at the target detection position;
a ground separation unit 702, configured to separate ground point cloud data from initial point cloud data, to obtain candidate tray point cloud data;
a plane search unit 703, configured to reject offset points in the candidate tray point cloud data that do not conform to the normal angle range, and search for target tray surface point cloud data from the candidate tray point cloud data from which the offset points are rejected;
and the registration unit 704 is used for registering the target tray surface point cloud data so as to determine the pose corresponding to the tray in the target detection position.
Therefore, by implementing the tray pose detection device described in this embodiment, in a warehouse-logistics working scenario, the unmanned forklift can detect the tray used for transporting goods and accurately acquire its position, attitude, and the like. The tray pose detection method performs initial positioning and point cloud registration based on geometric features, which effectively reduces noise interference in the detection process and is particularly suitable for detecting irregularly shaped trays or trays with interference (such as stretch-film wrapping and the like), so that the accuracy with which the unmanned forklift detects the tray pose is improved, detection errors are reduced, and the transportation efficiency of the unmanned forklift is effectively improved.
In one embodiment, the above ground separation unit 702 may be specifically configured to:
determining an interesting area according to the initial point cloud data, and preprocessing the initial point cloud data in the interesting area to obtain preprocessed point cloud data;
separating ground point cloud data from the preprocessed point cloud data to obtain target point cloud candidate clusters corresponding to the target detection positions, wherein the ground point cloud data comprise points matched with the ground corresponding to the target detection positions in the preprocessed point cloud data;
and determining candidate tray point cloud data from the target point cloud candidate clusters according to the specification data corresponding to the tray to be detected.
In one embodiment, the ground separation unit 702 may specifically include:
performing plane search on the preprocessed point cloud data to obtain plane point clouds parallel to the ground;
determining a plane point cloud matched with the ground corresponding to the target detection position as ground point cloud data, and eliminating the ground point cloud data from the preprocessed point cloud data;
and clustering the preprocessed point cloud data after the ground point cloud data is removed according to the target detection position to obtain target point cloud candidate clusters corresponding to the target detection position.
In an embodiment, when determining candidate tray point cloud data from the target point cloud candidate cluster according to specification data corresponding to the tray to be detected, the ground separation unit 702 may specifically include:
determining the thickness of the reserved point cloud according to the specification data corresponding to the tray to be detected, wherein the thickness of the reserved point cloud is larger than that of the tray;
and according to the thickness of the reserved point cloud, intercepting the corresponding planar point cloud from the target point cloud candidate cluster to serve as candidate tray point cloud data.
In one embodiment, the plane search unit 703 may be specifically configured to:
determining offset points with normal angles larger than or equal to the reference normal angles according to normal angles between point cloud normals of all points in the candidate tray point cloud data and the reference axes, and eliminating the offset points from the candidate tray point cloud data;
and carrying out plane search on the candidate tray point cloud data with the offset points removed to obtain target tray surface point cloud data which corresponds to the target detection position and is parallel to the ground.
In one embodiment, the registering unit 704 may specifically be configured to:
performing coarse registration on target tray surface point cloud data based on the template point cloud to obtain initial registration point cloud data;
And according to the initial registration point cloud data, iteratively determining a point cloud center and a plane angle that satisfy the iterative closest point condition, as the pose corresponding to the tray in the target detection position, wherein the iterative closest point condition comprises an error threshold condition and/or an iteration count condition satisfied by performing closest point iterative computation on the initial registration point cloud data.
In one embodiment, the registering unit 704 performs coarse registration on the target tray surface point cloud data based on the template point cloud to obtain initial registration point cloud data, which may specifically include:
respectively selecting N template sampling points and N target sampling points from the template point cloud data and the target tray surface point cloud data, wherein N is a positive integer;
respectively calculating first characteristic histogram data corresponding to N template sampling points and second characteristic histogram data corresponding to N target sampling points;
according to the first characteristic histogram data and the second characteristic histogram data, matching point pairs are obtained from the template point cloud data and the target tray surface point cloud data;
according to the matching point pairs, a first rigid body transformation matrix between the target tray surface point cloud data and the template point cloud data is iteratively calculated, and initial registration point cloud data obtained by performing coarse registration on the target tray surface point cloud data is determined according to the first rigid body transformation matrix after the iterative calculation.
Therefore, by implementing the tray pose detection device described in this embodiment, in a warehouse-logistics working scenario, the unmanned forklift detects the tray used for transporting goods and performs initial positioning and point cloud registration based on geometric features. This effectively reduces noise interference in the detection process, is particularly suitable for detecting irregularly shaped trays or trays with interference, and enables pose data describing the position and attitude of the tray to be acquired as accurately as possible, so that the accuracy with which the unmanned forklift detects the tray pose is improved, detection errors are reduced, and the automatic transportation efficiency of the unmanned forklift is effectively improved. In addition, non-target point clouds are gradually removed through geometric means such as plane search, which can improve the accuracy of the determined target tray surface point cloud data, reduce erroneous judgment as much as possible, and improve the refinement and reliability of the unmanned forklift's warehouse-logistics transportation. Moreover, performing multiple rounds of iterative registration on the target tray surface point cloud data helps to further improve the accuracy with which the unmanned forklift determines the tray pose, realize precise detection, and ensure stable and reliable warehouse logistics.
Referring to fig. 8, fig. 8 is a schematic modularized view of an unmanned forklift disclosed in an embodiment of the present application, where the unmanned forklift may include the above-mentioned sensing module (e.g., a vehicle-mounted computer, an SoC-based point cloud scanning and processing system, etc.). As shown in fig. 8, the sensing module carried by the unmanned forklift may include:
a memory 801 storing executable program code;
a processor 802 coupled to the memory 801;
the processor 802 may call executable program codes stored in the memory 801, and may perform all or part of the steps in any of the tray pose detection methods described in the above embodiments.
Further, the embodiment of the application further discloses a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program makes a computer execute all or part of the steps in any of the tray pose detection methods described in the above embodiments.
Further, the embodiments further disclose a computer program product which, when run on a computer, enables the computer to perform all or part of the steps of any of the tray pose detection methods described in the above embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware. The program may be stored in a computer-readable storage medium, including a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, magnetic disk memory, tape memory, or any other medium that can be used for carrying or storing computer-readable data.
The above describes in detail a tray pose detection method and device, an unmanned forklift, and a storage medium disclosed in the embodiments of the present application. The principles and implementations of the present application are described herein with specific examples, and the description of the above embodiments is only intended to help understand the method and core ideas of the present application. Meanwhile, those skilled in the art may make modifications to the specific embodiments and the application scope according to the ideas of the present application. In view of the above, the content of this description should not be construed as limiting the present application.

Claims (10)

1. A tray pose detection method, applied to an unmanned forklift, the method comprising:
acquiring initial point cloud data comprising a tray in a target detection position under the condition that the unmanned forklift is in the target detection position;
separating the ground point cloud data from the initial point cloud data to obtain candidate tray point cloud data;
removing offset points which do not accord with the normal angle range from the candidate tray point cloud data, and searching from the candidate tray point cloud data after removing the offset points to obtain target tray surface point cloud data;
registering the target tray surface point cloud data to determine the pose corresponding to the tray in the target detection position.
2. The method of claim 1, wherein separating the ground point cloud data from the initial point cloud data to obtain candidate tray point cloud data comprises:
determining an interested area aiming at the initial point cloud data, and preprocessing the initial point cloud data in the interested area to obtain preprocessed point cloud data;
separating ground point cloud data from the preprocessed point cloud data to obtain a target point cloud candidate cluster corresponding to the target detection position, wherein the ground point cloud data comprises points matched with the ground corresponding to the target detection position in the preprocessed point cloud data;
And determining candidate tray point cloud data from the target point cloud candidate clusters according to the specification data corresponding to the tray.
3. The method according to claim 2, wherein separating the ground point cloud data from the preprocessed point cloud data to obtain the target point cloud candidate cluster corresponding to the target detection position comprises:
performing plane search on the preprocessed point cloud data to obtain plane point clouds parallel to the ground;
determining a plane point cloud matched with the ground corresponding to the target detection position as ground point cloud data, and eliminating the ground point cloud data from the preprocessed point cloud data;
and clustering the preprocessed point cloud data after eliminating the ground point cloud data according to the target detection position to obtain a target point cloud candidate cluster corresponding to the target detection position.
4. The method of claim 3, wherein the determining candidate tray point cloud data from the target point cloud candidate clusters according to the specification data corresponding to the tray includes:
determining the thickness of the reserved point cloud according to the specification data corresponding to the tray, wherein the thickness of the reserved point cloud is larger than that of the tray;
And according to the thickness of the reserved point cloud, intercepting corresponding plane point clouds from the target point cloud candidate cluster to serve as candidate tray point cloud data.
5. The method of claim 1, wherein the removing offset points that do not conform to the normal angle range from the candidate tray point cloud data, and searching the candidate tray point cloud data from which the offset points are removed to obtain target tray surface point cloud data, comprises:
determining offset points with normal angles larger than or equal to the reference normal angles according to normal angles between point cloud normals of all points in the candidate tray point cloud data and the reference axes, and eliminating the offset points from the candidate tray point cloud data;
and performing plane search on the candidate tray point cloud data from which the offset points are removed to obtain target tray surface point cloud data which corresponds to the target detection position and is parallel to the ground.
6. The method of any one of claims 1 to 5, wherein registering the target tray surface point cloud data to determine the pose corresponding to the tray in the target detection position comprises:
performing coarse registration on the target tray surface point cloud data based on the template point cloud data to obtain initial registration point cloud data;
And iteratively determining, according to the initial registration point cloud data, a point cloud center and a plane angle that satisfy an iterative closest point condition, as the pose corresponding to the tray in the target detection position, wherein the iterative closest point condition comprises an error threshold condition and/or an iteration count condition satisfied by performing closest point iterative computation on the initial registration point cloud data.
7. The method of claim 6, wherein the performing coarse registration on the target tray surface point cloud data based on the template point cloud to obtain initial registration point cloud data comprises:
respectively selecting N template sampling points and N target sampling points from the template point cloud data and the target tray surface point cloud data, wherein N is a positive integer;
respectively calculating first feature histogram data corresponding to the N template sampling points and second feature histogram data corresponding to the N target sampling points;
acquiring matching point pairs from the template point cloud data and the target tray surface point cloud data according to the first feature histogram data and the second feature histogram data;
and iteratively calculating, according to the matching point pairs, a first rigid body transformation matrix between the target tray surface point cloud data and the template point cloud data, and determining, according to the iteratively calculated first rigid body transformation matrix, the initial registration point cloud data obtained by performing coarse registration on the target tray surface point cloud data.
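Claim 7 describes a feature-histogram coarse registration in the spirit of FPFH-based alignment. The sketch below substitutes a much simpler per-point descriptor (a histogram of neighbour distances, which is rotation- and translation-invariant) for the patent's unspecified feature histogram, matches sampled points by histogram distance, and solves the rigid transform from the matched pairs with the Kabsch/SVD method; the descriptor choice and all names are assumptions:

```python
import numpy as np

def distance_histogram(cloud, idx, bins=8, radius=9.0):
    """Per-point descriptor: histogram of distances to neighbours within
    radius (a simple stand-in for the claimed feature histogram data)."""
    d = np.linalg.norm(cloud - cloud[idx], axis=1)
    d = d[(d > 0) & (d < radius)]
    hist, _ = np.histogram(d, bins=bins, range=(0.0, radius))
    return hist / max(hist.sum(), 1)

def coarse_register(target, template, n_samples=6, radius=9.0, seed=0):
    """Match N sampled target points to N sampled template points by
    histogram similarity, then fit a rigid transform (Kabsch)."""
    rng = np.random.default_rng(seed)
    t_idx = rng.choice(len(template), n_samples, replace=False)
    s_idx = rng.choice(len(target), n_samples, replace=False)
    t_hists = np.array([distance_histogram(template, i, radius=radius) for i in t_idx])
    s_hists = np.array([distance_histogram(target, i, radius=radius) for i in s_idx])
    # Pair each target sample with the template sample whose histogram is closest.
    src = np.array([target[i] for i in s_idx])
    dst = np.array([template[t_idx[np.argmin(np.linalg.norm(t_hists - h, axis=1))]]
                    for h in s_hists])
    # Kabsch: rigid transform minimising ||R*src + t - dst||.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return target @ R.T + t, R, t
```

The coarse result would then be refined by the closest-point iteration of claim 6.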
8. A tray pose detection device, characterized in that the device is applied to an unmanned forklift, the tray pose detection device comprising:
a point cloud acquisition unit, configured to acquire initial point cloud data including a tray in a target detection position when the unmanned forklift is located at the target detection position;
a ground separation unit, configured to separate ground point cloud data from the initial point cloud data to obtain candidate tray point cloud data;
a plane search unit, configured to remove, from the candidate tray point cloud data, offset points that do not conform to a normal angle range, and to perform a plane search on the candidate tray point cloud data with the offset points removed to obtain target tray surface point cloud data;
and a registration unit, configured to register the target tray surface point cloud data to determine the pose corresponding to the tray in the target detection position.
9. An unmanned forklift comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to implement the method of any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202310075321.7A 2023-01-11 2023-01-11 Tray pose detection method and device, unmanned forklift and storage medium Pending CN116128841A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310075321.7A CN116128841A (en) 2023-01-11 2023-01-11 Tray pose detection method and device, unmanned forklift and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310075321.7A CN116128841A (en) 2023-01-11 2023-01-11 Tray pose detection method and device, unmanned forklift and storage medium

Publications (1)

Publication Number Publication Date
CN116128841A true CN116128841A (en) 2023-05-16

Family

ID=86309820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310075321.7A Pending CN116128841A (en) 2023-01-11 2023-01-11 Tray pose detection method and device, unmanned forklift and storage medium

Country Status (1)

Country Link
CN (1) CN116128841A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342858A (en) * 2023-05-29 2023-06-27 未来机器人(深圳)有限公司 Object detection method, device, electronic equipment and storage medium
CN116342858B (en) * 2023-05-29 2023-08-25 未来机器人(深圳)有限公司 Object detection method, device, electronic equipment and storage medium
CN116383229A (en) * 2023-06-02 2023-07-04 未来机器人(深圳)有限公司 Tray identification method, unmanned forklift and storage medium
CN117649450A (en) * 2024-01-26 2024-03-05 杭州灵西机器人智能科技有限公司 Tray grid positioning detection method, system, device and medium
CN117649450B (en) * 2024-01-26 2024-04-19 杭州灵西机器人智能科技有限公司 Tray grid positioning detection method, system, device and medium
CN118229772A (en) * 2024-05-24 2024-06-21 杭州士腾科技有限公司 Tray pose detection method, system, equipment and medium based on image processing

Similar Documents

Publication Publication Date Title
CN116128841A (en) Tray pose detection method and device, unmanned forklift and storage medium
US7376262B2 (en) Method of three dimensional positioning using feature matching
CN110794406B (en) Multi-source sensor data fusion system and method
CN110766758B (en) Calibration method, device, system and storage device
CN113253737B (en) Shelf detection method and device, electronic equipment and storage medium
CN111260289A (en) Micro unmanned aerial vehicle warehouse checking system and method based on visual navigation
CN115546202B (en) Tray detection and positioning method for unmanned forklift
US20230204776A1 (en) Vehicle lidar system and object detection method thereof
CN113050636A (en) Control method, system and device for autonomous tray picking of forklift
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
CN113989366A (en) Tray positioning method and device
Morris et al. A view-dependent adaptive matched filter for ladar-based vehicle tracking
CN114332219B (en) Tray positioning method and device based on three-dimensional point cloud processing
CN117218350A (en) SLAM implementation method and system based on solid-state radar
CN112581519B (en) Method and device for identifying and positioning radioactive waste bag
CN116071547A (en) Tray pose detection method and device, equipment and storage medium
Billah et al. Calibration of multi-lidar systems: Application to bucket wheel reclaimers
Li et al. 2d lidar and camera fusion using motion cues for indoor layout estimation
CN115600118A (en) Tray leg identification method and system based on two-dimensional laser point cloud
CN109635692A (en) Scene based on ultrasonic sensor recognizer again
Bohacs et al. Mono Camera Based Pallet Detection and Pose Estimation for Automated Guided Vehicles
CN112907666A (en) Tray pose estimation method, system and device based on RGB-D
CN113706610A (en) Pallet pose calculation method based on RGB-D camera
CN116342695B (en) Unmanned forklift truck goods placing detection method and device, unmanned forklift truck and storage medium
Joo et al. A pallet recognition and rotation algorithm for autonomous logistics vehicle system of a distribution center

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination