CN114897972A - Tray positioning method and device - Google Patents

Tray positioning method and device

Info

Publication number
CN114897972A
CN114897972A (application CN202210690376.4A)
Authority
CN
China
Prior art keywords
tray
point cloud
coordinate
determining
pallet
Prior art date
Legal status
Pending
Application number
CN202210690376.4A
Other languages
Chinese (zh)
Inventor
赵鹏
徐斌
刘伟
耿牛牛
康照奇
Current Assignee
Jike Science and Technology Co Ltd
Original Assignee
Jike Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jike Science and Technology Co Ltd filed Critical Jike Science and Technology Co Ltd
Priority to CN202210690376.4A
Publication of CN114897972A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a tray positioning method and device in the technical field of tray positioning, mainly aimed at improving the computational efficiency and accuracy of tray positioning. The main technical scheme comprises: acquiring a two-dimensional image and point cloud data of the tray; determining the pixel position corresponding to the tray in the two-dimensional image; selecting the point cloud corresponding to that pixel position in the point cloud data as the tray point cloud of interest; registering the tray point cloud of interest against the tray template point cloud to obtain an accurate tray point cloud; and determining the positioning information of the tray based on the accurate tray point cloud.

Description

Tray positioning method and device
The present application is a divisional application of the patent application entitled "A method and apparatus for positioning a tray", which was filed on December 27, 2021, with application number 202111607658.5.
Technical Field
The invention relates to the technical field of tray positioning, in particular to a tray positioning method and device.
Background
With the development of science and technology and the growth of industrial automation, intelligent handling equipment such as AGV forklifts is usually adopted in warehouse environments to carry goods and reduce labor costs. What intelligent handling equipment such as an AGV forklift carries is the tray on which goods are placed: the tray holds the goods, and the intelligent handling equipment lifts the tray and carries it to the corresponding position.
The key to the work of intelligent handling equipment such as an AGV forklift is how to acquire the position of the tray in space, so as to guide the AGV to that position to complete the carrying of goods. Currently, methods for locating the position of the tray in space include two-dimensional image processing techniques and three-dimensional point cloud processing techniques. The two-dimensional processing technique is limited by the warehouse environment: because the environment is complex and changeable, external factors such as ambient illumination cause large differences between images, which interferes with feature extraction of the target; the robustness and adaptability of the algorithm are therefore poor, and the accuracy of tray positioning is low. The three-dimensional point cloud processing technique has low computational efficiency due to the large volume of point cloud data, and it is difficult to meet the real-time requirements of industrial applications.
Disclosure of Invention
In view of this, the present invention provides a tray positioning method and device, with the main aim of improving the computational efficiency and accuracy of tray positioning.
In order to achieve the above purpose, the following technical scheme is mainly adopted:
in a first aspect, the present invention provides a method for positioning a tray, the method comprising:
acquiring a two-dimensional image and point cloud data of the tray;
determining a pixel position corresponding to the tray in the two-dimensional image;
selecting point clouds corresponding to the pixel positions in the point cloud data as tray point clouds of interest;
carrying out registration processing on the tray point cloud of interest and the tray template point cloud to obtain an accurate tray point cloud; the tray template point cloud is manufactured on the basis of a tray vertical cylindrical surface;
determining positioning information of the pallet based on the precise pallet point cloud;
registering the interested tray point cloud and the tray template point cloud to obtain an accurate tray point cloud, which comprises the following steps:
determining a feature descriptor of the pallet template point cloud and a feature descriptor of the pallet point cloud of interest;
taking the tray template point cloud and the corresponding feature descriptor thereof, the interested tray point cloud and the corresponding feature descriptor thereof as the input of a first registration algorithm to obtain a transformation matrix between the tray template point cloud and the interested tray point cloud;
taking the tray template point cloud, the interested tray point cloud and the transformation matrix as the input of a second registration algorithm to obtain the accurate tray point cloud;
if the positioning information includes a center coordinate, determining the positioning information of the pallet based on the accurate pallet point cloud includes:
selecting a maximum coordinate value and a minimum coordinate value corresponding to each coordinate axis of the tray in a camera coordinate system from the accurate tray point cloud;
for each of said coordinate axes: determining the average value of the maximum coordinate value and the minimum coordinate value corresponding to the coordinate axis as the central coordinate value of the tray in the coordinate axis;
generating a central coordinate corresponding to the center of the tray based on the central coordinate value corresponding to each coordinate axis;
if the positioning information includes pose data, determining the positioning information of the pallet based on the accurate pallet point cloud, including:
for each of the coordinate axes: projecting the accurate tray point cloud on a coordinate plane corresponding to the coordinate axis, fitting scattered points obtained by projection into a straight line, and determining an included angle between the coordinate axis and the straight line;
and generating pose data of the tray based on the included angle corresponding to each coordinate axis.
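The two positioning computations described above, the bounding-box-midpoint center and the projected-line angles, admit a compact numpy sketch; the plane/axis pairing and function names are illustrative assumptions, since the claim does not fix them:

```python
import numpy as np

def tray_center(cloud):
    """Center coordinate: per-axis average of the minimum and maximum
    coordinate values of the accurate tray point cloud (i.e. the
    axis-aligned bounding-box center)."""
    return (cloud.min(axis=0) + cloud.max(axis=0)) / 2.0

def tray_pose_angles(cloud):
    """Pose data: for each coordinate plane, project the cloud onto it,
    fit the scattered points to a straight line (least squares), and
    report the angle (degrees) between the line and the plane's first axis."""
    angles = {}
    planes = {"XY": (0, 1), "XZ": (0, 2), "YZ": (1, 2)}
    for name, (a, b) in planes.items():
        u, v = cloud[:, a], cloud[:, b]
        slope, _ = np.polyfit(u, v, 1)   # v ~ slope * u + intercept
        angles[name] = np.degrees(np.arctan(slope))
    return angles
```

A cloud scattered along the line y = x, for instance, yields a 45° angle in the XY plane.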
Optionally, selecting the point cloud corresponding to the pixel position in the point cloud data as an interesting tray point cloud, including:
calibrating the pixel position to obtain a target pixel position;
and selecting the point cloud corresponding to the target pixel position in the point cloud data as the tray point cloud of interest.
Optionally, selecting a point cloud corresponding to the target pixel position in the point cloud data as an interesting tray point cloud, including:
determining point clouds corresponding to the target pixel positions in the point cloud data as interest point clouds;
carrying out voxel filtering processing on the interest point cloud to obtain the interest tray point cloud;
and/or,
calibrating the pixel positions to obtain target pixel positions, including:
and verifying the pixel position by utilizing respective corresponding verification values of the pixel point coordinate, the pixel width and the pixel height related to the pixel position to obtain the target pixel position, wherein the respective corresponding verification values of the pixel point coordinate, the pixel width and the pixel height are used for increasing or reducing respective existing values.
Optionally, the method further comprises:
determining the coordinate values, on the same coordinate axis of the camera coordinate system, of the centroid and of the center corresponding to the accurate tray point cloud;
determining the placement condition of the tray based on the comparison of the two coordinate values; specifically:
if the coordinate value of the centroid of the tray point cloud on a certain coordinate axis is greater than the coordinate value of the center on the same coordinate axis, it is judged that the tray is placed forward, and the intelligent handling equipment can carry the tray;
if the coordinate value of the centroid of the tray point cloud on a certain coordinate axis is smaller than the coordinate value of the center on the same coordinate axis, it is judged that the tray is placed in reverse or in some other special condition, the intelligent handling equipment cannot conveniently carry it, and a prompt needs to be issued for subsequent processing.
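Read literally, this placement check compares two scalars on one camera axis; a minimal sketch follows, where the choice of the depth axis Z and the return labels are assumptions for illustration, not fixed by the patent:

```python
import numpy as np

def placement_check(cloud, axis=2):
    """Compare the centroid (mean of all points) with the bounding-box
    center on one camera axis (depth axis Z assumed here) to judge
    whether the tray is placed forward or in reverse."""
    centroid = cloud.mean(axis=0)[axis]
    center = (cloud.min(axis=0)[axis] + cloud.max(axis=0)[axis]) / 2.0
    if centroid > center:
        return "forward"              # handling equipment may carry the tray
    elif centroid < center:
        return "reverse_or_special"   # issue a prompt for follow-up handling
    return "ambiguous"
```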
In order to achieve the above purpose, the invention also provides the following scheme:
a pallet positioning apparatus, the apparatus comprising:
the acquisition unit is used for acquiring a two-dimensional image and point cloud data of the tray;
the first determining unit is used for determining the pixel position corresponding to the tray in the two-dimensional image;
the selecting unit is used for selecting the point cloud corresponding to the pixel position in the point cloud data as an interesting tray point cloud;
the registration unit is used for carrying out registration processing on the tray point cloud of interest and the tray template point cloud to obtain an accurate tray point cloud; the tray template point cloud is manufactured on the basis of a tray vertical cylindrical surface;
a second determining unit for determining the positioning information of the pallet based on the accurate pallet point cloud;
the registration unit is specifically configured to determine a feature descriptor of the pallet template point cloud and a feature descriptor of the pallet point cloud of interest;
taking the tray template point cloud and the corresponding feature descriptor thereof, the interested tray point cloud and the corresponding feature descriptor thereof as the input of a first registration algorithm to obtain a transformation matrix between the tray template point cloud and the interested tray point cloud;
taking the tray template point cloud, the interested tray point cloud and the transformation matrix as the input of a second registration algorithm to obtain the accurate tray point cloud;
the second determination unit includes:
the first determining module is used for selecting the maximum coordinate value and the minimum coordinate value corresponding to each coordinate axis of the tray in a camera coordinate system from the accurate tray point cloud when the positioning information is the center coordinate; for each of said coordinate axes: determining the average value of the maximum coordinate value and the minimum coordinate value corresponding to the coordinate axis as the central coordinate value of the tray in the coordinate axis; generating a central coordinate corresponding to the center of the tray based on the central coordinate value corresponding to each coordinate axis;
the second determination unit includes:
a second determining module, configured to, when the positioning information is pose data, perform, for each coordinate axis: projecting the accurate tray point cloud on a coordinate plane corresponding to the coordinate axis, fitting scattered points obtained by projection into a straight line, and determining an included angle between the coordinate axis and the straight line; and generating pose data of the tray based on the included angle corresponding to each coordinate axis.
In order to achieve the above purpose, the invention also provides the following scheme: a computer-readable storage medium comprising a stored program, wherein the program, when executed, controls a device on which the storage medium is located to perform any of the tray positioning methods.
In order to achieve the above purpose, the invention also provides the following scheme:
a storage management device, the storage management device comprising:
a memory for storing a program;
a processor, coupled to the memory, for executing the program to perform any of the tray positioning methods.
By means of the above technical scheme, the tray positioning method and device provided by the invention can, when a tray needs to be carried, acquire the two-dimensional image and point cloud data of the tray; then determine the pixel position corresponding to the tray in the two-dimensional image, and select the point cloud corresponding to that pixel position in the point cloud data as the tray point cloud of interest; register the tray point cloud of interest against the tray template point cloud to obtain an accurate tray point cloud; and finally determine the positioning information of the tray based on the accurate tray point cloud. The scheme thus uses a two-dimensional image processing technique and a three-dimensional point cloud processing technique together: the pixel position of the tray in the two-dimensional image is obtained with the two-dimensional image processing technique; the target point cloud of the tray is then extracted from the three-dimensional point cloud data through that pixel position, yielding the corresponding tray point cloud of interest; the three-dimensional point cloud processing technique screens the tray point cloud of interest, and the screened accurate tray point cloud is used to determine the tray positioning information. The scheme can therefore accurately extract the point cloud of the tray's region of interest and effectively reduce the number of points required to determine the tray positioning information, thereby improving the computational efficiency and accuracy of tray positioning.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart illustrating a method for positioning a pallet according to an embodiment of the present invention;
FIG. 2 illustrates a schematic view of a tray provided in accordance with another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a tray positioning device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a tray positioning device according to another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
With the development of science and technology and the growth of industrial automation, intelligent handling equipment such as AGV forklifts is usually adopted in warehouse environments to carry goods and reduce labor costs. What intelligent handling equipment such as an AGV forklift carries is the tray on which goods are placed: the tray holds the goods, and the intelligent handling equipment lifts the tray and carries it to the corresponding position.
The key to the work of intelligent handling equipment such as an AGV forklift is how to acquire the position of the tray in space, so as to guide the AGV to that position to complete the carrying of goods. At present there are four methods for acquiring the position of the tray in space. First, a visual detection algorithm that segments the image of the tray based on color, edge and corner features; feature extraction with this method is easily disturbed by factors such as the actual ambient illumination, and the robustness and adaptability of the algorithm are poor. Second, a visual label detection method: a visual mark, generally a two-dimensional code or another easily recognized bar code, is pasted on the surface of the tray's upright column, and the tray is then found by searching for the visual mark; the mark thus becomes the deciding factor of tray detection, and when it is stained, the detection result is greatly affected. Third, tray positioning based on Haar-like and LBP features; this method depends on the current pose of the tray, the best case being that the tray's vertical cylindrical surface is kept parallel to the imaging surface of the sensor. In practical application scenarios, however, the pose uncertainty of the tray is large and a relatively stable posture cannot always be maintained, so the related algorithms are strongly limited and the accuracy of tray positioning is low. Fourth, tray positioning based on the three-dimensional point cloud; in this method, because of the large volume of point cloud data, the computational efficiency is low and it is difficult to meet the real-time requirements of industrial applications.
Therefore, in order to overcome the above-mentioned drawbacks, embodiments of the present invention provide a method and an apparatus for positioning a tray, so as to improve the calculation efficiency and accuracy of positioning the tray. The following describes a method and an apparatus for positioning a tray according to an embodiment of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a method for positioning a tray, where the method mainly includes:
101. two-dimensional images and point cloud data of the tray are acquired.
The tray positioning method provided by the embodiment of the invention is used for positioning trays in a warehouse environment, so as to assist intelligent handling equipment such as an AGV forklift in accurately carrying the trays.
In practical application, a 3D camera is installed on intelligent handling equipment such as an AGV forklift; the 3D camera can collect both two-dimensional images and point cloud data. When the intelligent handling equipment reaches a fixed position, it sends a request to the host computer software system, which then controls the 3D camera to shoot, obtaining the two-dimensional image and the three-dimensional point cloud data of the tray. The fixed position is considered reached when the distance between the intelligent handling equipment and the tray is not less than a preset distance threshold.
The two-dimensional image and the point cloud data of the tray are collected in the same scene, and the two have a correspondence. The camera coordinate system follows the right-hand rule: facing the camera lens, left is the positive direction of the X axis, down is the positive direction of the Y axis, and forward is the positive direction of the Z axis. The value of each plane in the camera depth direction is therefore taken as the value on the Z axis.
102. And determining the pixel position corresponding to the tray in the two-dimensional image.
The specific process for determining the pixel position corresponding to the tray in the two-dimensional image is as follows: tray detection is performed on the two-dimensional image using the deep learning tray target detection model, obtaining the pixel position corresponding to the tray.
The deep learning tray object detection model described herein has the function of identifying the tray and returning its pixel position in the image. The deep learning tray target detection model is a pre-trained model and can be directly used when the pixel position of a tray in a two-dimensional image needs to be identified.
The deep learning tray target detection model can be obtained as follows. A large number of two-dimensional images of trays is collected, and the collected images are made into a data set in the VOC (Visual Object Classes) data format for deep learning target detection, which is divided into a training set and a test set at a set proportion, for example 7:3. A detection model is then selected and its training parameters are set, after which the model is trained using the training set as input. The trained model is then tested with the test set; when the test meets the set accuracy, the model has been trained successfully, and the successfully trained model is used as the deep learning tray target detection model. The specific type of the deep learning tray target detection model is not limited in the embodiments of the present invention and may be selected based on specific business requirements; illustratively, the detection model is a Yolov3 model. The two-dimensional images are images of the vertical cylindrical surface of the tray, the vertical cylindrical surface being the surface of the tray that a forklift can fork. As shown in fig. 2, 20 is a tray, and the shaded surface a is the vertical cylindrical surface of the tray.
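Training itself is framework-specific, but the 7:3 dataset split step can be sketched framework-independently (function name and seed handling are illustrative, not from the patent):

```python
import random

def split_dataset(samples, train_ratio=0.7, seed=0):
    """Shuffle the annotated images and split them into a training set
    and a test set at the set proportion (7:3 in the embodiment)."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]
```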
The pixel position is the position of the tray in the two-dimensional image, represented by the following parameters: the pixel point coordinate, the pixel width, and the pixel height. The pixel point coordinate is the coordinate in the two-dimensional image of a target point on the tray; the pixel width is the width of the tray in the two-dimensional image determined with the pixel point coordinate as reference, and the pixel height is the height of the tray in the two-dimensional image determined with the pixel point coordinate as reference. The target point may be determined based on the service requirement and may be the upper left corner, lower left corner, upper right corner, or lower right corner of the tray.
Illustratively, with the pixel position defined by the upper left corner of the tray, the pixel position is (X, Y, H, W): the pixel point coordinate (X, Y) of the upper left corner of the tray, and the pixel width H and pixel height W of the tray in the two-dimensional image.
103. And selecting the point cloud corresponding to the pixel position in the point cloud data as the tray point cloud of interest.
The method for selecting the point cloud of the tray of interest comprises the following two methods:
firstly, all point clouds corresponding to pixel positions in the point cloud data are selected as tray point clouds of interest.
Secondly, in order to ensure the accuracy of tray positioning, the specific method for selecting the tray point cloud of interest comprises the following steps one and two:
calibrating the pixel position to obtain a target pixel position;
when the two-dimensional image is collected, the two-dimensional image may be affected by some environmental factors such as illumination and shielding, so that a certain error exists between the pixel position of the tray in the two-dimensional image and the real position of the tray. Therefore, in order to ensure the accuracy of tray positioning, the pixel positions need to be calibrated.
The specific method for calibrating the pixel position is as follows: the pixel position is calibrated using the calibration values corresponding respectively to the pixel point coordinate, the pixel width and the pixel height associated with the pixel position, obtaining the target pixel position. Each calibration value is used to increase or decrease the corresponding existing value.
Illustratively, the pixel position is defined by the upper left corner of the tray as (X, Y, H, W). Because the point cloud data is stored row by row, the index number of the point corresponding to the tray's upper-left pixel in the three-dimensional point cloud storage is computed by the formula i = a × Y + X, where a is determined by the camera resolution; for example, a Basler ToF camera provides a resolution of 640 × 480 pixels, so a = 640. Since the pixel position obtained by target detection contains an error, the starting point of the upper left corner of the tray is set to i = 640 × (Y − 10) + (X − 10), and the parameters corresponding to the target pixel position are therefore set to (X − 10, Y − 10, H + 20, W + 20), where the pixel width and pixel height of the tray target's pixel position become H + 20 and W + 20. The target pixel position is the selection range of the tray point cloud of interest.
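As a check on the arithmetic in this example, the row-major index formula and the fixed 10-pixel margin can be written out directly (names are illustrative; the width 640 follows the Basler ToF example):

```python
WIDTH, HEIGHT = 640, 480  # resolution from the Basler ToF example

def point_index(x, y, width=WIDTH):
    """Flat index of pixel (x, y) in row-by-row organized point cloud
    storage: i = a * y + x, with a = image width."""
    return width * y + x

def expand_bbox(x, y, h, w, margin=10):
    """Calibrated target pixel position (X-10, Y-10, H+20, W+20):
    shift the corner by the margin and grow each extent by twice it."""
    return x - margin, y - margin, h + 2 * margin, w + 2 * margin
```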
And secondly, selecting the point cloud corresponding to the target pixel position in the point cloud data as the tray point cloud of interest.
When selecting, the point cloud corresponding to the target pixel position in the point cloud data is first determined as the point cloud of interest, and voxel filtering is then applied to the point cloud of interest to obtain the tray point cloud of interest. Voxel filtering is mainly used to reduce the number of points; it reduces the number of points in the tray point cloud of interest, thereby reducing the amount of computation for tray positioning and improving its computational efficiency.
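The patent does not fix a particular voxel-filter implementation; a minimal numpy voxel-grid downsampling sketch, keeping the centroid of each occupied voxel, could look like this:

```python
import numpy as np

def voxel_filter(cloud, voxel_size):
    """Voxel-grid downsampling: bucket points into cubic voxels of edge
    voxel_size and keep one representative (the centroid) per occupied
    voxel, reducing the number of points in the cloud."""
    keys = np.floor(cloud / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, cloud)   # accumulate points per voxel
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]     # per-voxel centroid
```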
104. And carrying out registration processing on the tray point cloud of interest and the tray template point cloud to obtain an accurate tray point cloud.
In order to improve the accuracy of tray positioning, the tray point cloud of interest and the tray template point cloud need to be registered to obtain the accurate tray point cloud. The method for acquiring the accurate tray point cloud comprises the following steps:
step one, determining a feature descriptor of the pallet template point cloud and a feature descriptor of the interesting pallet point cloud.
Registration requires the tray template point cloud, which is a pre-made tray point cloud template that can be used directly. The process of making the tray template point cloud is as follows: point cloud data of a tray is collected, and the tray's vertical cylindrical surface is extracted, for example the template point cloud corresponding to the vertical cylindrical surface a of the tray in fig. 2. To reduce the number of points in the template point cloud, voxel filtering may of course be applied to the extracted point cloud of the vertical cylindrical surface. The vertical cylindrical surface of the tray has obvious structural characteristics, and selecting it as the template has two advantages: on the one hand, its obvious structural characteristics benefit tray feature extraction; on the other hand, the amount of point cloud data needed to represent the tray is greatly reduced, which improves computational efficiency during subsequent tray recognition. In addition, the tray's upright column surface is the surface a forklift can fork, which greatly guides the forklift's handling of the tray.
For a point cloud, a spatial relationship exists between any point and its surrounding neighborhood points; a feature descriptor describes this relationship, capturing the local surface variation based on the relationship between a point and its k-neighborhood in order to describe the geometric features of the point cloud. The feature descriptors of the tray template point cloud and of the tray point cloud of interest therefore need to be determined. The specific type of feature descriptor may be chosen based on the service requirement; illustratively, the feature descriptor used in the embodiment of the present invention is the Fast Point Feature Histogram (FPFH) descriptor.
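FPFH itself bins angular relations between pairs of oriented points (libraries such as PCL provide implementations); its geometric starting point, the per-point surface normal estimated from the k-neighborhood, can be sketched with plain numpy PCA. This is a brute-force illustration of that ingredient, not the full descriptor:

```python
import numpy as np

def estimate_normals(cloud, k=8):
    """Per-point normal estimation: for each point, take its k nearest
    neighbors and use the eigenvector of the smallest eigenvalue of the
    neighborhood covariance matrix (PCA) as the surface normal."""
    normals = np.empty_like(cloud)
    for i, p in enumerate(cloud):
        d2 = np.sum((cloud - p) ** 2, axis=1)
        nbrs = cloud[np.argsort(d2)[:k]]          # brute-force k-NN
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
        normals[i] = eigvecs[:, 0]                # smallest-variance direction
    return normals
```

On a planar patch the estimated normals align (up to sign) with the plane normal, as expected.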
And step two, taking the tray template point cloud and the corresponding feature descriptor thereof, and the interested tray point cloud and the corresponding feature descriptor thereof as the input of a first registration algorithm to obtain a transformation matrix between the tray template point cloud and the interested tray point cloud.
The specific algorithm type of the first registration algorithm may be determined based on business requirements, and this embodiment is not particularly limited. Illustratively, the first registration algorithm is the SCP (Sample Consensus Prerejective) algorithm.
The specific process by which the SCP algorithm determines the transformation matrix between the tray point cloud of interest and the tray template point cloud is as follows. In a first step, define the transformation matrix as

T = argmin ε(T) = argmin Σ ‖T(p) − q‖²,

i.e., the transformation minimizing the sum of squared distances between each point p of the tray template point cloud P and its corresponding point q in the tray point cloud of interest Q. In a second step, draw n ≥ 3 random sample points from the tray template point cloud P and find their corresponding points in the tray point cloud of interest Q by nearest-neighbor matching. In a third step, estimate a hypothetical transformation T from the n sampled correspondences and apply it to the tray template point cloud P. In a fourth step, find inliers between the transformed template point cloud and the tray point cloud of interest Q by spatial nearest-neighbor search, judged by a Euclidean distance threshold; if the number of inliers falls below a set threshold, return to the second step. In a fifth step, re-estimate the hypothetical transformation from the inlier correspondences. In a sixth step, substitute the inliers into the formula above to compute ε(T); if this value is the smallest obtained so far, record the current T as the transformation matrix between the tray template point cloud and the tray point cloud of interest.
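The transform-estimation step inside the sample-consensus loop — fitting a rigid transformation to n sampled correspondences — is commonly solved in closed form by an SVD-based (Kabsch) least-squares fit. The patent does not name the estimator, so this NumPy sketch is one plausible choice:

```python
import numpy as np

def rigid_transform(P: np.ndarray, Q: np.ndarray):
    """Closed-form least-squares R, t with R @ p + t ≈ q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Recover a known 30° rotation about Z plus a translation from 20 sampled pairs.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
P = np.random.default_rng(0).random((20, 3))
Q = P @ R_true.T + t_true                          # q_i = R_true p_i + t_true
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In the full sample-consensus loop, this estimator is run on each random sample and the result with the most inliers (and smallest ε(T)) is kept.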
And step three, taking the tray template point cloud, the interested tray point cloud and the transformation matrix as the input of a second registration algorithm to obtain the accurate tray point cloud.
The specific algorithm type of the second registration algorithm may be determined based on the service requirement, and this embodiment is not limited in particular. Illustratively, the second registration algorithm is an ICP (iterative closest point) algorithm.
The specific process by which the ICP algorithm determines the accurate tray point cloud is as follows. In a first step, define the mean square error

ε(T) = (1/n) Σ ‖(R·Ps + t) − Pt‖²,

summed over the n corresponding point pairs, where Ps and Pt are corresponding points in the tray template point cloud and the tray point cloud of interest, and R and t denote the rotation matrix and the translation vector, respectively. In a second step, apply the initial transformation T — the transformation matrix obtained from the SCP correspondences — to the tray template point cloud to obtain a new point cloud. In a third step, for each point Ps, search the tray point cloud of interest for the nearest point Pt to form corresponding point pairs. In a fourth step, compute the value of ε(T). In a fifth step, set an error threshold and an iteration threshold; the algorithm has converged when the mean square error falls below the error threshold, and otherwise returns to the first step.
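The ICP loop described above can be sketched as follows. This is a minimal point-to-point variant with a brute-force nearest-neighbor search standing in for the spatial search a real implementation would use; names and thresholds are illustrative:

```python
import numpy as np

def icp(source, target, max_iter=50, tol=1e-8):
    """Minimal point-to-point ICP: returns R, t aligning source onto target."""
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        moved = source @ R.T + t
        # Correspondences: nearest target point for each transformed source point.
        d = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        err = np.mean(np.sum((moved - matched) ** 2, axis=1))  # mean square error
        if abs(prev_err - err) < tol:                          # converged
            break
        prev_err = err
        # Re-estimate R, t from the current correspondences (Kabsch step).
        cs, cm = source.mean(axis=0), matched.mean(axis=0)
        H = (source - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        sgn = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, sgn]) @ U.T
        t = cm - R @ cs
    return R, t

# A 3x3x3 grid shifted by a small known offset is recovered exactly.
target = np.array([[x, y, z] for x in range(3) for y in range(3) for z in range(3)],
                  dtype=float)
source = target - np.array([0.1, 0.05, 0.0])
R, t = icp(source, target)
print(np.allclose(R, np.eye(3)), np.allclose(t, [0.1, 0.05, 0.0]))  # True True
```

The coarse transform from the first registration step serves as the starting pose, which keeps ICP's local optimization from converging to a wrong alignment.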
105. Determining positioning information of the pallet based on the accurate pallet point cloud.
The positioning information of the pallet includes center coordinates and/or pose data; the two may be used together or separately.
First, the specific process of determining the center coordinates of the pallet based on the accurate pallet point cloud comprises the following steps one to three:
Step one, select from the accurate tray point cloud the maximum coordinate value and the minimum coordinate value of the tray on each coordinate axis of the camera coordinate system.
The camera coordinate system is a three-dimensional coordinate system with three coordinate axes: the X, Y, and Z coordinate axes. From the accurate tray point cloud, select the maximum and minimum coordinate values of the tray on each coordinate axis of the camera coordinate system: Xmax, Xmin, Ymax, Ymin, Zmax, Zmin.
Step two, executing the following steps for each coordinate axis: and determining the average value of the maximum coordinate value and the minimum coordinate value corresponding to the coordinate axis as the central coordinate value of the tray on the coordinate axis.
The central coordinate values of the tray on each coordinate axis are determined as:

Xcenter = (Xmax + Xmin) / 2

Ycenter = (Ymax + Ymin) / 2

Zcenter = (Zmax + Zmin) / 2
and thirdly, generating a central coordinate corresponding to the center of the tray based on the central coordinate value corresponding to each coordinate axis.
The center coordinates of the center of the tray are (Xcenter, Ycenter, Zcenter). From these center coordinates, intelligent handling equipment such as an AGV forklift can locate the center of the tray, making it convenient to determine the movement offset of the fork end.
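Steps one to three amount to taking the center of the axis-aligned bounding box of the point cloud. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def tray_center(cloud: np.ndarray) -> np.ndarray:
    """Per-axis average of max and min: the axis-aligned bounding-box center."""
    return (cloud.max(axis=0) + cloud.min(axis=0)) / 2.0

cloud = np.array([[1.0, 2.0, 3.0], [3.0, 6.0, 5.0], [2.0, 4.0, 4.0]])
print(tray_center(cloud))  # [2. 4. 4.]
```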
The method further comprises the following steps:
further, in order to determine whether the pallet is in a transportable or easily transportable state, the mass center and the center corresponding to the precise pallet point cloud, and the coordinate values on the same coordinate axis of the camera coordinate system, need to be determined. And determining the placement of the tray based on the comparison of the two coordinate values.
The centroid described here can be obtained directly from the accurate tray point cloud. The coordinate axis used may be chosen based on business requirements and, optionally, is the Y axis. Compare the Y coordinate values of the centroid and the center. If the Y coordinate of the point cloud centroid is greater than that of the center, the tray is placed forward and can be transported by intelligent handling equipment such as a forklift. If the Y coordinate of the centroid is smaller than that of the center, the tray may be placed in reverse or in some other abnormal condition that is inconvenient for the handling equipment, and a prompt should be issued for subsequent processing.
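The centroid-versus-center comparison can be sketched as follows; the Y axis and the forward-placement convention follow the description above, and the function name is illustrative:

```python
import numpy as np

def placed_forward(cloud: np.ndarray, axis: int = 1) -> bool:
    """Tray placement check: centroid vs. bounding-box center on one axis (default Y)."""
    centroid = cloud.mean(axis=0)[axis]
    center = (cloud.max(axis=0)[axis] + cloud.min(axis=0)[axis]) / 2.0
    return bool(centroid > center)   # True: forward placement per the description

# Mass concentrated toward larger Y: the centroid's Y exceeds the center's Y.
forward = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
print(placed_forward(forward))  # True
```

The centroid is density-weighted while the bounding-box center is not, which is what makes the comparison sensitive to which side of the tray carries more structure.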
Second, the specific process of determining the pose data of the pallet based on the accurate pallet point cloud comprises the following steps one and two:
step one, executing for each coordinate axis: and projecting the accurate tray point cloud on a coordinate plane corresponding to the coordinate axis, fitting the projected scattered points into a straight line, and determining the included angle between the coordinate axis and the straight line.
The coordinate axes are the X, Y, and Z coordinate axes, and the processing is the same for each; the Z coordinate axis is taken as an example. The accurate tray point cloud is projected onto the XZ plane of the camera coordinate system. Because the accurate tray point cloud corresponds to the tray vertical cylindrical surface, its projection is approximately a straight line, so the projected scattered points can be fitted to a line by least squares; the angle between this line and the Z coordinate axis is then the angle between the tray vertical cylindrical surface and the Z coordinate axis. The angles between the tray vertical cylindrical surface and the X and Y coordinate axes are determined in the same way.
And secondly, generating pose data of the tray based on the included angle corresponding to each coordinate axis.
The angles with respect to the coordinate axes reflect the position and attitude of the tray vertical cylindrical surface, so pose data for the tray is generated from the angle corresponding to each coordinate axis; the pose data represents the position and attitude of the tray. Based on the pose data, intelligent handling equipment such as a forklift can adjust the angle of its forks to carry the tray.
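The projection-and-fit step can be sketched for the Z axis as follows; the least-squares fit uses `np.polyfit`, and names are illustrative:

```python
import numpy as np

def angle_to_z_axis(cloud: np.ndarray) -> float:
    """Project onto the XZ plane, fit x = a*z + b, return the line/Z-axis angle."""
    x, z = cloud[:, 0], cloud[:, 2]
    a, _ = np.polyfit(z, x, 1)       # least-squares slope of x over z
    return float(np.arctan(abs(a)))  # radians between the fitted line and the Z axis

# A planar face tilted 10° from the Z axis is recovered from its XZ projection.
theta = np.deg2rad(10.0)
z = np.linspace(0.0, 1.0, 50)
cloud = np.stack([np.tan(theta) * z, np.zeros_like(z), z], axis=1)
print(round(np.degrees(angle_to_z_axis(cloud)), 6))  # 10.0
```

Repeating the same fit on the XY and YZ projections yields the angles relative to the X and Y axes, completing the pose data.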
According to the tray positioning method provided by the embodiment of the invention, when a tray needs to be carried, a two-dimensional image and point cloud data of the tray are acquired. The pixel position corresponding to the tray in the two-dimensional image is determined, and the point cloud corresponding to that pixel position is selected from the point cloud data as the tray point cloud of interest. The tray point cloud of interest is registered against the tray template point cloud to obtain the accurate tray point cloud, and finally the positioning information of the tray is determined from the accurate tray point cloud. The scheme thus combines two-dimensional image processing with three-dimensional point cloud processing: the two-dimensional image processing locates the tray's pixel position, that position is used to extract the tray's target points from the three-dimensional point cloud data as the tray point cloud of interest, the three-dimensional point cloud processing screens that cloud, and the screened accurate tray point cloud determines the tray positioning information. The scheme can therefore accurately extract the tray's region-of-interest point cloud, effectively reducing the number of points needed to determine the positioning information and improving both the computational efficiency and the accuracy of tray positioning.
Further, according to the above method embodiment, another embodiment of the present invention provides a tray positioning apparatus, as shown in fig. 3, the apparatus including:
an acquisition unit 31 for acquiring a two-dimensional image and point cloud data of the tray;
a first determining unit 32, configured to determine a pixel position in the two-dimensional image corresponding to the tray;
a selecting unit 33, configured to select a point cloud corresponding to the pixel position in the point cloud data as an interesting tray point cloud;
the registration unit 34 is configured to perform registration processing on the tray point cloud of interest and the tray template point cloud to obtain an accurate tray point cloud;
a second determining unit 35, configured to determine positioning information of the pallet based on the accurate pallet point cloud.
According to the tray positioning device provided by the embodiment of the invention, when a tray needs to be carried, a two-dimensional image and point cloud data of the tray are acquired. The pixel position corresponding to the tray in the two-dimensional image is determined, and the point cloud corresponding to that pixel position is selected from the point cloud data as the tray point cloud of interest. The tray point cloud of interest is registered against the tray template point cloud to obtain the accurate tray point cloud, and finally the positioning information of the tray is determined from the accurate tray point cloud. The scheme thus combines two-dimensional image processing with three-dimensional point cloud processing: the two-dimensional image processing locates the tray's pixel position, that position is used to extract the tray's target points from the three-dimensional point cloud data as the tray point cloud of interest, the three-dimensional point cloud processing screens that cloud, and the screened accurate tray point cloud determines the tray positioning information. The scheme can therefore accurately extract the tray's region-of-interest point cloud, effectively reducing the number of points needed to determine the positioning information and improving both the computational efficiency and the accuracy of tray positioning.
Optionally, as shown in fig. 4, the selecting unit 33 includes:
a calibration module 331, configured to calibrate the pixel position, and obtain a target pixel position;
a selecting module 332, configured to select a point cloud corresponding to the target pixel position in the point cloud data as the tray point cloud of interest.
Optionally, as shown in fig. 4, the calibration module 331 is specifically configured to verify the pixel position by using respective corresponding verification values of the pixel point coordinate, the pixel width, and the pixel height related to the pixel position, so as to obtain the target pixel position, where the respective corresponding verification values of the pixel point coordinate, the pixel width, and the pixel height are used to increase or decrease respective existing values thereof.
Optionally, as shown in fig. 4, the selecting module 332 is specifically configured to determine a point cloud corresponding to the target pixel position in the point cloud data as a point cloud of interest; and to perform voxel filtering processing on the point cloud of interest to obtain the tray point cloud of interest.
Optionally, as shown in fig. 4, the positioning information related to the second determining unit 35 includes the center coordinate and/or the pose data.
Optionally, as shown in fig. 4, when the positioning information is the center coordinate, the second determining unit 35 includes:
the first determining module 351 is used for selecting the maximum coordinate value and the minimum coordinate value corresponding to each coordinate axis of the tray in the camera coordinate system from the accurate tray point cloud; for each of said coordinate axes: determining the average value of the maximum coordinate value and the minimum coordinate value corresponding to the coordinate axis as the central coordinate value of the tray in the coordinate axis; and generating a central coordinate corresponding to the center of the tray based on the central coordinate value corresponding to each coordinate axis.
Optionally, as shown in fig. 4, when the positioning information is pose data, the second determining unit 35 includes:
a second determining module 352, configured to perform, for each of the coordinate axes: projecting the accurate tray point cloud on a coordinate plane corresponding to the coordinate axis, fitting scattered points obtained by projection into a straight line, and determining an included angle between the coordinate axis and the straight line; and generating pose data of the tray based on the included angle corresponding to each coordinate axis.
Optionally, as shown in fig. 4, the second determining unit 35 further includes:
a third determining module 353, configured to determine a centroid and a center corresponding to the accurate pallet point cloud, and coordinate values on the same coordinate axis of the camera coordinate system; and determining the placement condition of the tray based on the comparison result of the two coordinate values.
Optionally, as shown in fig. 4, the registration unit 34 is specifically configured to determine a feature descriptor of the pallet template point cloud and a feature descriptor of the pallet point cloud of interest; taking the tray template point cloud and the corresponding feature descriptor thereof, the interested tray point cloud and the corresponding feature descriptor thereof as the input of a first registration algorithm to obtain a transformation matrix between the tray template point cloud and the interested tray point cloud; and taking the tray template point cloud, the interested tray point cloud and the transformation matrix as the input of a second registration algorithm to obtain the accurate tray point cloud.
Optionally, as shown in fig. 4, the first determining unit 32 is specifically configured to perform tray detection on the two-dimensional image by using a deep learning tray target detection model, and obtain a pixel position corresponding to the tray.
In the tray positioning device provided in the embodiment of the present invention, for a detailed description of a method used in an operation process of each functional module, reference may be made to a detailed description of a corresponding method in the method embodiment of fig. 1, which is not described herein again.
Further, according to the above embodiment, another embodiment of the present invention further provides a computer-readable storage medium, where the storage medium includes a stored program, and when the program runs, the apparatus where the storage medium is located is controlled to execute the tray positioning method in fig. 1.
Further, according to the above embodiment, another embodiment of the present invention provides a storage management apparatus, including:
a memory for storing a program;
a processor, coupled to the memory, for executing the program to perform the tray positioning method of FIG. 1.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the method and apparatus described above are referred to one another. In addition, "first", "second", and the like in the above embodiments are for distinguishing the embodiments, and do not represent merits of the embodiments.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (7)

1. A method of positioning a pallet, the method comprising:
acquiring a two-dimensional image and point cloud data of the tray;
determining a pixel position corresponding to the tray in the two-dimensional image;
selecting point clouds corresponding to the pixel positions in the point cloud data as tray point clouds of interest;
carrying out registration processing on the tray point cloud of interest and the tray template point cloud to obtain an accurate tray point cloud; the tray template point cloud is manufactured on the basis of a tray vertical cylindrical surface;
determining positioning information of the pallet based on the precise pallet point cloud;
registering the interested tray point cloud and the tray template point cloud to obtain an accurate tray point cloud, which comprises the following steps:
determining a feature descriptor of the pallet template point cloud and a feature descriptor of the pallet point cloud of interest;
taking the tray template point cloud and the corresponding feature descriptor thereof, the interested tray point cloud and the corresponding feature descriptor thereof as the input of a first registration algorithm to obtain a transformation matrix between the tray template point cloud and the interested tray point cloud;
taking the tray template point cloud, the interested tray point cloud and the transformation matrix as the input of a second registration algorithm to obtain the accurate tray point cloud;
determining the positioning information of the pallet based on the accurate pallet point cloud if the positioning information includes a center coordinate, including:
selecting a maximum coordinate value and a minimum coordinate value corresponding to each coordinate axis of the tray in a camera coordinate system from the accurate tray point cloud;
for each of said coordinate axes: determining the average value of the maximum coordinate value and the minimum coordinate value corresponding to the coordinate axis as the central coordinate value of the tray in the coordinate axis;
generating a central coordinate corresponding to the center of the tray based on the central coordinate value corresponding to each coordinate axis;
if the positioning information includes pose data, determining the positioning information of the pallet based on the accurate pallet point cloud, including:
for each of the coordinate axes: projecting the accurate tray point cloud on a coordinate plane corresponding to the coordinate axis, fitting scattered points obtained by projection into a straight line, and determining an included angle between the coordinate axis and the straight line;
and generating pose data of the tray based on the included angle corresponding to each coordinate axis.
2. The method of claim 1, wherein selecting the point cloud of the point cloud data corresponding to the pixel location as a tray point cloud of interest comprises:
calibrating the pixel position to obtain a target pixel position;
and selecting the point cloud corresponding to the target pixel position in the point cloud data as the tray point cloud of interest.
3. The method of claim 2, wherein selecting the point cloud of the point cloud data corresponding to the target pixel location as a tray point cloud of interest comprises:
determining point clouds corresponding to the target pixel positions in the point cloud data as interest point clouds;
carrying out voxel filtering processing on the interest point cloud to obtain the interest tray point cloud;
and/or,
calibrating the pixel positions to obtain target pixel positions, including:
and verifying the pixel position by utilizing respective corresponding verification values of the pixel point coordinate, the pixel width and the pixel height related to the pixel position to obtain the target pixel position, wherein the respective corresponding verification values of the pixel point coordinate, the pixel width and the pixel height are used for increasing or reducing respective existing values.
4. The method of claim 1, further comprising:
determining a mass center and a center corresponding to the accurate tray point cloud, and coordinate values on the same coordinate axis of a camera coordinate system;
determining the placement condition of the tray based on the comparison result of the two coordinate values; the method specifically comprises the following steps:
if the coordinate value of a certain coordinate axis of the mass center of the tray point cloud is larger than the coordinate value of the center on the same coordinate axis, judging that the tray is placed in the forward direction, and carrying the tray by intelligent carrying equipment;
if the coordinate value of a certain coordinate axis of the mass center of the tray point cloud is smaller than the coordinate value of the center on the same coordinate axis, the tray is judged to be placed reversely or have other special conditions, the intelligent carrying equipment is inconvenient to carry, and a prompt needs to be sent to carry out subsequent processing.
5. A pallet positioning apparatus, said apparatus comprising:
the acquisition unit is used for acquiring a two-dimensional image and point cloud data of the tray;
the first determining unit is used for determining the pixel position corresponding to the tray in the two-dimensional image;
the selecting unit is used for selecting the point cloud corresponding to the pixel position in the point cloud data as the tray point cloud of interest;
the registration unit is used for carrying out registration processing on the tray point cloud of interest and the tray template point cloud to obtain an accurate tray point cloud; the tray template point cloud is manufactured on the basis of a tray vertical cylindrical surface;
a second determining unit for determining the positioning information of the pallet based on the accurate pallet point cloud;
the registration unit is specifically configured to determine a feature descriptor of the pallet template point cloud and a feature descriptor of the pallet point cloud of interest;
taking the tray template point cloud and the corresponding feature descriptor thereof, the interested tray point cloud and the corresponding feature descriptor thereof as the input of a first registration algorithm to obtain a transformation matrix between the tray template point cloud and the interested tray point cloud;
taking the tray template point cloud, the interested tray point cloud and the transformation matrix as the input of a second registration algorithm to obtain the accurate tray point cloud;
the second determination unit includes:
the first determining module is used for selecting the maximum coordinate value and the minimum coordinate value corresponding to each coordinate axis of the tray in a camera coordinate system from the accurate tray point cloud when the positioning information is the center coordinate; for each of said coordinate axes, performing: determining the average value of the maximum coordinate value and the minimum coordinate value corresponding to the coordinate axis as the central coordinate value of the tray in the coordinate axis; generating a central coordinate corresponding to the center of the tray based on the central coordinate value corresponding to each coordinate axis;
the second determination unit includes:
a second determining module, configured to, when the positioning information is pose data, perform, for each coordinate axis: projecting the accurate tray point cloud on a coordinate plane corresponding to the coordinate axis, fitting scattered points obtained by projection into a straight line, and determining an included angle between the coordinate axis and the straight line; and generating pose data of the tray based on the included angle corresponding to each coordinate axis.
6. A computer-readable storage medium, characterized in that the storage medium comprises a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to execute the tray positioning method according to any one of claims 1 to 4.
7. A storage management apparatus, characterized in that the storage management apparatus comprises:
a memory for storing a program;
a processor, coupled to the memory, for executing the program to perform the tray positioning method of any one of claims 1-4.
CN202210690376.4A 2021-12-27 2021-12-27 Tray positioning method and device Pending CN114897972A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210690376.4A CN114897972A (en) 2021-12-27 2021-12-27 Tray positioning method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111607658.5A CN113989366A (en) 2021-12-27 2021-12-27 Tray positioning method and device
CN202210690376.4A CN114897972A (en) 2021-12-27 2021-12-27 Tray positioning method and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202111607658.5A Division CN113989366A (en) 2021-12-27 2021-12-27 Tray positioning method and device

Publications (1)

Publication Number Publication Date
CN114897972A true CN114897972A (en) 2022-08-12

Family

ID=79734361

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210690376.4A Pending CN114897972A (en) 2021-12-27 2021-12-27 Tray positioning method and device
CN202111607658.5A Pending CN113989366A (en) 2021-12-27 2021-12-27 Tray positioning method and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111607658.5A Pending CN113989366A (en) 2021-12-27 2021-12-27 Tray positioning method and device

Country Status (1)

Country Link
CN (2) CN114897972A (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820391B (en) * 2022-06-28 2022-10-11 山东亚历山大智能科技有限公司 Point cloud processing-based storage tray detection and positioning method and system
CN116310622A (en) * 2022-12-15 2023-06-23 珠海创智科技有限公司 Method and system for accurately identifying tray based on deep learning

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778688B (en) * 2015-03-27 2018-03-13 华为技术有限公司 The method for registering and device of cloud data
CN105678847B (en) * 2016-02-27 2018-08-14 北京工业大学 Line laser is used for the small nanoscale object surface reconstruction method of SLM microscopic stereovisions
CN105976375A (en) * 2016-05-06 2016-09-28 苏州中德睿博智能科技有限公司 RGB-D-type sensor based tray identifying and positioning method
US10916053B1 (en) * 2019-11-26 2021-02-09 Sdc U.S. Smilepay Spv Systems and methods for constructing a three-dimensional model from two-dimensional images
CN108324369B (en) * 2018-02-01 2019-11-22 艾瑞迈迪医疗科技(北京)有限公司 Method for registering and Use of Neuronavigation equipment in art based on face
CN110793437A (en) * 2019-10-23 2020-02-14 珠海格力智能装备有限公司 Positioning method and device of manual operator, storage medium and electronic equipment
CN112001972A (en) * 2020-09-25 2020-11-27 劢微机器人科技(深圳)有限公司 Tray pose positioning method, device and equipment and storage medium
CN113192054B (en) * 2021-05-20 2023-04-28 清华大学天津高端装备研究院 Method and system for detecting and positioning complicated parts based on 2-3D vision fusion

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116040261A (en) * 2022-12-23 2023-05-02 青岛宝佳智能装备股份有限公司 Special tray turnover machine
CN116040261B (en) * 2022-12-23 2023-09-19 青岛宝佳智能装备股份有限公司 Special tray turnover machine

Also Published As

Publication number Publication date
CN113989366A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN114897972A (en) Tray positioning method and device
CN107063228B (en) Target attitude calculation method based on binocular vision
JP4865557B2 (en) Computer vision system for classification and spatial localization of bounded 3D objects
CN108332752B (en) Indoor robot positioning method and device
CN111123242B (en) Combined calibration method based on laser radar and camera and computer readable storage medium
CN109035207B (en) Density self-adaptive laser point cloud characteristic detection method
Sehgal et al. Real-time scale invariant 3D range point cloud registration
CN109145902B (en) Method for recognizing and positioning geometric identification by using generalized characteristics
CN114332219B (en) Tray positioning method and device based on three-dimensional point cloud processing
CN113050636A (en) Control method, system and device for autonomous tray picking of forklift
CN116128841A (en) Tray pose detection method and device, unmanned forklift and storage medium
CN115546202A (en) Tray detection and positioning method for unmanned forklift
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN114187418A (en) Loop detection method, point cloud map construction method, electronic device and storage medium
Avidar et al. Local-to-global point cloud registration using a dictionary of viewpoint descriptors
CN116894876A (en) 6-DOF positioning method based on real-time image
Zins et al. 3d-aware ellipse prediction for object-based camera pose estimation
CN116309817A (en) Tray detection and positioning method based on RGB-D camera
US11669988B1 (en) System and method for three-dimensional box segmentation and measurement
Liu et al. An improved local descriptor based object recognition in cluttered 3D point clouds
CN113313725A (en) Bung hole identification method and system for energetic material medicine barrel
CN112907666A (en) Tray pose estimation method, system and device based on RGB-D
Krueger Model based object classification and localisation in multiocular images
CN112598736A (en) Map construction based visual positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination