CN111738253A - Forklift pallet positioning method, device, equipment and readable storage medium

Forklift pallet positioning method, device, equipment and readable storage medium

Info

Publication number
CN111738253A
Authority
CN
China
Prior art keywords
forklift
image
pallet
tray
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910363982.3A
Other languages
Chinese (zh)
Other versions
CN111738253B (en)
Inventor
沈蕾
万保成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN201910363982.3A
Publication of CN111738253A
Application granted
Publication of CN111738253B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/24 - Aligning, centring, orientation detection or correction of the image
    • G06V 10/245 - Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Forklifts And Lifting Vehicles (AREA)

Abstract

The invention provides a forklift pallet positioning method, a forklift pallet positioning device, forklift pallet positioning equipment and a readable storage medium, wherein a 3D initial image containing a forklift pallet and prior position information corresponding to the forklift pallet are obtained; intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift pallet normal section, wherein the forklift pallet normal section is a forklift pallet section parallel to the end face; acquiring a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet; and acquiring the 3D positions of the jacks of the forklift tray according to the 2D positions of the jacks determined in the 2D characteristic images, so that the efficiency and the accuracy of determining the cross section of the forklift tray are improved, and the accuracy and the reliability of positioning the jacks of the forklift tray are improved.

Description

Forklift pallet positioning method, device, equipment and readable storage medium
Technical Field
The invention relates to the technical field of warehouse logistics, in particular to a forklift pallet positioning method, device, equipment and a readable storage medium.
Background
A forklift is a wheeled carrying vehicle that loads, unloads, stacks and transports goods over short distances by means of forklift pallets, and is widely used for material handling in ports, airports and warehouses. In actual use, forklift pallets are stacked on shelves; when a pallet needs to be picked up, the forklift automatically drives to a preset position in front of the shelf, inserts its forks into the jacks of the pallet and forks the pallet away. Because the placement of a forklift pallet on the shelf may deviate from the preset position, the jacks of the target pallet in the area ahead need to be located when the forklift picks up the pallet.
In a conventional forklift pallet positioning method, an RFID tag or an identification image is generally set at a preset position of a forklift pallet, so that a forklift can position the forklift pallet according to the positioning of the RFID tag or the identification image. For example, marks are attached to the edges of two sides of the end face of the forklift pallet or the center of the end face, and the forklift pallet pictures acquired by the camera are used for identifying and positioning the artificial marks on the pallet.
However, the forklift pallet may have surface wear during use, so that the RFID tag or the identification image provided on the end surface thereof may be damaged, and problems of being unrecognizable or having identification errors may occur. Therefore, the existing forklift pallet positioning method is low in reliability.
Disclosure of Invention
The embodiment of the invention provides a forklift pallet positioning method, device and equipment and a readable storage medium, so that the accuracy and reliability of positioning jacks of a forklift pallet are improved.
In a first aspect of the embodiments of the present invention, a method for positioning a pallet of a forklift is provided, including:
acquiring a 3D initial image containing a forklift tray and prior position information corresponding to the forklift tray;
intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift pallet normal section, wherein the forklift pallet normal section is a forklift pallet section parallel to the end face;
acquiring a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet;
and acquiring the 3D position of the jack of the forklift tray according to the determined 2D position of the jack in the 2D characteristic image.
In a second aspect of the embodiments of the present invention, there is provided a forklift pallet positioning device, including:
the prior module is used for acquiring a 3D initial image containing a forklift tray and prior position information corresponding to the forklift tray;
the intercepting module is used for intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift pallet normal section, wherein the forklift pallet normal section is a forklift pallet section parallel to the end face;
the transformation module is used for acquiring a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet;
and the positioning module is used for acquiring the 3D position of the jack of the forklift tray according to the determined 2D position of the jack in the 2D characteristic image.
In a third aspect of the embodiments of the present invention, an apparatus is provided, including: a memory, a processor and a computer program, the computer program being stored in the memory, the processor running the computer program to perform the forklift pallet positioning method of the first aspect of the invention and of the various possible designs of the first aspect.
In a fourth aspect of the embodiments of the present invention, a readable storage medium is provided, where a computer program is stored, and the computer program is used, when being executed by a processor, to implement the forklift pallet positioning method according to the first aspect and various possible designs of the first aspect of the present invention.
According to the forklift pallet positioning method, device and equipment and the readable storage medium, the 3D initial image containing the forklift pallet and the prior position information corresponding to the forklift pallet are obtained; intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift pallet normal section, wherein the forklift pallet normal section is a forklift pallet section parallel to the end face; acquiring a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet; and acquiring the 3D positions of the jacks of the forklift tray according to the 2D positions of the jacks determined in the 2D characteristic images, so that the efficiency and the accuracy of determining the cross section of the forklift tray are improved, and the accuracy and the reliability of positioning the jacks of the forklift tray are improved.
Drawings
Fig. 1 is a schematic flow chart of a forklift pallet positioning method according to an embodiment of the present invention;
fig. 2 is a 3D schematic view of a pallet of a forklift truck according to an embodiment of the present invention;
FIG. 3 is an example of a 3D initial image including a forklift pallet provided by an embodiment of the invention;
FIG. 4 is an example of a 3D image block including a front cross-section of a forklift pallet provided by an embodiment of the invention;
fig. 5 is an example of an alternative implementation manner of step S103 in fig. 1 according to an embodiment of the present invention
FIG. 6 is a flow chart of another forklift pallet positioning method provided by the embodiment of the invention;
FIG. 7 is an example of a 2D feature image after dilation processing according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a world coordinate system and a new coordinate system provided by an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a pallet positioning device of a forklift truck according to an embodiment of the invention;
fig. 10 is a schematic diagram of a hardware structure of an apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should be understood that in the present application, "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in the present invention, "a plurality" means two or more. "and/or" is merely an association describing an associated object, meaning that three relationships may exist, for example, and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "comprises A, B and C" and "comprises A, B, C" means that all three of A, B, C comprise, "comprises A, B or C" means that one of A, B, C comprises, "comprises A, B and/or C" means that any 1 or any 2 or 3 of A, B, C comprises.
It should be understood that in the present invention, "B corresponding to a", "a corresponds to B", or "B corresponds to a" means that B is associated with a, and B can be determined from a. Determining B from a does not mean determining B from a alone, but may be determined from a and/or other information. And the matching of A and B means that the similarity of A and B is greater than or equal to a preset threshold value.
As used herein, "if" may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
In the process of carrying goods with a forklift pallet, the forklift first travels to the front of the shelf on which the pallet is placed according to a preset travel path, and then captures images of the forklift pallet on the shelf through a camera arranged at the front end of the forklift. The position of the forklift pallet, i.e. of the jacks on the pallet, is determined by recognizing the image of the pallet. The forklift then moves its forks into the jacks according to the positioning of the jacks on the pallet, thereby inserting into and picking up the pallet. In this process, how to improve the recognition of the pallet image and the determination of the position of the pallet and of the jacks on it is one of the keys that decide whether the forklift can insert into and pick up the pallet accurately. The prior-art approach of assisting pallet positioning by attaching various identification tags suffers from low reliability, and positioning by identification tags also restricts how freely the pallets can be moved. If, instead, the jacks on the pallet are located by directly recognizing an end-face image of the pallet, distortion of the end-face image caused by differing shooting angles may lead to recognition errors, so the reliability is still not high.
In order to solve the problem of low positioning reliability of the forklift pallet in the prior art, the embodiment of the invention provides a positioning method of the forklift pallet, wherein image blocks are intercepted from a 3D initial image containing the forklift pallet according to prior position information, and a 2D image projected on a front section of the forklift pallet is obtained according to the image blocks, so that the jack positions of the pallet are identified, and the accuracy and the reliability of positioning the forklift pallet and jacks on the forklift pallet are improved.
Referring to fig. 1, which is a schematic flowchart of a forklift pallet positioning method according to an embodiment of the present invention, the execution body of the method shown in fig. 1 may be a software and/or hardware device, for example a positioning terminal provided on the forklift, or a server that exchanges data with the forklift. The method shown in fig. 1 includes steps S101 to S104, which are specifically as follows:
s101, acquiring a 3D initial image containing a forklift tray and prior position information corresponding to the forklift tray.
Specifically, the 3D initial image containing the forklift pallet may be acquired from a camera, which should be a 3D camera, for example a TOF 3D camera. The acquired 3D initial image contains a point cloud representing the 3D image of the forklift pallet, and each pixel point corresponds to a 3D coordinate. Referring to fig. 2, which is a 3D schematic view of a forklift pallet according to an embodiment of the present invention, the 3D initial image may be, for example, an image containing the content shown in fig. 2 together with other noise information. The X-axis, Y-axis and Z-axis directions shown in fig. 2 are the directions of the coordinate axes of the coordinate system of the 3D initial image.
The prior position information is, for example, a pre-specified position for the forklift pallet, such as an approximate range of the forklift pallet in the Y-axis direction and an approximate range in the Z-axis direction in fig. 2, or, for example, approximate ranges of the forklift pallet in all of the X-axis, Y-axis and Z-axis directions in fig. 2. The prior position information may be understood as an approximate range for determining the position of the forklift pallet; for example, the prior position information indicates that the forklift pallet is located between 800 and 1800 mm in the Z-axis direction and between 30 and 200 mm in the Y-axis direction.
Optionally, since the 3D camera coordinate system may have a certain offset from the world coordinate system, the camera coordinate system may be calibrated before the 3D initial image containing the forklift pallet and the prior position information corresponding to the forklift pallet are acquired. For example, a rotation matrix between the world coordinate system and the 3D camera coordinate system is obtained first, and the 3D initial image captured by the 3D camera is obtained, where the coordinates of each pixel point of the 3D initial image belong to the 3D camera coordinate system. Then, according to the rotation matrix, the coordinates of each pixel point of the 3D initial image are transformed from the 3D camera coordinate system into the world coordinate system, so that the 3D initial image in the world coordinate system is obtained. For example, the relative rotation matrix from the 3D camera coordinate system to the world coordinate system is denoted $^{W}_{C}R$, the coordinate of a point P in the 3D camera coordinate system is denoted $^{C}P$, and the coordinate of the point P in the world coordinate system is denoted $^{W}P$. Transforming the 3D camera coordinate system into the world coordinate system, the coordinate of a point P of the 3D initial image in the world coordinate system is obtained as:

$$^{W}P = {}^{W}_{C}R \cdot {}^{C}P$$
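For illustration only, the camera-to-world transform described above can be sketched in a few lines of numpy. This is a minimal sketch under the assumption that the point cloud is an (N, 3) array and that only a rotation (no translation) is needed, as in the formula above; the function name camera_to_world and the sample values are illustrative, not taken from the patent.

```python
import numpy as np

def camera_to_world(points_cam: np.ndarray, R_wc: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from the 3D camera coordinate system
    into the world coordinate system using the rotation matrix R_wc
    (world <- camera), i.e. wP = R_wc @ cP for every point."""
    assert points_cam.shape[1] == 3 and R_wc.shape == (3, 3)
    return points_cam @ R_wc.T  # row-vector form of wP = R_wc @ cP

# Hypothetical usage: an identity rotation leaves the cloud unchanged.
cloud_cam = np.array([[0.1, 0.2, 1.5], [0.0, 0.3, 1.2]])
cloud_world = camera_to_world(cloud_cam, np.eye(3))
```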
and S102, intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift pallet normal section, wherein the forklift pallet normal section is a forklift pallet section parallel to the end face.
The approximate coordinates of the pixel point cloud of the forklift pallet in the world coordinate system can be determined from the prior position information, so the 3D initial image can be intercepted accordingly: the pixel points may be intercepted in the Y and Z directions while all pixel points of the 3D initial image in the X direction are retained. In some embodiments, the pixel points may also be intercepted in all three of the X, Y and Z directions, which is not limited here. Optionally, the interception may keep a certain margin beyond the prior position information.
Referring to fig. 3, an example of a 3D initial image containing a forklift pallet according to an embodiment of the present invention, and to fig. 4, an example of a 3D image block containing a forklift pallet front section according to an embodiment of the present invention: besides the pixel points of the forklift pallet itself, the 3D initial image in fig. 3 may contain pixel points of other shelf structures. In the 3D image block containing the pallet front section obtained after the interception (fig. 4), most of these interfering pixel points have been removed and mainly the pixel points of the intercepted pallet section remain.
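As an illustration of the interception in step S102, the following minimal sketch assumes the prior position information is given as Y and Z ranges in millimetres; the concrete ranges, the margin value and the name crop_by_prior are hypothetical.

```python
import numpy as np

def crop_by_prior(cloud: np.ndarray,
                  y_range=(30.0, 200.0),
                  z_range=(800.0, 1800.0),
                  margin=50.0) -> np.ndarray:
    """Keep only the points whose Y and Z coordinates fall inside the prior
    ranges (plus a margin); the X coordinate is left unrestricted, mirroring
    the 'intercept in Y and Z, keep X' variant described above."""
    y, z = cloud[:, 1], cloud[:, 2]
    mask = ((y >= y_range[0] - margin) & (y <= y_range[1] + margin) &
            (z >= z_range[0] - margin) & (z <= z_range[1] + margin))
    return cloud[mask]
```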
S103, acquiring a 2D characteristic image of the 3D image block projected on the front section of the forklift tray.
Optionally, in order to improve the accuracy of the 2D feature image on the front cross section of the forklift pallet, the 2D feature image may be obtained by processing the 3D image block by a principal component analysis method. Specifically, fig. 5 may be seen, which is an example of an optional implementation manner of step S103 in fig. 1 provided in the embodiment of the present invention. The method shown in fig. 5 includes steps S201 to S204 as follows.
S201, acquiring a covariance matrix according to the 3D coordinates of each pixel point in the 3D image block.
Let $p_i = (x_i, y_i, z_i)$ denote any pixel point of the 3D image block in the 3D camera coordinate system, and let

$$\bar{p} = \frac{1}{k}\sum_{i=1}^{k} p_i$$

denote the mean of the pallet point cloud. The covariance matrix C is then defined as:

$$C = \frac{1}{k}\sum_{i=1}^{k} \left(p_i - \bar{p}\right)\left(p_i - \bar{p}\right)^{T}$$

where k denotes the number of pixel points in the 3D image block.
S202, obtaining the eigenvalue of the covariance matrix and the eigenvector corresponding to each eigenvalue.
Each pixel point $p_i = (x_i, y_i, z_i)$ is taken as a column vector to form the pixel matrix of the 3D image block, the covariance matrix C is computed for this pixel matrix, and the eigenvalues and eigenvectors are computed from the covariance matrix C, satisfying

$$C\,\vec{v}_j = \lambda_j\,\vec{v}_j, \qquad j \in \{0, 1, 2\}$$

where $\lambda_j$ denotes an eigenvalue of the covariance matrix and $\vec{v}_j$ the corresponding eigenvector.
S203, transforming the 3D coordinates of each pixel point in the 3D image block into a new coordinate system formed by the feature vectors, and acquiring new coordinates of each pixel point in the 3D image block.
The X axis, the Y axis and the Z axis of the new coordinate system are, in order, the first, second and third eigenvectors determined by sorting the eigenvalues from large to small, and the plane formed by the X axis and the Y axis of the new coordinate system is the plane in which the front section of the forklift pallet lies. It can be understood that, in this new coordinate system spanned by the three eigenvectors, the eigenvector with the smallest eigenvalue points along the normal of the pallet front section, and the plane formed by the other two eigenvectors is the plane in which the pallet front section lies.
And S204, projecting the new coordinates of each pixel point in the 3D image block along the Z axis of the new coordinate system to obtain a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet.
For example, the Z value of the new coordinate of each pixel point in the 3D image block is taken as 0, so that the pixel point coordinate of the front section of the forklift pallet is obtained.
In the embodiment shown in fig. 5, the eigenvalues and eigenvectors are solved from the covariance matrix of the pixel points of the 3D image block, the significance of variation along each direction is judged by the magnitude of the corresponding eigenvalue, and the two directions with the most significant variation are taken as the new X-axis and Y-axis directions, so that the plane in which the front section of the forklift pallet lies is determined with high accuracy.
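Steps S201 to S204 amount to a principal component analysis of the intercepted point cloud. The sketch below illustrates the idea with numpy; the function name pca_project is illustrative, and the eigenvector ordering follows the description above (largest eigenvalue first, the smallest-eigenvalue direction treated as the section normal).

```python
import numpy as np

def pca_project(block: np.ndarray):
    """Project an (N, 3) 3D image block onto the pallet front section.
    Returns the 2D feature coordinates, the eigenvector basis (columns
    ordered by decreasing eigenvalue) and the point-cloud mean."""
    mean = block.mean(axis=0)
    centered = block - mean
    cov = centered.T @ centered / block.shape[0]   # covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]              # largest eigenvalue first
    basis = eigvecs[:, order]                      # new X, Y, Z axes as columns
    new_coords = centered @ basis                  # coordinates in the new system
    feature_2d = new_coords[:, :2]                 # drop Z: project onto the X-Y plane
    return feature_2d, basis, mean
```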
And S104, acquiring the 3D position of the jack of the forklift tray according to the 2D position of the jack determined in the 2D characteristic image.
Specifically, the 2D positions of the insertion holes may be converted from the new coordinate system to a world coordinate system corresponding to the 3D initial image, so as to obtain the 3D positions of the insertion holes of the forklift pallet.
According to the forklift pallet positioning method provided by the embodiment, a 3D initial image containing a forklift pallet and prior position information corresponding to the forklift pallet are obtained; intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift pallet normal section, wherein the forklift pallet normal section is a forklift pallet section parallel to the end face; acquiring a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet; and acquiring the 3D positions of the jacks of the forklift tray according to the 2D positions of the jacks determined in the 2D characteristic images, so that the efficiency and the accuracy of determining the cross section of the forklift tray are improved, and the accuracy and the reliability of positioning the jacks of the forklift tray are improved.
On the basis of the above embodiment, in order to further improve the accuracy of positioning the 2D positions of the jacks, after the step S103 (obtaining the 2D feature images of the 3D image blocks projected on the front cross section of the forklift tray), a process of circularly positioning and comparing the 2D positions of the forklift tray multiple times may be further included, and the 3D position of the forklift tray obtained in the previous cycle is used as the prior position information of the next cycle. Specifically, fig. 6 is a schematic flow chart of another forklift pallet positioning method according to an embodiment of the present invention. The method shown in fig. 6 includes steps S301 to S309 as follows.
S301, acquiring a 3D initial image containing a forklift tray and prior position information corresponding to the forklift tray.
S302, intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift pallet normal section, wherein the forklift pallet normal section is a forklift pallet section parallel to the end face.
And S303, acquiring a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet.
The specific implementation manner of the steps S301 to S303 can refer to the steps S101 to S103 shown in fig. 1, and the implementation principle and the technical effect are similar, and are not described herein again.
S304, determining the 2D position of the forklift pallet according to the 2D characteristic image and a preset target pallet template.
Specifically, the 2D feature image may be matched by sliding with at least one preset tray template, so as to obtain a matching result corresponding to each tray template. And then determining the tray template with the optimal matching result as a preset target tray template. And determining the 2D position of the forklift pallet according to the position of the preset target pallet template which is most matched with the 2D characteristic image. The calculation method for sliding matching with the tray template can be implemented by the following formula:
$$R(x, y) = \sum_{x', y'} \left(T(x', y') - I(x + x',\, y + y')\right)^{2}$$

where R(x, y) denotes the matching value between the tray template T and the image area of the 2D feature image whose reference point lies at coordinates (x, y), and I denotes the part of the 2D feature image matched against T. Assuming the template image size is M × N and the size of the 2D feature image is Ms × Ns, x slides over [0, Ms - M] and y slides over [0, Ns - N]; sliding matching is carried out with the tray template T as the search window, one R(x, y) value is obtained at each sliding step, and finally the minimum of the R(x, y) values obtained for each tray template is taken as the matching result of that tray template with the 2D feature image. The matching result is, for example, a numerical value. Then, among the at least one tray template, the tray template with the smallest matching result is taken as the finally determined target tray template. For example, if the smallest matching result is Rmin(xT, yT), then the pixel point range from (xT, yT) to (xT + M, yT + N) in the 2D feature image is the 2D position of the forklift pallet obtained by the current positioning.
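A minimal sketch of the sliding matching described above, using the squared-difference score from the formula; the exhaustive loop, the assumption that both inputs are float arrays of 0s and 1s, and the name match_template are illustrative rather than the patent's exact procedure.

```python
import numpy as np

def match_template(feature: np.ndarray, template: np.ndarray):
    """Slide the tray template T over the 2D feature image I (both taken as
    2D float arrays) and return the minimum R(x, y) with its reference point."""
    Ms, Ns = feature.shape
    M, N = template.shape
    best_val, best_xy = np.inf, (0, 0)
    for x in range(Ms - M + 1):
        for y in range(Ns - N + 1):
            window = feature[x:x + M, y:y + N]
            r = np.sum((window.astype(float) - template.astype(float)) ** 2)
            if r < best_val:
                best_val, best_xy = r, (x, y)
    return best_val, best_xy

# The template with the smallest best_val over all candidate templates would
# then be taken as the target tray template.
```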
S305, determining the 3D position of the forklift tray according to the 2D position of the forklift tray.
After the 2D position of the forklift pallet is obtained, the accuracy of a single positioning may be insufficient. To improve the positioning accuracy, the 3D position of the forklift pallet is therefore determined from the current positioning, and this 3D position is then used as the prior position information for repeating the above steps to achieve further fine positioning. The 3D position of the forklift pallet is determined according to its 2D position; for example, it can be obtained by transforming the pixel point coordinates in the range from (xT, yT) to (xT + M, yT + N) in the above embodiment from the new coordinate system constructed from the eigenvectors back into the 3D camera coordinate system.
And S306, judging whether the 2D positions of the forklift pallet determined twice continuously are the same.
If not, the process proceeds to S307; if yes, the process proceeds to S308.
And S307, taking the 3D position of the forklift pallet as prior position information, and returning to execute the step S302.
And S308, determining the 2D position of the jack in the 2D characteristic image of the forklift pallet determined at the last time.
S309, acquiring the jack 3D position of the forklift tray according to the jack 2D position determined in the 2D characteristic image.
The specific implementation manner of the step S309 can refer to step S104 shown in fig. 1, and the implementation principle and the technical effect are similar, which are not described herein again.
Through multiple cycles of positioning, this embodiment further improves the accuracy of forklift pallet positioning.
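The control flow of steps S302 to S307 can be sketched as follows. The helper functions crop_by_prior, pca_project, match_template, rasterize and to_3d_prior are hypothetical stand-ins for the steps described above; the sketch only illustrates how the 3D position is fed back as the prior for the next iteration.

```python
def locate_pallet(cloud, prior, templates, max_iters=10):
    """Repeat intercept -> project -> match (S302-S304) until the 2D position
    is identical in two consecutive iterations (S306), feeding the 3D position
    back as the prior for the next iteration (S305 / S307)."""
    prev_pos_2d = None
    for _ in range(max_iters):
        block = crop_by_prior(cloud, *prior)                        # S302
        feature_2d, basis, mean = pca_project(block)                # S303
        raster = rasterize(feature_2d)                              # hypothetical rasterisation
        _, pos_2d = min((match_template(raster, t) for t in templates),
                        key=lambda res: res[0])                     # S304
        if pos_2d == prev_pos_2d:                                   # S306: converged
            return pos_2d, basis, mean
        prior = to_3d_prior(pos_2d, basis, mean)                    # S305 / S307
        prev_pos_2d = pos_2d
    return prev_pos_2d, basis, mean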
On the basis of the embodiment shown in fig. 6, before step S304 (determining the 2D position of the forklift pallet according to the 2D feature image and the preset target pallet template), expansion processing may be performed on each pixel point of the 2D feature image to obtain an expanded 2D feature image, so as to improve the density of the pixel points in the 2D feature image, reduce the influence of low resolution of the 3D camera, and thus improve the effect of target matching. Fig. 7 is a 2D feature image example of comparing before and after the dilation process according to an embodiment of the present invention.
The process of the expansion processing may be, for example:

$$A \oplus B = \{\, x \mid B_{x} \cap A \neq \varnothing \,\}$$

where A denotes the 2D feature image; $\oplus$ denotes the dilation operator; B denotes the structuring element used for the expansion processing, for example 9 pixel units arranged as a nine-square grid or 5 pixel units arranged in a cross shape; $B_{x} = \{\, x + b \mid b \in B \,\}$ denotes the set of points obtained by translating the structuring element by x; and b denotes the coordinates of each pixel unit in the structuring element B.
After expansion processing is performed on each pixel point of the 2D feature image, correspondingly, sliding matching is performed on the 2D feature image by using at least one preset tray template to obtain a matching result corresponding to each tray template, specifically, sliding matching is performed on the expanded 2D feature image by using at least one preset tray template to obtain a matching result corresponding to each tray template.
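A small sketch of the expansion processing with a 3 × 3 (nine-square) structuring element, written directly in numpy (an equivalent library routine such as scipy.ndimage.binary_dilation could be used instead); the function name dilate is illustrative.

```python
import numpy as np

def dilate(image: np.ndarray) -> np.ndarray:
    """Binary dilation of a 2D feature image with a 3 x 3 structuring element:
    a pixel is set if any of the nine shifted copies of the image is set there."""
    padded = np.pad(image.astype(bool), 1)
    out = np.zeros_like(image, dtype=bool)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out |= padded[1 + dx: 1 + dx + image.shape[0],
                          1 + dy: 1 + dy + image.shape[1]]
    return out
```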
In some embodiments, since the end face of the forklift pallet may not be placed squarely on the shelf or the ground, the forks may strike the side of the pallet and crack it. In order to improve the accuracy with which the forklift inserts into and picks up the forklift pallet, a process of acquiring the end-face deflection angle of the forklift pallet may further be included after step S203. Specifically, referring to fig. 8, which is a schematic diagram of the world coordinate system and the new coordinate system according to an embodiment of the present invention, the coordinate axes of the new coordinate system are indicated by broken lines and the coordinate axes of the world coordinate system by solid lines. When the forklift pallet is placed with a deflection to the left or right, there may be an included angle between the plane of the pallet front section (the X_new-O_new-Y_new plane of the new coordinate system) and the XOY plane of the world coordinate system (the X_w-O_w-Y_w plane). Therefore, the included angle between the X axis of the new coordinate system (the X_new axis in fig. 8) and the X axis of the world coordinate system corresponding to the 3D initial image (the X_w axis in fig. 8), i.e. the angle θ in fig. 8, may be determined as the rotation angle of the end face of the forklift pallet relative to the XOY plane of the world coordinate system. The forklift then adjusts the attitude of fork insertion according to this rotation angle.
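Under the assumption that the new X axis is the first eigenvector from the principal component analysis above, the deflection angle θ can be computed as a simple angle between vectors; the following sketch and the name end_face_rotation are illustrative.

```python
import numpy as np

def end_face_rotation(basis: np.ndarray) -> float:
    """Angle (in degrees) between the new X axis (first column of the
    eigenvector basis) and the world X axis, taken as the rotation of the
    pallet end face relative to the world XOY plane."""
    x_new = basis[:, 0] / np.linalg.norm(basis[:, 0])
    x_world = np.array([1.0, 0.0, 0.0])
    cos_theta = np.clip(np.dot(x_new, x_world), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```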
On the basis of the above embodiments, the process of obtaining the prior position information corresponding to the forklift pallet in step S101 shown in fig. 1 may be implemented in various ways; for example, conditional Euclidean clustering using normal information may be used to perform plane clustering and thereby obtain the prior position information corresponding to the forklift pallet.
Specifically, a k-d tree structure of the pixel points in the 3D initial image may be built according to the Euclidean distances between the pixel points in the 3D initial image. It can be understood that the front section of the forklift pallet in the pixel point cloud satisfies the assumption that the pallet lies almost in one plane; theoretically, the included angle between the normals of two adjacent pixel points on one plane is also small. First, a k-d tree is constructed for the input pixel point cloud P so that adjacent pixel points can subsequently be searched quickly for each pixel point (in Euclidean space, two pixel points are defined as adjacent if the Euclidean distance between them is smaller than a preset Euclidean threshold). Then, adjacent pixel points are determined for each pixel point in the k-d tree structure. Next, the normal information of each pixel point in the k-d tree structure is acquired, where the normal information indicates the normal of the local surface formed by the pixel point and its adjacent pixel points. According to the normal information of each pixel point in the k-d tree structure, the normal included angle between each pixel point and its adjacent pixel points is determined. It can be understood that the normal of each point in the pixel point cloud P is calculated and stored, and that, theoretically, two adjacent pixel points on the same plane have a small normal included angle. The pixel points in the k-d tree structure are then divided into a plurality of pixel categories according to a preset included angle threshold and the normal included angle between each pixel point and its adjacent pixel points, where each pixel category contains pixel points and adjacent pixel points whose normal included angle is smaller than the included angle threshold, and the pixel points of one pixel category lie in the same plane. Finally, the forklift-pallet-like front section is determined among the planes formed by the pixel points of the pixel categories according to preset characteristic information of the forklift pallet. Concretely, an empty classification list C (for storing the pixel categories C1, C2, …, Ci, …) and an empty queue Q (for recording which points have been processed) are initialized, and the following five-step process is performed for each point Pi in P:
step one, storing Pi into Q to show that it is processed.
Step two, if Pi does not belong to any class, a class corresponding to Pi, such as C1, is newly built in C.
Step three, searching the adjacent pixel point Pj of Pi in the point cloud (the searching method of the adjacent pixel point is to set a sphere area by taking the pixel point Pi as the center of circle and r as the radius, and take the pixel point in the sphere area as the adjacent pixel point Pj of Pi)
And step four, judging whether each adjacent pixel point Pj is processed (namely whether the adjacent pixel point Pj is in Q) or not, if the adjacent pixel point Pj is processed (namely in Q), not operating the adjacent pixel point, and continuously judging whether other adjacent pixel points are processed or not.
Step five, if the adjacent pixel point Pj is not processed (namely is not in Q), judging whether the normal included angle between Pi and Pj is smaller than a preset included angle threshold value; if the normal included angle is smaller than the preset included angle threshold value, adding Pj into the category to which Pi belongs, then storing Pj into Q (namely marked as processed), if the normal included angle is larger than or equal to the preset included angle threshold value, building a category corresponding to Pj in C, for example C2, and then storing Pj into Q (namely marked as processed).
After all the pixel points have been processed through the five steps above, a classification list C containing a plurality of categories Ci is obtained, each Ci representing a plane. Rough screening can then be performed within C using characteristic information of forklift pallets such as area, centroid and perimeter, and the forklift-pallet-like front section is determined. A forklift-pallet-like front section is understood to be a section that is close to parallel with the XOY plane of the world coordinate system.
And finally, determining prior position information corresponding to the forklift pallet according to the corresponding coordinate range of the forklift pallet type normal section in a world coordinate system. For example, if the coordinate interval of the pixel point in the Y direction in the cross section of the forklift pallet is [ Y1, Y2], the coordinate interval is used as the prior position information of the forklift pallet in the Y direction.
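The prior-extraction procedure above is essentially a normal-based region growing over a k-d tree. The sketch below illustrates it with scipy's cKDTree; the normals are assumed to be estimated beforehand as unit vectors, and the simplified labelling logic and the name cluster_by_normals are assumptions rather than the patent's exact five-step procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def cluster_by_normals(points, normals, radius=0.03, angle_thresh_deg=10.0):
    """Group points whose neighbours (within `radius`) have normals differing
    by less than the angle threshold; each resulting class approximates one
    plane in the scene (steps one to five above)."""
    tree = cKDTree(points)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(points), dtype=int)   # -1: not yet classified
    next_label = 0
    for i in range(len(points)):
        if labels[i] == -1:                     # step two: new class for Pi
            labels[i] = next_label
            next_label += 1
        for j in tree.query_ball_point(points[i], radius):   # step three
            if labels[j] != -1:                 # step four: already processed
                continue
            if abs(np.dot(normals[i], normals[j])) >= cos_thresh:   # step five
                labels[j] = labels[i]
            else:
                labels[j] = next_label
                next_label += 1
    return labels

# The class whose points best match the pallet's area / centroid / perimeter
# would then give the coordinate range used as prior position information.
```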
Fig. 9 is a schematic structural view of a pallet positioning device for a forklift truck according to an embodiment of the present invention. The forklift pallet positioning device 80 shown in fig. 9 includes:
and the prior module 81 is used for acquiring a 3D initial image containing a forklift tray and prior position information corresponding to the forklift tray.
And the intercepting module 82 is used for intercepting the 3D initial image in the position area indicated by the prior position information to obtain a 3D image block containing a forklift pallet normal section, wherein the forklift pallet normal section is a forklift pallet section parallel to the end face.
And the transformation module 83 is configured to obtain a 2D feature image of the 3D image block projected on the front cross section of the forklift pallet.
And the positioning module 84 is used for acquiring the 3D positions of the jacks of the forklift tray according to the 2D positions of the jacks determined in the 2D characteristic image.
The forklift pallet positioning device 80 of the embodiment shown in fig. 9 can be correspondingly used for executing the steps in the method embodiment shown in fig. 1, and the implementation principle and technical effect are similar, and are not described herein again.
On the basis of the above embodiment, the transformation module 83 is configured to obtain a covariance matrix according to the 3D coordinates of each pixel point in the 3D image block; obtaining eigenvalues of the covariance matrix and eigenvectors corresponding to the eigenvalues; transforming the 3D coordinates of each pixel point in the 3D image block into a new coordinate system formed by the characteristic vectors, and acquiring new coordinates of each pixel point in the 3D image block, wherein an X axis, a Y axis and a Z axis of the new coordinate system are sequentially a first characteristic vector, a second characteristic vector and a third characteristic vector which are determined by characteristic values in the characteristic vectors from large to small, and a plane formed by the X axis and the Y axis of the new coordinate system is a plane where the positive section of the forklift pallet is located; and projecting the new coordinates of each pixel point in the 3D image block along the Z axis of the new coordinate system to obtain a 2D characteristic image of the 3D image block projected on the front section of the forklift tray.
On the basis of the above embodiment, the positioning module 84 is configured to transform the 2D positions of the insertion holes from the new coordinate system to a world coordinate system corresponding to the 3D initial image, so as to obtain 3D positions of the insertion holes of the forklift pallet.
On the basis of the foregoing embodiment, the positioning module 84 is configured to, after the transformation module 83 obtains the 2D feature image of the 3D image block projected on the front section of the forklift pallet, determine the 2D position of the forklift pallet according to the 2D feature image and a preset target pallet template; determine the 3D position of the forklift pallet according to the 2D position of the forklift pallet; and take the 3D position of the forklift pallet as the prior position information and return to the interception of the 3D initial image in the position area indicated by the prior position information to obtain a 3D image block containing the front section of the forklift pallet, until the 2D positions of the forklift pallet determined twice in succession are the same, and then determine the 2D position of the jack in the 2D feature image of the forklift pallet determined last.
On the basis of the above embodiment, the positioning module 84 is configured to perform sliding matching on the 2D feature image by using at least one preset tray template, and obtain a matching result corresponding to each tray template; determining the tray template with the optimal matching result as a preset target tray template; and determining the 2D position of the forklift pallet according to the position of the preset target pallet template which is most matched with the 2D characteristic image.
On the basis of the above embodiment, the positioning module 84 is configured to perform expansion processing on each pixel point of the 2D feature image before determining the 2D position of the forklift pallet according to the 2D feature image and a preset target pallet template, so as to obtain an expanded 2D feature image.
Correspondingly, the positioning module 84 is configured to perform sliding matching on the expanded 2D feature image by using at least one preset tray template, and obtain a matching result corresponding to each tray template.
On the basis of the above embodiment, the transforming module 83 is configured to, after the 3D coordinates of each pixel in the 3D image block are transformed into a new coordinate system formed by the feature vectors to obtain new coordinates of each pixel in the 3D image block, determine an included angle between an X axis of the new coordinate system and an X axis of a world coordinate system corresponding to the 3D initial image as a rotation angle of the end surface of the forklift pallet relative to an XOY plane of the world coordinate system.
On the basis of the above embodiment, the prior module 81 is configured to obtain a k-D tree structure of the pixel points in the 3D initial image according to an euclidean distance between the pixel points in the 3D initial image; determining adjacent pixel points for each pixel point in the k-d tree structure; acquiring normal information of each pixel point in the k-d tree structure, wherein the normal information indicates a normal of a local plane formed by the pixel point and an adjacent pixel point corresponding to the pixel point; determining a normal included angle between each pixel point and an adjacent pixel point according to the normal information of each pixel point in the k-d tree structure; dividing the pixel points in the k-d tree structure into a plurality of pixel categories according to a preset included angle threshold value and the normal included angle between each pixel point and the adjacent pixel point, wherein each pixel category comprises the pixel point of which the normal included angle is smaller than the included angle threshold value and the adjacent pixel point, and the pixel point corresponding to each pixel category is the pixel point in the same plane; determining the front cross section of the forklift pallet in a plane formed by the pixel points corresponding to the pixel categories according to preset characteristic information of the forklift pallet; and determining prior position information corresponding to the forklift pallet according to the corresponding coordinate range of the forklift pallet type normal section in a world coordinate system.
On the basis of the foregoing embodiment, the prior module 81 is configured to, before the obtaining of the 3D initial image including the forklift pallet and the prior position information corresponding to the forklift pallet, obtain a rotation matrix between a world coordinate system and a 3D camera coordinate system; acquiring a 3D initial image shot by a 3D camera, wherein the coordinates of each pixel point of the 3D initial image belong to a 3D camera coordinate system; and transforming the coordinates of each pixel point of the 3D initial image from the 3D camera coordinate system to the world coordinate system according to the rotation matrix to obtain the 3D initial image in the world coordinate system.
Referring to fig. 10, which is a schematic diagram of a hardware structure of an apparatus according to an embodiment of the present invention, the apparatus 90 includes: a processor 91, memory 92 and computer programs; wherein
A memory 92 for storing the computer program, which may also be a flash memory (flash). The computer program is, for example, an application program, a functional module, or the like that implements the above method.
And a processor 91 for executing the computer program stored in the memory to implement the steps of the forklift pallet positioning method. Reference may be made in particular to the description relating to the preceding method embodiment.
Alternatively, the memory 92 may be separate or integrated with the processor 91.
When the memory 92 is a device independent of the processor 91, the apparatus may further include:
a bus 93 for connecting the memory 92 and the processor 91.
The present invention also provides a readable storage medium, in which a computer program is stored, and the computer program is used for implementing the forklift pallet positioning method provided by the above various embodiments when being executed by a processor.
The readable storage medium may be a computer storage medium or a communication medium. Communication media includes any medium that facilitates transfer of a computer program from one place to another. Computer storage media may be any available media that can be accessed by a general purpose or special purpose computer. For example, a readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Additionally, the ASIC may reside in user equipment. Of course, the processor and the readable storage medium may also reside as discrete components in a communication device. The readable storage medium may be a read-only memory (ROM), a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present invention also provides a program product comprising execution instructions stored in a readable storage medium. The at least one processor of the apparatus may read the execution instructions from the readable storage medium, and the execution of the execution instructions by the at least one processor causes the apparatus to implement the forklift pallet positioning method provided by the various embodiments described above.
In the above embodiments of the apparatus, it should be understood that the Processor may be a Central Processing Unit (CPU), other general purpose processors, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. A forklift pallet positioning method is characterized by comprising the following steps:
acquiring a 3D initial image containing a forklift tray and prior position information corresponding to the forklift tray;
intercepting the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift pallet normal section, wherein the forklift pallet normal section is a forklift pallet section parallel to the end face;
acquiring a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet;
and acquiring the 3D position of the jack of the forklift tray according to the determined 2D position of the jack in the 2D characteristic image.
2. The method of claim 1, wherein the obtaining a 2D feature image of the 3D patch projected on the forklift pallet front section comprises:
acquiring a covariance matrix according to the 3D coordinates of each pixel point in the 3D image block;
obtaining eigenvalues of the covariance matrix and eigenvectors corresponding to the eigenvalues;
transforming the 3D coordinates of each pixel point in the 3D image block into a new coordinate system formed by the characteristic vectors, and acquiring new coordinates of each pixel point in the 3D image block, wherein an X axis, a Y axis and a Z axis of the new coordinate system are sequentially a first characteristic vector, a second characteristic vector and a third characteristic vector which are determined by characteristic values in the characteristic vectors from large to small, and a plane formed by the X axis and the Y axis of the new coordinate system is a plane where the positive section of the forklift pallet is located;
and projecting the new coordinates of each pixel point in the 3D image block along the Z axis of the new coordinate system to obtain a 2D characteristic image of the 3D image block projected on the front section of the forklift tray.
3. The method of claim 2, wherein the obtaining jack 3D locations of the forklift tray from the determined jack 2D locations in the 2D feature images comprises:
and transforming the 2D position of the jack from the new coordinate system to a world coordinate system corresponding to the 3D initial image to obtain the 3D position of the jack of the forklift tray.
4. The method of any of claims 1 to 3, further comprising, after said obtaining a 2D feature image of said 3D patch projected on a frontal section of said forklift pallet:
determining the 2D position of the forklift pallet according to the 2D characteristic image and a preset target pallet template;
determining the 3D position of the forklift tray according to the 2D position of the forklift tray;
and taking the 3D position of the forklift pallet as prior position information, and returning to and executing the intercepting of the 3D initial image in the position area indicated by the prior position information to obtain a 3D image block containing the front section of the forklift pallet, until the 2D positions of the forklift pallet determined twice in succession are the same, and determining the 2D position of the jack in the 2D feature image of the forklift pallet determined last.
5. The method of claim 4, wherein determining the 2D position of the forklift pallet according to the 2D feature image and a preset target pallet template comprises:
performing sliding matching on the 2D characteristic image by using at least one preset tray template to obtain a matching result corresponding to each tray template;
determining the tray template with the optimal matching result as a preset target tray template;
and determining the 2D position of the forklift pallet according to the position of the preset target pallet template which is most matched with the 2D characteristic image.
6. The method of claim 5, further comprising, before the determining the 2D position of the forklift pallet from the 2D feature image and a preset target pallet template:
performing expansion processing on each pixel point of the 2D characteristic image to obtain an expanded 2D characteristic image;
correspondingly, the sliding matching of the 2D feature image is performed by using at least one preset tray template, and the matching result corresponding to each tray template is obtained, including:
and performing sliding matching on the expanded 2D characteristic image by using at least one preset tray template to obtain a matching result corresponding to each tray template.
7. The method according to claim 2, wherein after the transforming the 3D coordinates of each pixel in the 3D image block into a new coordinate system formed by the feature vectors to obtain new coordinates of each pixel in the 3D image block, the method further comprises:
and determining an included angle between the X axis of the new coordinate system and the X axis of the world coordinate system corresponding to the 3D initial image as a rotation angle of the end face of the forklift pallet relative to an XOY plane of the world coordinate system.
8. The method of claim 1, wherein the obtaining a priori position information corresponding to the forklift pallet comprises:
acquiring a k-d tree structure of the pixel points in the 3D initial image according to the Euclidean distance between the pixel points in the 3D initial image;
determining adjacent pixel points for each pixel point in the k-d tree structure;
acquiring normal information of each pixel point in the k-d tree structure, wherein the normal information indicates a normal of a local plane formed by the pixel point and an adjacent pixel point corresponding to the pixel point;
determining a normal included angle between each pixel point and an adjacent pixel point according to the normal information of each pixel point in the k-d tree structure;
dividing the pixel points in the k-d tree structure into a plurality of pixel categories according to a preset included angle threshold and the normal included angle between each pixel point and its adjacent pixel points, wherein each pixel category comprises pixel points whose normal included angle with their adjacent pixel points is smaller than the included angle threshold, together with those adjacent pixel points, and the pixel points corresponding to one pixel category lie in the same plane;
determining the front section of the forklift pallet from the planes formed by the pixel points corresponding to the pixel categories according to preset characteristic information of the forklift pallet;
and determining the prior position information corresponding to the forklift pallet according to the coordinate range of the front section of the forklift pallet in the world coordinate system.
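The a priori segmentation of claim 8 amounts to normal estimation over a k-d tree followed by grouping points whose normals agree; the sketch below uses SciPy's cKDTree and a simple union-find, and all thresholds and names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def segment_planes_by_normals(points, k=10, angle_threshold_deg=5.0):
    """Group point-cloud pixels into planar categories via normal agreement."""
    tree = cKDTree(points)
    _, neighbours = tree.query(points, k=k + 1)      # column 0 is the point itself
    normals = np.zeros_like(points)
    for i, idx in enumerate(neighbours):
        local = points[idx] - points[idx].mean(axis=0)
        # Right singular vector with the smallest singular value = local plane normal.
        _, _, vt = np.linalg.svd(local, full_matrices=False)
        normals[i] = vt[-1]
    parent = np.arange(len(points))                  # union-find over neighbour pairs
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    cos_thresh = np.cos(np.radians(angle_threshold_deg))
    for i, idx in enumerate(neighbours):
        for j in idx[1:]:
            if abs(np.dot(normals[i], normals[j])) >= cos_thresh:
                parent[find(i)] = find(j)
    labels = np.array([find(i) for i in range(len(points))])
    return labels   # equal labels = same pixel category (same plane)
```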
9. The method of claim 1, further comprising, prior to the obtaining the 3D initial image containing forklift pallets and the a priori positional information corresponding to the forklift pallets:
acquiring a rotation matrix between a world coordinate system and a 3D camera coordinate system;
acquiring a 3D initial image shot by a 3D camera, wherein the coordinates of each pixel point of the 3D initial image belong to a 3D camera coordinate system;
and transforming the coordinates of each pixel point of the 3D initial image from the 3D camera coordinate system to the world coordinate system according to the rotation matrix to obtain the 3D initial image in the world coordinate system.
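Claim 9's coordinate transfer is a single matrix operation once the extrinsic rotation is calibrated; a minimal NumPy sketch follows, in which the optional translation argument is an assumption beyond the rotation matrix named in the claim.

```python
import numpy as np

def camera_to_world(points_camera, rotation, translation=None):
    """Transform 3D-camera-frame pixel points into the world coordinate system."""
    points_world = points_camera @ rotation.T
    if translation is not None:
        points_world = points_world + translation
    return points_world
```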
10. A forklift pallet positioning device, comprising:
a prior module, configured to acquire a 3D initial image containing a forklift tray and prior position information corresponding to the forklift tray;
an intercepting module, configured to intercept the 3D initial image in a position area indicated by the prior position information to obtain a 3D image block containing a forklift pallet normal section, wherein the forklift pallet normal section is a forklift pallet section parallel to the end face;
a transformation module, configured to acquire a 2D characteristic image of the 3D image block projected on the front section of the forklift pallet;
and a positioning module, configured to acquire the jack 3D positions of the forklift tray according to the jack 2D positions determined in the 2D characteristic image.
11. An apparatus, comprising: a memory, a processor and a computer program, wherein the computer program is stored in the memory, and the processor runs the computer program to perform the forklift pallet positioning method of any one of claims 1 to 9.
12. A readable storage medium, in which a computer program is stored, wherein the computer program, when executed by a processor, carries out the forklift pallet positioning method according to any one of claims 1 to 9.
CN201910363982.3A 2019-04-30 2019-04-30 Fork truck tray positioning method, device, equipment and readable storage medium Active CN111738253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910363982.3A CN111738253B (en) 2019-04-30 2019-04-30 Fork truck tray positioning method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111738253A (en) 2020-10-02
CN111738253B (en) 2023-08-08

Family

ID=72645887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910363982.3A Active CN111738253B (en) 2019-04-30 2019-04-30 Fork truck tray positioning method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111738253B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5208753A (en) * 1991-03-28 1993-05-04 Acuff Dallas W Forklift alignment system
US5812395A (en) * 1994-11-16 1998-09-22 Masciangelo; Stefano Vision based forklift control system for autonomous pallet loading
CN105976375A (en) * 2016-05-06 2016-09-28 苏州中德睿博智能科技有限公司 RGB-D-type sensor based tray identifying and positioning method
CN106672859A (en) * 2017-01-05 2017-05-17 深圳市有光图像科技有限公司 Method for visually identifying tray based on forklift and forklift
CN107218927A (en) * 2017-05-16 2017-09-29 上海交通大学 A kind of cargo pallet detecting system and method based on TOF camera
CN107507167A (en) * 2017-07-25 2017-12-22 上海交通大学 A kind of cargo pallet detection method and system matched based on a cloud face profile
JP2019048696A (en) * 2017-09-11 2019-03-28 Kyb株式会社 Information processing device and information processing method
CN108502810A (en) * 2018-04-13 2018-09-07 深圳市有光图像科技有限公司 A kind of method and fork truck of fork truck identification pallet
CN109520418A (en) * 2018-11-27 2019-03-26 华南农业大学 A kind of pallet method for recognizing position and attitude based on two dimensional laser scanning instrument

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Benjamin Molter, et al.: "Real-time Pallet Localization with 3D Camera Technology for Forklifts in Logistic Environments", 2018 IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI) *
Junhao Xiao, et al.: "Pallet recognition and localization using an RGB-D camera", International Journal of Advanced Robotic Systems *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554701A (en) * 2021-07-16 2021-10-26 杭州派珞特智能技术有限公司 PDS tray intelligent identification and positioning system and working method thereof

Also Published As

Publication number Publication date
CN111738253B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN110807350B (en) System and method for scan-matching oriented visual SLAM
Aldoma et al. CAD-model recognition and 6DOF pose estimation using 3D cues
JP5705147B2 (en) Representing 3D objects or objects using descriptors
US6278798B1 (en) Image object recognition system and method
US9378431B2 (en) Method of matching image features with reference features and integrated circuit therefor
Lloyd et al. Recognition of 3D package shapes for single camera metrology
CN105046684B (en) A kind of image matching method based on polygon generalised Hough transform
US8798377B2 (en) Efficient scale-space extraction and description of interest points
EP3766644B1 (en) Workpiece picking device and workpiece picking method
US20130051658A1 (en) Method of separating object in three dimension point cloud
US20140105506A1 (en) Recognition and pose determination of 3d objects in multimodal scenes
CN105139416A (en) Object identification method based on image information and depth information
Sehgal et al. Real-time scale invariant 3D range point cloud registration
Liu et al. 6D pose estimation of occlusion-free objects for robotic Bin-Picking using PPF-MEAM with 2D images (occlusion-free PPF-MEAM)
Seib et al. Object recognition using hough-transform clustering of surf features
Holz et al. Fast edge-based detection and localization of transport boxes and pallets in rgb-d images for mobile robot bin picking
CN111108515A (en) Picture target point correcting method, device and equipment and storage medium
CN111738253B (en) Fork truck tray positioning method, device, equipment and readable storage medium
CN112465908B (en) Object positioning method, device, terminal equipment and storage medium
US11113522B2 (en) Segment-based pattern matching algorithm
CN116434219A (en) Three-dimensional target identification method based on laser radar
KR101184588B1 (en) A method and apparatus for contour-based object category recognition robust to viewpoint changes
CN111275693B (en) Counting method and counting device for objects in image and readable storage medium
CN113496142A (en) Method and device for measuring volume of logistics piece
Wu et al. Real-time robust algorithm for circle object detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant