CN111986185A - Tray detection and positioning method based on depth camera - Google Patents
- Publication number: CN111986185A (application CN202010866470.1A)
- Authority: CN (China)
- Prior art keywords: point, point cloud, tran, coordinate, formula
- Legal status: Withdrawn (status assumed by Google; not a legal conclusion)
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06F18/23 — Pattern recognition; analysing; clustering techniques
- G06T7/11 — Image analysis; segmentation; region-based segmentation
- G06T7/70 — Image analysis; determining position or orientation of objects or cameras
- G06V10/30 — Image preprocessing; noise filtering
- G06V10/44 — Local feature extraction (edges, contours, loops, corners, strokes, intersections); connectivity analysis
- G06V10/443 — Local feature extraction by matching or filtering
- G06T2207/10028 — Image acquisition modality; range image; depth image; 3D point clouds
- G06T2207/10048 — Image acquisition modality; infrared image
Abstract
The invention provides a tray detection and positioning method based on a depth camera, comprising the following steps: attaching infrared reflective markers to the shelf; detecting the markers in an infrared image acquired by the depth camera to judge whether the tray is on the shelf; and segmenting, plane-detecting, and rasterizing the point cloud containing the tray to locate it. The method adapts to both scenarios of a tray on the ground or on a shelf and achieves fast, accurate positioning.
Description
Technical Field
The invention relates to the technical field of measurement, and in particular to a tray detection and positioning method based on a depth camera.
Background
With the continuous development of modern logistics, warehousing robots play an increasingly important role in intelligent warehousing systems, and tray detection is one of their core technologies. Warehouse environments feature complex backgrounds, uneven lighting, and tray poses that are difficult to determine, so accurate and efficient tray detection is an urgent problem.
In recent years, detection research on trays has mainly used methods based on vision, on laser radar, or on a combination of both. Vision-based methods separate the tray from the image background and detect it using specific features. CUI G Z et al (Cui G Z, Lu L S, He Z D, et al. A robust autonomous mobile forklift pallet recognition [C]// 2010 2nd International Asia Conference on Informatics in Control, Automation and Robotics (CAR). IEEE Press, 2010: 286-290.) propose a visual detection method based on color and geometric features, which generates features from information such as color, edges, and corners, but is susceptible to interference from light and complex backgrounds. SYU J L et al (Syu J L, Li H T, Chiang J S, et al. A computer vision assisted system for autonomous forklift vehicles in real factory environment [J]. Multimedia Tools and Applications, 2017, 76(18): 18387-18407.) propose a robust detection method based on Haar-like features, but it requires the camera to remain parallel to the tray upright plane. VARGA R et al (Robust pallet detection for automated logistics operations [C]// International Conference on Computer Vision Theory and Applications. Rome: SciTePress, 2016: 470-477.) use a stereo-vision detection method whose LBP-feature detection algorithm, although robust to light, relies on the camera being parallel to the vertical plane of the tray upright and on the number of trays being specified. Detection methods based on laser radar are highly robust to illumination but generally expensive.
In summary, current solutions all have limitations: vision-based detection methods are susceptible to lighting and to the relative pose between the tray and the sensor, while laser-radar-based methods, although robust to illumination changes, have a limited detection range and a high price.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a tray detection and positioning method based on a depth camera. The method preprocesses the point cloud and infrared image acquired by a depth camera, performs plane segmentation on the point cloud with a region-growing algorithm, rasterizes the segmented point cloud plane, finally measures the tray pose with image processing algorithms, and presents an intuitive result on a display screen.
The technical scheme of the invention is as follows:
a tray detection and positioning method based on a depth camera is characterized by comprising the following steps:
Step 1: according to the widths of the beam and uprights of the pallet shelf, select circular infrared reflective markers 3-5 cm in diameter and attach them to the outer surfaces of the beam and uprights, at the following positions: on the outer surface of each shelf upright, 5-10 cm above the tray; at the junction of the shelf beam and the upright outer surface; at the middle of each preset tray position on the outer surface of the shelf beam; and at the junctions between adjacent trays;
Step 2: acquire a point cloud containing the target tray and an infrared image with the depth camera, denoting the point cloud as the original point cloud P0 and the infrared image as I0;
Step 2.1: binarize the infrared image I0 according to formula (1) to obtain the corresponding binary image B0:
B0(x, y) = 255 if I0(x, y) ≥ thr, otherwise B0(x, y) = 0 (1)
where thr denotes the binarization threshold, I0(x, y) denotes the gray value at pixel coordinate (x, y) of the infrared image I0, and B0(x, y) denotes the gray value at pixel coordinate (x, y) of the binary image B0;
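As a minimal sketch, the thresholding of formula (1) can be written in NumPy as follows; the image values and the function name are illustrative, and only the threshold value 200 from the embodiment is taken from the text:

```python
import numpy as np

def binarize(I0: np.ndarray, thr: int = 200) -> np.ndarray:
    """Formula (1): pixels at or above thr become 255, all others 0."""
    return np.where(I0 >= thr, 255, 0).astype(np.uint8)

# Toy 2x3 "infrared image": bright reflective-marker pixels exceed thr.
I0 = np.array([[10, 250, 199],
               [201, 0, 255]], dtype=np.uint8)
B0 = binarize(I0, thr=200)
```

Only the pixels of the reflective markers survive the threshold, which is what makes the marker contours detectable in step 2.2.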
Step 2.2: use a contour detection algorithm to detect the b contours of the binary image B0 that contain no internal contours, forming the candidate contour set L shown in formula (2);
L={Bc|c=1,2,…,b,0.5≤η(Bc)≤2} (2)
where Bc denotes the c-th contour of the binary image B0, and η(Bc) denotes the aspect ratio of the minimum circumscribed rectangle of Bc;
Step 2.3: if the number of candidate contours in the set L is 0, the infrared image I0 contains no infrared reflective marker; go to step 3. Otherwise, compute the centroid coordinate Oc of the minimum circumscribed rectangle corresponding to each contour in L according to formula (3), obtaining the centroid set OL shown in formula (4);
Oc = (xc, yc, zc) = (1/4) Σq=1..4 (xcq, ycq, zcq) (3)
OL={Oc=(xc,yc,zc)|Bc∈L,c∈{1,2,…,b}} (4)
where (xcq, ycq, zcq), q = 1, …, 4, denote the coordinates in the original point cloud P0 of the four corner points of the minimum circumscribed rectangle corresponding to the c-th contour Bc in the set L, and (xc, yc, zc) denotes the coordinate in P0 of the centroid Oc of that rectangle;
Step 2.4: remove from the original point cloud P0 every point whose x coordinate is less than xnL or greater than xxL, or whose y coordinate is less than ynL or greater than yxL, and form the remaining points into a new P0, where xnL and xxL denote the minimum and maximum x coordinates in OL, and ynL and yxL the minimum and maximum y coordinates in OL;
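The crop of step 2.4 can be sketched as below, assuming the point cloud is an (N, 3) NumPy array and the marker centroid set OL an (M, 3) array; the helper name is hypothetical:

```python
import numpy as np

def crop_to_marker_bounds(P0: np.ndarray, OL: np.ndarray) -> np.ndarray:
    """Step 2.4: keep only points whose x and y fall inside the
    bounding range of the marker centroids in OL."""
    xn, xx = OL[:, 0].min(), OL[:, 0].max()
    yn, yx = OL[:, 1].min(), OL[:, 1].max()
    keep = (P0[:, 0] >= xn) & (P0[:, 0] <= xx) & \
           (P0[:, 1] >= yn) & (P0[:, 1] <= yx)
    return P0[keep]

# Markers spanning x in [0, 2], y in [0, 2]; one stray point outside.
OL = np.array([[0.0, 0.0, 1.0], [2.0, 2.0, 1.0]])
P0 = np.array([[1.0, 1.0, 0.5], [5.0, 1.0, 0.5], [1.5, 0.5, 0.7]])
P0_cropped = crop_to_marker_bounds(P0, OL)
```

This shrinks the point cloud to the shelf region delimited by the reflective markers before the heavier segmentation of step 3.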
Step 3: use a pass-through filtering algorithm to remove the invalid regions of point cloud P0 and to remove outliers; segment the point cloud with a region-growing algorithm based on normal-vector constraints: first, with a normal estimation method based on principal component analysis, setting the seed-point neighborhood radius to 30 mm and the lower and upper limits on region-growing point counts to 50 and 2000, estimate the normal at each seed point and cluster the point cloud into n point-cluster sets, the j-th cluster point set Dj (j = 1, 2, …, n) being as shown in formula (5), each point set Dj being regarded as a plane; screen out from these the plane point clouds of the tray uprights to be processed and form the point cloud set Pplane shown in formula (6), denoting the plane corresponding to this point cloud set as α;
Dj={di||Ai|<5°,i=1,2,…,v} (5)
where di denotes the i-th seed point of point cloud P0 and Ai denotes the normal-vector angle difference between seed point di and the points in its neighborhood; in formula (6), Ajk denotes the normal-vector angle difference between plane point sets Dj and Dk, the angle between the normal vector of Dj and the positive z-axis vector of the depth camera is also constrained, and η(Dj) denotes the overall aspect ratio of plane point set Dj;
Step 4: form the point cloud Pseg from the w points of the original point cloud P0 whose x coordinate is greater than xnp and less than xxp, whose y coordinate is greater than ynp and less than yxp, and whose z coordinate is greater than znp and less than zxp, where xnp and xxp denote the minimum and maximum x coordinates in point cloud Pplane, ynp and yxp the minimum and maximum y coordinates, and znp and zxp the minimum and maximum z coordinates; project the points of Pseg onto the plane α to form the point cloud set Ppro shown in formula (7); obtain the rotated point cloud Ptran from Ppro according to formula (8), and denote the z coordinate of its points as ztran;
Ppro = { ds − Dist(α, ds)·nα | ds ∈ Pseg } (7)
(xtran, ytran, ztran)T = Rproz · (xpro, ypro, zpro)T (8)
where ds denotes any point of point cloud Pseg, nα denotes the unit normal vector of plane α, ds − Dist(α, ds)·nα is the projection of ds onto plane α, Dist(α, ds) denotes the signed distance from ds to plane α in mm, Rproz is the rotation matrix that rotates point cloud Ppro to the positive z-axis of the camera, (xpro, ypro, zpro) denotes the coordinate of a point of Ppro, and (xtran, ytran, ztran) denotes the coordinate of the corresponding point after rotation into Ptran;
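A sketch of the projection and rotation of step 4, under the assumption that plane α is given by a unit normal n and offset d with signed distance Dist(α, p) = n·p + d, and that the rotation to the camera z-axis is built with Rodrigues' formula; the helper names are illustrative, not from the patent:

```python
import numpy as np

def project_to_plane(P: np.ndarray, n: np.ndarray, d: float) -> np.ndarray:
    """Formula (7)-style projection: move each point along the unit
    normal n by its signed distance to the plane n.p + d = 0."""
    dist = P @ n + d                      # signed distances Dist(alpha, p)
    return P - np.outer(dist, n)

def rotation_to_z(n: np.ndarray) -> np.ndarray:
    """Rotation matrix R with R @ n = +z (Rodrigues' formula)."""
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    c = float(n @ z)
    if np.isclose(c, 1.0):
        return np.eye(3)
    K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

# Tilted plane x = 1 (normal along +x), projected then rotated to face +z.
n = np.array([1.0, 0.0, 0.0])
Pseg = np.array([[1.5, 2.0, 3.0], [0.5, -1.0, 4.0]])
Ppro = project_to_plane(Pseg, n, d=-1.0)   # all x coordinates become 1
Ptran = Ppro @ rotation_to_z(n).T          # plane now perpendicular to z
```

After the rotation, every point of the plane shares the same z coordinate (the ztran of the text), so the plane can be treated as a 2D image in step 5.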
Step 5: measure the actual length L0 and width We of the tray in millimeters, set the length of the grid image G to L0 and its width to We, and convert the planar point cloud Ptran into the grid image G, specifically: set the gray values of the grid-image pixel coordinates according to formula (9), then set the gray value of sparse regions of the grid image to 255 with a closing-operation algorithm, so that white regions of the grid image represent the tray and black regions represent the background or the fork holes;
G(xtran − xmin, ytran − ymin) = 255 for every point (xtran, ytran, ztran) of Ptran, all other pixels being 0 (9)
where (xG, yG) denotes a pixel coordinate of the grid image G, G(xG, yG) denotes the gray value of the pixel at (xG, yG), xmin and ymin denote the minimum x and y coordinates of the points of Ptran, and xtran and ytran denote the x and y coordinates of any point of Ptran;
Step 6: use a contour detection algorithm to detect the t contours of the grid image G that contain no internal contours and screen them to obtain the candidate contour set H shown in formula (10); obtain the two circumscribed rectangles corresponding to the tray fork-hole contours in the grid image G, forming the set Rf shown in formula (11); record the pixel coordinates of the eight corner points of the two circumscribed rectangles as (xre, yre), re = 1, 2, …, 8; the coordinate in point cloud Ptran corresponding to each corner is (xmin + xre, ymin + yre, ztran);
H={Ci|i=1,2,…,t,1≤η(Ci)≤4} (10)
Rf={RC|C∈min2(H)} (11)
where Ci denotes an arbitrary contour of the grid image G and η(Ci) denotes the aspect ratio of the circumscribed rectangle of Ci; in formula (11), min2(H) denotes the two contours of the candidate contour set H whose aspect ratios differ least from the actual aspect ratio of the tray fork holes, and RC denotes the circumscribed rectangle of contour C;
Step 7: rotate the eight corner-point coordinates (xmin + xre, ymin + yre, ztran) in point cloud Ptran back into the original point cloud P0 according to formula (12), obtaining the eight corresponding coordinates (x0i, y0i, z0i), i = 1, 2, …, 8, in P0; then compute the original-point-cloud coordinates (xcen, ycen, zcen) of the center points of the two fork holes according to formula (13); finally, compute the distance lfork between the forklift and the tray according to formula (14); the included angle between the normal vector of the tray upright plane and the positive direction of the camera is the included angle between the tray and the forklift, denoted angle and obtained from formula (15); once lfork and angle are computed, precise positioning of the tray is achieved;
(x0i, y0i, z0i)T = Rtran · (xmin + xre, ymin + yre, ztran)T, i = 1, 2, …, 8 (12)
(xcen, ycen, zcen) = (1/4) Σi=s..k (x0i, y0i, z0i) (13)
lfork=min(l(dcen1,dfork),l(dcen2,dfork)) (14)
angle = arctan( |np × nc| / (np · nc) ) (15)
where Rtran is the rotation matrix from point cloud Ptran back to point cloud P0; taking s = 1, k = 4 and s = 5, k = 8 in formula (13) gives the coordinates of the center points of the two fork holes, denoted dcen1 and dcen2 respectively; in formula (14), dfork denotes the coordinate of a pre-calibrated point on the forklift, l(dcen1, dfork) and l(dcen2, dfork) denote the distances from the two fork-hole center points to the pre-calibrated point, and min() takes the minimum of the two; np denotes the normal vector of the tray upright plane in the point cloud, nc denotes the positive camera direction vector, × denotes the vector cross product, · the vector dot product, and |·| the vector magnitude.
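The final pose computation can be sketched as follows. The fork-hole centers average four corner points as in formula (13), and lfork is the smaller of the two center-to-calibration-point distances as in formula (14); since the exact form of formula (15) is not recoverable from the text, the arctan expression below, built from the cross and dot products the text names, is one consistent choice:

```python
import numpy as np

def hole_center(corners: np.ndarray) -> np.ndarray:
    """Formula (13): center of a fork hole = mean of its 4 corner points."""
    return corners.mean(axis=0)

def fork_distance(dcen1, dcen2, dfork) -> float:
    """Formula (14): distance from the forklift's pre-calibrated point
    to the nearer fork-hole center."""
    return min(np.linalg.norm(dcen1 - dfork), np.linalg.norm(dcen2 - dfork))

def tray_angle(n_plane: np.ndarray, n_cam: np.ndarray) -> float:
    """Tray-forklift angle in degrees, from the cross and dot products
    named in formula (15)."""
    return np.degrees(np.arctan2(np.linalg.norm(np.cross(n_plane, n_cam)),
                                 float(n_plane @ n_cam)))

corners1 = np.array([[0, 0, 2], [2, 0, 2], [0, 1, 2], [2, 1, 2]], float)
corners2 = corners1 + [4, 0, 0]          # second hole shifted along x
dfork = np.array([1.0, 0.5, 0.0])        # pre-calibrated forklift point
dcen1, dcen2 = hole_center(corners1), hole_center(corners2)
lfork = fork_distance(dcen1, dcen2, dfork)
angle = tray_angle(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
```

With the camera facing the upright plane head-on, the angle is 0 and lfork is simply the depth to the nearer hole center.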
Compared with the prior art, the invention has the following beneficial effects: the method is based on a depth camera, has low cost and high algorithmic robustness, adapts to scenes with different illumination, distances, and angles, positions the tray quickly and accurately, is easy to install in practice, and requires little installation space.
Drawings
FIG. 1 is a schematic view of the positions at which the infrared reflective markers are attached, where black is the shelf, white the circular infrared reflective markers, and gray the tray;
fig. 2 shows the two scenarios of a tray placed on the ground or on a shelf: tray without a bottom (left) and with a bottom (right), where black indicates the floor or shelf;
FIG. 3 is a diagram of the overall effect of the algorithm;
FIG. 4 is the grid image of the tray after the closing operation;
FIG. 5 is the grid image corresponding to the tray, where the white dots are the positions corresponding to the fork-hole center points;
fig. 6 is a diagram of the measurement results of the point cloud fork holes.
Detailed Description
The invention is further described below with reference to the figures and examples.
The invention discloses a method for detecting a tray and realizing tray positioning based on a depth camera, which specifically comprises the following steps:
Step 1: as shown in fig. 1, according to the widths of the beam and uprights of the pallet shelf, select circular infrared reflective markers 3-5 cm in diameter and attach them to the outer surfaces of the beam and uprights, at the following positions: on the outer surface of each shelf upright, 5-10 cm above the tray; at the junction of the shelf beam and the upright outer surface; at the middle of each preset tray position on the outer surface of the shelf beam; and at the junctions between adjacent trays;
Step 2: acquire a point cloud containing the target tray and an infrared image with the depth camera, denoting the point cloud as the original point cloud P0 and the infrared image as I0;
Step 2.1: binarize the infrared image I0 according to formula (1) to obtain the corresponding binary image B0:
B0(x, y) = 255 if I0(x, y) ≥ thr, otherwise B0(x, y) = 0 (1)
where thr denotes the binarization threshold, I0(x, y) denotes the gray value at pixel coordinate (x, y) of the infrared image I0, and B0(x, y) denotes the gray value at pixel coordinate (x, y) of the binary image B0; in this embodiment, thr is set to 200;
Step 2.2: use a contour detection algorithm to detect the b contours of the binary image B0 that contain no internal contours, forming the candidate contour set L shown in formula (2);
L={Bc|c=1,2,…,b,0.5≤η(Bc)≤2} (2)
where Bc denotes the c-th contour of the binary image B0, and η(Bc) denotes the aspect ratio of the minimum circumscribed rectangle of Bc;
Step 2.3: if the number of candidate contours in the set L is 0, the infrared image I0 contains no infrared reflective marker; go to step 3. Otherwise, compute the centroid coordinate Oc of the minimum circumscribed rectangle corresponding to each contour in L according to formula (3), obtaining the centroid set OL shown in formula (4);
Oc = (xc, yc, zc) = (1/4) Σq=1..4 (xcq, ycq, zcq) (3)
OL={Oc=(xc,yc,zc)|Bc∈L,c∈{1,2,…,b}} (4)
where (xcq, ycq, zcq), q = 1, …, 4, denote the coordinates in the original point cloud P0 of the four corner points of the minimum circumscribed rectangle corresponding to the c-th contour Bc in the set L, and (xc, yc, zc) denotes the coordinate in P0 of the centroid Oc of that rectangle;
Step 2.4: remove from the original point cloud P0 every point whose x coordinate is less than xnL or greater than xxL, or whose y coordinate is less than ynL or greater than yxL, and form the remaining points into a new P0, where xnL and xxL denote the minimum and maximum x coordinates in OL, and ynL and yxL the minimum and maximum y coordinates in OL;
Step 3: use a pass-through filtering algorithm to remove the invalid regions of point cloud P0 and to remove outliers; segment the point cloud with a region-growing algorithm based on normal-vector constraints: first, with a normal estimation method based on principal component analysis, setting the seed-point neighborhood radius to 30 mm and the lower and upper limits on region-growing point counts to 50 and 2000, estimate the normal at each seed point and cluster the point cloud into n point-cluster sets, the j-th cluster point set Dj (j = 1, 2, …, n) being as shown in formula (5), each point set Dj being regarded as a plane; screen out from these the plane point clouds of the tray uprights to be processed and form the point cloud set Pplane shown in formula (6), denoting the plane corresponding to this point cloud set as α;
Dj={di||Ai|<5°,i=1,2,…,v} (5)
where di denotes the i-th seed point of point cloud P0 and Ai denotes the normal-vector angle difference between seed point di and the points in its neighborhood; in formula (6), Ajk denotes the normal-vector angle difference between plane point sets Dj and Dk, the angle between the normal vector of Dj and the positive z-axis vector of the depth camera is also constrained, and η(Dj) denotes the overall aspect ratio of plane point set Dj;
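The principal-component-analysis normal estimation used by the region-growing step can be sketched as follows: the normal of a seed point's neighborhood is the eigenvector of the neighborhood covariance matrix with the smallest eigenvalue. The function name and toy data are illustrative:

```python
import numpy as np

def pca_normal(neighborhood: np.ndarray) -> np.ndarray:
    """Estimate a surface normal as the eigenvector of the neighborhood
    covariance matrix with the smallest eigenvalue (PCA normal)."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # direction of least variance

# Points lying on the plane z = 0: the estimated normal should be +-z.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]], float)
n = pca_normal(pts)
```

Region growing then compares such normals between a seed and its neighbors (the 5° bound of formula (5)) to decide whether they lie on the same plane.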
Step 4: form the point cloud Pseg from the w points of the original point cloud P0 whose x coordinate is greater than xnp and less than xxp, whose y coordinate is greater than ynp and less than yxp, and whose z coordinate is greater than znp and less than zxp, where xnp and xxp denote the minimum and maximum x coordinates in point cloud Pplane, ynp and yxp the minimum and maximum y coordinates, and znp and zxp the minimum and maximum z coordinates; project the points of Pseg onto the plane α to form the point cloud set Ppro shown in formula (7); obtain the rotated point cloud Ptran from Ppro according to formula (8), and denote the z coordinate of its points as ztran, as shown in the lower right of fig. 3;
Ppro = { ds − Dist(α, ds)·nα | ds ∈ Pseg } (7)
(xtran, ytran, ztran)T = Rproz · (xpro, ypro, zpro)T (8)
where ds denotes any point of point cloud Pseg, nα denotes the unit normal vector of plane α, ds − Dist(α, ds)·nα is the projection of ds onto plane α, Dist(α, ds) denotes the signed distance from ds to plane α in mm, Rproz is the rotation matrix that rotates point cloud Ppro to the positive z-axis of the camera, (xpro, ypro, zpro) denotes the coordinate of a point of Ppro, and (xtran, ytran, ztran) denotes the coordinate of the corresponding point after rotation into Ptran;
Step 5: measure the actual length L0 and width We of the tray in millimeters, set the length of the grid image G to L0 and its width to We, and convert the planar point cloud Ptran into the grid image G, specifically: set the gray values of the grid-image pixel coordinates according to formula (9), then set the gray value of sparse regions of the grid image to 255 with a closing-operation algorithm, so that white regions of the grid image represent the tray and black regions represent the background or the fork holes, as shown in fig. 4;
G(xtran − xmin, ytran − ymin) = 255 for every point (xtran, ytran, ztran) of Ptran, all other pixels being 0 (9)
where (xG, yG) denotes a pixel coordinate of the grid image G, G(xG, yG) denotes the gray value of the pixel at (xG, yG), xmin and ymin denote the minimum x and y coordinates of the points of Ptran, and xtran and ytran denote the x and y coordinates of any point of Ptran;
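The rasterization of formula (9) can be sketched as below, assuming one grid cell per millimetre (implied by setting the grid size to the tray's millimetre dimensions); the closing operation that fills sparse cells afterwards is omitted, and the function name is illustrative:

```python
import numpy as np

def rasterize(Ptran: np.ndarray, length: int, width: int) -> np.ndarray:
    """Formula (9): shift x/y by their minima and mark each occupied
    1 mm grid cell white (255); empty cells stay black (0)."""
    G = np.zeros((width, length), dtype=np.uint8)
    xmin, ymin = Ptran[:, 0].min(), Ptran[:, 1].min()
    xs = np.clip((Ptran[:, 0] - xmin).astype(int), 0, length - 1)
    ys = np.clip((Ptran[:, 1] - ymin).astype(int), 0, width - 1)
    G[ys, xs] = 255
    return G

# Three plane points in millimetres -> three white cells in a 4x4 grid.
Ptran = np.array([[10.0, 5.0, 1.0], [12.0, 5.0, 1.0], [10.0, 7.0, 1.0]])
G = rasterize(Ptran, length=4, width=4)
```

Because the grid is sized to the physical tray, fork holes appear as black rectangular gaps whose contours step 6 can detect.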
Step 6: use a contour detection algorithm to detect the t contours of the grid image G that contain no internal contours and screen them to obtain the candidate contour set H shown in formula (10); obtain the two circumscribed rectangles corresponding to the tray fork-hole contours in the grid image G, forming the set Rf shown in formula (11); record the pixel coordinates of the eight corner points of the two circumscribed rectangles as (xre, yre), re = 1, 2, …, 8; the coordinate in point cloud Ptran corresponding to each corner is (xmin + xre, ymin + yre, ztran);
H={Ci|i=1,2,…,t,1≤η(Ci)≤4} (10)
Rf={RC|C∈min2(H)} (11)
where Ci denotes an arbitrary contour of the grid image G and η(Ci) denotes the aspect ratio of the circumscribed rectangle of Ci; in formula (11), min2(H) denotes the two contours of the candidate contour set H whose aspect ratios differ least from the actual aspect ratio of the tray fork holes, and RC denotes the circumscribed rectangle of contour C;
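The min2(H) selection of formula (11) can be sketched as picking the two candidates whose circumscribed-rectangle aspect ratio is closest to the fork holes' actual aspect ratio; the target ratio below is a placeholder, since the text leaves the hole dimensions to measurement:

```python
def min2(candidates, target_ratio):
    """Formula (11): return the two contours whose circumscribed-rectangle
    aspect ratio differs least from the fork holes' actual aspect ratio.
    `candidates` is a list of (name, aspect_ratio) pairs."""
    ranked = sorted(candidates, key=lambda c: abs(c[1] - target_ratio))
    return ranked[:2]

# Four candidate contours from the grid image; two match the holes best.
H = [("C1", 1.1), ("C2", 3.9), ("C3", 2.4), ("C4", 2.6)]
holes = min2(H, target_ratio=2.5)
```

The aspect-ratio bound 1 ≤ η(Ci) ≤ 4 of formula (10) pre-filters the candidates before this two-best selection.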
Step 7: rotate the eight corner-point coordinates (xmin + xre, ymin + yre, ztran) in point cloud Ptran back into the original point cloud P0 according to formula (12), obtaining the eight corresponding coordinates (x0i, y0i, z0i), i = 1, 2, …, 8, in P0; then compute the original-point-cloud coordinates (xcen, ycen, zcen) of the center points of the two fork holes according to formula (13); finally, compute the distance lfork between the forklift and the tray according to formula (14); the included angle between the normal vector of the tray upright plane and the positive direction of the camera is the included angle between the tray and the forklift, denoted angle and obtained from formula (15); once lfork and angle are computed, precise positioning of the tray is achieved, with the effect shown in figs. 5 and 6;
(x0i, y0i, z0i)T = Rtran · (xmin + xre, ymin + yre, ztran)T, i = 1, 2, …, 8 (12)
(xcen, ycen, zcen) = (1/4) Σi=s..k (x0i, y0i, z0i) (13)
lfork=min(l(dcen1,dfork),l(dcen2,dfork)) (14)
angle = arctan( |np × nc| / (np · nc) ) (15)
where Rtran is the rotation matrix from point cloud Ptran back to point cloud P0; taking s = 1, k = 4 and s = 5, k = 8 in formula (13) gives the coordinates of the center points of the two fork holes, denoted dcen1 and dcen2 respectively; in formula (14), dfork denotes the coordinate of a pre-calibrated point on the forklift, l(dcen1, dfork) and l(dcen2, dfork) denote the distances from the two fork-hole center points to the pre-calibrated point, and min() takes the minimum of the two; np denotes the normal vector of the tray upright plane in the point cloud, nc denotes the positive camera direction vector, × denotes the vector cross product, · the vector dot product, and |·| the vector magnitude.
The embodiments described in this specification merely illustrate implementations of the inventive concept; the scope of the invention should not be considered limited to the specific forms set forth in the embodiments, but extends to equivalents that may occur to those skilled in the art based on the inventive concept.
Claims (3)
1. A tray detection and positioning method based on a depth camera is characterized by comprising the following steps:
Step 1: according to the widths of the beam and uprights of the pallet shelf, select circular infrared reflective markers 3-5 cm in diameter and attach them to the outer surfaces of the beam and uprights;
Step 2: acquire a point cloud containing the target tray and an infrared image with the depth camera, denoting the point cloud as the original point cloud P0 and the infrared image as I0, and preprocess the point cloud and the infrared image;
Step 3: use a pass-through filtering algorithm to remove the invalid regions of point cloud P0 and to remove outliers; segment the point cloud with a region-growing algorithm based on normal-vector constraints: first, with a normal estimation method based on principal component analysis, setting the seed-point neighborhood radius to 30 mm and the lower and upper limits on region-growing point counts to 50 and 2000, estimate the normal at each seed point and cluster the point cloud into n point-cluster sets, the j-th cluster point set Dj (j = 1, 2, …, n) being as shown in formula (5), each point set Dj being regarded as a plane; screen out from these the plane point clouds of the tray uprights to be processed and form the point cloud set Pplane shown in formula (6), denoting the plane corresponding to this point cloud set as α;
Dj={di||Ai|<5°,i=1,2,…,v} (5)
where di denotes the i-th seed point of point cloud P0 and Ai denotes the normal-vector angle difference between seed point di and the points in its neighborhood; in formula (6), Ajk denotes the normal-vector angle difference between plane point sets Dj and Dk, the angle between the normal vector of Dj and the positive z-axis vector of the depth camera is also constrained, and η(Dj) denotes the overall aspect ratio of plane point set Dj;
Step 4: form the point cloud Pseg from the w points of the original point cloud P0 whose x coordinate is greater than xnp and less than xxp, whose y coordinate is greater than ynp and less than yxp, and whose z coordinate is greater than znp and less than zxp, where xnp and xxp denote the minimum and maximum x coordinates in point cloud Pplane, ynp and yxp the minimum and maximum y coordinates, and znp and zxp the minimum and maximum z coordinates; project the points of Pseg onto the plane α to form the point cloud set Ppro shown in formula (7); obtain the rotated point cloud Ptran from Ppro according to formula (8), and denote the z coordinate of its points as ztran;
Ppro = { ds − Dist(α, ds)·nα | ds ∈ Pseg } (7)
(xtran, ytran, ztran)T = Rproz · (xpro, ypro, zpro)T (8)
where ds denotes any point of point cloud Pseg, nα denotes the unit normal vector of plane α, ds − Dist(α, ds)·nα is the projection of ds onto plane α, Dist(α, ds) denotes the signed distance from ds to plane α in mm, Rproz is the rotation matrix that rotates point cloud Ppro to the positive z-axis of the camera, (xpro, ypro, zpro) denotes the coordinate of a point of Ppro, and (xtran, ytran, ztran) denotes the coordinate of the corresponding point after rotation into Ptran;
Step 5: measure the actual length L0 and width We of the tray in millimeters, set the length of the grid image G to L0 and its width to We, and convert the planar point cloud Ptran into the grid image G, specifically: set the gray values of the grid-image pixel coordinates according to formula (9), then set the gray value of sparse regions of the grid image to 255 with a closing-operation algorithm, so that white regions of the grid image represent the tray and black regions represent the background or the fork holes;
G(xtran − xmin, ytran − ymin) = 255 for every point (xtran, ytran, ztran) of Ptran, all other pixels being 0 (9)
where (xG, yG) denotes a pixel coordinate of the grid image G, G(xG, yG) denotes the gray value of the pixel at (xG, yG), xmin and ymin denote the minimum x and y coordinates of the points of Ptran, and xtran and ytran denote the x and y coordinates of any point of Ptran;
Step 6: use a contour detection algorithm to detect the t contours of the grid image G that contain no internal contours and screen them to obtain the candidate contour set H shown in formula (10); obtain the two circumscribed rectangles corresponding to the tray fork-hole contours in the grid image G, forming the set Rf shown in formula (11); record the pixel coordinates of the eight corner points of the two circumscribed rectangles as (xre, yre), re = 1, 2, …, 8; the coordinate in point cloud Ptran corresponding to each corner is (xmin + xre, ymin + yre, ztran);
H = {C_i | i = 1, 2, …, t, 1 ≤ η(C_i) ≤ 4}   (10)
R_f = {R_C | C ∈ min2(H)}   (11)
where C_i denotes an arbitrary contour of the grid map G and η(C_i) denotes the aspect ratio of the circumscribed rectangle of the contour C_i; in formula (11), min2(H) denotes the two contours of the candidate contour set H whose aspect ratios differ least from the actual aspect ratio of the pallet fork holes, and R_C is the circumscribed rectangle of the contour C;
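The screening of formula (10) and the min2 selection of formula (11) can be sketched in plain Python, assuming each contour has already been reduced to the (width, height) of its circumscribed rectangle; the helper names are hypothetical:

```python
def min2_contours(rects, hole_aspect):
    """Sketch of step 6's screening: keep rectangles whose aspect ratio
    eta lies in [1, 4] (formula (10)) and return the two whose ratio is
    closest to the real fork-hole aspect ratio (formula (11))."""
    def eta(rect):
        w, h = rect
        return w / h
    h_set = [r for r in rects if 1.0 <= eta(r) <= 4.0]
    # min2(H): smallest absolute difference from the true aspect ratio
    return sorted(h_set, key=lambda r: abs(eta(r) - hole_aspect))[:2]
```

In practice the rectangles would come from a contour detector's bounding boxes; only the ratio test and the two-smallest selection are shown here.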
Step 7: rotate the point-cloud-P_tran coordinates (x_min + x_re, y_min + y_re, z_tran) of the eight corner points back into the original point cloud P_0 according to formula (12), obtaining the eight corresponding original-point-cloud coordinates (x_0i, y_0i, z_0i), i = 1, 2, …, 8; then calculate according to formula (13) the original-point-cloud coordinates (x_cen, y_cen, z_cen) of the center points of the two fork holes; finally calculate according to formula (14) the distance l_fork between the forklift and the pallet; the angle between the normal vector of the plane of the pallet uprights and the positive camera direction is the angle between the pallet and the forklift, denoted angle and obtained from formula (15); with l_fork and angle calculated, precise positioning of the tray is achieved;
l_fork = min(l(d_cen1, d_fork), l(d_cen2, d_fork))   (14)
where R_tran is the rotation matrix from the point cloud P_tran back to the point cloud P_0; taking s = 1, k = 4 and s = 5, k = 8 in formula (13) yields the coordinates of the center points of the two fork holes, recorded as d_cen1 and d_cen2 respectively; in formula (14), d_fork denotes the coordinates of a pre-calibrated point on the forklift, l(d_cen1, d_fork) and l(d_cen2, d_fork) are the distances from the two fork-hole center points to the pre-calibrated point, and min() takes the minimum of the two; the two vectors in formula (15) are the normal vector of the plane of the pallet uprights and the positive camera direction vector, × denotes the vector cross product, and · denotes the vector dot product.
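A sketch of the final distance and angle computation, assuming formula (15) reduces to the standard atan2(|n × c|, n · c) angle between the two vectors; the function and parameter names are illustrative:

```python
import numpy as np

def fork_distance_and_angle(d_cen1, d_cen2, d_fork, n_plane, cam_dir):
    """Sketch of step 7: l_fork is the smaller distance from the two
    fork-hole centres to the pre-calibrated forklift point (formula
    (14)); angle (degrees) is between the upright-plane normal and the
    positive camera direction (formula (15) as assumed above)."""
    l_fork = min(np.linalg.norm(d_cen1 - d_fork),
                 np.linalg.norm(d_cen2 - d_fork))
    cross = np.cross(n_plane, cam_dir)
    angle = np.degrees(np.arctan2(np.linalg.norm(cross), n_plane @ cam_dir))
    return l_fork, angle
```

Taking the minimum over the two centres makes the result robust when the forklift approaches one fork hole more directly than the other.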
2. The depth-camera-based tray detection and positioning method according to claim 1, wherein the circular infrared reflective markers are pasted at the following positions: on the outer surface of each shelf upright, 5-10 cm above the pallet; at the junction of the shelf beam and the outer surface of the upright; at the middle of each preset pallet position on the outer surface of the shelf beam; and at the junction between adjacent pallets.
3. The depth-camera-based tray detection and positioning method according to claim 1, wherein step 2 specifically comprises:
Step 2.1: process the infrared image I_0 according to formula (1) to obtain the corresponding binarized image B_0:
where thr denotes the binarization threshold, I_0(x, y) denotes the gray value at pixel coordinate (x, y) of the infrared image I_0, and B_0(x, y) denotes the gray value at pixel coordinate (x, y) of the binarized image B_0;
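Formula (1) is a plain fixed-threshold binarisation; a one-line NumPy sketch (the >= convention at exactly thr is an assumption, since only the threshold itself is fixed by the claim):

```python
import numpy as np

def binarize(i0, thr):
    """Sketch of step 2.1 / formula (1): pixels at or above thr become
    255 (reflective marker candidates), all others become 0."""
    return np.where(i0 >= thr, 255, 0).astype(np.uint8)
```

Because the markers are strongly retroreflective in the infrared image, a single global threshold is usually enough to isolate them.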
Step 2.2: detect with a contour detection algorithm the b contours of the binarized image B_0 that contain no internal contour, forming the candidate contour set L shown in formula (2);
L = {B_c | c = 1, 2, …, b, 0.5 ≤ η(B_c) ≤ 2}   (2)
where B_c denotes the c-th contour of the binarized image B_0 and η(B_c) denotes the aspect ratio of the minimum circumscribed rectangle of the contour B_c;
Step 2.3: if the number of candidate contours in the set L is 0, the infrared image I_0 contains no infrared reflective marker, and the method proceeds to step 3; otherwise, calculate according to formula (3) the centroid coordinate O_c of the minimum circumscribed rectangle of each contour in the set L, obtaining the centroid set O_L shown in formula (4);
O_L = {O_c = (x_c, y_c, z_c) | B_c ∈ L, c ∈ {1, 2, …, b}}   (4)
where the corner coordinates in formula (3) denote the coordinates in the original point cloud P_0 of the four corner points of the minimum circumscribed rectangle corresponding to the c-th contour B_c of the set L, and (x_c, y_c, z_c) denotes the coordinate in the original point cloud P_0 of the centroid of the minimum circumscribed rectangle corresponding to the contour B_c;
Step 2.4: remove from the original point cloud P_0 the points whose x coordinate is less than x_nL or greater than x_xL and whose y coordinate is less than y_nL or greater than y_xL, and form the remaining points into the new P_0, where x_nL and x_xL denote the minimum and maximum of the x coordinates in O_L, and y_nL and y_xL denote the minimum and maximum of the y coordinates in O_L.
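A sketch of step 2.4's crop, assuming the claim's and/or phrasing means keeping only the points whose x and y both lie within the bounding range of the marker centroid set O_L; the function name is hypothetical:

```python
import numpy as np

def crop_to_markers(p0, o_l):
    """Sketch of step 2.4: restrict P_0 to the x/y bounding range of the
    marker centroids O_L (rows of (x_c, y_c, z_c))."""
    x_nl, x_xl = o_l[:, 0].min(), o_l[:, 0].max()
    y_nl, y_xl = o_l[:, 1].min(), o_l[:, 1].max()
    keep = ((p0[:, 0] >= x_nl) & (p0[:, 0] <= x_xl) &
            (p0[:, 1] >= y_nl) & (p0[:, 1] <= y_xl))
    return p0[keep]
```

The effect is to discard background points far from the shelf face before the plane fitting of the later steps, shrinking the search space considerably.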
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010866470.1A CN111986185A (en) | 2020-08-25 | 2020-08-25 | Tray detection and positioning method based on depth camera |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111986185A true CN111986185A (en) | 2020-11-24 |
Family ID: 73442616
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010866470.1A Withdrawn CN111986185A (en) | 2020-08-25 | 2020-08-25 | Tray detection and positioning method based on depth camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111986185A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113267180A (en) * | 2021-06-10 | 2021-08-17 | 上海大学 | AGV forklift tray positioning and forking method based on 3D depth vision |
CN113418467A (en) * | 2021-06-16 | 2021-09-21 | 厦门硅谷动能信息技术有限公司 | Method for detecting general and black luggage size based on ToF point cloud data |
CN113724322A (en) * | 2021-07-30 | 2021-11-30 | 上海动亦科技有限公司 | Cargo pallet positioning method and system for unmanned forklift |
CN114078220A (en) * | 2022-01-19 | 2022-02-22 | 浙江光珀智能科技有限公司 | Tray identification method based on depth camera |
CN114332219A (en) * | 2021-12-27 | 2022-04-12 | 机科发展科技股份有限公司 | Tray positioning method and device based on three-dimensional point cloud processing |
CN114372993A (en) * | 2021-12-20 | 2022-04-19 | 广州市玄武无线科技股份有限公司 | Oblique-shooting shelf layered detection method and system based on image correction |
CN116040261A (en) * | 2022-12-23 | 2023-05-02 | 青岛宝佳智能装备股份有限公司 | Special tray turnover machine |
CN117649450A (en) * | 2024-01-26 | 2024-03-05 | 杭州灵西机器人智能科技有限公司 | Tray grid positioning detection method, system, device and medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20201124 |