CN110689568A - Accurate calculation method for cubic object volume based on depth image - Google Patents


Info

Publication number: CN110689568A
Application number: CN201910990406.1A
Authority: CN (China)
Prior art keywords: point, edge line, edge, points, group
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 任大明, 汪辉, 任昌, 刘晶
Current assignee: Nanjing Xinhe Electronic Technology Co Ltd
Original assignee: Nanjing Xinhe Electronic Technology Co Ltd
Application filed by Nanjing Xinhe Electronic Technology Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20164 Salient point detection; Corner detection

Abstract

The invention provides a method for accurately calculating the volume of a cubic object from a depth image. The method comprises the following steps: acquire a depth image and gradient maps of the cubic object, classify the pixel points in the depth image, and obtain an edge point map together with segmented images of the object's upper surface, left vertical surface and right vertical surface; perform edge line detection on the edge point map, cluster and merge all detected edge lines to obtain a set of collinear edge line groups, and extract and pair corner points from the longest edge line of each group; determine the upper surface and the overall contour of the cubic object; obtain the contour edge lines of the upper surface and determine the upper-surface quadrilateral from them; calculate the length and width of the cubic object from the upper-surface quadrilateral, fit the four vertical planes, accurately compute the length, width and height of the cubic object, and compute its volume. Applied in the logistics industry, the method addresses the limitations and time cost of measuring the volume of cuboid-shaped packages and similar objects, and improves package-sorting efficiency.

Description

Accurate calculation method for cubic object volume based on depth image
Technical Field
The invention relates to the field of target recognition in depth images, and in particular to a method for accurately calculating the volume of a cubic object based on a depth image.
Background
A depth image, also called a range image, is an image in which the distance from the image collector to each point in the scene is used as the pixel value; it directly reflects the geometry of the visible surfaces of a target. Depth images therefore have broad applications, especially in the logistics industry. At present, the throughput of domestic logistics, port shipping and airport freight is very high and still growing, while the measurement of the volumes of packages, goods and containers at each sorting center remains limited and time-consuming, which reduces transport efficiency.
In the prior art, packages are mostly cuboid-shaped, and their volume is measured mainly by comparing depth images of the conveyor belt before and after loading: the rectangular region containing the package is extracted to obtain its length and width, the height is obtained from the difference features, and the volume is calculated. This approach is limited to a specific scene, has a narrow application range, and cannot meet the dense package stacking found in the actual logistics industry. In addition, most prior art requires the camera to shoot objects vertically, which strongly constrains the shooting angle; essentially only a single object can be detected, detection errors for multiple objects are large, the several faces of a cubic object are difficult to detect accurately, and calculation precision suffers from external environmental noise. At present, the volume measurement of large goods and containers is also still done largely by hand.
Disclosure of Invention
To address the limitations and time cost of measuring the volume of targets such as cubic objects, packages and containers in the prior art, the invention provides a method for accurately calculating the volume of a cubic object based on a depth image.
The technical scheme of the invention is realized as follows:
A method for accurately calculating the volume of a cubic object based on a depth image, comprising:
S1, obtaining a depth image of the target and computing gradient maps in the X and Y directions of the same size as the depth image;
S2, classifying the pixel points of the gradient maps into horizontal plane, left vertical plane and right vertical plane, and obtaining the edge point map of the depth image;
S3, obtaining segmented images of the target's upper surface, left vertical surface and right vertical surface in the depth image from the classification result, the gradient maps and the edge point map;
S4, performing edge line detection on the edge point map, clustering and merging all detected edge lines, and obtaining the set of collinear edge line groups;
S5, extracting corner points from the longest edge line of each group in the collinear edge line group set;
S6, pairing the extracted corner points;
S7, determining the upper surface and the overall contour of the cubic object from the obtained quadrilaterals and the upper-surface segmented image;
S8, fitting the plane of the cubic object's upper surface in three-dimensional space, removing noise points on each plane by repeated fitting to obtain the ideal point set of the upper surface, and running contour detection on the ideal point set to obtain the contour edge lines of the upper surface;
S9, determining the upper-surface quadrilateral from the contour edge lines of the upper surface;
S10, fitting the four vertical planes of the cubic object containing the upper surface from the upper-surface quadrilateral, and calculating the length and width of the cubic object;
S11, calculating the height of the cubic object, and computing its volume from the length, width and height.
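For illustration, the gradient maps of S1 might be computed as follows. This is a sketch under an assumption: the patent does not specify a gradient operator, so simple central differences are used here.

```python
import numpy as np

def gradient_maps(depth):
    """X- and Y-direction gradient maps with the same size as the depth
    image (central differences; border rows/columns left at zero)."""
    d = np.asarray(depth, dtype=float)
    gx = np.zeros_like(d)
    gy = np.zeros_like(d)
    gx[:, 1:-1] = (d[:, 2:] - d[:, :-2]) / 2.0  # X direction (along columns)
    gy[1:-1, :] = (d[2:, :] - d[:-2, :]) / 2.0  # Y direction (along rows)
    return gx, gy
```

Any operator producing per-pixel X and Y gradients of the same size as the depth image would serve the later classification and edge-point steps.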
Preferably, in S3 the segmented images of the target's upper surface, left vertical plane and right vertical plane are obtained from the classification result, the gradient maps and the edge point map as follows:
From the classification result, obtain binary images of the horizontal plane, left vertical plane and right vertical plane with the same size as the gradient maps; define an upper-surface segmented image of the same size with all pixel values initialized to 0;
Traverse the pixel points of the first column of the horizontal-plane binary image from bottom to top and compute a reliability value for each pixel from the depth image. Given a reliability threshold, if a pixel's reliability is below the threshold, its state is unknown and the next pixel is processed; otherwise, a pixel value of 255 means the state is horizontal and a pixel value of 0 means the state is vertical. If the number of pixels with value 0 exceeds a given height threshold, map the 255-valued pixels that appear afterwards onto the upper-surface segmented image and set their values to 255, recording the row number each time a pixel's state changes. After all rows of the first column have been traversed, set to 0 the pixels of the upper-surface segmented image in that column from the last recorded row down to row 1. Traverse every column of the horizontal-plane image in the same way to obtain the initial upper-surface segmented image, then repair it and update the upper-surface segmented image;
Map the edge points of the edge point map one-to-one onto the updated upper-surface segmented image and the left and right vertical-plane binary images: set the non-edge horizontal pixels of the upper-surface segmented image to 0, and set the non-edge pixels of the left and right vertical-plane binary images to 0. This yields the final upper-surface segmented image and the left and right vertical-plane binary images, the latter two recorded as the left and right vertical-plane segmented images.
Preferably, the edge point map of the depth image is obtained as follows:
Compute the mean Y-direction gradient Yp of all pixels classified as vertical plane, the mean Y-direction gradient Yq of all pixels classified as horizontal plane, the mean X-direction gradient Xp of all pixels classified as left vertical plane, and the mean X-direction gradient Xq of all pixels classified as right vertical plane. From Yp, Yq, Xp and Xq, set a positive and a negative threshold in the Y direction and a positive and a negative threshold in the X direction. Define an image of the same size as the gradient maps with all pixel values initialized to 0, then traverse all pixels of the X- and Y-direction gradient maps. If the absolute value of a pixel's Y-direction gradient is larger than the absolute value of its X-direction gradient, and the Y-direction gradient is smaller than the negative Y-direction threshold or greater than or equal to the positive Y-direction threshold, the pixel is an edge point: map it to the defined image and set its value to 255. If the absolute value of the Y-direction gradient is smaller than the absolute value of the X-direction gradient, and the X-direction gradient is less than or equal to the negative X-direction threshold or greater than or equal to the positive X-direction threshold, the pixel is likewise an edge point and its value in the defined image is set to 255. The result is the edge point map.
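The per-pixel edge test just described can be sketched as follows. The threshold values are assumed inputs, derived elsewhere from the per-class gradient means Yp, Yq, Xp, Xq.

```python
import numpy as np

def edge_point_map(gx, gy, pos_x, neg_x, pos_y, neg_y):
    """Edge-point map (255 = edge) from X/Y gradient maps: a pixel is an
    edge point when its dominant-direction gradient exceeds that
    direction's positive threshold or falls below its negative threshold."""
    edges = np.zeros(gx.shape, dtype=np.uint8)
    y_dom = np.abs(gy) > np.abs(gx)          # Y-direction gradient dominates
    y_edge = y_dom & ((gy < neg_y) | (gy >= pos_y))
    x_dom = np.abs(gy) < np.abs(gx)          # X-direction gradient dominates
    x_edge = x_dom & ((gx <= neg_x) | (gx >= pos_x))
    edges[y_edge | x_edge] = 255
    return edges
```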
Preferably, the specific way of performing edge line detection on the edge point map, clustering and merging all detected edge lines, and obtaining the collinear edge line group set is as follows:
Perform edge line detection on the edge point map to obtain all edge lines, and sort them by length into an edge line set. Define an empty collinear edge line group set consisting of several groups of edge lines, where the edge lines within a group are collinear and edge lines from different groups are not. First add the longest edge line as one group of the collinear edge line group set. Then, for each edge line A in the edge line set, judge whether it is collinear with the longest edge line B of each group in the collinear edge line group set, checking the following conditions in order:
C1, compute the angle difference between edge line A and edge line B and judge whether it is less than or equal to a given angle-difference threshold e;
C2, if the angle difference is at most e, compute the distances T1 and T2 from the starting point of edge line B to the starting point and end point of edge line A, and the distances T3 and T4 from the end point of edge line B to the starting point and end point of edge line A; given an inter-point distance threshold G, judge whether the minimum of T1, T2, T3 and T4 is less than or equal to G;
C3, if min(min(T1, T2), min(T3, T4)) ≤ G, compute the length L of edge line A and the length H of edge line B, and judge whether the maximum of T1, T2, T3 and T4 is smaller than G + H + L;
C4, if max(max(T1, T2), max(T3, T4)) < G + H + L, compute the distances S1 and S2 from the starting point and end point of edge line B to edge line A, and the distances S3 and S4 from the starting point and end point of edge line A to edge line B, and give a distance threshold J. If the maximum of T1 and T2 is smaller than L + G, judge whether S1 ≤ J; if the maximum of T3 and T4 is smaller than L + G, judge whether S2 ≤ J; if the maximum of T1 and T3 is smaller than L + G, judge whether S3 ≤ J; if the maximum of T2 and T4 is smaller than L + G, judge whether S4 ≤ J; if any of these maxima is greater than or equal to L + G, proceed directly to the next step;
C5, if the conditions in C4 are satisfied, edge lines A and B are collinear; add edge line A to that group of the collinear edge line group set and update the group's edge lines;
If any condition fails during the judgment, move directly to the longest edge line of the next group in the collinear edge line group set, until all groups have been processed; each group of edge lines is represented by its longest edge line. If edge line A is not collinear with the longest edge line of any group, add edge line A as a new group of the collinear edge line group set and update the number of groups. Once edge lines A and B are found collinear, judge whether they satisfy the merging condition: if T1 ≤ H, T2 ≤ H, T3 ≤ H and T4 ≤ H, they do not satisfy it; otherwise merge edge lines A and B, update the starting point and end point of edge line B, and update the collinear edge line group set;
Process the next edge line in the edge line set in the same way as edge line A, continuously updating the collinear edge line group set, and finally sort the edge lines of each group by length to obtain the final collinear edge line group set.
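A simplified sketch of the collinearity test C1 to C4 follows. It is an illustration, not the patented implementation: the threshold values e, g, j are assumed, and C4 is collapsed into a single perpendicular-distance check on edge line B's endpoints.

```python
import math

def _length(s):
    (x1, y1), (x2, y2) = s
    return math.hypot(x2 - x1, y2 - y1)

def _angle_deg(s):
    (x1, y1), (x2, y2) = s
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def _pt_line_dist(p, s):
    (x1, y1), (x2, y2) = s
    num = abs((y2 - y1) * p[0] - (x2 - x1) * p[1] + x2 * y1 - y2 * x1)
    return num / max(_length(s), 1e-9)

def collinear(a, b, e=5.0, g=20.0, j=3.0):
    """True when segments a and b pass a C1-C4-style collinearity test."""
    # C1: angle difference between the two edge lines
    diff = abs(_angle_deg(a) - _angle_deg(b))
    if min(diff, 180.0 - diff) > e:
        return False
    # C2/C3: endpoint-to-endpoint distances (T1..T4)
    L, H = _length(a), _length(b)
    d = [math.hypot(pa[0] - pb[0], pa[1] - pb[1]) for pb in b for pa in a]
    if min(d) > g or max(d) >= g + H + L:
        return False
    # C4 (simplified): perpendicular distance of b's endpoints to line a
    return all(_pt_line_dist(p, a) <= j for p in b)
```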
Preferably, the corner points are extracted from each group's longest edge line of the collinear edge line group set as follows:
C1, take the longest edge line of each group of the collinear edge line group set and record these as the seed edge line set; sort all edge lines of the seed edge line set from long to short and traverse them. Starting from the longest edge line C, compute the included angle a between edge line C and any other edge line D, set a curvature threshold b, and judge whether min(a, 180 - a) > b;
C2, if min(a, 180 - a) > b, compute the intersection point W(Wx, Wy) of edge lines C and D and judge whether Wx and Wy lie within the image range;
C3, if Wx and Wy lie within the image range, judge whether the intersection point W is a corner point, as follows: first compute the distances d1 and d2 from the two endpoints of edge line C to W, and define a minimum distance min_d1, where min_d1 = 0 if W lies between the two endpoints of edge line C and min_d1 = min(d1, d2) otherwise. If min_d1 is 0 or min(d1, d2) is less than a given arm-length threshold Z1, compute the distances d3 and d4 from the two endpoints of edge line D to W, and define min_d2 analogously: min_d2 = 0 if W lies between the two endpoints of edge line D, otherwise min_d2 = min(d3, d4). Judge whether min_d2 is 0 or min(d3, d4) is less than a given arm-length threshold Z2;
C4, if the condition in C3 is met, give a quadrilateral arm-gap threshold v and judge whether max(min(d1, d2), min(d3, d4)) ≤ v; if so, the intersection point W is a corner point of the quadrilateral;
Judge conditions C1 to C4 in order; if any condition fails, process the next edge line;
Add the first corner point obtained to the corner point set. For each subsequently obtained corner point, compute the distances Li (i is a variable index) to all corner points already in the set, and judge whether the corner point is similar to any of them; if every Li is greater than the radius r, the corner point is not similar to the remaining corner points and is added to the corner point set;
The intersection point W corresponds to two arms: the first arm connects W to the far endpoint of edge line C, and the second arm connects W to the far endpoint of edge line D, where the far endpoint of edge line C is the endpoint corresponding to the larger of d1 and d2, and the far endpoint of edge line D is the endpoint corresponding to the larger of d3 and d4. Process every remaining pair of edge lines in the seed edge line set in the same way to obtain all corner points, then extend the two arms of each corner point according to the seed edge line set to obtain the final corner point set, completing corner extraction.
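Conditions C1 and C2 above can be sketched as follows; the curvature threshold b is an assumed parameter, and the arm-length tests C3 and C4 are omitted for brevity.

```python
import math

def corner_candidate(c_edge, d_edge, b=30.0):
    """Return the intersection W of the infinite lines through edge lines
    C and D when the included angle a satisfies min(a, 180 - a) > b,
    otherwise None."""
    (x1, y1), (x2, y2) = c_edge
    (x3, y3), (x4, y4) = d_edge
    a1 = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    a2 = math.degrees(math.atan2(y4 - y3, x4 - x3)) % 180.0
    a = abs(a1 - a2)
    if min(a, 180.0 - a) <= b:
        return None  # edge lines too close to parallel: no stable corner
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-9:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
```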
Preferably, the specific way of pairing the extracted corner points is as follows:
C1, pair all corner points in the corner point set pairwise. If the coordinates of the two paired corner points differ, continue to judge whether the two corner points share an arm: denote the two arms of one corner point arm11 and arm12 and those of the other arm21 and arm22, and first judge whether arm11 and arm21 are the same arm of the two corner points;
C2, arm11 and arm21 are judged to be the same arm as follows: if the distance between the two corner points is greater than one quarter of the shorter of arm11 and arm21, first compute the absolute angle difference x1 between arm11 and the line connecting the two corner points and the absolute angle difference x2 between arm21 and that line, with x1 and x2 in [0, 180]. Given an angle-similarity threshold f, if x1 and x2 are both smaller than f, then, given an inter-corner distance threshold u and a length-ratio threshold ratio, if the sum of the lengths of arm11 and arm21 is at least ratio times the distance between the two corner points and at least that distance minus u, arm11 and arm21 are the same arm and the corner points are successfully paired;
Otherwise, judge in the same way whether any other pair of arms of the two corner points is the same arm; if none is, the pairing fails, otherwise it succeeds;
Pair the remaining corner points pairwise in the same way to obtain all successfully paired corner combinations. In each combination the common arm is the shared arm and the other two arms are the opening arms. Compute the included angles u1 and u2 between each opening arm and the shared arm, with u1 and u2 in [0, 180], and remove any combination whose u1 or u2 is smaller than the curvature threshold b. Then judge whether the two opening arms of each remaining combination lie on the same side of the shared arm, remove the combinations whose opening arms do not, and determine a quadrilateral from each corner combination that remains. Process all corner combinations in the same way to obtain all quadrilaterals.
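The shared-arm test of C2 might be sketched as below. This is an illustration under assumptions: the thresholds f, u and ratio are placeholder values, and the preliminary quarter-length distance gate is omitted.

```python
import math

def same_arm(p1, a1_end, p2, a2_end, f=10.0, u=5.0, ratio=0.8):
    """True when arm (p1 -> a1_end) and arm (p2 -> a2_end) look like the
    same arm: both nearly parallel to the corner-to-corner line, and
    their combined length roughly covers that line."""
    def ang(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 180.0
    dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    line_a = ang(p1, p2)
    x1 = abs(ang(p1, a1_end) - line_a)
    x1 = min(x1, 180.0 - x1)
    x2 = abs(ang(p2, a2_end) - line_a)
    x2 = min(x2, 180.0 - x2)
    if x1 >= f or x2 >= f:
        return False  # an arm is not aligned with the corner-to-corner line
    len_sum = math.hypot(a1_end[0] - p1[0], a1_end[1] - p1[1]) + \
              math.hypot(a2_end[0] - p2[0], a2_end[1] - p2[1])
    return len_sum >= ratio * dist and len_sum >= dist - u
```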
Preferably, the contour edge lines of the upper surface are obtained in S8 as follows:
C1, traverse all detected pixel points of the upper surface of each cubic object and obtain the world coordinates of every upper-surface pixel in three-dimensional space from the transformation between the image coordinate system and the world coordinate system;
C2, traverse all world coordinate points of the current upper surface, give a point-count threshold, and judge whether the number of points exceeds it;
C3, if the number of points exceeds the threshold, compute the mean point of all the points and normalize. Since all coordinate points of the upper surface of a cubic object lie on one plane in three-dimensional space, and the mean point lies on that plane, compute the normal vector of the plane and obtain the first-pass fitted three-dimensional plane equation from the normal vector and the mean point;
C4, compute the distances from all world coordinate points of the upper surface to the first-pass fitted plane, give a distance threshold, and remove every point whose distance exceeds it to obtain an ideal point set; fit again on the ideal point set to obtain the second-pass fitted plane equation;
C5, repeat the fitting several times, after each fit computing the distance from every point of the ideal point set to the fitted plane, removing all points whose distance exceeds the given distance threshold, and updating the ideal point set;
C6, obtain the two-dimensional plane coordinates corresponding to all world coordinate points of the ideal point set of the current upper surface, run contour detection on those plane coordinates to obtain all contours, and compute the overlap area between each contour's bounding rectangle and the bounding rectangle of the current upper surface. The contour with the largest overlap area exceeding a given threshold is the final contour of the upper surface. Compute the convex hull of the upper surface from this contour, obtain the corresponding fitted polygon from the convex hull, obtain all edge lines of the fitted polygon and sort them from long to short; these are the contour edge lines of the upper surface.
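The repeated plane fitting of C3 to C5 can be sketched as follows; the distance threshold and iteration count are assumed parameters, and the plane normal is taken from a least-squares (SVD) fit.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points: (centroid, unit normal)."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[-1]  # smallest-variance direction = plane normal

def iterative_fit(points, dist_thresh=0.01, iters=3):
    """Repeatedly fit the plane and drop points farther than dist_thresh
    from it, returning the ideal point set and the final plane."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iters):
        c, n = fit_plane(pts)
        d = np.abs((pts - c) @ n)     # point-to-plane distances
        keep = d <= dist_thresh
        if keep.all():
            break
        pts = pts[keep]
    return pts, fit_plane(pts)
```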
Preferably, the specific manner of determining the quadrilateral on the upper surface according to the contour edge line of the upper surface in S9 is as follows:
Give a parallel-angle-difference threshold j and judge whether the angle difference of any two contour edge lines is smaller than j; if so, the two contour edge lines belong to the same group. Group all contour edge lines this way. If the number of groups is at least 2, compute the total contour edge line length of each group and sort the groups in descending order of total length; record the two groups with the largest totals as group 1 and group 2, where group 1 contains one set of approximately parallel sides of the upper-surface quadrilateral and group 2 contains the other. If the number of groups is smaller than 2, process the next upper surface;
Compute the distance difference of any two contour edge lines within group 1 and within group 2, give a parallel-distance-difference threshold, and divide the contour edge lines of group 1 and group 2 whose distance difference is smaller than the threshold into the same subgroup. If group 1 divides into at least 2 subgroups, compute the total edge line length of each subgroup of group 1, sort in descending order, and record the two subgroups with the largest totals as group 11 and group 12; if group 2 divides into at least 2 subgroups, do the same and record the two largest as group 21 and group 22; if group 1 or group 2 divides into fewer than 2 subgroups, process the next upper surface;
Traverse all pixel points of the upper-surface contour, compute the distance from each pixel point to every contour edge line of group 11, group 12, group 21 and group 22, and take in each group the contour edge line with the minimum distance below a given threshold; these are the four sides of the upper surface.
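The first grouping step, collecting contour edge lines whose mutual angle difference is below the parallel-angle threshold j, can be sketched as below; the greedy match against each group's first member and the value of j are assumptions.

```python
import math

def group_parallel(edges, j=10.0):
    """Group line segments by orientation: a segment joins the first
    existing group whose representative differs in angle by less than j
    degrees (angles taken modulo 180)."""
    def ang(s):
        (x1, y1), (x2, y2) = s
        return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    groups = []
    for e in edges:
        for g in groups:
            d = abs(ang(e) - ang(g[0]))
            if min(d, 180.0 - d) < j:
                g.append(e)
                break
        else:
            groups.append([e])  # no nearly parallel group found
    return groups
```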
Preferably, in S10 the four vertical planes of the cubic object containing the upper surface are fitted from the upper-surface quadrilateral, and the length and width of the cubic object are calculated, as follows:
Obtain the normal vector of the upper surface in three-dimensional space. For any side of the upper-surface quadrilateral, obtain the world coordinates of the side in three-dimensional space; the side belongs to one vertical plane of the cubic object corresponding to the upper surface. Construct points parallel to the side and belonging to the side's vertical plane, along the downward direction of the normal vector, and fit the vertical plane of the side in three-dimensional space from the constructed points and all edge points of the side;
Fit the vertical planes of the remaining three sides of the upper-surface quadrilateral in three-dimensional space in the same way to obtain the four vertical planes of the cubic object corresponding to the current upper surface;
For any side of the upper-surface quadrilateral, recorded as side 1, record the side approximately parallel to it as side 2. Compute the distances from all edge points of side 1 to the vertical plane of side 2 and the distances from all edge points of side 2 to the vertical plane of side 1, and take the mean of all these distances as the length of the cubic object corresponding to the current upper surface;
Record the other two approximately parallel sides of the upper-surface quadrilateral as side 3 and side 4, compute the distances from all edge points of side 3 to the vertical plane of side 4 and from all edge points of side 4 to the vertical plane of side 3, and take the mean of all these distances as the width of the cubic object corresponding to the current upper surface.
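The point-to-plane distance used to average out the length and width can be sketched as below; plane representation as (point on plane, normal vector) is an assumption of this illustration.

```python
import numpy as np

def point_plane_distances(points, plane_point, normal):
    """Unsigned distances from 3-D points to the plane through
    plane_point with the given normal; the side length is then taken as
    the mean of these distances, as described above."""
    pts = np.asarray(points, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)            # make the normal unit length
    return np.abs((pts - np.asarray(plane_point, dtype=float)) @ n)
```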
Preferably, the height of the cubic object is calculated as follows:
Determine a search area containing the cubic object from its overall contour, traverse all pixel points in the search area and obtain their corresponding world coordinates, and judge whether the minimum distance from each pixel point to the four vertical planes in three-dimensional space is less than or equal to a given point-to-plane minimum distance threshold dt; if not, process the next pixel point;
if the condition is met, calculating the distance from the pixel point to the upper surface, obtaining the distances from all pixel points, meeting the condition, in the search area to the upper surface, wherein one distance is represented by a square column, the number of the square columns exceeds the number of the distances, the initial value of the histogram value of all the square columns is 0, if the distance from the pixel point to the vertical plane is less than dt/2, the histogram value corresponding to the pixel point is added by 2, if the distance from the pixel point to the vertical plane is greater than dt/2 and less than dt, the histogram value corresponding to the pixel point is added by 1, and thus obtaining the distance distribution histogram corresponding to all the pixel points meeting the condition in the search area; traversing each square column value of the histogram, calculating the number of non-zero square columns, the sum of the square column values corresponding to all the non-zero square columns and the corresponding maximum square column label maxBin, and calculating the average value of all the non-zero square column values by dividing the non-zero square column value sum by the number of the non-zero square columns;
the distances of the distance distribution histogram are arranged from small to large. Scanning from the largest bin label maxBin down to maxBin/2, the smallest bin label d2 whose value exceeds one eighth of the average bin value is recorded, as is the largest bin label d1 whose value exceeds one half of the average bin value;
if the labels d1 and d2 exist, the bins from label d1 to the largest bin label maxBin are traversed and the average distance corresponding to all these bins is calculated; this distance is the height of the cubic object, and the volume of the cubic object is then calculated from its length, width and height.
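The histogram-based height estimate can be sketched as follows. This is a simplified version: it omits the dt/2-weighted counting and the d2 label, and the bin layout, the fallback when no bin qualifies, and the count-weighted averaging are assumptions of the sketch:

```python
import numpy as np

def estimate_height(distances, bin_width):
    """Height from a point-to-upper-surface distance histogram.

    distances: 1-D array of distances for pixels that passed the
    vertical-plane test; bin_width: histogram resolution.  Returns the
    count-weighted mean bin-centre distance over bins d1..maxBin, where
    d1 is the largest bin label (scanning from maxBin down to maxBin//2)
    whose count exceeds half the mean non-zero bin count.
    """
    hist, edges = np.histogram(
        distances,
        bins=int(distances.max() / bin_width) + 1,
        range=(0.0, distances.max() + bin_width),
    )
    nonzero = np.nonzero(hist)[0]
    max_bin = nonzero[-1]                  # largest occupied bin label
    mean_count = hist[nonzero].mean()      # mean of the non-zero bin counts
    d1 = None
    for b in range(max_bin, max_bin // 2 - 1, -1):
        if hist[b] > mean_count / 2:
            d1 = b                         # first hit is the largest such label
            break
    if d1 is None:
        d1 = max_bin                       # fallback: use the top bin alone
    centers = (edges[:-1] + edges[1:]) / 2
    sel = slice(d1, max_bin + 1)
    return np.average(centers[sel], weights=np.maximum(hist[sel], 1e-9))
```

With most distances clustered near the true height and a few near-surface outliers, the scan from maxBin downward ignores the outlier bins and recovers the cluster.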
The invention has the beneficial effects that: in the method for accurately calculating the volume of a cubic object based on a depth image, the surfaces of the cubic object are rapidly distinguished by segmenting the image; all detected edge lines are clustered and merged into collinear edge line groups, and the longest edge line of each group is selected for corner extraction, which reduces the amount of calculation and improves detection stability. Corner pairing yields all candidate quadrilaterals; the upper-surface segmentation image is then processed to obtain all upper-surface contours, which are matched against the candidate quadrilaterals to obtain the upper surface of the cube and its overall contour, and the upper surface is fitted repeatedly to a plane in three-dimensional space to obtain the final upper surface. From the fitted upper surface, all candidate contour boundary lines of the upper-surface quadrilateral are obtained and grouped by angle and by the distance between boundary lines: the four sides of a quadrilateral forming the upper surface necessarily comprise two groups of approximately parallel sides, the angles of the two groups differ, and within each group of approximately parallel sides there is necessarily a certain distance between the sides. Using this property, the longest boundary lines among all upper-surface contour boundary lines that can form the upper-surface quadrilateral are obtained, finally determining the quadrilateral of the upper surface. The four vertical planes of the cubic object are fitted from the four sides of this quadrilateral, and the three-dimensional plane containing the ground plane of the depth image is fitted. The length and width of the cubic object are obtained from the two groups of approximately parallel sides of the upper surface. Meanwhile, a search area is determined from the overall contour, the distances from its points to the upper surface are calculated, and their distance distribution histogram is counted: the larger the distance, the lower the point lies in the search area, and at the maximum distance no further points belonging to, or closest to, the vertical planes can be found, indicating that the contact position of the cubic object with the lowest plane of the depth image has been reached. To reduce noise interference, the average of the distances is counted from the maximum of the histogram down to the bins exceeding half of the average distance, and this average is the height of the cubic object; the volume of the cubic object is then calculated. The pixel point classification method is suitable for any object in any scene; in the practical application to volume measurement, the cubic object has at least a visible upper surface and a vertical surface in the image, and the plane on which the target in the depth image rests is the lowest plane or directly the ground.
The method for accurately calculating the volume of a cubic object is applied to volume measurement of targets such as packages, goods and containers. It avoids comparing depth images of the conveyor belt before and after loading: the cubic object is identified directly, and its length, width and height are finally obtained to calculate its volume. This solves the limitations and time consumption of package volume measurement, helps the logistics industry improve package sorting efficiency, and has important practical application significance. The method can also be applied to the field of image recognition.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment:
S1, obtaining a depth image of a target, and obtaining gradient maps in the X direction and the Y direction with the same size as the depth image;
S2, dividing the pixel points in the gradient maps into a horizontal plane, a left vertical plane and a right vertical plane according to the gradient maps, and acquiring an edge point map of the depth image;
S3, acquiring segmented images of the upper surface, the left vertical plane and the right vertical plane of the target in the depth image according to the classification result, the gradient maps and the edge point map;
S4, performing edge line detection on the edge point map, clustering and merging all detected edge lines, and acquiring a collinear edge line group set;
S5, extracting corner points according to the edge lines of the collinear edge line group set;
S6, pairing the extracted corner points;
S7, determining the upper surface and the overall contour of the cubic object according to the obtained quadrilaterals and the segmented image of the upper surface;
S8, fitting the planes of the cubic object in three-dimensional space according to its upper surface, left vertical plane and right vertical plane, removing noise points on each plane through repeated fitting, acquiring the ideal point set corresponding to each plane, and performing contour detection on the ideal point set to acquire the contour edge lines of the upper surface;
S9, determining the quadrilateral of the upper surface according to its contour edge lines;
S10, calculating the length and width of the cubic object from the upper-surface quadrilateral, fitting the three-dimensional plane containing the ground plane, calculating the height of the cubic object from the distance between the upper surface and the ground plane, and calculating the volume of the cubic object.
Depth images can be acquired with a depth camera. Current depth cameras fall into three types according to their working principle: TOF (Time of Flight) depth cameras, binocular stereoscopic vision depth cameras and structured light depth cameras. In actual use, a suitable depth camera can be selected according to the specific requirements to obtain the depth image.
The specific manner of dividing the pixel points into the horizontal plane, the left vertical plane and the right vertical plane according to the gradient maps in S2 is as follows: each pixel value DX(r, c) in the X-direction gradient map and each pixel value DY(r, c) in the Y-direction gradient map is traversed, where r and c index the rows and columns of the image. A positive segmentation threshold p and a negative segmentation threshold q are given: pixel points with DY(r, c) ≥ p lie on a vertical plane, and pixel points with DY(r, c) ≤ q lie on a horizontal plane; among the points on vertical planes, pixel points with DX(r, c) ≤ q lie on the left vertical plane and pixel points with DX(r, c) ≥ p lie on the right vertical plane.
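The classification rule above can be written directly as vectorized mask operations. This is a minimal sketch assuming the gradient maps are numpy arrays and p > 0 > q:

```python
import numpy as np

def classify_pixels(DX, DY, p, q):
    """Split gradient-map pixels into horizontal / left-vertical /
    right-vertical classes using the positive threshold p and the
    negative threshold q (p > 0 > q), as described in S2."""
    vertical = DY >= p            # large positive Y-gradient: a vertical plane
    horizontal = DY <= q          # large negative Y-gradient: the horizontal plane
    left = vertical & (DX <= q)   # vertical points split by the X-gradient sign
    right = vertical & (DX >= p)
    return horizontal, left, right
```

The three returned boolean masks partition the classified pixels; points whose gradients fall between q and p remain unclassified.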
The specific way of acquiring the edge point map of the depth image in S2 is as follows: the average Y-direction gradient Yp of all pixel points classified to the vertical planes, the average Y-direction gradient Yq of all pixel points classified to the horizontal plane, the average X-direction gradient Xp of all pixel points on the left vertical plane and the average X-direction gradient Xq of all pixel points on the right vertical plane are calculated. A positive and a negative threshold in the Y direction and a positive and a negative threshold in the X direction are set according to Yp, Yq, Xp and Xq. An image with the same size as the gradient maps is defined, with all pixel values initialized to 0. All pixel points of the X-direction and Y-direction gradient maps are traversed: for each pixel point, if the absolute value of its Y-direction gradient is greater than the absolute value of its X-direction gradient, and the Y-direction gradient is less than the negative Y-direction threshold or greater than or equal to the positive Y-direction threshold, the pixel point is an edge point and its value in the defined image is updated to 255; if the absolute value of its Y-direction gradient is less than the absolute value of its X-direction gradient, and the X-direction gradient is less than or equal to the negative X-direction threshold or greater than or equal to the positive X-direction threshold, the pixel point is likewise an edge point and its value in the defined image is updated to 255. The resulting edge point map is a binary image.
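The edge test can be expressed as a pair of vectorized conditions. A minimal sketch, assuming numpy gradient maps and taking the boundary conventions (strict vs. non-strict comparisons) as written above:

```python
import numpy as np

def edge_point_map(DX, DY, yp, yn, xp, xn):
    """Binary edge map (0/255) from X/Y gradient maps.

    yp/xp are the positive thresholds and yn/xn the negative thresholds
    in the Y and X directions respectively (yp > 0 > yn, xp > 0 > xn)."""
    ady, adx = np.abs(DY), np.abs(DX)
    y_edge = (ady > adx) & ((DY <= yn) | (DY >= yp))   # Y-dominant gradient edges
    x_edge = (ady < adx) & ((DX <= xn) | (DX >= xp))   # X-dominant gradient edges
    return np.where(y_edge | x_edge, 255, 0).astype(np.uint8)
```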
In S3, the specific manner of obtaining the segmented images of the upper surface, the left vertical plane and the right vertical plane of the target in the depth image according to the classification result, the gradient maps and the edge point map is as follows: binary images of the horizontal plane, the left vertical plane and the right vertical plane with the same size as the gradient maps are obtained from the classification result, and a segmented image of the upper surface with the same size as the gradient maps is defined with all pixel values initialized to 0. The pixel points of the first column of the horizontal-plane binary image are traversed from bottom to top, the reliability value of each pixel point is obtained from the depth image, and a reliability threshold is given: if the reliability value of the pixel point is smaller than the given threshold, the state of the pixel point is unknown and the next pixel point is processed; otherwise, if the pixel value is 255 the state of the pixel point is horizontal, and if the pixel value is 0 the state is vertical. If the number of pixel points with value 0 exceeds a given height threshold, the pixel points with value 255 appearing thereafter are mapped to the segmented image of the upper surface with their values updated to 255. The row number at which the state of the pixel points changes is recorded each time; after all rows of the first column have been traversed, the pixel values in the segmented image of the upper surface at the positions from the last recorded row number to the 1st row of the first column are set to 0. All rows and columns of the horizontal-plane image are traversed in the same way, and the initial segmented image of the upper surface is obtained;
the obtained initial segmented image of the upper surface is then repaired and the segmented image of the upper surface is updated. The edge points of the edge point map are mapped one by one to the updated segmented image of the upper surface, the left vertical-plane binary image and the right vertical-plane binary image: the values of horizontal non-edge points in the updated upper-surface segmented image are set to 0, and the values of non-edge points in the left and right vertical-plane binary images are set to 0. This yields the final segmented image of the upper surface and the left and right vertical-plane binary images, which are recorded as the left vertical-plane segmented image and the right vertical-plane segmented image respectively.
In the calculation process, individual points of the upper surface may be misjudged as points of a vertical plane because of noise interference, making the points of the upper surface discontinuous; to improve the stability of the algorithm, the segmented image of the upper surface is repaired. The specific way of repairing the acquired initial segmented image of the upper surface is as follows: a window size threshold t is set, and an upper-surface patch image with the same size as the segmented image of the upper surface is defined with all pixel values initialized to 0. For every row, the pixel value of the patch image at column t is set to the sum of the pixel values of columns 1 to 2t of the current row of the segmented image of the upper surface. Then, for every row and for each column from column t+1 to the total number of columns minus t, the pixel value of the patch image is updated to the patch value of the previous column of the current row, plus the value of the segmented image at the current column plus t, minus the value of the segmented image at the current column minus t; each patch value is thus a running sum over a sliding window of 2t columns, giving the final upper-surface patch image. All pixel points of the patch image are traversed with a given pixel value sum threshold: if the value of the current pixel point is greater than the given threshold, the corresponding pixel value of the segmented image of the upper surface is updated to 255, otherwise to 0, and the segmented image of the upper surface is updated.
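The running-sum repair can be vectorized with a cumulative sum. This sketch assumes a window of half-width t (2t+1 columns) centred on each pixel, a slight simplification of the 2t-column window described above:

```python
import numpy as np

def patch_upper_surface(seg, t, sum_thresh):
    """Repair the upper-surface segmentation with a horizontal sliding
    window: each pixel receives the sum of the 2t+1 columns of its row
    centred on it, and is set to 255 where the sum exceeds sum_thresh."""
    rows, cols = seg.shape
    # Zero-pad so every window is fully defined, then take windowed sums
    # as differences of a per-row cumulative sum.
    padded = np.pad(seg.astype(np.int64), ((0, 0), (t + 1, t)))
    csum = np.cumsum(padded, axis=1)
    window = csum[:, 2 * t + 1:] - csum[:, :cols]   # sum over columns [c-t, c+t]
    return np.where(window > sum_thresh, 255, 0).astype(np.uint8)
```

With t = 1 and a threshold requiring two supporting neighbours, a one-pixel hole between two 255 pixels is restored while isolated background stays 0.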
Regarding the recording of the row number each time the state of the pixel points changes: the depth image of the target acquired in S1 shows the target completely. When each column of pixel points is traversed from bottom to top, if a vertical plane and then a horizontal plane of the cubic object appear in succession, the category of the subsequent pixel points may change, and the pixel values in the segmented image of the upper surface from the row recorded at the last state change down to the first row of the current column are set to 0 (the values from the recorded row to the first row may be 0 or 255, and are set to 0); otherwise, the upper surface of the current cubic object probably does not appear completely in the picture, its points are removed, and the corresponding pixel values in the segmented image of the upper surface are set to 0.
In S4, edge line detection is performed on the edge point map, and all detected edge lines are clustered and merged to obtain the collinear edge line group set, in the following specific manner: edge line detection is performed on the edge point map, all edge lines are obtained and sorted by length into an edge line set, and an empty collinear edge line group set is defined. The collinear edge line group set contains several groups of edge lines; the edge lines within each group are collinear, and edge lines of different groups are not. The longest edge line is first added to one group of the collinear edge line group set, and then, for each edge line A of the edge line set, whether it is collinear with the longest edge line B of each group of the collinear edge line group set is judged by the following conditions in sequence:
C1, the angle difference between the A edge line and the B edge line is calculated and compared with a given angle difference threshold e; if the angle difference is greater than e, the A and B edge lines are certainly not collinear, because two collinear edge lines are approximately parallel and their angle difference is small;
C2, if the angle difference is less than or equal to e, the distances T1 and T2 from the start point of the B edge line to the start point and end point of the A edge line, and the distances T3 and T4 from the end point of the B edge line to the start point and end point of the A edge line, are calculated and compared with a given inter-point distance threshold G. This step checks whether the minimum distance between the end points of the A and B edge lines is below G: if min(min(T1, T2), min(T3, T4)) > G, the two edge lines are too far apart and cannot be collinear;
C3, if min(min(T1, T2), min(T3, T4)) ≤ G, the length L of the A edge line and the length H of the B edge line are calculated; if the maximum of the distances satisfies max(max(T1, T2), max(T3, T4)) ≥ G + H + L, the A and B edge lines are not collinear;
C4, if max(max(T1, T2), max(T3, T4)) < G + H + L, the distances S1 and S2 from the start point and end point of the B edge line to the A edge line, and the distances S3 and S4 from the start point and end point of the A edge line to the B edge line, are calculated, and a point-to-line distance threshold J is given. If the maximum of T1 and T2 is smaller than L + G, whether S1 ≤ J is judged; if the maximum of T3 and T4 is smaller than L + G, whether S2 ≤ J is judged; if the maximum of T1 and T3 is smaller than L + G, whether S3 ≤ J is judged; if the maximum of T2 and T4 is smaller than L + G, whether S4 ≤ J is judged; if any of these maxima is greater than or equal to L + G, the next step is executed directly;
C5, if the conditions in C4 are satisfied, the A edge line and the B edge line are collinear; the A edge line is added to that group of the collinear edge line group set, and the edge lines of the group are updated.
In the sequential judgment from C1 to C4, whenever any condition is not met, the longest edge line of the next group of the collinear edge line group set is processed directly, until all groups have been processed; each group of edge lines is represented by its longest edge line. If the A edge line is collinear with the longest edge line of no group of the collinear edge line group set, the A edge line is added directly as a new group, and the number of groups of the collinear edge line group set is updated. After the A edge line and the B edge line are found collinear, whether they satisfy the merging condition is judged: if T1 ≤ H, T2 ≤ H, T3 ≤ H and T4 ≤ H, the A edge line lies within the span of the B edge line and the merging condition is not satisfied; otherwise the A and B edge lines are merged, the start point and end point of the B edge line are updated, and the collinear edge line group set is updated. The next edge line of the edge line set is processed in the same way as the A edge line, the collinear edge line group set is updated continuously, and the edge lines of the collinear edge line group set are sorted by length to obtain the final collinear edge line group set.
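The C1-C4 collinearity test can be sketched for two 2-D segments as follows. C4 is simplified here: all four perpendicular offsets are checked unconditionally instead of the case-by-case tests of the embodiment, and the default thresholds are illustrative assumptions:

```python
import numpy as np

def _angle_deg(p1, p2):
    """Orientation of the segment p1->p2 in degrees, folded into [0, 180)."""
    return np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0])) % 180.0

def _point_line_dist(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return abs(dx * (p[1] - a[1]) - dy * (p[0] - a[0])) / np.hypot(dx, dy)

def collinear(A, B, e=5.0, G=20.0, J=3.0):
    """Conditions C1-C4 for edge lines A and B, each a (start, end) pair
    of pixel coordinates; e is the angle-difference threshold, G the
    inter-point distance threshold, J the point-to-line threshold."""
    (a1, a2), (b1, b2) = A, B
    diff = abs(_angle_deg(a1, a2) - _angle_deg(b1, b2))
    if min(diff, 180.0 - diff) > e:                    # C1: angle difference
        return False
    T = [np.hypot(p[0] - q[0], p[1] - q[1]) for p in (b1, b2) for q in (a1, a2)]
    if min(T) > G:                                     # C2: nearest endpoints
        return False
    L = np.hypot(a2[0] - a1[0], a2[1] - a1[1])
    H = np.hypot(b2[0] - b1[0], b2[1] - b1[1])
    if max(T) >= G + H + L:                            # C3: overall span
        return False
    S = [_point_line_dist(p, a1, a2) for p in (b1, b2)] + \
        [_point_line_dist(p, b1, b2) for p in (a1, a2)]
    return bool(max(S) <= J)                           # C4: perpendicular offsets
```

Two nearly touching horizontal segments with a half-pixel offset pass the test; a perpendicular segment fails at C1.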
The reason for the collinear clustering and merging of edge lines is that, during edge line detection, originally continuous edge lines are often broken by noise interference, and a single physical edge may be detected as several nearly parallel edge lines of different lengths, which reduces the stability and precision of the algorithm.
The A edge line and the B edge line are merged, and the start point and end point of the B edge line are updated, as follows: if the x coordinates of the two end points of an edge line differ, the left end point is defined as the start point and the right end point as the end point; if the x coordinates are the same, the upper end point is the start point and the lower end point the end point. The judgment for updating the end point of the B edge line is: C1, if T1 > H and T2 > H, then if T3 > T4 the end point of the B edge line is updated to the start point of the A edge line, and if T3 > T4 does not hold it is updated to the end point of the A edge line; C2, if the conditions T1 > H and T2 > H of C1 are not satisfied, but T1 > H and T2 ≤ H, the end point of the B edge line is updated to the start point of the A edge line; C3, if the conditions of C1 and C2 are not satisfied, but T2 > H and T1 ≤ H, the end point of the B edge line is updated to the end point of the A edge line. The judgment for updating the start point of the B edge line is: C1, if T3 > H and T4 > H, then if T1 > T2 the start point of the B edge line is updated to the start point of the A edge line, and if T1 > T2 does not hold it is updated to the end point of the A edge line; C2, if the conditions T3 > H and T4 > H of C1 are not satisfied, but T3 > H and T4 ≤ H, the start point of the B edge line is updated to the start point of the A edge line; C3, if the conditions of C1 and C2 are not satisfied, but T4 > H and T3 ≤ H, the start point of the B edge line is updated to the end point of the A edge line.
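The endpoint convention above can be captured by a small helper. The assumption that "upper" means a smaller row coordinate (image coordinates, y growing downward) is ours:

```python
def orient_segment(p1, p2):
    """Order a segment's endpoints by the document's convention: the
    left endpoint (smaller x) is the start; on equal x, the upper
    endpoint (smaller y in image coordinates) is the start."""
    # Lexicographic (x, y) comparison implements both rules at once.
    if (p1[0], p1[1]) <= (p2[0], p2[1]):
        return p1, p2
    return p2, p1
```

Normalizing every segment this way makes the T1..T4 distance bookkeeping of the merge rules unambiguous.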
The specific way of extracting corner points from the longest edge line of each group of the collinear edge line group set in S5 is as follows:
C1, the longest edge line of each group of the collinear edge line group set is obtained and recorded into a seed edge line set, and all edge lines of the seed edge line set are sorted from long to short. The edge lines of the seed edge line set are traversed: starting from the longest C edge line, the included angle a between the C edge line and any other D edge line is calculated, a curvature threshold b is set, and whether min(a, 180 − a) > b is judged;
C2, if min(a, 180 − a) > b, the intersection point W(Wx, Wy) of the C edge line and the D edge line is calculated, and whether Wx and Wy lie within the image range is judged; if Wx or Wy exceeds the image range, the next edge line is processed;
C3, if Wx and Wy do not exceed the image range, whether the intersection point W is a corner point is judged, under the following conditions: first the distances d1 and d2 from the two end points of the C edge line to the intersection point W are calculated and a minimum distance min_d1 is defined; if the intersection point W lies between the two end points of the C edge line then min_d1 = 0, otherwise min_d1 = min(d1, d2). If min_d1 is 0 or min(d1, d2) is less than the given arm length threshold Z1, the distances d3 and d4 from the two end points of the D edge line to the intersection point W are calculated and a minimum distance min_d2 is defined; if W lies between the two end points of the D edge line then min_d2 = 0, otherwise min_d2 = min(d3, d4). If min_d2 is 0 or min(d3, d4) is less than the given arm length threshold Z2, whether min(d1, d2) and min(d3, d4) are less than the given intersection radius threshold r is judged;
C4, if the conditions of C3 are met, a quadrilateral arm gap threshold v is given, and whether max(min(d1, d2), min(d3, d4)) ≤ v is judged; if this condition is not met, the next edge line is processed, and if it is met, the intersection point W is a corner point of the quadrilateral.
The conditions C1 to C4 are judged in sequence, and if any condition is not met the next edge line is processed. The first acquired corner point is added to the corner point set; for each subsequently acquired corner point, the distances Li to all corner points already in the corner point set are calculated, where i is a variable, and whether it is similar to the corner points of the set is judged: if every Li > r and the corner point is not similar to the remaining corner points, it is added to the corner point set. The intersection point W corresponds to two arms: the first arm is the line connecting W with the end point of the C edge line, and the second arm is the line connecting W with the end point of the D edge line, where the end point of the C edge line is the end point corresponding to the larger of d1 and d2, and the end point of the D edge line is the end point corresponding to the larger of d3 and d4. Any two remaining edge lines of the seed edge line set are processed in the same way to obtain all corner points, the two arms of each corner point are expanded according to the seed edge line set, and the final corner point set is obtained, realizing corner extraction.
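The geometric core of C1-C3 (line intersection plus the angle and radius tests) can be sketched as follows. The arm-length thresholds Z1, Z2 and the arm-gap threshold v are collapsed into the single radius test here, and the default thresholds and image shape are illustrative assumptions:

```python
import numpy as np

def line_intersection(A, B):
    """Intersection of the infinite lines through segments A and B,
    each given as (start, end); returns None for (near-)parallel lines."""
    (x1, y1), (x2, y2) = A
    (x3, y3), (x4, y4) = B
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-12:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def corner_candidate(A, B, b=30.0, r=15.0, shape=(480, 640)):
    """Simplified C1-C3 corner test: the angle between the lines must be
    sharp enough, the intersection must lie inside the image, and both
    segments must end within radius r of the intersection."""
    a1 = np.degrees(np.arctan2(A[1][1] - A[0][1], A[1][0] - A[0][0])) % 180
    a2 = np.degrees(np.arctan2(B[1][1] - B[0][1], B[1][0] - B[0][0])) % 180
    ang = abs(a1 - a2)
    if min(ang, 180 - ang) <= b:                        # C1: too flat an angle
        return None
    W = line_intersection(A, B)
    if W is None or not (0 <= W[0] < shape[1] and 0 <= W[1] < shape[0]):
        return None                                     # C2: outside the image
    dA = min(np.hypot(W[0] - p[0], W[1] - p[1]) for p in A)
    dB = min(np.hypot(W[0] - p[0], W[1] - p[1]) for p in B)
    if max(dA, dB) >= r:                                # C3: segments end too far
        return None
    return W
```

A horizontal and a vertical segment meeting near (10, 10) yield that corner; two parallel segments yield none.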
The specific way of judging whether two corner points are similar is as follows: the absolute values a11, a12, a21 and a22 of the differences between the angle of each opening arm of one corner point and the angle of each opening arm of the other corner point are calculated, the value range of the absolute angle difference being [0, 180]; min(a11, a12) and min(a21, a22) are selected, an angle similarity threshold f is given, and if max(min(a11, a12), min(a21, a22)) < f, the two corner points are similar.
The similarity of two corner points means that the corresponding opening arms of the two corner points are approximately parallel. For example, if each of the two corner points comprises a horizontal opening arm and a vertical opening arm, the horizontal opening arms of the two corner points are approximately parallel with a small angle difference, and the two vertical opening arms are likewise approximately parallel with a small angle difference; therefore the larger of min(a11, a12) and min(a21, a22) should be smaller than the given angle similarity threshold, otherwise the two corner points are not similar.
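The similarity criterion translates directly into a small function; representing each corner by the angles of its two opening arms in degrees is an assumption of this sketch:

```python
def corners_similar(arms1, arms2, f=10.0):
    """Corner similarity per the max(min(a11, a12), min(a21, a22)) < f
    rule: each corner is given by the angles of its two opening arms."""
    def diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)      # absolute angle difference in [0, 180]
    a11, a12 = diff(arms1[0], arms2[0]), diff(arms1[0], arms2[1])
    a21, a22 = diff(arms1[1], arms2[0]), diff(arms1[1], arms2[1])
    return max(min(a11, a12), min(a21, a22)) < f
```

Two L-shaped corners with arms at roughly 0° and 90° are similar; a corner rotated by 45° is not.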
The specific way of expanding the two arms of each corner point according to the seed edge line set is as follows: traversing all edge lines of the seed edge line set, calculating the distance from each corner point to the current seed edge line, if the distance is less than a given threshold, judging whether the seed edge line can extend two arms of the current corner point, firstly judging whether the seed edge line can extend the arm1 of the current corner point, and the specific mode is as follows:
c1, calculating the distance from the current corner point to the starting point and the ending point of the edge line of the seed, and recording as dist1 and
dist2, judging whether the current corner point is between two end points of the seed edge line;
c2, if the current corner point is between two end points of the edge line of the seed, the current corner point is connected to the edge of the seed
Marking a line segment where a starting point of the line is positioned as M, the starting point of the line segment M as a current corner point, the end point of the line segment M as a starting point of a seed edge line, the line segment where an arm1 of the current corner point is positioned as N, the starting point of the line segment N as the current corner point, calculating an angle1 of the line segment M and an angle2 of the line segment N, and judging whether the absolute value of the difference between the angle1 and the angle2 is smaller than a given angle threshold value or not;
c3, if the absolute value of the difference between angle1 and angle2 is smaller than a given angle threshold, calculating the distance rM from the end point of the line M to the current corner point, calculating the length rN of the line segment N, and judging whether the difference value between rM and rN is smaller than a given length difference threshold;
c4, if the difference between rM and rN is less than the given length difference threshold, calculating the end point of the line segment N
If the distance to the current seed edge line is smaller than the given point-to-line distance threshold and dist1 is greater than rN, the current seed edge line may extend the arm1 of the current corner point, the end point of the extended arm1 of the current corner point is the start point of the seed edge line, and the length of the extended arm1 is dist 1;
c5, if any condition from C2 to C4 is not satisfied, it indicates that the current seed edge line may not be expanded
Judging whether the current seed edge line can be expanded to the arm2 of the current angular point or not by adopting the same method for the arm1 of the current angular point, if so, taking the end point of the expanded arm2 as the starting point of the seed edge line, taking the length of the expanded arm2 as dist1, and if not, directly executing the next step;
c6, updating the terminal point of the line segment M in the C2 to the terminal point of the current seed edge line, updating the line segment N to the line segment where the arm2 of the current corner point is located, the starting point of the line segment N is the current corner point, updating dist1 to dist2, judging whether the current seed edge line can expand the arm1 and the arm2 of the current corner point again by adopting the same method, and realizing the expansion of the two arms of the current corner point;
C7, if the condition in C1 is not met, judging whether dist1 is less than or equal to dist2;
C8, if dist1 is less than or equal to dist2, updating the start point of the line segment M in C2 to the end point of the current seed edge line, the line segment N being the line segment where the arm1 or the arm2 of the current corner point lies, and judging with the same method whether the current seed edge line can be extended along the arm1 or the arm2 of the current corner point, thereby realizing the extension of both arms of the current corner point;
C9, if dist1 is greater than dist2, updating the start point of the line segment M in C2 to the start point of the current seed edge line, updating the line segment N to the line segment where the arm1 or the arm2 of the current corner point lies, updating dist1 to dist2, and judging with the same method whether the current seed edge line can be extended along the arm1 and the arm2 of the current corner point, thereby realizing the extension of both arms of the current corner point.
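The extension test in C2-C4 can be sketched as a small predicate. This is a simplified illustration, not the patented method itself: the helper names (`can_extend`, `segment_angle`) and the threshold defaults are hypothetical, and the dist1/dist2 bookkeeping of C5-C9 is omitted.

```python
import math

def segment_angle(p, q):
    """Angle of the segment p->q in degrees, in [0, 360)."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 360.0

def point_line_dist(pt, a, b):
    """Perpendicular distance from pt to the infinite line through a and b."""
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def can_extend(corner, seed_start, arm_end,
               angle_thresh=10.0, len_thresh=20.0, dist_thresh=3.0):
    """Conditions C2-C4 in simplified form: segment M (corner -> seed_start)
    and segment N (corner -> arm_end) must have nearly equal angles and
    lengths, and arm_end must lie close to the line of segment M."""
    diff = abs(segment_angle(corner, seed_start) - segment_angle(corner, arm_end))
    if min(diff, 360.0 - diff) >= angle_thresh:      # angle test of C2
        return False
    rM = math.dist(corner, seed_start)               # length of segment M
    rN = math.dist(corner, arm_end)                  # length of segment N
    if abs(rM - rN) >= len_thresh:                   # length test of C3
        return False
    # point-to-line test of C4
    return point_line_dist(arm_end, corner, seed_start) <= dist_thresh
```

For example, an arm ending near the line of M passes, while a perpendicular arm fails the angle test.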
The specific way of pairing the extracted corner points in S6 is as follows:
C1, pairing all the corner points in the corner point set in pairs; if the coordinates of two paired corner points are different, continuing to judge whether the two corner points share one arm; marking the two arms of one corner point as arm11 and arm12 and the two arms of the other corner point as arm21 and arm22, and first judging whether arm11 and arm21 are the same arm of the two corner points;
C2, the way of judging whether arm11 and arm21 are the same arm: if the distance between the two corner points is less than one quarter of the minimum of the lengths of arm11 and arm21, arm11 and arm21 are not the same arm; if the distance between the two corner points is greater than one quarter of the minimum of the lengths of arm11 and arm21, first calculating the absolute value x1 of the angle difference between arm11 and the line connecting the two corner points and the absolute value x2 of the angle difference between arm21 and the line connecting the two corner points, the value ranges of x1 and x2 being [0, 180]; if x1 and x2 are both smaller than an angle similarity threshold f, giving an inter-corner distance threshold u and a length proportion threshold ratio; if the sum of the lengths of arm11 and arm21 is greater than or equal to ratio times the distance between the two corner points and greater than or equal to the difference between the distance between the two corner points and u, judging that arm11 and arm21 are the same arm, and the corner points are paired successfully; otherwise, judging with the same method whether any other two arms of the two corner points are the same arm; if none are, the pairing of the corner points fails, otherwise the pairing succeeds; pairing the remaining corner points in pairs in the same manner to obtain all successfully paired corner point combinations, wherein the same arm of each corner point combination is the shared arm and the other two arms are the opening arms; calculating the included angles u1 and u2 between the two opening arms and the shared arm in each corner point combination, the value ranges of u1 and u2 being [0, 180], and removing the corner point combinations whose included angle u1 or u2 is smaller than the bending threshold b; for the remaining corner point combinations, judging whether the two opening arms lie on the same side of the shared arm: calculating the center point of the shared arm of the corner point combination, calculating, with the center point as starting point, the unit normal vector of the shared arm, and calculating the vector v1 connecting the end point of the opening arm of the first corner point and the center point and the vector v2 connecting the end point of the opening arm of the second corner point and the center point; if the inner products of v1 and v2 with the unit normal vector of the shared arm have opposite signs, the two opening arms do not lie on the same side of the shared arm; removing the corner point combinations whose two opening arms do not lie on the same side of the shared arm, and finally determining a quadrangle according to each currently paired corner point combination; processing all corner point combinations in the same way to obtain all quadrangles.
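The shared-arm test in C2 can be sketched as a predicate. `same_arm` and the threshold defaults (f, u, ratio) are placeholders for the patent's thresholds, arm lengths are passed in explicitly, and the sketch assumes the second arm points back toward the first corner:

```python
import math

def angle_deg(v):
    """Angle of vector v in degrees."""
    return math.degrees(math.atan2(v[1], v[0]))

def angle_abs_diff(a, b):
    """Absolute angle difference folded into [0, 180]."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def same_arm(c1, a1_end, len1, c2, a2_end, len2, f=15.0, u=10.0, ratio=0.8):
    """Shared-arm test of C2: both arms must roughly point along the line
    connecting the two corner points (from opposite ends) and together
    cover most of the corner-to-corner distance."""
    d = math.dist(c1, c2)
    if d < min(len1, len2) / 4.0:          # corners too close: not the same arm
        return False
    line_ang = angle_deg((c2[0] - c1[0], c2[1] - c1[1]))
    x1 = angle_abs_diff(angle_deg((a1_end[0] - c1[0], a1_end[1] - c1[1])), line_ang)
    x2 = angle_abs_diff(angle_deg((a2_end[0] - c2[0], a2_end[1] - c2[1])), line_ang + 180.0)
    if x1 >= f or x2 >= f:                 # arms not aligned with the connecting line
        return False
    s = len1 + len2
    return s >= ratio * d and s >= d - u   # arms span most of the distance
```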
The specific way of determining the upper surface and the overall contour of the cubic object according to the acquired quadrangles and the upper surface segmentation image in S7 is as follows: C1, processing the obtained quadrangles: if the distance between the end points of the two arms of a quadrangle to be matched is smaller than the given distance proportion threshold multiple, the quadrangle does not meet the condition and the next quadrangle is processed; obtaining all quadrangles meeting the condition, obtaining the contour and circumscribed rectangle of each quadrangle, and calculating the convex hull of each quadrangle according to its contour coordinate points;
C2, carrying out contour detection on the upper surface segmentation image to obtain all contours, calculating the convex hull of each contour according to its contour coordinate points, obtaining the minimum rotated rectangle containing each contour and the circumscribed rectangle of that rotated rectangle, calculating the area of each contour, and judging whether the area of the contour is larger than a given area threshold;
C3, if the area of the contour is larger than the area threshold, matching the current contour with the quadrangles meeting the condition: respectively calculating the minimum distances h1 and h2 from two corner points of the quadrangle to be matched to the convex hull of the current contour, and judging whether the two corner points of the quadrangle to be matched lie outside the convex hull of the current contour and whether h1 and h2 are both smaller than or equal to a given distance threshold;
C4, if the two corner points of the quadrangle to be matched lie outside the convex hull of the current contour and h1 and h2 are both smaller than or equal to the distance threshold, judging whether the center point of the rotated rectangle of the current contour lies inside the convex hull of the quadrangle to be matched;
C5, if the center point of the rotated rectangle of the current contour lies inside the convex hull of the quadrangle to be matched, calculating the absolute value of the minimum distance from the rotated rectangle to the convex hull of the quadrangle to be matched; if this value is less than or equal to the length and width of the rotated rectangle and the center point lies on the lower side of any arm of the quadrangle to be matched, calculating the overlapping area of the circumscribed rectangle of the quadrangle to be matched and the circumscribed rectangle of the current contour; if the overlapping area is greater than a given proportional threshold, the current contour and the quadrangle to be matched are preliminarily matched successfully, and the quadrangle is recorded as a preliminarily matched quadrangle;
C6, normalizing the distances from the center point of the rotated rectangle of the current contour to the three arms of the preliminarily matched quadrangle; if the normalized values lie within the theoretical range and the sum of h1 and h2 is less than or equal to a given threshold, the preliminarily matched quadrangle is recorded as an effective quadrangle;
C7, judging whether the two opening arms of each effective quadrangle face downwards, and recording an effective quadrangle whose two opening arms face downwards as a seed quadrangle; when the next seed quadrangle is obtained, judging whether the sum of the minimum distances from its two corner points to the current contour is smaller than the corresponding sum for the previous seed quadrangle; if so, replacing the previous seed quadrangle with the current one, otherwise removing the current seed quadrangle, and continuing the processing until the final seed quadrangle of the current contour is obtained;
C8, judging whether the final seed quadrangle of the current contour and an effective quadrangle have a shared corner point;
C9, if a shared corner point exists, judging whether the line segments where the shared arm and the opening arms of the effective quadrangle lie coincide with the line segment where the shared arm of the seed quadrangle lies; if so, judging whether the number of coincidences between the line segments where the shared arm and the opening arms of the effective quadrangle lie and the line segments where the opening arms of the seed quadrangle lie is greater than or equal to 1; if so, acquiring the cubic object contour that takes the three arms of the seed quadrangle as the upper surface and the arms of the effective quadrangle other than the coinciding arms as the edges of the side vertical planes; acquiring all cubic object contours and recording them as a cubic object contour set;
C10, acquiring all point coordinates of each cubic object contour according to the end points of each edge of the cubic object contour, wherein the cubic object contour takes a seed quadrangle as the upper surface;
C11, traversing the cubic object contour set, calculating the y coordinate P.y of the center point of each cubic object contour and the y coordinate t.y of the center point of its upper surface, calculating the area P.area of each cubic object contour and the area t.area of its upper surface, and rejecting a cubic object contour if t.y > P.y or t.area > P.area, thereby obtaining the final cubic object contour set.
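The rejection rule in C11 can be written as a one-line filter. This is a sketch with a hypothetical tuple layout (P.y, P.area, t.y, t.area), not the patent's data structure:

```python
def filter_contours(contours):
    """C11 filter sketch: each entry is (P_y, P_area, t_y, t_area), i.e. the
    center y and area of the whole cubic contour and of its upper surface.
    In image coordinates y grows downward, so a valid top face has a center
    above the whole silhouette (t_y <= P_y) and no larger an area."""
    return [c for c in contours if not (c[2] > c[0] or c[3] > c[1])]
```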
The specific way of acquiring the contour edge lines of the upper surface in S8 is as follows:
C1, traversing all the detected pixel points of the upper surface of each cubic object, and acquiring the world coordinate points of all the pixel points of the upper surface in the three-dimensional space according to the transformation relation between the image coordinate system and the world coordinate system;
C2, traversing all world coordinate points of the current upper surface, giving a point number threshold, and judging whether the number of points is greater than the given point number threshold;
C3, if the number of points is greater than the point number threshold, calculating the mean point of all the points and carrying out normalization processing; since all coordinate points of the upper surface of a cubic object lie on the same plane in the three-dimensional space and the mean point lies on that plane, calculating the normal vector of the plane where the mean point lies, and acquiring a primary fitted three-dimensional plane equation according to the normal vector and the mean point;
C4, calculating the distances from all world coordinate points of the upper surface to the primary fitted three-dimensional plane, giving a distance threshold, and removing all points whose distance is greater than the given distance threshold to obtain an ideal point set; fitting again according to the ideal point set to obtain a secondary fitted three-dimensional plane equation;
C5, repeating the above steps for multiple fittings; after each fitting, calculating the distances from all points of the ideal point set to the fitted three-dimensional plane, eliminating all points whose distance is greater than the given distance threshold, and updating the ideal point set;
C6, acquiring the plane coordinate points of the two-dimensional plane corresponding to all world coordinate points of the ideal point set of the current upper surface, and detecting contours according to the plane coordinate points; acquiring all contours, calculating the overlapping area of the circumscribed rectangle of each contour and the circumscribed rectangle of the current upper surface, and searching for the contour whose overlapping area is the largest and larger than a given threshold, this contour being the final contour of the upper surface; calculating the convex hull of the upper surface according to the contour, acquiring the corresponding fitted polygon according to the convex hull, acquiring all edge lines of the fitted polygon of the upper surface, and sorting them from long to short; these edge lines are the contour edge lines of the upper surface.
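Steps C3-C5 amount to iterative least-squares plane fitting with outlier rejection. A minimal sketch, assuming numpy and a smallest-singular-vector normal; `robust_plane`, its defaults, and the iteration count are illustrative, and the patent's normalization details are omitted:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    # The right singular vector of the smallest singular value of the
    # centered cloud is the direction of least variance, i.e. the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def robust_plane(points, dist_thresh=0.01, iters=3):
    """Repeated fitting as in C3-C5: fit, drop far points, refit."""
    pts = np.asarray(points, dtype=float)
    normal, centroid = fit_plane(pts)
    for _ in range(iters):
        d = np.abs((pts - centroid) @ normal)   # point-to-plane distances
        keep = d <= dist_thresh
        if keep.all():
            break
        pts = pts[keep]                         # updated ideal point set
        normal, centroid = fit_plane(pts)
    return normal, centroid, pts
```

On a planar 5x5 grid plus one outlier 1 unit above it, the outlier is rejected after the first fit and the refit recovers the exact plane normal.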
The specific way of determining the quadrilateral of the upper surface according to the contour edge lines of the upper surface in S9 is as follows: giving a parallel angle difference threshold j, and judging whether the angle difference between any two contour edge lines is smaller than j; if the condition is met, the two contour edge lines currently belong to the same group; grouping all the contour edge lines in this way; if the total number of groups is greater than or equal to 2, calculating the total edge line length of each group, sorting the groups from large to small, and recording the two groups with the largest total lengths as group 1 and group 2, wherein group 1 contains one pair of approximately parallel sides of the upper surface quadrangle and group 2 contains the other pair; if the total number of groups is smaller than 2, processing the next upper surface;
respectively calculating the distance difference between any two contour edge lines in group 1 and in group 2, and giving a parallel distance difference threshold; within group 1 and group 2 respectively, dividing the contour edge lines whose distance difference is smaller than the parallel distance difference threshold into the same subgroup; if the total number of subgroups in group 1 is greater than or equal to 2, calculating the total edge line length of each subgroup in group 1, sorting them from large to small, and recording the two subgroups with the largest total lengths as group 11 and group 12; if the total number of subgroups in group 2 is greater than or equal to 2, calculating the total edge line length of each subgroup in group 2, sorting them from large to small, and recording the two subgroups with the largest total lengths as group 21 and group 22; if the total number of subgroups in group 1 or group 2 is smaller than 2, processing the next upper surface;
and traversing all pixel points of the upper surface contour, respectively calculating the distance from each pixel point to all contour edge lines of group 11, group 12, group 21 and group 22, and obtaining for each group the contour edge line with the minimum distance that is also smaller than a given threshold; these four contour edge lines are the four edges of the upper surface.
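The angle-based grouping that opens S9 can be sketched as a greedy pass over the edge lines. `group_by_angle` and its defaults are hypothetical names, and the distance-difference subdivision into groups 11/12/21/22 is not shown:

```python
import math

def line_angle(seg):
    """Undirected angle of a segment ((x1, y1), (x2, y2)) in [0, 180)."""
    (x1, y1), (x2, y2) = seg
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def angle_diff(a, b):
    """Angle difference of two undirected lines, folded into [0, 90]."""
    d = abs(a - b)
    return min(d, 180.0 - d)

def group_by_angle(segments, j=10.0):
    """Greedy grouping of S9: a segment joins the first group whose
    representative angle differs by less than j, else starts a new group.
    Returns groups sorted by total segment length, longest first."""
    groups = []                                # (representative angle, [segments])
    for seg in segments:
        ang = line_angle(seg)
        for g in groups:
            if angle_diff(g[0], ang) < j:
                g[1].append(seg)
                break
        else:
            groups.append((ang, [seg]))
    def total_len(g):
        return sum(math.dist(a, b) for a, b in g[1])
    return sorted(groups, key=total_len, reverse=True)
```

The two longest groups then play the roles of group 1 and group 2 above.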
In S10, the four vertical planes of the cubic object to which the upper surface belongs are fitted according to the upper surface quadrangle, and the length and width of the cubic object are calculated in the following specific way: acquiring the normal vector of the upper surface in the three-dimensional space; for any edge of the upper surface quadrangle, acquiring the world coordinates of the edge in the three-dimensional space, the edge belonging to one vertical plane of the cubic object corresponding to the upper surface; constructing points that are parallel to the edge and belong to the vertical plane of the edge along the downward direction of the normal vector, and fitting the vertical plane of the edge in the three-dimensional space according to the constructed points and all edge points of the edge;
fitting the vertical planes of the three-dimensional space of the rest three sides of the upper surface quadrangle in the same way to obtain four vertical planes of the cubic object corresponding to the current upper surface;
recording any one edge of the upper surface quadrangle as edge 1 and the edge approximately parallel to it as edge 2, calculating the distances from all edge points of edge 1 to the vertical plane of edge 2 and the distances from all edge points of edge 2 to the vertical plane of edge 1, and taking the average of all these distances as the length of the cubic object corresponding to the current upper surface;
and respectively recording the other two approximately parallel edges of the quadrilateral on the upper surface as an edge 3 and an edge 4, calculating the distances from all edge points of the edge 3 to the vertical plane where the edge 4 is located, simultaneously calculating the distances from all edge points of the edge 4 to the vertical plane where the edge 3 is located, and calculating the average value of all the distances to be used as the width of the cubic object corresponding to the current upper surface.
Approximately parallel is defined by the parallel angle difference threshold: two edges whose angle difference is smaller than the given parallel angle difference threshold are treated as approximately parallel; if edge 1 and edge 2 were perfectly parallel, their angles would be equal. Because the two edges of a cube face that are parallel in space may no longer be perfectly parallel after projection, this tolerance improves the stability of the algorithm.
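Assuming each fitted vertical plane is available in Hessian normal form (n, d) with unit normal n, the length/width estimate of S10 is a symmetric mean of point-to-plane distances. A sketch with illustrative names:

```python
import numpy as np

def mean_edge_to_plane(edge1_pts, edge2_pts, plane1, plane2):
    """Length/width estimate of S10: average the distances from the edge
    points of edge 1 to the vertical plane of edge 2 and from the edge
    points of edge 2 to the vertical plane of edge 1. A plane is given as
    (n, d) with unit normal n, so the distance of point p is |n . p + d|."""
    n1, d1 = plane1
    n2, d2 = plane2
    dists = np.concatenate([
        np.abs(np.asarray(edge1_pts, dtype=float) @ n2 + d2),
        np.abs(np.asarray(edge2_pts, dtype=float) @ n1 + d1),
    ])
    return float(dists.mean())
```

For two parallel planes x = 0 and x = 10 with edge points lying on them, the estimate is exactly 10.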
The way to calculate the height of the cubic object is as follows: determining a search area containing the cubic object according to the contour of the cubic object, traversing all pixel points in the search area, acquiring the corresponding world coordinates of each pixel point, and judging whether the minimum of the distances from the pixel point to the four vertical planes in the three-dimensional space is less than or equal to a given point-to-plane minimum distance threshold dt; if the condition is not met, processing the next pixel point;
if the condition is met, calculating the distance from the pixel point to the upper surface, thereby obtaining the distances from all qualifying pixel points in the search area to the upper surface; each distance corresponds to one bin of a histogram, the number of bins exceeds the number of distances, and the initial value of every bin is 0; if the distance from the pixel point to the vertical plane is less than dt/2, the bin value corresponding to the pixel point is increased by 2, and if the distance from the pixel point to the vertical plane is greater than dt/2 and less than dt, the bin value corresponding to the pixel point is increased by 1, thus obtaining the distance distribution histogram of all qualifying pixel points in the search area; traversing every bin of the histogram, counting the non-zero bins, summing the values of all non-zero bins, recording the index maxBin of the largest non-zero bin, and dividing the sum of the non-zero bin values by the number of non-zero bins to obtain the average of all non-zero bin values;
the distances of the distance distribution histogram are arranged from small to large; among the bins from index maxBin down to index maxBin/2, the smallest bin index whose value is larger than one eighth of the average bin value is recorded as d2, and the largest bin index whose value is larger than one half of the average bin value is recorded as d1;
and if the indices d1 and d2 exist, traversing the bins from index d1 to the maximum bin index maxBin, and calculating the average distance corresponding to all these bins; this distance is the height of the cubic object, and the volume of the cubic object is calculated according to the length, width and height of the cubic object.
The plane on which the cubic object is placed is the lowest plane in the depth image. When the distance from a qualifying point close to the four vertical planes to the upper surface reaches its maximum, the position of the lowest plane in the image has been reached, i.e., the plane in contact with the lower surface of the cubic object, and the distance from such a point to the upper surface is the closest to the height of the cubic object. Because of noise interference, however, the final height of the cubic object is obtained by counting the distance distribution histogram and averaging the distances of the bins between the maximum bin and those bins whose values exceed half of the average, which improves the stability of the algorithm.
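The histogram procedure of S11 can be sketched as follows. This is a simplified reading: the bin selection keeps only the bins between maxBin/2 and maxBin whose value exceeds half of the non-zero average, standing in for the d1/d2 index search, and all names and defaults are illustrative:

```python
import numpy as np

def estimate_height(dists_to_top, dists_to_side, dt=0.02,
                    bin_width=0.005, n_bins=400):
    """Simplified sketch of S11: pixels near a vertical side plane vote,
    with weight 2 below dt/2 and weight 1 below dt, into a histogram of
    their distance to the top surface; the height is the weighted average
    distance of the strong bins near the far end of the histogram."""
    hist = np.zeros(n_bins)
    for d_top, d_side in zip(dists_to_top, dists_to_side):
        b = min(int(d_top / bin_width), n_bins - 1)
        if d_side < dt / 2:
            hist[b] += 2
        elif d_side < dt:
            hist[b] += 1
    nz = np.nonzero(hist)[0]
    if nz.size == 0:
        return 0.0
    mean_val = hist[nz].mean()
    max_bin = int(nz[-1])
    # Keep the bins between maxBin/2 and maxBin whose value beats half the
    # average non-zero bin value (standing in for the d1 index search).
    strong = [b for b in range(max_bin // 2, max_bin + 1) if hist[b] > mean_val / 2]
    if not strong:
        return 0.0
    bins = np.arange(min(strong), max_bin + 1)
    w = hist[bins]
    centers = (bins + 0.5) * bin_width
    return float((centers * w).sum() / w.sum())
```

With 50 floor pixels at 0.503 m from the top face and 3 noise pixels at 0.103 m, the noise bin falls below the threshold and only the floor bin contributes.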
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A method for accurately calculating the volume of a cubic object based on a depth image is characterized by comprising the following steps:
s1, obtaining a depth image of the target, and respectively obtaining gradient maps in the X direction and the Y direction which have the same size with the depth image; s2, dividing pixel points in the gradient image into a horizontal plane, a left vertical plane and a right vertical plane according to the gradient image, and acquiring an edge point image of the depth image; s3, acquiring segmented images of the upper surface, the left vertical surface and the right vertical surface of the target in the depth image according to the classification result, the gradient map and the edge point map; s4, performing edge line detection on the edge point image, clustering and combining all detected edge lines, and acquiring a collinear edge line group set; s5, extracting corner points according to each group of the longest edge lines of the collinear edge line group set; s6, pairing the extracted corner points; s7, determining the upper surface and the overall contour of the cubic object according to the obtained quadrangle and the segmentation image of the upper surface; s8, fitting the plane of the cubic object in the three-dimensional space according to the upper surface of the cubic object, removing noise points on each plane through repeated fitting for many times, obtaining an ideal point set corresponding to the upper surface, carrying out contour detection according to the ideal point set, and obtaining a contour edge line of the upper surface; s9, determining a quadrilateral of the upper surface according to the contour edge line of the upper surface; s10, fitting four vertical planes of the cubic object where the upper surface is located according to the upper surface quadrangle, and calculating the length and the width of the cubic object; and S11, calculating the height of the cubic object, and calculating the volume of the cubic object according to the length, width and height of the cubic object.
2. The method for accurately calculating the volume of a cubic object based on a depth image as claimed in claim 1, wherein the step of obtaining the segmented images of the upper surface, the left vertical plane and the right vertical plane of the object in the depth image based on the classification result, the gradient map and the edge point map in S3 is as follows:
obtaining a horizontal plane, a left vertical plane and a right vertical plane which have the same size as the gradient map according to the classification result, defining a segmentation image of an upper surface which has the same size as the gradient map, and setting an initial value of a pixel point to be 0;
traversing the pixel points of the first column in the binary image of the horizontal plane from bottom to top, and acquiring the reliability value of each pixel point according to the depth image; given a reliability threshold, if the reliability value of a pixel point is smaller than the given reliability threshold, the state of the pixel point is unknown and the next pixel point is processed; otherwise, if the corresponding pixel value is 255, the state of the pixel point is horizontal, and if the corresponding pixel value is 0, the state of the pixel point is vertical; if the number of pixel points whose value is 0 is greater than a given height threshold, the pixel points whose value is 255 that appear afterwards are mapped to the segmentation image of the upper surface and their pixel values are updated to 255, and the row number of the pixel point at each change of state is recorded; after all pixel points in the first column are traversed, the pixel values at the positions of the upper surface segmentation image corresponding to the pixel points of the first column from the recorded row number to row 1 are set to 0; all columns of the horizontal plane image are traversed in the same way to obtain an initial segmentation image of the upper surface; and the obtained initial segmentation image of the upper surface is repaired to update the segmentation image of the upper surface;
and respectively corresponding the edge points in the edge point graph to the updated upper surface divided image, the left vertical plane binary image and the right vertical plane binary image one by one, updating the pixel point value of the horizontal non-edge point in the updated upper surface divided image to be 0, updating the pixel point value of the non-edge point in the left vertical plane binary image to be 0, updating the pixel point value of the non-edge point in the right vertical plane binary image to be 0, acquiring the final upper surface divided image, the left vertical plane binary image and the right vertical plane binary image, and respectively recording the left vertical plane binary image and the right vertical plane binary image as the left vertical plane divided image and the right vertical plane divided image.
3. The method for accurately calculating the volume of a cubic object based on a depth image as claimed in claim 2, wherein the edge point map of the depth image is obtained by:
calculating the gradient average Yp in the Y direction of all pixel points classified as vertical plane, the gradient average Yq in the Y direction of all pixel points classified as horizontal plane, the gradient average Xp in the X direction of all pixel points of the left vertical plane, and the gradient average Xq in the X direction of all pixel points of the right vertical plane; setting a positive threshold in the Y direction, a negative threshold in the Y direction, a positive threshold in the X direction and a negative threshold in the X direction according to Yp, Yq, Xp and Xq; defining an image with the same size as the gradient maps and setting the initial value of its pixel points to 0; traversing all pixel points of the X-direction gradient map and the Y-direction gradient map: if the absolute value of the Y-direction gradient of a pixel point is larger than the absolute value of its X-direction gradient, and the Y-direction gradient is smaller than the negative threshold in the Y direction or larger than or equal to the positive threshold in the Y direction, the pixel point is an edge point, and the pixel point is mapped to the defined image with its pixel value updated to 255; if the absolute value of the Y-direction gradient is smaller than the absolute value of the X-direction gradient, and the X-direction gradient is smaller than or equal to the negative threshold in the X direction or larger than or equal to the positive threshold in the X direction, the pixel point is an edge point, the pixel point is mapped to the defined image, and the pixel value is updated to 255; the edge point map is thus obtained.
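The per-pixel test of claim 3 maps naturally onto array operations. A vectorized sketch assuming signed gradient maps gx, gy and the four signed thresholds; the function name and argument order are illustrative:

```python
import numpy as np

def edge_point_map(gx, gy, x_neg, x_pos, y_neg, y_pos):
    """Edge-point map of claim 3: a pixel is an edge point when its dominant
    gradient direction crosses the corresponding signed threshold."""
    gx = np.asarray(gx, dtype=float)
    gy = np.asarray(gy, dtype=float)
    y_dom = np.abs(gy) > np.abs(gx)                  # Y gradient dominates
    y_edge = y_dom & ((gy <= y_neg) | (gy >= y_pos))
    x_edge = ~y_dom & ((gx <= x_neg) | (gx >= x_pos))
    return np.where(y_edge | x_edge, 255, 0).astype(np.uint8)
```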
4. The method for accurately calculating the volume of a cubic object based on a depth image as claimed in claim 3, wherein the specific way of performing edge line detection on the edge point map, clustering and merging all detected edge lines, and obtaining the collinear edge line group set is as follows:
performing edge line detection on the edge point map, acquiring all edge lines, and sorting them by length into an edge line set; defining an empty collinear edge line group set, wherein the collinear edge line group set contains a plurality of groups of edge lines, the edge lines within each group are collinear, and the edge lines between two groups are not collinear; first adding the longest edge line into one group of the collinear edge line group set, then judging whether each edge line A in the edge line set is collinear with the longest edge line B of each group in the collinear edge line group set, carrying out the following judgment conditions in sequence:
c1, calculating the angle difference between the edge line A and the edge line B, and judging whether the angle difference is less than or equal to a given angle difference threshold value e;
c2, if the angle difference is less than or equal to the angle difference threshold e, calculating the distances T1 and T2 from the starting point of the B edge line to the starting point and the ending point of the A edge line, and T3 and T4 from the ending point of the B edge line to the starting point and the ending point of the A edge line, giving an inter-point distance threshold G, and judging whether the minimum distance among the distances T1, T2, T3 and T4 is less than or equal to the inter-point distance threshold G or not;
c3, if min (min (T1, T2), min (T3, T4)) ≦ G, calculating the length L of the edge line A and the length H of the edge line B, and judging whether the maximum distance among the distances T1, T2, T3 and T4 is smaller than G + H + L;
c4, if max (max (T1, T2), max (T3, T4)) < (G + H + L), calculating distances S1 and S2 from the starting point and the end point of the B edge line to the A edge line, and distances S3 and S4 from the starting point and the end point of the A edge line to the B edge line, giving a distance threshold J, and if the maximum distance between T1 and T2 is smaller than L + G, judging whether S1 is smaller than or equal to J; if the maximum distance between the distances T3 and T4 is smaller than L + G, judging whether S2 is smaller than or equal to J; if the maximum distance between the distances T1 and T3 is smaller than L + G, judging whether S3 is smaller than or equal to J; if the maximum distance between the distances T2 and T4 is smaller than L + G, judging whether S4 is smaller than or equal to J; if any maximum distance is larger than or equal to L + G, directly executing the next step;
c5, if the condition in C4 is satisfied, the A edge line and the B edge line are collinear, the A edge line is added into the group of the collinear edge line group set, and the edge line of the group is updated;
in the judging process, if any condition is not met, directly processing the longest edge line of the next group in the collinear edge line group set until all groups of the collinear edge line group set are processed, wherein each group of edge lines is represented by the longest edge line of the group; if the A edge line is not collinear with the longest edge line of any group of the collinear edge line group set, directly adding the A edge line into a new group of the collinear edge line group set, and updating the group number of the collinear edge line group set; after the A edge line and the B edge line are judged collinear, judging whether they meet a combination condition: if T1 ≤ H, T2 ≤ H, T3 ≤ H and T4 ≤ H, the A edge line and the B edge line do not meet the combination condition; otherwise, combining the A edge line and the B edge line, updating the starting point and the end point of the B edge line, and updating the collinear edge line group set;
and processing the next edge line in the edge line set in the same way as the edge line A, continuously updating the collinear edge line group set, and sequencing the edge lines of each group in the collinear edge line group set according to the length to obtain the final collinear edge line group set.
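The collinearity test of steps c1-c4 can be sketched as follows. This is a minimal Python illustration, not the patented implementation: the default threshold values e, G, J are assumptions, and c4 is simplified to check all four perpendicular offsets unconditionally rather than gating each on the L + G condition.

```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def _angle(line):
    # direction angle in degrees, folded into [0, 180)
    (x1, y1), (x2, y2) = line
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def _point_to_line_dist(p, line):
    # perpendicular distance from point p to the infinite line through `line`
    (x1, y1), (x2, y2) = line
    num = abs((y2 - y1) * p[0] - (x2 - x1) * p[1] + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def are_collinear(A, B, e=5.0, G=10.0, J=3.0):
    """Conditions c1-c4 for segments A and B, each ((x1,y1),(x2,y2))."""
    # c1: angle difference (so 179 deg and 1 deg differ by 2 deg)
    da = abs(_angle(A) - _angle(B))
    if min(da, 180.0 - da) > e:
        return False
    # c2: minimum endpoint-to-endpoint gap T1..T4 must not exceed G
    T = [_dist(B[0], A[0]), _dist(B[0], A[1]),
         _dist(B[1], A[0]), _dist(B[1], A[1])]
    if min(T) > G:
        return False
    # c3: maximum gap bounded by G + len(B) + len(A)
    L, H = _dist(*A), _dist(*B)
    if max(T) >= G + H + L:
        return False
    # c4 (simplified): all perpendicular offsets must stay within J
    offs = [_point_to_line_dist(B[0], A), _point_to_line_dist(B[1], A),
            _point_to_line_dist(A[0], B), _point_to_line_dist(A[1], B)]
    return max(offs) <= J
```

Two nearly collinear horizontal segments with a small gap pass the test, while perpendicular segments fail at c1.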
5. The method for accurately calculating the volume of a cubic object based on depth images as claimed in claim 4, wherein the specific way of extracting the corner points from each group of the longest edge lines of the set of collinear edge line groups is:
c1, obtaining the longest edge line of each group of the collinear edge line group set, recording the longest edge line as a seed edge line set, sequencing all edge lines of the seed edge line set from long to short, traversing all edge lines of the seed edge line set, calculating an included angle a between the longest C edge line and any other D edge line from the longest C edge line, setting a curvature threshold value b, and judging whether min (a, 180-a) is greater than b;
c2, if min (a, 180-a) > b, calculating the intersection point W (Wx, Wy) of the C edge line and the D edge line, and judging whether Wx and Wy are within the image range;
c3, if Wx and Wy are within the image range, judging whether the intersection point W is a corner point, the conditions being as follows: first, the distances D1 and D2 from the two end points of the C edge line to the intersection point W are calculated and a minimum distance min_D1 is defined, where min_D1 = 0 if the intersection point W lies between the two end points of the C edge line, and min_D1 = min (D1, D2) otherwise; if min_D1 is 0 or min (D1, D2) is less than a given arm length threshold Z1, the distances D3 and D4 from the two end points of the D edge line to the intersection point W are calculated and a minimum distance min_D2 is defined, where min_D2 = 0 if the intersection point W lies between the two end points of the D edge line, and min_D2 = min (D3, D4) otherwise; it is then determined whether min_D2 is 0 or min (D3, D4) is less than a given arm length threshold Z2;
c4, if the condition of C3 is met, giving a quadrilateral arm gap threshold v, judging whether max (min (d1, d2) and min (d3, d 4)) is less than or equal to v, and if the condition is met, determining that the intersection point W is the corner point of the quadrilateral;
sequentially judging the conditions C1 to C4, and if any condition is not met, processing the next edge line;
adding the first acquired corner point into the corner point set, and for each subsequently acquired corner point, calculating the distances L_i (i is a variable) to all corner points already in the corner point set and judging whether the corner point is similar to any of them; if every L_i > r, the corner point is not similar to the existing corner points, and the corner point is added to the corner point set;
the intersection point W corresponds to two arms: the first arm is the connecting line of the intersection point W and the far end point of the C edge line, and the second arm is the connecting line of the intersection point W and the far end point of the D edge line, wherein the far end point of the C edge line is the end point corresponding to the larger of D1 and D2, and the far end point of the D edge line is the end point corresponding to the larger of D3 and D4; any two remaining edge lines in the seed edge line set are processed in the same way to obtain all the corner points, the two arms of each corner point are extended according to the seed edge line set to obtain the final corner point set, and corner point extraction is realized.
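Steps c1-c3 of the corner extraction can be sketched as follows. This is a hedged Python illustration under assumed defaults (curvature threshold b, a single arm-length threshold Z standing in for Z1 and Z2, and an assumed image size); the intersection formula is the standard two-line parametric form.

```python
import math

def intersect(C, D):
    """Intersection of the infinite lines through segments C and D,
    or None if they are (nearly) parallel."""
    (x1, y1), (x2, y2) = C
    (x3, y3), (x4, y4) = D
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-9:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def corner_candidate(C, D, b=30.0, Z=15.0, img_w=640, img_h=480):
    """c1-c3 sketch: included angle above the curvature threshold b,
    intersection W inside the image, and each segment's nearest endpoint
    within arm length Z of W (distance 0 if W lies between the endpoints)."""
    def ang(seg):
        return math.degrees(math.atan2(seg[1][1] - seg[0][1],
                                       seg[1][0] - seg[0][0])) % 180.0
    a = abs(ang(C) - ang(D))
    if min(a, 180.0 - a) <= b:             # c1: too close to parallel
        return None
    W = intersect(C, D)
    if W is None or not (0 <= W[0] < img_w and 0 <= W[1] < img_h):
        return None                         # c2: outside the image range
    for seg in (C, D):                      # c3: arm-length test per segment
        d1 = math.hypot(W[0] - seg[0][0], W[1] - seg[0][1])
        d2 = math.hypot(W[0] - seg[1][0], W[1] - seg[1][1])
        length = math.hypot(seg[1][0] - seg[0][0], seg[1][1] - seg[0][1])
        between = abs(d1 + d2 - length) < 1e-6
        if not between and min(d1, d2) >= Z:
            return None
    return W
```

For a horizontal and a vertical segment crossing at (50, 0), the candidate corner is returned; two parallel segments yield no candidate.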
6. The method for accurately calculating the volume of a cubic object based on depth images as claimed in claim 5, wherein the extracted corner points are paired in a specific manner as follows:
c1, pairing all the corner points in the corner point set in pairs, if the coordinates of the two paired corner points are different, continuously judging whether the two corner points share one arm, marking the two arms of one corner point as arm11 and arm12, marking the two arms of the other corner point as arm21 and arm22, and firstly judging whether arm11 and arm21 are the same arm of the two corner points;
c2, judging the mode that arm11 and arm21 are the same arm: if the distance between two corner points is greater than one fourth of the minimum value of arm11 and arm21, firstly calculating the absolute value x1 of the angle difference between arm11 and the connecting line of the two corner points, the absolute value x2 of the angle difference between arm21 and the connecting line of the two corner points, and the value ranges of x1 and x2 are [0,180], giving an angle similarity threshold value f, if x1 and x2 are both smaller than the angle similarity threshold value f, giving an inter-corner distance threshold value u and a length proportion threshold value ratio, if the length sum of arm11 and arm21 is greater than or equal to the ratio times of the distance between the two corner points and is greater than or equal to the difference between the distance between the two corner points and u, judging that arm11 and arm21 are the same arm, and successfully pairing the corner points;
otherwise, judging whether any other two arms of the two angular points are the same arm by adopting the same method, if not, failing to pair the angular points, otherwise, successfully pairing the angular points;
pairwise pairing the rest corner points in the same manner to obtain all successfully paired corner point combinations, wherein the same arm of each corner point combination is a shared arm and the other two arms are opening arms; included angles u1 and u2 between the two opening arms and the shared arm in each corner point combination are calculated, the value ranges of the included angles u1 and u2 are [0,180], and the corner point combinations with the included angle u1 or u2 smaller than the curvature threshold b are removed; judging whether the two opening arms of each remaining corner point combination are on the same side of the shared arm, removing the corner point combinations whose two opening arms are not on the same side of the shared arm, and finally determining a quadrangle according to the currently paired corner point combinations; all corner point combinations are processed in the same way to obtain all quadrilaterals.
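The shared-arm test of step c2 can be sketched as follows. This is an illustrative Python sketch only: the default thresholds f, u and ratio are assumptions, arms are represented as vectors from the corner to the arm endpoint, and the second arm is compared against the reversed corner-to-corner direction (an assumed reading of "the connecting line of the two corner points" for the arm anchored at the second corner).

```python
import math

def same_arm(arm1, arm2, p1, p2, f=10.0, u=20.0, ratio=0.8):
    """c2 sketch: arm1/arm2 are arm vectors (endpoint minus corner) of
    corner points p1, p2; they count as "the same arm" when both roughly
    point along the line joining the corners and their combined length
    roughly covers the inter-corner distance."""
    def ang(v):
        return math.degrees(math.atan2(v[1], v[0])) % 360.0
    link = (p2[0] - p1[0], p2[1] - p1[1])
    dist = math.hypot(*link)
    len1, len2 = math.hypot(*arm1), math.hypot(*arm2)
    if dist <= min(len1, len2) / 4.0:       # corners too close to decide
        return False
    # absolute angle differences, folded into [0, 180]
    x1 = abs(ang(arm1) - ang(link)) % 360.0
    x1 = min(x1, 360.0 - x1)
    back = (-link[0], -link[1])             # arm2 should point back toward p1
    x2 = abs(ang(arm2) - ang(back)) % 360.0
    x2 = min(x2, 360.0 - x2)
    if x1 >= f or x2 >= f:
        return False
    total = len1 + len2
    return total >= ratio * dist and total >= dist - u
```

Two collinear arms facing each other along the corner-to-corner line pair successfully; a perpendicular second arm does not.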
7. The method for accurately calculating the volume of a cubic object based on depth images as claimed in claim 6, wherein the specific manner of obtaining the contour edge line of the upper surface in S8 is:
c1, traversing all the detected pixel points on the upper surface of each cubic object, and acquiring world coordinate points of all the pixel points on the upper surface in the three-dimensional space according to the transformation relation between the image coordinate system and the world coordinate system;
c2, traversing all world coordinate points on the current upper surface, giving a point-count threshold value, and judging whether the number of points is greater than the given point-count threshold value;
c3, if the number of points is larger than the point-count threshold value, calculating the mean point of all the points and carrying out normalization processing; all coordinate points on the upper surface of each cubic object are located on the same plane in the three-dimensional space and the mean point is located on that plane, so the normal vector of the three-dimensional plane where the mean point is located is calculated, and a primary fitted three-dimensional plane equation is acquired according to the normal vector and the mean point;
c4, calculating the distance between all world coordinate points on the upper surface and the primary fitting three-dimensional plane, giving a distance threshold value, removing all points with the distance greater than the given distance threshold value, obtaining an ideal point set, fitting again according to the ideal point set, and obtaining a secondary fitting three-dimensional plane equation;
c5, repeating the above step to fit multiple times; after each fitting, calculating the distance from all points of the ideal point set to the fitted three-dimensional plane, eliminating all points whose distance is larger than the given distance threshold value, and updating the ideal point set;
c6, acquiring plane coordinate points of the two-dimensional plane corresponding to all world coordinate points of the upper ideal point set of the current upper surface, detecting the contour according to the plane coordinate points, acquiring all contours, calculating the overlapping area of the circumscribed rectangle of each contour and the circumscribed rectangle of the current upper surface, searching the contour with the largest overlapping area and larger than a given threshold, wherein the contour is the final contour of the upper surface, calculating the convex hull of the upper surface according to the contour, acquiring the corresponding fitting polygon according to the convex hull, acquiring all edge lines of the fitting polygon of the upper surface, and sequencing from long to short, wherein the edge lines are the contour edge lines of the upper surface.
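The iterative plane fitting of steps c3-c5 can be sketched as follows. This is a hedged illustration, not the patent's implementation: it fits the explicit form z = a·x + b·y + c by least squares (which assumes the upper surface is not vertical to the camera's z axis), solves the 3x3 normal equations with Cramer's rule, and uses an assumed distance threshold and fixed iteration count.

```python
import math

def fit_plane_lsq(pts):
    """Least-squares plane z = a*x + b*y + c via the 3x3 normal equations,
    solved with Cramer's rule. pts is a list of (x, y, z)."""
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sz = sum(p[2] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts); syy = sum(p[1] * p[1] for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    sxz = sum(p[0] * p[2] for p in pts); syz = sum(p[1] * p[2] for p in pts)

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]
    d = det3(M)
    sol = []
    for j in range(3):                      # Cramer's rule, column by column
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = r[i]
        sol.append(det3(Mj) / d)
    return tuple(sol)                       # (a, b, c)

def refit_with_outlier_removal(pts, dist_thresh=0.1, iters=3):
    """c3-c5 sketch: fit, drop points farther than dist_thresh from the
    plane, refit; repeat a fixed number of times."""
    ideal = list(pts)
    for _ in range(iters):
        a, b, c = fit_plane_lsq(ideal)
        norm = math.sqrt(a * a + b * b + 1.0)
        kept = [p for p in ideal
                if abs(a * p[0] + b * p[1] + c - p[2]) / norm <= dist_thresh]
        if len(kept) < 3 or len(kept) == len(ideal):
            return (a, b, c), ideal
        ideal = kept
    return fit_plane_lsq(ideal), ideal
```

With five points on the plane z = 1 and one outlier at z = 5, the first fit is skewed by the outlier, the outlier is rejected, and the refit recovers the exact plane.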
8. The method for accurately calculating the volume of a cubic object based on depth images as claimed in claim 7, wherein the determining the quadrilateral on the upper surface according to the contour edge line of the upper surface in S9 is performed by:
giving a parallel angle difference threshold value j, judging whether the angle difference of any two contour edge lines is smaller than j, if the condition is met, the two contour edge lines are in the same group currently, grouping all the contour edge lines, if the total number after grouping is larger than or equal to 2, calculating the total length of the contour edge lines of each group and sequencing the total length from large to small, recording the two groups with the maximum total length as a group 1 and a group 2, wherein the group 1 comprises one group of approximately parallel sides of a quadrangle on the upper surface, the group 2 comprises the other group of approximately parallel sides of the quadrangle on the upper surface, and if the total number after grouping is smaller than 2, processing the next upper surface;
respectively calculating the distance difference of any two contour edge lines in the group 1 and the group 2, giving a parallel distance difference threshold value, and respectively dividing the contour edge lines whose distance difference is smaller than the parallel distance difference threshold value in the group 1 and the group 2 into the same group; if the total number of the divided groups in the group 1 is greater than or equal to 2, calculating the total length of the edge lines of each group in the group 1, sorting from large to small, and recording the two groups with the maximum total length as the group 11 and the group 12; if the total number of the divided groups in the group 2 is greater than or equal to 2, calculating the total length of the edge lines of each group in the group 2, sorting from large to small, and recording the two groups with the maximum total length as the group 21 and the group 22; and if the total number of the divided groups of the group 1 or the group 2 is smaller than 2, processing the next upper surface;
and traversing all pixel points of the upper surface contour, respectively calculating the distance from each pixel point to all contour edge lines of the group 11, the group 12, the group 21 and the group 22, and obtaining the contour edge line with the minimum group distance and smaller than a given threshold, namely the four edges of the upper surface.
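The first grouping step of claim 8, collecting contour edge lines whose direction angles differ by less than the parallel angle difference threshold j, can be sketched as follows. This is a greedy single-pass illustration with an assumed default for j; the patent does not specify the grouping order.

```python
import math

def group_by_angle(edges, j=10.0):
    """Greedy grouping of contour edge lines (each ((x1,y1),(x2,y2)))
    whose direction angles differ by less than j degrees, angles folded
    into [0, 180). Groups are returned sorted by total edge length."""
    def ang(seg):
        return math.degrees(math.atan2(seg[1][1] - seg[0][1],
                                       seg[1][0] - seg[0][0])) % 180.0

    def length(seg):
        return math.hypot(seg[1][0] - seg[0][0], seg[1][1] - seg[0][1])

    groups = []                       # list of (representative_angle, [edges])
    for e in edges:
        a = ang(e)
        for g in groups:
            d = abs(g[0] - a)
            if min(d, 180.0 - d) < j:  # near-parallel -> same group
                g[1].append(e)
                break
        else:
            groups.append((a, [e]))
    groups.sort(key=lambda g: sum(length(e) for e in g[1]), reverse=True)
    return [g[1] for g in groups]
```

Two horizontal and two vertical edges split into two groups, with the longer (horizontal) group ranked first, matching the "group 1 / group 2" selection by total length.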
9. The method for accurately calculating the volume of a cubic object based on depth images as claimed in claim 7, wherein the step of fitting the quadrilateral shape of the upper surface to the four vertical planes of the cubic object on which the upper surface is located in S10 is performed by:
acquiring a normal vector of the upper surface in a three-dimensional space, acquiring a world coordinate of an edge corresponding to the three-dimensional space for any edge of a quadrilateral of the upper surface, belonging to a vertical plane of a cubic object corresponding to the upper surface, constructing a point which is parallel to the edge and belongs to the vertical plane of the edge along the downward direction of the normal vector, and fitting a vertical plane of the edge in the three-dimensional space according to the constructed point and all edge points of the edge;
fitting the vertical planes of the three-dimensional space of the rest three sides of the upper surface quadrangle in the same way to obtain four vertical planes of the cubic object corresponding to the current upper surface;
regarding any one side of the quadrangle on the upper surface, marking as side 1, marking as side 2 the side approximately parallel to the side, calculating the distance from all edge points of side 1 to the vertical plane of side 2, simultaneously calculating the distance from all edge points of side 2 to the vertical plane of side 1, and calculating the average value of all the distances as the length of the cubic object corresponding to the current upper surface;
and respectively recording the other two approximately parallel edges of the quadrilateral on the upper surface as an edge 3 and an edge 4, calculating the distances from all edge points of the edge 3 to the vertical plane where the edge 4 is located, simultaneously calculating the distances from all edge points of the edge 4 to the vertical plane where the edge 3 is located, and calculating the average value of all the distances to be used as the width of the cubic object corresponding to the current upper surface.
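The length and width estimates of claim 9 reduce to averaging point-to-plane distances between opposite sides. A minimal sketch, assuming each vertical plane is already given in implicit form a·x + b·y + c·z + d = 0:

```python
import math

def point_plane_dist(p, plane):
    """Distance from 3-D point p to plane (a, b, c, d) with a*x+b*y+c*z+d=0."""
    a, b, c, d = plane
    return abs(a * p[0] + b * p[1] + c * p[2] + d) / math.sqrt(a * a + b * b + c * c)

def side_length(edge_pts_1, edge_pts_2, plane_1, plane_2):
    """Length (or width) estimate: mean of the distances from side 1's edge
    points to side 2's vertical plane and from side 2's edge points to
    side 1's vertical plane."""
    dists = [point_plane_dist(p, plane_2) for p in edge_pts_1]
    dists += [point_plane_dist(p, plane_1) for p in edge_pts_2]
    return sum(dists) / len(dists)
```

Averaging in both directions makes the estimate symmetric in the two sides and damps noise in individual edge points.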
10. The method for accurately calculating the volume of a cubic object based on depth images as claimed in claim 9, wherein the height of the cubic object is calculated by:
determining a search area containing the cubic object according to the outline of the cubic object, traversing all pixel points on the search area, acquiring corresponding world coordinates of the pixel points, judging whether the minimum value of the distances from each pixel point to four vertical planes in the three-dimensional space is less than or equal to a given point-to-plane minimum distance threshold dt, and processing the next pixel point if the condition is not met;
if the condition is met, calculating the distance from the pixel point to the upper surface, and obtaining the distances to the upper surface of all pixel points in the search area that meet the condition, wherein each distance corresponds to one square column (histogram bin), the number of square columns is chosen to exceed the number of distances, and the initial histogram value of every square column is 0; if the distance from the pixel point to the vertical plane is less than dt/2, the histogram value corresponding to the pixel point is increased by 2, and if the distance from the pixel point to the vertical plane is greater than dt/2 and less than dt, the histogram value corresponding to the pixel point is increased by 1, thus obtaining the distance distribution histogram corresponding to all the pixel points meeting the condition in the search area; traversing each square column value of the histogram, calculating the number of non-zero square columns, the sum of the values of all the non-zero square columns and the corresponding maximum square column label maxBin, and calculating the average of all the non-zero square column values by dividing that sum by the number of non-zero square columns;
the distances of the distance distribution histogram are arranged from small to large; among the square columns from the column label maxBin down to the column label maxBin/2, the smallest column label d2 whose column value is larger than one eighth of the average column value is recorded, and the largest column label d1 whose column value is larger than one half of the average column value is recorded;
and if the labels d1 and d2 exist, traversing the square columns from the label d1 to the maximum square column label maxBin and calculating the average distance corresponding to all these square columns, wherein the distance is the height of the cubic object; the volume of the cubic object is then calculated from its length, width and height.
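The weighted distance histogram of claim 10 can be sketched as follows. This is a simplified illustration under stated assumptions: an assumed bin width, only the d1 label is used for the final averaging window (d2 is omitted), and the retained bins are averaged by their centre distances rather than by the raw samples.

```python
def height_from_histogram(dists, near, dt, bin_width=1.0):
    """Sketch of the height step: build a histogram of pixel-to-upper-surface
    distances, weighting each sample by 2 if its distance to the nearest
    vertical plane is below dt/2 and by 1 if it lies between dt/2 and dt,
    then average the distances of the bins from d1 (the largest bin label,
    scanning down from maxBin to maxBin/2, whose value exceeds half the mean
    non-zero bin value) up to the fullest bin maxBin.

    dists: distance of each pixel to the upper-surface plane
    near:  distance of each pixel to its nearest vertical plane"""
    nbins = int(max(dists) / bin_width) + 1
    hist = [0] * nbins
    for h, v in zip(dists, near):
        if v < dt / 2:
            hist[int(h / bin_width)] += 2    # strongly supported sample
        elif v < dt:
            hist[int(h / bin_width)] += 1    # weakly supported sample
    nonzero = [(i, x) for i, x in enumerate(hist) if x > 0]
    if not nonzero:
        return None
    mean = sum(x for _, x in nonzero) / len(nonzero)
    max_bin = max(nonzero, key=lambda t: t[1])[0]
    d1 = None
    for i in range(max_bin, max_bin // 2 - 1, -1):
        if hist[i] > mean / 2:               # first hit is the largest label
            d1 = i
            break
    if d1 is None:
        return None
    span = [i for i in range(d1, max_bin + 1) if hist[i] > 0]
    # average bin-centre distance over the retained bins
    return sum((i + 0.5) * bin_width for i in span) / len(span)
```

Samples clustered near a height of 10 that lie close to the vertical planes dominate the histogram, while samples far from every vertical plane receive zero weight and drop out.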
CN201910990406.1A 2019-10-17 2019-10-17 Accurate calculation method for cubic object volume based on depth image Pending CN110689568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910990406.1A CN110689568A (en) 2019-10-17 2019-10-17 Accurate calculation method for cubic object volume based on depth image


Publications (1)

Publication Number Publication Date
CN110689568A true CN110689568A (en) 2020-01-14

Family

ID=69113114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910990406.1A Pending CN110689568A (en) 2019-10-17 2019-10-17 Accurate calculation method for cubic object volume based on depth image

Country Status (1)

Country Link
CN (1) CN110689568A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111561872A (en) * 2020-05-25 2020-08-21 中科微至智能制造科技江苏股份有限公司 Method, device and system for measuring package volume based on speckle coding structured light
CN112254635A (en) * 2020-09-23 2021-01-22 洛伦兹(北京)科技有限公司 Volume measurement method, device and system
CN112254635B (en) * 2020-09-23 2022-06-28 洛伦兹(北京)科技有限公司 Volume measurement method, device and system
CN114979784A (en) * 2022-04-13 2022-08-30 浙江大华技术股份有限公司 Target video playing method and device, electronic device and storage medium
CN114979784B (en) * 2022-04-13 2024-01-09 浙江大华技术股份有限公司 Playing method and device of target video, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN110570471A (en) cubic object volume measurement method based on depth image
CN108010036B (en) Object symmetry axis detection method based on RGB-D camera
CN107369161B (en) Scattered workpiece point cloud segmentation method based on improved Euclidean clustering
CN111507390B (en) Storage box body identification and positioning method based on contour features
CN107610176B (en) Pallet dynamic identification and positioning method, system and medium based on Kinect
US9327406B1 (en) Object segmentation based on detected object-specific visual cues
CN110689568A (en) Accurate calculation method for cubic object volume based on depth image
JP4865557B2 (en) Computer vision system for classification and spatial localization of bounded 3D objects
CN109272523B (en) Random stacking piston pose estimation method based on improved CVFH (continuously variable frequency) and CRH (Crh) characteristics
CN106683137B (en) Artificial mark based monocular and multiobjective identification and positioning method
CN110751640A (en) Quadrangle detection method of depth image based on angular point pairing
CN109559324B (en) Target contour detection method in linear array image
CN110728246A (en) Cubic object identification method based on depth image
CN112070838B (en) Object identification and positioning method and device based on two-dimensional-three-dimensional fusion characteristics
CN103727930A (en) Edge-matching-based relative pose calibration method of laser range finder and camera
CN104851095B (en) The sparse solid matching method of workpiece image based on modified Shape context
CN110807781A (en) Point cloud simplification method capable of retaining details and boundary features
CN113362385A (en) Cargo volume measuring method and device based on depth image
CN110751688A (en) Cubic object volume calculation method based on depth image and capable of eliminating noise
Liu et al. Local regularity-driven city-scale facade detection from aerial images
Burger et al. Fast dual decomposition based mesh-graph clustering for point clouds
CN107610174A (en) A kind of plane monitoring-network method and system based on depth information of robust
CN113128346B (en) Target identification method, system and device for crane construction site and storage medium
CN116843742B (en) Calculation method and system for stacking volume after point cloud registration for black coal loading vehicle
CN115546202B (en) Tray detection and positioning method for unmanned forklift

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination