CN115682941B - Packing box geometric dimension measuring method based on structured light camera
Classifications
- Y02P 90/30: Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of packing box dimension detection and discloses a packing box geometric dimension measuring method based on a structured light camera, which comprises the following steps: S1: correcting the workbench plane to obtain a measurement reference plane; S2: acquiring a depth map of the packing box to be measured; S3: locating the packing box to be measured according to the depth map obtained in step S2; S4: geometric dimension measurement: S41: reconstructing a point cloud from the depth map data of the packing box to be measured located in step S33, and computing with the correction matrix obtained in step S1 to obtain the point cloud to be processed; S42: mapping the point cloud onto a two-dimensional plane and calculating the length and width of the packing box to be measured with a detection algorithm that first extracts the edges and then solves the minimum circumscribed rectangle; S43: obtaining the height of the packing box to be measured by calculating the distance from the target point cloud to the measurement reference plane. Compared with the prior art, the invention achieves the aim of measuring the geometric dimensions of the packing box quickly and accurately.
Description
Technical Field
The invention relates to the technical field of packing box size detection, in particular to a packing box geometric dimension measuring method based on a structured light camera.
Background
From 2012 to 2021, the annual express delivery volume in China grew from 5.7 billion to 108.3 billion parcels, an increase of roughly 18-fold. In the express business, about 80% of goods are packaged in cartons, and the national postal administration has issued an implementation plan for promoting green packaging in the express industry, with the explicit aims of raising the packaging standardization rate and lowering the packaging cost rate and transportation cost. Making reasonable use of the geometric dimension information of carton packages benefits logistics and transportation planning, improving volume utilization while reducing transportation cost; obtaining carton geometric dimensions efficiently and accurately is therefore an important factor in improving efficiency and reducing cost in the express logistics industry. The traditional approach of measuring carton dimensions manually can hardly keep pace with the rapidly developing express logistics industry, whereas non-contact measurement technology has unique advantages in adapting to more complex measurement environments and in acquiring the surface contour information of the measured target efficiently and accurately. Applied research on light-curtain measurement provides a useful reference for non-contact geometric dimension measurement, but light-curtain measurement accuracy is strongly affected by placement accuracy, speed accuracy and the accuracy of the device itself, so its limitations are considerable. The development of machine vision offers further options for non-contact measurement: binocular vision combined with a point laser measures regular logistics goods quickly by extracting the spatial coordinates of key points on the goods; line structured light scanning combined with binocular vision measures the volume of irregular objects through three-dimensional point cloud integration; point cloud slicing algorithms likewise measure the volume of irregular objects; binocular vision alone measures the volume of regular parcels through various feature extraction algorithms; two-dimensional images measure the volume of regular objects through single-view 3D reconstruction; and YOLOv3 combined with an Intel RealSense D435i depth camera locates the target object with a YOLO model and then estimates its volume from a single surface. Among such non-contact vision measurement techniques, however, the application of the monocular structured light camera to volumetric dimension measurement has received very little attention in previous studies.
Disclosure of Invention
The invention aims to provide a packing box geometric dimension measuring method based on a structured light camera to solve the above technical problems.
A packing box geometric dimension measuring method based on a structured light camera comprises the following steps:
s1: correcting the workbench plane to obtain a measurement reference plane, wherein the measurement reference plane is parallel to the imaging plane of the structured light camera, and the centroid of the measurement reference plane coincides with the centroid of the original point cloud of the workbench plane;
s2: acquiring a depth map of the packaging box to be measured;
s3: positioning the packaging box to be measured according to the depth map obtained in the step S2:
s31: traversing pixel values of each point of the detection area by taking the plane of the workbench as a reference, realizing coarse extraction of a target through pixel difference, and judging whether the target exists according to the number of effective difference pixel points; if not, returning to S2; if yes, entering S32;
s32: performing morphological denoising on the target extracted in the step S31;
s33: realizing the arm-interference prevention function through the coordinate position of the connected domain, and locating the position of the packing box to be measured through the connected domain;
s4: geometric dimension measurement:
s41: reconstructing a point cloud from the depth map data of the packing box to be measured located in step S33, and computing with the correction matrix obtained in step S1 to obtain the point cloud to be processed;
s42: calculating the length and width of the packing box to be measured by mapping the point cloud onto a two-dimensional plane and applying a detection algorithm that extracts the edges and then solves the minimum circumscribed rectangle;
s43: calculating the distance from the target point cloud to the measurement reference plane to obtain the height of the packing box to be measured.
According to an embodiment of the present invention, the method for preventing arm interference in S33 is as follows:
s331: determining a detection area of the structured light camera on the working platform according to the fixed height of the structured light camera;
s332: processing the detection area depth map as in S32, then carrying out connected domain analysis, and setting an anti-interference band with a fixed pixel width at the peripheral edge of the detection area image;
s333: judging whether there is an occluding object on the anti-interference band according to whether the edge coordinates of the connected domain fall on the anti-interference band; if not, entering S4; if yes, returning to S2.
According to an embodiment of the present invention, S1 includes the steps of:
s11: establishing a coordinate system and an XOY plane of the structured light camera by taking the structured light camera as an origin;
s12: acquiring a plane depth map of a workbench and generating point cloud;
s13: fitting the workbench plane point cloud to obtain plane equation parameters to obtain a workbench plane normal vector;
s14: calculating an included angle and a rotation matrix by a normal vector of a workbench plane and a normal vector of a camera imaging plane;
s15: calculating to obtain a point cloud of primary plane correction according to the rotation matrix;
s16: calculating the centroids of the preliminary plane-correction point cloud and the original point cloud to obtain a translation matrix;
s17: and fusing the rotation matrix and the translation matrix to obtain a final transformation matrix for plane correction.
According to an embodiment of the present invention, S13 includes the steps of:
normalizing the table plane normal vector $\vec{n}=(a,b,c)$ obtained from the plane equation parameters to make it dimensionless, setting the normalized normal vector as $\hat{n}=\vec{n}/\lVert\vec{n}\rVert$, and defining the direction of the unit vector as pointing in the positive z-axis direction.
According to an embodiment of the present invention, S14 includes the steps of:
the included angle θ between the normalized plane normal $\hat{n}$ and the camera imaging plane normal $\vec{z}=(0,0,1)^{T}$ is derived as:

$$\cos\theta=\frac{\hat{n}\cdot\vec{z}}{\lVert\hat{n}\rVert\,\lVert\vec{z}\rVert}$$

namely:

$$\theta=\arccos\left(\hat{n}\cdot\vec{z}\right)$$

then, taking the rotation axis $\vec{k}=\dfrac{\hat{n}\times\vec{z}}{\lVert\hat{n}\times\vec{z}\rVert}$, a rotation matrix is derived from the Rodrigues rotation formula:

$$R=I+\sin\theta\,[\vec{k}]_{\times}+(1-\cos\theta)\,[\vec{k}]_{\times}^{2}$$

where $[\vec{k}]_{\times}$ is the skew-symmetric cross-product matrix of $\vec{k}$.
according to an embodiment of the present invention, S15 includes the steps of:
With $P=(x,y,z)^{T}$ representing the original point cloud coordinates and $P'=(x',y',z')^{T}$ representing the point cloud coordinates after the preliminary correction, the preliminary correction process is represented by multiplying the original point cloud coordinates by the rotation matrix R:

$$P'=R\,P$$

the coordinate conversion relation is as follows:

$$\begin{pmatrix}x'\\y'\\z'\end{pmatrix}=R\begin{pmatrix}x\\y\\z\end{pmatrix}$$
according to an embodiment of the present invention, S16 includes the steps of:
the point cloud centroid coordinates are calculated by the formula:

$$\bar{P}=\frac{1}{N}\sum_{i=1}^{N}P_{i}$$

taking the original point cloud centroid as $\bar{P}_{0}$ and the preliminary plane-correction point cloud centroid as $\bar{P}_{1}$, the translation amount is:

$$t=\bar{P}_{0}-\bar{P}_{1}$$
according to an embodiment of the present invention, S17 includes the steps of:
if the plane correction process is first rotation and then translation, the transformation matrix can be expressed as:

$$T=\begin{bmatrix}R & t\\ \mathbf{0}^{T} & 1\end{bmatrix}$$

the final coordinate conversion relation is as follows:

$$\begin{pmatrix}x'\\y'\\z'\\1\end{pmatrix}=T\begin{pmatrix}x\\y\\z\\1\end{pmatrix},\quad\text{i.e.}\quad P'=R\,P+t$$
according to an embodiment of the present invention, S41 includes the steps of:
obtaining the calibrated camera intrinsic parameters $f_{x}$, $f_{y}$, $c_{x}$, $c_{y}$ from the structured light camera, where $f_{x}$ and $f_{y}$ respectively represent the scale factors of the pixels along the u and v axis directions of the image plane, and $c_{x}$ and $c_{y}$ respectively represent the x and y coordinates of the image principal point; according to the pinhole imaging principle, a pixel $(u,v)$ with depth value $z$ gives:

$$x=\frac{(u-c_{x})\,z}{f_{x}},\qquad y=\frac{(v-c_{y})\,z}{f_{y}}$$

and the point cloud to be processed is then obtained according to the plane correction coordinate conversion formula of S17.
According to an embodiment of the present invention, S42 includes the steps of:
s421: obtaining from the depth map connected domain analysis the circumscribed rectangle of the target area, with x range $[x_{\min},x_{\max}]$ and y range $[y_{\min},y_{\max}]$, together with the Mask map;
s422: within the range of the circumscribed rectangle, starting from its leftmost column $x_{\min}$, traversing the pixels of the target depth map of S31 from left to right; when a pixel lies at a corresponding position of the Mask map with a non-zero pixel value, stopping the left-to-right traversal and reconstructing the point cloud of that pixel to obtain the space coordinates $(x,y,z)$; since the z value of the plane-corrected point cloud does not affect the position of the edge, only $(x,y)$ is stored into the left edge mapping point set; the y axis starts from $y_{\min}$, and the pixels of the target depth map of S31 are traversed row by row from top to bottom, extracting the left-edge mapping points until the y axis reaches $y_{\max}$, whereby the left edge point set of the target is extracted;
s423: within the range of the circumscribed rectangle, starting from its rightmost column $x_{\max}$, traversing the pixels of the target depth map of S31 from right to left; when a pixel lies at a corresponding position of the Mask map with a non-zero pixel value, stopping the right-to-left traversal and reconstructing the point cloud of that pixel to obtain the space coordinates $(x,y,z)$; only $(x,y)$ is stored into the right edge mapping point set; the y axis starts from $y_{\min}$, and the pixels of the target depth map of S31 are traversed row by row from top to bottom, extracting the right-edge mapping points until the y axis reaches $y_{\max}$, whereby the right edge point set of the target is extracted;
s424: merging the left and right edge mapping point sets of S422 and S423 to obtain the complete target edge mapping point set; solving the minimum circumscribed rectangle of this point set, which contains only $(x,y)$ coordinates, yields the length and width of the target.
Compared with the prior art, the packing box geometric dimension measuring method based on the structured light camera has the following advantages:
according to the packing box geometric dimension measuring method based on the structured light camera, the point cloud mapping two-dimensional plane is used, the length and the width of the packing box to be measured are calculated through the detection algorithm of extracting the edges and then solving the minimum circumscribed rectangle, the detection accuracy can be guaranteed while the detection time efficiency is considered, and the aim of measuring the geometric dimension of the packing box quickly and accurately can be achieved.
Drawings
FIG. 1 is a flow chart of a packing box geometric dimension measuring method based on a structured light camera of the invention;
FIG. 2 is a schematic view of the measuring system of the present invention measuring the packing box to be measured;
FIG. 3 is a schematic diagram showing the positional relationship between the original table plane and the formed measurement reference plane after the table plane is subjected to the plane correction in step S1;
FIG. 4 is a comparison of the results before and after the coarse target extraction by pixel difference in step S31;
fig. 5 is a comparison diagram before and after morphological denoising of the target extracted in S31 in step S32;
fig. 6 is a Mask map acquired in step S421;
fig. 7 is a left edge point set diagram obtained in step S422;
fig. 8 is a right edge point set diagram obtained in step S423;
fig. 9 is a diagram of the complete edge point set synthesized in step S424;
FIG. 10 is the original point cloud obtained by sampling a packing box to be measured;
fig. 11 is the down-sampling result after the point cloud is sparsified by the voxel down-sampling method in step S43;
fig. 12 is a point cloud down-sampled in step S43;
FIG. 13 is a diagram illustrating the result of the statistical filtering performed on FIG. 12 in step S43;
in the figures: 1. structured light camera; 2. support; 3. packing box to be measured.
The implementation and advantages of the functions of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In the following description, for purposes of explanation, numerous implementation details are set forth in order to provide a thorough understanding of various embodiments of the present invention. It should be understood, however, that these implementation details should not be taken to limit the invention. That is, in some embodiments of the invention, such implementation details are not necessary. In addition, some conventional structures and components are shown in simplified schematic form in the drawings.
It should be noted that all directional indicators (such as up, down, left, right, front, back, etc.) in the embodiments of the present invention are only used to explain the relative positional relationship, motion and the like between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly.
In addition, the descriptions related to the first, the second, etc. in the present invention are only used for description purposes, do not particularly refer to an order or sequence, and do not limit the present invention, but only distinguish components or operations described in the same technical terms, and are not understood to indicate or imply relative importance or implicitly indicate the number of indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one of the feature. In addition, technical solutions between the embodiments may be combined with each other, but must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory to each other or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
For a further understanding of the contents, features and effects of the present invention, reference will now be made to the following examples, which are illustrated in the accompanying drawings and described in the following detailed description:
referring to fig. 1, the invention discloses a packing box geometric dimension measuring method based on a structured light camera, comprising the following steps:
s1: correcting the workbench plane to obtain a measurement reference plane; the measurement reference plane is parallel to the imaging plane of the structured light camera, and its centroid coincides with the centroid of the original point cloud of the workbench plane;
When the measurement system is installed, an ideal state usually cannot be achieved, and a certain included angle exists between the imaging plane of the structured light camera 1 and the workbench plane, that is, between the XOY plane of the camera coordinate system with the structured light camera 1 as the origin and the workbench plane. For the convenience of subsequent data processing, this tilt angle needs to be corrected, and the workbench plane is corrected to be parallel to the imaging plane of the structured light camera 1.
S11: establishing a coordinate system and an XOY plane of the structured light camera by taking the structured light camera as an original point;
Before the geometric dimensions of the packing box 3 to be measured are measured, the measurement system is set up as shown in fig. 2. In the invention, a structured light camera 1 is adopted as the acquisition device of the measurement system and is fixed vertically downward on an extending support 2. The structured light camera 1 is an Orbbec Astra monocular structured light camera, which acquires a color image and a depth image simultaneously and supports real-time measurement and display; its effective working distance is 0.6 m to 8 m, its optimal working distance is 1 m, and the field of view of the depth image is H 58.4° × V 45.5°. Based on this measurement system, the structured light camera 1 is mounted on the support 2 at a height of 1.4 m above the workbench plane, so that packing box dimensions can be measured within a frustum-shaped region whose upper surface is 660 × 480 mm, whose lower surface is 1500 × 1100 mm, and whose height is 800 mm.
S12: acquiring a plane depth map of a workbench and generating point cloud;
Referring to fig. 2, the depth map of the worktable plane is acquired by the structured light camera without the packing box 3 to be measured being placed, and the corresponding depth map is used to generate point cloud data containing only the worktable plane.
S13: fitting the workbench plane point cloud to obtain plane equation parameters to obtain a workbench plane normal vector;
Since the correction of the worktable plane is a preliminary operation performed before the dimensions of the packing box 3 to be measured are measured, it does not affect the time complexity of the detection algorithm as a whole, and time can be traded for accuracy. The measurement system therefore uses a random sample consensus (RANSAC) algorithm with a high number of iterations to fit the plane, obtaining the plane equation parameters $(a,b,c,d)$ of $ax+by+cz+d=0$.
The normal vector $\vec{n}=(a,b,c)$ is normalized to make it dimensionless, the normalized normal vector is set as $\hat{n}=\vec{n}/\lVert\vec{n}\rVert$, and the direction of the unit vector is defined as pointing in the positive z-axis direction.
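A minimal sketch of this plane-fitting and normalization step, assuming Python with the Open3D library as an illustrative stand-in for the PCL-based implementation used by the system; the distance threshold and iteration count below are hypothetical values chosen in the spirit of trading time for accuracy:

```python
# Sketch of steps S12-S13: RANSAC plane fit of the worktable point cloud and
# normalization of the plane normal. Thresholds are hypothetical.
import numpy as np
import open3d as o3d

def fit_table_plane(points_xyz):
    """points_xyz: Nx3 array reconstructed from the background depth map (mm)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    # High iteration count: this runs once before measurement, so time is traded for accuracy.
    (a, b, c, d), inliers = pcd.segment_plane(distance_threshold=2.0,
                                              ransac_n=3,
                                              num_iterations=5000)
    n = np.array([a, b, c], dtype=float)
    n_hat = n / np.linalg.norm(n)        # dimensionless unit normal
    if n_hat[2] < 0:                     # define it as pointing in the positive z direction
        n_hat, d = -n_hat, -d
    return n_hat, d
```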
S14: calculating an included angle and a rotation matrix by a normal vector of a workbench plane and a normal vector of a camera imaging plane;
the included angle θ between the normalized plane normal $\hat{n}$ and the camera imaging plane normal $\vec{z}=(0,0,1)^{T}$ is derived as:

$$\cos\theta=\frac{\hat{n}\cdot\vec{z}}{\lVert\hat{n}\rVert\,\lVert\vec{z}\rVert}$$

namely:

$$\theta=\arccos\left(\hat{n}\cdot\vec{z}\right)$$

then, taking the rotation axis $\vec{k}=\dfrac{\hat{n}\times\vec{z}}{\lVert\hat{n}\times\vec{z}\rVert}$, a rotation matrix is derived from the Rodrigues rotation formula:

$$R=I+\sin\theta\,[\vec{k}]_{\times}+(1-\cos\theta)\,[\vec{k}]_{\times}^{2}$$

where $[\vec{k}]_{\times}$ is the skew-symmetric cross-product matrix of $\vec{k}$.
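A short sketch of this step, assuming Python with NumPy and the unit normal from the previous sketch; the cross-product rotation axis and the guard for the already-aligned case are assumptions of the sketch:

```python
# Sketch of step S14: included angle and Rodrigues rotation matrix that rotates the
# table normal n_hat onto the camera imaging plane normal z = (0, 0, 1).
import numpy as np

def rotation_to_camera_axis(n_hat):
    z = np.array([0.0, 0.0, 1.0])
    theta = np.arccos(np.clip(np.dot(n_hat, z), -1.0, 1.0))   # included angle
    axis = np.cross(n_hat, z)
    norm = np.linalg.norm(axis)
    if norm < 1e-9:                      # plane already parallel to the imaging plane
        return np.eye(3)
    k = axis / norm                      # rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])     # skew-symmetric cross-product matrix [k]x
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```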
s15: calculating to obtain a point cloud of primary plane correction according to the rotation matrix;
With $P=(x,y,z)^{T}$ representing the original point cloud coordinates and $P'=(x',y',z')^{T}$ representing the point cloud coordinates after the preliminary correction, the preliminary correction process is represented by multiplying the original point cloud coordinates by the rotation matrix R:

$$P'=R\,P$$

the coordinate conversion relation is as follows:

$$\begin{pmatrix}x'\\y'\\z'\end{pmatrix}=R\begin{pmatrix}x\\y\\z\end{pmatrix}$$
s16: calculating the centroids of the preliminary plane-correction point cloud and the original point cloud to obtain a translation matrix;
The preliminary plane correction not only changes the plane angle but also shifts the point cloud as a whole; in order to correct only the tilt angle of the point cloud plane without changing its position, the preliminarily corrected point cloud needs to be translated from its own centroid back to the centroid of the original point cloud.
The point cloud centroid coordinates are calculated by the formula:

$$\bar{P}=\frac{1}{N}\sum_{i=1}^{N}P_{i}$$

taking the original point cloud centroid as $\bar{P}_{0}$ and the preliminary plane-correction point cloud centroid as $\bar{P}_{1}$, the translation amount is:

$$t=\bar{P}_{0}-\bar{P}_{1}$$
s17: and fusing the rotation matrix and the translation matrix to obtain a final transformation matrix for plane correction.
If the plane correction process is first rotation and then translation, the transformation matrix can be expressed as:

$$T=\begin{bmatrix}R & t\\ \mathbf{0}^{T} & 1\end{bmatrix}$$

the final coordinate conversion relation is as follows:

$$\begin{pmatrix}x'\\y'\\z'\\1\end{pmatrix}=T\begin{pmatrix}x\\y\\z\\1\end{pmatrix},\quad\text{i.e.}\quad P'=R\,P+t$$
After the plane correction, a measurement reference plane is obtained that is parallel to the XOY plane and whose centroid coincides with the centroid of the original point cloud. The correction effect is shown in fig. 3: the bounding box in the figure is an axis-aligned bounding box, and the inclined gray point cloud plane is corrected into a green point cloud plane that is parallel to the XOY plane and whose centroid coincides with the centroid of the original point cloud.
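Steps S15 to S17 can be condensed into the following sketch, again assuming Python with NumPy; the function names are illustrative only:

```python
# Sketch of steps S15-S17: apply the rotation, translate the rotated cloud back onto the
# original centroid, and fuse rotation and translation into one 4x4 correction matrix T.
import numpy as np

def build_correction_matrix(points_xyz, R):
    rotated = points_xyz @ R.T                              # preliminary correction P' = R P
    t = points_xyz.mean(axis=0) - rotated.mean(axis=0)      # translation between centroids
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T                                                # corrected point: R p + t

def apply_correction(points_xyz, T):
    homog = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (homog @ T.T)[:, :3]
```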
S2: obtaining a depth map of the packaging box to be measured:
Referring to fig. 2, during measurement the packing box 3 to be measured is placed within the acquisition range below the structured light camera 1, that is, within the effective measurement range of the structured light camera 1, corresponding to region (1) in fig. 2, and the structured light camera 1 is controlled to take a picture by upper computer software. The upper computer software is developed on the Visual Studio platform; the measurement system of the invention is developed with the SDK of the OpenNI2 interface, uses the OpenCV vision library for image algorithm processing, and uses the PCL point cloud library for point cloud algorithm processing. After the structured light camera 1 captures the image, the data are transmitted to a computer for processing; the computer used for the system experiments is an Honor MagicBook 16 Pro notebook (CPU: R7-5800H, GPU: RTX 3050).
S3: positioning the packing box to be measured according to the depth map obtained in step S2; the first task of the detection system of the invention is to locate the packing box 3 to be measured. This positioning is realized by image processing based on the 16-bit single-channel depth map, and the positioning algorithm designed for the measurement system of the invention comprises the following steps:
s31: traversing pixel values of each point of the detection area by taking the plane of the workbench as a reference, realizing coarse extraction of a target through pixel difference, and judging whether the target exists according to the number of effective difference pixel points; if not, returning to S2; if yes, the process goes to S32;
Referring to fig. 4, the pixel value of the depth map represents the distance from the point to the structured light camera. The pixel value of each point in the detection area is traversed with the measurement reference plane as the reference, and pixels that lie within 20 mm of the plane are set to zero by pixel difference, yielding a background-free depth map that retains only the content more than 20 mm above the plane. At the same time, the total number of non-zero pixels of this background-free depth map is counted, and a detection threshold is set to preliminarily judge whether a detection target is present.
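A minimal sketch of the coarse extraction, assuming Python with NumPy; the 20 mm margin follows the embodiment, while the detection threshold on the number of non-zero pixels is a hypothetical value:

```python
# Sketch of step S31: background removal by pixel difference against the reference plane.
# Depth values are distances to the camera, so box pixels are closer (smaller) than the table.
import numpy as np

def coarse_extract(depth, plane_depth, margin_mm=20, min_pixels=2000):
    """depth, plane_depth: 16-bit depth maps (mm) of the scene and of the empty worktable."""
    diff = plane_depth.astype(np.int32) - depth.astype(np.int32)
    target = np.where(diff > margin_mm, depth, 0).astype(np.uint16)  # zero the background
    target[depth == 0] = 0                                           # drop invalid pixels
    has_target = int(np.count_nonzero(target)) > min_pixels          # preliminary detection
    return target, has_target
```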
S32: performing morphological denoising on the target extracted in the step S31;
Referring to fig. 5, the depth map may contain some noise due to the complex environment or the imaging of the structured light camera, so an image denoising operation is necessary. Since the morphological operations of OpenCV are based on 8-bit images, the 16-bit depth map is first converted into an 8-bit gray-scale image; small noise points are then removed with an opening operation and holes are filled with a closing operation. Because the upper surfaces of some packing boxes to be measured have a step, two regions may appear in the 8-bit gray-scale image, and the closing operation connects them into one connected domain, avoiding positioning errors.
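A sketch of the denoising step using the OpenCV Python bindings; the 5 × 5 kernel and the number of closing iterations are assumed values, not parameters given in the embodiment:

```python
# Sketch of step S32: convert the 16-bit depth map to an 8-bit binary image, remove small
# noise with an opening, and fill holes / merge stepped top-surface regions with a closing.
import cv2
import numpy as np

def denoise_target(target_depth16):
    mask8 = (target_depth16 > 0).astype(np.uint8) * 255             # 8-bit image for morphology
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask8 = cv2.morphologyEx(mask8, cv2.MORPH_OPEN, kernel)          # remove small noise points
    mask8 = cv2.morphologyEx(mask8, cv2.MORPH_CLOSE, kernel, iterations=3)  # fill holes, join regions
    return mask8
```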
S33: the function of preventing arm interference is realized through the coordinate position of the connected domain, and the position of the packaging box to be measured is positioned through the connected domain;
The connected domain analysis serves two main purposes: realizing the arm-interference prevention function through the coordinate position of the connected domain, and locating the position of the packing box to be measured through the connected domain. Whether a hand is reaching in to take the package, which would affect the detection result, can be judged from how close the connected domain comes to the top, bottom, left and right edges of the image; the coordinate range of the target is then determined by selecting the connected domain with the largest area, as shown in region (3) of fig. 2, and at the same time a Mask map containing only the packing box region is generated from the label value of the target region selected by the connected domain analysis.
The method for preventing the arm interference comprises the following steps:
s331: determining a detection area of the structured light camera on the working platform according to the fixed height of the structured light camera;
s332: processing the depth map of the detection area as in S32, then carrying out connected domain analysis, and setting an anti-interference band with a fixed pixel width at the peripheral edge of the detection area image;
s333: judging whether there is an occluding object on the anti-interference band according to whether the edge coordinates of the connected domain fall on the anti-interference band; if not, entering S4; if yes, returning to S2.
An anti-interference band of 5 pixels is designed at the edge of the detection area, the edge interference band being region (2) in fig. 2, and whether an arm lies on the band is judged by connected domain analysis. During detection, the packing box 3 to be measured must be placed within the detection area corresponding to region (1) in fig. 2; detection is not performed when no packing box 3 is placed or when the packing box 3 is placed on the interference band, and detection is not triggered when the packing box 3 is placed but an arm interferes with it.
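The connected domain analysis and the anti-interference band check can be sketched as follows, assuming Python with OpenCV; the 5-pixel band width follows the embodiment, while the overall structure and names are illustrative:

```python
# Sketch of step S33: largest connected domain = packing box; any component touching the
# peripheral anti-interference band means an arm (or a badly placed box) and no detection.
import cv2
import numpy as np

def locate_box(mask8, band=5):
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask8, connectivity=8)
    if n <= 1:
        return None, None                                   # no target: return to S2
    border = np.zeros(mask8.shape, dtype=bool)
    border[:band, :] = True
    border[-band:, :] = True
    border[:, :band] = True
    border[:, -band:] = True
    if np.any((labels > 0) & border):
        return None, None                                   # interference: return to S2
    idx = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))   # largest connected domain
    x, y = stats[idx, cv2.CC_STAT_LEFT], stats[idx, cv2.CC_STAT_TOP]
    w, h = stats[idx, cv2.CC_STAT_WIDTH], stats[idx, cv2.CC_STAT_HEIGHT]
    mask = (labels == idx).astype(np.uint8) * 255           # Mask map of the box region only
    return (x, y, w, h), mask
```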
S4: geometric dimension measurement:
The measurement system of the invention detects the length, width and height of the packing box to be measured; for packing boxes in the logistics industry, what is actually required is the length, width and height of the maximum outer dimensions of the box. The measurement system divides the measurement into two parts: the length and width are detected with an improved bounding-box algorithm applied to the projection of the target point cloud, and the height is detected by calculating distances after the target point cloud is filtered and down-sampled. The method specifically comprises the following steps:
s41: reconstructing a point cloud from the depth map data of the packing box to be measured located in step S33, and computing with the correction matrix obtained in step S1 to obtain the point cloud to be processed;
The calibrated camera intrinsic parameters $f_{x}$, $f_{y}$, $c_{x}$, $c_{y}$ are obtained from the structured light camera, where $f_{x}$ and $f_{y}$ respectively represent the scale factors of the pixels along the u and v axis directions of the image plane, and $c_{x}$ and $c_{y}$ respectively represent the x and y coordinates of the image principal point. According to the pinhole imaging principle, a pixel $(u,v)$ with depth value $z$ gives:

$$x=\frac{(u-c_{x})\,z}{f_{x}},\qquad y=\frac{(v-c_{y})\,z}{f_{y}}$$

and then, according to the plane correction coordinate conversion formula obtained in S17:

$$\begin{pmatrix}x'\\y'\\z'\\1\end{pmatrix}=T\begin{pmatrix}x\\y\\z\\1\end{pmatrix}$$

the point cloud to be processed is obtained.
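A sketch of this back-projection and plane correction, assuming Python with NumPy and the correction matrix T from S17; variable names are illustrative:

```python
# Sketch of step S41: back-project the masked depth pixels through the pinhole model with
# the calibrated intrinsics fx, fy, cx, cy, then apply the 4x4 plane-correction matrix T.
import numpy as np

def depth_to_corrected_cloud(depth, mask, fx, fy, cx, cy, T):
    v, u = np.nonzero((mask > 0) & (depth > 0))
    z = depth[v, u].astype(np.float64)            # depth in mm
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.column_stack([x, y, z, np.ones_like(z)])
    return (pts @ T.T)[:, :3]                     # point cloud to be processed
```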
S42: calculating the length and width of the packing box to be measured by mapping the point cloud onto a two-dimensional plane and applying a detection algorithm that extracts the edges and then solves the minimum circumscribed rectangle;
After the target point cloud is obtained by three-dimensional reconstruction from the depth map, the length and width of the target can be measured in two ways: by solving the minimum oriented bounding box of the point cloud, or by mapping the point cloud onto a two-dimensional plane and solving the minimum oriented circumscribed rectangle. Considering the execution time of the detection algorithm, and since the interior points of the point cloud cannot affect the minimum circumscribed rectangle (its length and width are determined only by the edge points of the target point cloud), the mapping approach is adopted together with an edge-extraction step. The algorithm is realized by the following steps:
s421: obtaining from the depth map connected domain analysis the circumscribed rectangle of the target area, with x range $[x_{\min},x_{\max}]$ and y range $[y_{\min},y_{\max}]$, together with the Mask map; the obtained Mask map is shown in fig. 6;
s422: within the range of the circumscribed rectangle, starting from its leftmost column $x_{\min}$, the pixels of the target depth map of S31 are traversed from left to right; when a pixel lies at a corresponding position of the Mask map with a non-zero pixel value, the left-to-right traversal stops and the point cloud of that pixel is reconstructed to obtain the space coordinates $(x,y,z)$; since the z value of the plane-corrected point cloud does not affect the position of the edge, only $(x,y)$ is stored into the left edge mapping point set; the y axis starts from $y_{\min}$, and the pixels of the target depth map of S31 are traversed row by row from top to bottom, extracting the left-edge mapping points until the y axis reaches $y_{\max}$, whereby the left edge point set of the target is extracted; the extraction result is shown in fig. 7;
s423: within the range of the circumscribed rectangle, starting from its rightmost column $x_{\max}$, the pixels of the target depth map of S31 are traversed from right to left; when a pixel lies at a corresponding position of the Mask map with a non-zero pixel value, the right-to-left traversal stops and the point cloud of that pixel is reconstructed to obtain the space coordinates $(x,y,z)$; only $(x,y)$ is stored into the right edge mapping point set; the y axis starts from $y_{\min}$, and the pixels of the target depth map of S31 are traversed row by row from top to bottom, extracting the right-edge mapping points until the y axis reaches $y_{\max}$, whereby the right edge point set of the target is extracted; the extraction result is shown in fig. 8;
s424: the left and right edge mapping point sets of S422 and S423 are merged to obtain the complete target edge mapping point set; solving the minimum circumscribed rectangle of this point set, which contains only $(x,y)$ coordinates, yields the length and width of the target; the synthesized result is shown in fig. 9.
The algorithm designed for the system effectively reduces the time complexity of solving the minimum oriented circumscribed rectangle.
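A sketch of steps S421 to S424, assuming Python with OpenCV and NumPy: each row of the circumscribed rectangle is scanned from the left and from the right until the first non-zero Mask pixel, only these edge pixels are back-projected, and the minimum circumscribed rectangle of the merged (x, y) edge set gives the length and width; helper names and the parameter layout are assumptions of the sketch:

```python
# Sketch of steps S421-S424: edge-only back-projection and minimum circumscribed rectangle.
import cv2
import numpy as np

def length_width_from_edges(depth, mask, rect, fx, fy, cx, cy, T):
    x0, y0, w, h = rect                                   # circumscribed rectangle of the target
    edge_xy = []
    for v in range(y0, y0 + h):                           # y_min .. y_max, row by row
        cols = np.nonzero(mask[v, x0:x0 + w])[0]
        if cols.size == 0:
            continue
        for u in (x0 + cols[0], x0 + cols[-1]):           # leftmost and rightmost edge pixels
            z = float(depth[v, u])
            if z == 0:
                continue
            p = T @ np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
            edge_xy.append(p[:2])                         # z does not affect the edge position
    pts = np.asarray(edge_xy, dtype=np.float32)
    (_, _), (side_a, side_b), _ = cv2.minAreaRect(pts)    # minimum circumscribed rectangle
    return max(side_a, side_b), min(side_a, side_b)       # length, width
```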
S43: the height of the packaging box to be measured is obtained by calculating the distance from the target point cloud to the measuring reference plane;
Referring to fig. 10 to 13, the height measurement is realized by calculating the distance from the target point cloud to the measurement reference plane. Since a large target area contains a large amount of point cloud data, which increases the calculation time, and the depth of the points within a small neighborhood changes little, the measurement system of the invention uses voxel down-sampling to thin out the point cloud, maintaining accuracy while shortening the calculation time. After down-sampling, statistical filtering is applied to remove the interference of outliers and avoid instability in the measurement result.
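A sketch of the down-sampling and outlier filtering, assuming Python with Open3D; the voxel size and the statistical-filter parameters are hypothetical values:

```python
# Sketch of the pre-processing in step S43: voxel down-sampling, then statistical outlier removal.
import open3d as o3d

def downsample_and_filter(pcd):
    pcd = pcd.voxel_down_sample(voxel_size=5.0)           # 5 mm voxels (assumed)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd
```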
Finally, each point $P_{i}=(x_{i},y_{i},z_{i})$ of the filtered point cloud is traversed and its distance $d_{i}$ to the corrected reference plane $ax+by+cz+d=0$ is calculated by the formula:

$$d_{i}=\frac{\lvert a x_{i}+b y_{i}+c z_{i}+d\rvert}{\sqrt{a^{2}+b^{2}+c^{2}}}$$

The maximum value of the calculated distances is taken as the height value of the packing box.
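A sketch of the final height calculation, assuming Python with NumPy and the corrected reference plane parameters (a, b, c, d):

```python
# Sketch of step S43: point-to-plane distance for every filtered point; the maximum is the height.
import numpy as np

def box_height(points_xyz, plane_abcd):
    a, b, c, d = plane_abcd
    dist = np.abs(points_xyz @ np.array([a, b, c]) + d) / np.sqrt(a*a + b*b + c*c)
    return float(dist.max())
```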
The present invention is not limited to the above preferred embodiments, and any modification, equivalent replacement or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A packing box geometric dimension measuring method based on a structured light camera is characterized by comprising the following steps:
s1: correcting the workbench plane to obtain a measurement reference plane, wherein the measurement reference plane is parallel to the imaging plane of the structured light camera, and the centroid of the measurement reference plane coincides with the centroid of the original point cloud of the workbench plane;
s2: acquiring a depth map of a packaging box to be measured;
s3: positioning the packaging box to be measured according to the depth map obtained in the step S2:
s31: traversing pixel values of each point of the detection area by taking the plane of the workbench as a reference, realizing coarse extraction of a target through pixel difference, and judging whether the target exists according to the number of effective difference pixel points; if not, returning to S2; if yes, the process goes to S32;
s32: performing morphological denoising on the target extracted in the step S31;
s33: realizing the arm-interference prevention function through the coordinate position of the connected domain, and locating the position of the packing box to be measured through the connected domain;
s4: geometric dimension measurement:
s41: reconstructing point cloud from the depth map data of the packaging box to be measured obtained by positioning the position of the packaging box to be measured in the step S33, and calculating the correction matrix obtained in the step S1 to obtain point cloud to be processed;
s42: calculating the length and width of the packing box to be measured by mapping the point cloud onto a two-dimensional plane and applying a detection algorithm that extracts the edges and then solves the minimum circumscribed rectangle;
s43: and calculating the distance from the target point cloud to the measuring reference plane to obtain the height of the packaging box to be measured.
2. The packing case geometric dimension measuring method based on the structured light camera as claimed in claim 1, wherein the method for preventing the arm interference in S33 is as follows:
s331: determining a detection area of the structured light camera on the working platform according to the fixed height of the structured light camera;
s332: processing the detection area depth map as in S32, then carrying out connected domain analysis, and setting an anti-interference band with a fixed pixel width at the peripheral edge of the detection area image;
s333: judging whether there is an occluding object on the anti-interference band according to whether the edge coordinates of the connected domain fall on the anti-interference band; if not, entering S4; if yes, returning to S2.
3. The packing case geometric dimension measuring method based on the structured light camera as claimed in claim 1, wherein S1 comprises the steps of:
s11: establishing a coordinate system and an XOY plane of the structured light camera by taking the structured light camera as an origin;
s12: acquiring a plane depth map of a workbench and generating point cloud;
s13: fitting the workbench plane point cloud to obtain plane equation parameters to obtain a workbench plane normal vector;
s14: calculating an included angle and a rotation matrix by a normal vector of a workbench plane and a normal vector of a camera imaging plane;
s15: calculating to obtain a point cloud of primary plane correction according to the rotation matrix;
s16: calculating the centroids of the preliminary plane-correction point cloud and the original point cloud to obtain a translation matrix;
s17: and fusing the rotation matrix and the translation matrix to obtain a final transformation matrix for plane correction.
4. The packing case geometric dimension measuring method based on the structured light camera as claimed in claim 3, wherein S13 comprises the following steps: normalizing the workbench plane normal vector $\vec{n}=(a,b,c)$ to make it dimensionless, setting the normalized normal vector as $\hat{n}=\vec{n}/\lVert\vec{n}\rVert$, and defining the direction of the unit vector as pointing in the positive z-axis direction.
5. The packing case geometric dimension measuring method based on the structured light camera as claimed in claim 4, wherein S14 comprises the following steps:
the included angle θ between the normalized plane normal $\hat{n}$ and the camera imaging plane normal $\vec{z}=(0,0,1)^{T}$ is derived as:

$$\cos\theta=\frac{\hat{n}\cdot\vec{z}}{\lVert\hat{n}\rVert\,\lVert\vec{z}\rVert}$$

namely:

$$\theta=\arccos\left(\hat{n}\cdot\vec{z}\right)$$

then, taking the rotation axis $\vec{k}=\dfrac{\hat{n}\times\vec{z}}{\lVert\hat{n}\times\vec{z}\rVert}$, a rotation matrix is derived from the Rodrigues rotation formula:

$$R=I+\sin\theta\,[\vec{k}]_{\times}+(1-\cos\theta)\,[\vec{k}]_{\times}^{2}$$
6. the packing case geometric dimension measuring method based on the structured light camera as claimed in claim 5, wherein S15 comprises the steps of:
With $P=(x,y,z)^{T}$ representing the original point cloud coordinates and $P'=(x',y',z')^{T}$ representing the point cloud coordinates after the preliminary correction, the preliminary correction process is represented by multiplying the original point cloud coordinates by the rotation matrix R:

$$P'=R\,P$$

the coordinate conversion relation is as follows:

$$\begin{pmatrix}x'\\y'\\z'\end{pmatrix}=R\begin{pmatrix}x\\y\\z\end{pmatrix}$$
7. the packing case geometric dimension measuring method based on the structured light camera according to claim 6, wherein S16 comprises the following steps:
the point cloud centroid coordinates are calculated by the formula:

$$\bar{P}=\frac{1}{N}\sum_{i=1}^{N}P_{i}$$

taking the original point cloud centroid as $\bar{P}_{0}$ and the preliminary plane-correction point cloud centroid as $\bar{P}_{1}$, the translation amount is:

$$t=\bar{P}_{0}-\bar{P}_{1}$$
8. the packing case geometric dimension measuring method based on the structured light camera as claimed in claim 7, wherein S17 comprises the steps of:
if the plane correction process is first rotation and then translation, the transformation matrix can be expressed as:

$$T=\begin{bmatrix}R & t\\ \mathbf{0}^{T} & 1\end{bmatrix}$$

the final coordinate conversion relation is as follows:

$$\begin{pmatrix}x'\\y'\\z'\\1\end{pmatrix}=T\begin{pmatrix}x\\y\\z\\1\end{pmatrix},\quad\text{i.e.}\quad P'=R\,P+t$$
9. the packing case geometric dimension measuring method based on the structured light camera as claimed in claim 8, wherein S41 comprises the steps of:
obtaining the calibrated camera intrinsic parameters $f_{x}$, $f_{y}$, $c_{x}$, $c_{y}$ from the structured light camera, where $f_{x}$ and $f_{y}$ respectively represent the scale factors of the pixels along the u and v axis directions of the image plane, and $c_{x}$ and $c_{y}$ respectively represent the x and y coordinates of the image principal point; according to the pinhole imaging principle, a pixel $(u,v)$ with depth value $z$ gives:

$$x=\frac{(u-c_{x})\,z}{f_{x}},\qquad y=\frac{(v-c_{y})\,z}{f_{y}}$$

and the point cloud to be processed is then obtained according to the plane correction coordinate conversion formula of S17.
10. The packing case geometric dimension measuring method based on the structured light camera as claimed in claim 1, wherein S42 comprises the steps of:
s421: obtaining from the depth map connected domain analysis the circumscribed rectangle of the target area, with x range $[x_{\min},x_{\max}]$ and y range $[y_{\min},y_{\max}]$, together with the Mask map;
s422: within the range of the circumscribed rectangle, starting from its leftmost column $x_{\min}$, traversing the pixels of the target depth map of S31 from left to right; when a pixel lies at a corresponding position of the Mask map with a non-zero pixel value, stopping the left-to-right traversal and reconstructing the point cloud of that pixel to obtain the space coordinates $(x,y,z)$; since the z value of the plane-corrected point cloud does not affect the position of the edge, only $(x,y)$ is stored into the left edge mapping point set; the y axis starts from $y_{\min}$, and the pixels of the target depth map of S31 are traversed row by row from top to bottom, extracting the left-edge mapping points until the y axis reaches $y_{\max}$, whereby the left edge point set of the target is extracted;
s423: within the range of the circumscribed rectangle, starting from its rightmost column $x_{\max}$, traversing the pixels of the target depth map of S31 from right to left; when a pixel lies at a corresponding position of the Mask map with a non-zero pixel value, stopping the right-to-left traversal and reconstructing the point cloud of that pixel to obtain the space coordinates $(x,y,z)$; only $(x,y)$ is stored into the right edge mapping point set; the y axis starts from $y_{\min}$, and the pixels of the target depth map of S31 are traversed row by row from top to bottom, extracting the right-edge mapping points until the y axis reaches $y_{\max}$, whereby the right edge point set of the target is extracted;
s424: merging the left and right edge mapping point sets of S422 and S423 to obtain the complete target edge mapping point set; solving the minimum circumscribed rectangle of this point set, which contains only $(x,y)$ coordinates, yields the length and width of the target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211687709.4A CN115682941B (en) | 2022-12-27 | 2022-12-27 | Packing box geometric dimension measuring method based on structured light camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115682941A CN115682941A (en) | 2023-02-03 |
CN115682941B (en) | 2023-03-07 |
Family
ID=85056207
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211687709.4A Active CN115682941B (en) | 2022-12-27 | 2022-12-27 | Packing box geometric dimension measuring method based on structured light camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115682941B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830873B (en) * | 2018-06-29 | 2022-02-01 | 京东方科技集团股份有限公司 | Depth image object edge extraction method, device, medium and computer equipment |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108416804A (en) * | 2018-02-11 | 2018-08-17 | 深圳市优博讯科技股份有限公司 | Obtain method, apparatus, terminal device and the storage medium of target object volume |
CN110017773A (en) * | 2019-05-09 | 2019-07-16 | 福建(泉州)哈工大工程技术研究院 | A kind of package volume measuring method based on machine vision |
CN114396875A (en) * | 2022-01-18 | 2022-04-26 | 安徽工业大学 | Rectangular parcel volume measurement method based on vertical shooting of depth camera |
CN114993182A (en) * | 2022-06-28 | 2022-09-02 | 浙江外国语学院 | Conveyor belt-based parcel packaging size measurement system and method |
Non-Patent Citations (1)
Title |
---|
Zhang Zhigang et al., Research on dimension measurement of logistics packaging boxes based on binocular stereo vision, 2020, vol. 41, pp. 230-236. *
Also Published As
Publication number | Publication date |
---|---|
CN115682941A (en) | 2023-02-03 |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | GR01 | Patent grant | 
2024-02-22 | EE01 | Entry into force of recordation of patent licensing contract | Application publication date: 2023-02-03; Assignee: HONGDA MACHINERY MANUFACTURING (HEYUAN) Co.,Ltd.; Assignor: GUANGDONG POLYTECHNIC NORMAL University; Contract record no.: X2024980002156; Denomination of invention: A method for measuring the geometric dimensions of packaging boxes based on structured light cameras; Granted publication date: 2023-03-07; License type: Common License; Record date: 2024-02-22