CN115682941B - Packing box geometric dimension measuring method based on structured light camera - Google Patents


Info

Publication number
CN115682941B
CN115682941B (application CN202211687709.4A)
Authority
CN
China
Prior art keywords
plane
point cloud
structured light
light camera
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211687709.4A
Other languages
Chinese (zh)
Other versions
CN115682941A (en)
Inventor
吴建毅
肖苏华
刘普京
罗文斌
蒋占四
赖南英
翁泽桂
林于程
赵玉洁
稂亚军
乔明娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Polytechnic Normal University
Original Assignee
Guangdong Polytechnic Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Polytechnic Normal University filed Critical Guangdong Polytechnic Normal University
Priority to CN202211687709.4A
Publication of CN115682941A
Application granted
Publication of CN115682941B
Legal status: Active

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention relates to the technical field of packing box dimension detection, and discloses a packing box geometric dimension measuring method based on a structured light camera, comprising the following steps: S1: correcting the worktable plane to obtain a measurement reference plane; S2: acquiring a depth map of the packing box to be measured; S3: locating the packing box to be measured according to the depth map obtained in S2; S4: measuring the geometric dimensions: S41: reconstructing a point cloud from the depth map data of the packing box located in S33, and transforming it with the correction matrix obtained in S1 to obtain the point cloud to be processed; S42: mapping the point cloud to a two-dimensional plane and calculating the length and width of the packing box with a detection algorithm that extracts the edges and then solves the minimum circumscribed rectangle; S43: obtaining the height of the packing box by calculating the distance from the target point cloud to the measurement reference plane. Compared with the prior art, the invention achieves fast and accurate measurement of the geometric dimensions of packing boxes.

Description

Packing box geometric dimension measuring method based on structured light camera
Technical Field
The invention relates to the technical field of packing box size detection, in particular to a packing box geometric dimension measuring method based on a structured light camera.
Background
From 2012 to 2021, the annual express delivery volume in China grew from 5.7 billion to 108.3 billion parcels, an increase of roughly 18-fold. In the express business, about 80% of goods are packaged in cartons, and the State Post Bureau has issued an implementation scheme for promoting green packaging in the express industry, explicitly calling for a higher packaging standardization rate and lower packaging and transportation costs. Making reasonable use of the geometric dimension information of carton packaging benefits logistics and transportation planning, reducing transportation cost while improving volume utilization; how to obtain carton geometric dimensions efficiently and accurately is therefore an important factor for the express logistics industry in raising efficiency and reducing cost. The traditional approach of measuring carton dimensions manually can hardly keep pace with the rapidly developing express logistics industry, while non-contact measurement has unique advantages in adapting to more complex measurement environments and in acquiring the surface contour of the measured target efficiently and accurately.

Applied research on light-curtain measurement provides a useful reference for non-contact geometric dimension measurement, but light-curtain accuracy is strongly affected by placement accuracy, speed accuracy, and the device's own accuracy, so its limitations are considerable. The development of machine vision offers more options for non-contact measurement: binocular vision combined with point lasers has achieved rapid dimension measurement of regular logistics goods by extracting the spatial coordinates of key points of the goods; line structured light scanning combined with binocular vision has measured the volume of irregular objects through three-dimensional point cloud integration; point cloud slicing algorithms have measured the volume of irregular objects; binocular vision alone has measured the volume of regular parcels through various feature extraction algorithms; two-dimensional images have measured the volume of regular objects through single-view 3D reconstruction; and YOLOv3 combined with an Intel RealSense D435i depth camera has located the target object with a YOLO model and then estimated the volume from a single face. Among such prior non-contact vision measurement studies, however, the application of monocular structured light cameras to volume and dimension measurement remains largely unexplored.
Disclosure of Invention
The invention aims to provide a packing box geometric dimension measuring method based on a structured light camera, which is used for solving the technical problems.
A packing box geometric dimension measuring method based on a structured light camera comprises the following steps:
s1: correcting the plane of the workbench to obtain a measurement reference plane, wherein the measurement reference plane is parallel to the imaging plane of the structured light camera, and the centroid of the measurement reference plane coincides with the centroid of the original point cloud of the workbench plane;
s2: acquiring a depth map of the packaging box to be measured;
s3: positioning the packaging box to be measured according to the depth map obtained in the step S2:
s31: traversing pixel values of each point of the detection area by taking the plane of the workbench as a reference, realizing coarse extraction of a target through pixel difference, and judging whether the target exists according to the number of effective difference pixel points; if not, returning to S2; if yes, entering S32;
s32: performing morphological denoising on the target extracted in the step S31;
s33: the function of preventing arm interference is realized through the coordinate position of the connected domain, and the position of the packaging box to be measured is positioned through the connected domain;
s4: and (3) measuring the geometric dimension:
s41: reconstructing a point cloud from the depth map data of the packaging box to be measured obtained by positioning the position of the packaging box to be measured in the step S33, and calculating through the correction matrix obtained in the step S1 to obtain a point cloud to be processed;
s42: calculating the length and width of the packaging box to be measured by adopting a point cloud mapping two-dimensional plane through a detection algorithm of extracting edges and solving a minimum circumscribed rectangle;
s43: and calculating the distance from the target point cloud to the measuring reference plane to obtain the height of the packaging box to be measured.
According to an embodiment of the present invention, the method for preventing arm interference in S33 is as follows:
s331: determining a detection area of the structured light camera on the working platform according to the fixed height of the structured light camera;
s332: processing the detection area depth map by S32, then carrying out connected domain analysis, and setting an anti-interference band with a fixed pixel width at the peripheral edge of the detection area image;
s333: judging whether the interference prevention belt has a shelter or not according to whether the edge coordinate position of the connected domain is positioned on the interference prevention belt or not; if not, entering S4; if yes, the process returns to S2.
According to an embodiment of the present invention, S1 includes the steps of:
s11: establishing a coordinate system and an XOY plane of the structured light camera by taking the structured light camera as an origin;
s12: acquiring a plane depth map of a workbench and generating point cloud;
s13: fitting the workbench plane point cloud to obtain plane equation parameters to obtain a workbench plane normal vector;
s14: calculating an included angle and a rotation matrix by a normal vector of a workbench plane and a normal vector of a camera imaging plane;
s15: calculating to obtain a point cloud of primary plane correction according to the rotation matrix;
s16: calculating the mass centers of the preliminary plane correction point cloud and the original point cloud to calculate a translation matrix;
s17: and fusing the rotation matrix and the translation matrix to obtain a final transformation matrix for plane correction.
According to an embodiment of the present invention, S13 includes the steps of:
point cloud fitting is carried out to construct a workbench plane equation:
Figure 975430DEST_PATH_IMAGE001
normal vector to the table plane
Figure 95832DEST_PATH_IMAGE002
Carrying out normalization dimensionless treatment, and setting normalized normal vector as
Figure 469045DEST_PATH_IMAGE003
And defining the unit vector direction as pointing to the positive z-axis direction.
According to an embodiment of the present invention, S14 includes the steps of:
the equation of the camera XOY plane is:

$z = 0$

from which the normal vector of the XOY plane of the structured light camera is:

$\vec{n}_2 = (0, 0, 1)$

the included angle $\theta$ is derived from:

$\cos\theta = \dfrac{\hat{n}_1 \cdot \vec{n}_2}{|\hat{n}_1|\,|\vec{n}_2|}$

namely:

$\theta = \arccos\!\left(\dfrac{\hat{n}_1 \cdot \vec{n}_2}{|\hat{n}_1|\,|\vec{n}_2|}\right)$

the cross product of $\hat{n}_1$ and $\vec{n}_2$ is then taken:

$\vec{n}_3 = \hat{n}_1 \times \vec{n}_2$

the vector $\vec{n}_3$ is the normal vector of the plane formed by $\hat{n}_1$ and $\vec{n}_2$; normalizing it gives the rotation axis $\hat{n}_3$; the rotation matrix is then derived from the Rodrigues rotation formula:

$R = \cos\theta\, I + (1 - \cos\theta)\, \hat{n}_3 \hat{n}_3^{\mathsf{T}} + \sin\theta\, [\hat{n}_3]_{\times}$
according to an embodiment of the present invention, S15 includes the steps of:
with $P = (x, y, z)^{\mathsf{T}}$ denoting the original point cloud coordinates and $P_1 = (x_1, y_1, z_1)^{\mathsf{T}}$ denoting the preliminarily corrected point cloud coordinates, the preliminary correction is expressed as the rotation matrix $R$ multiplied by the original point cloud coordinates:

$P_1 = R\,P$

with the coordinate conversion relation:

$\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} = R \begin{pmatrix} x \\ y \\ z \end{pmatrix}$
according to an embodiment of the present invention, S16 includes the steps of:
from the point cloud centroid formula:

$\bar{p} = \dfrac{1}{n} \sum_{i=1}^{n} p_i$

the centroid of the original point cloud is obtained as $\bar{P} = (\bar{x}, \bar{y}, \bar{z})$ and the centroid of the preliminarily plane-corrected point cloud as $\bar{P}_1 = (\bar{x}_1, \bar{y}_1, \bar{z}_1)$; the translation amount is then:

$\vec{t} = \bar{P} - \bar{P}_1 = (t_x, t_y, t_z)$

and the derived translation matrix is:

$T = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}$
according to an embodiment of the present invention, S17 includes the steps of:
if the plane correction process is first rotation and then translation, the transformation matrix can be expressed as:

$M = T \begin{pmatrix} R & \mathbf{0} \\ \mathbf{0}^{\mathsf{T}} & 1 \end{pmatrix}$

the final corrected point cloud coordinates $P' = (x', y', z')^{\mathsf{T}}$ can be calculated by:

$\begin{pmatrix} P' \\ 1 \end{pmatrix} = M \begin{pmatrix} P \\ 1 \end{pmatrix}$

and the final coordinate conversion relation is:

$P' = R\,P + \vec{t}$
according to an embodiment of the present invention, S41 includes the steps of:
obtaining calibrated internal reference of camera from structured light camera
Figure 352218DEST_PATH_IMAGE027
Figure 934509DEST_PATH_IMAGE028
Figure 203947DEST_PATH_IMAGE029
Which respectively represent the proportion of pixels in the image plane along the u, v axis direction, i.e. the pixel aspect ratio,
Figure 897097DEST_PATH_IMAGE030
Figure 817648DEST_PATH_IMAGE031
respectively representing x and y coordinates of the image principal point, and obtaining the following coordinates according to the pinhole imaging principle:
Figure 570840DEST_PATH_IMAGE032
namely the point cloud under the camera coordinate system
Figure 327575DEST_PATH_IMAGE014
And (3) reconstructing the relation:
Figure 824415DEST_PATH_IMAGE033
and then obtaining a coordinate conversion formula for plane correction according to the S17
Figure 599473DEST_PATH_IMAGE026
And obtaining final point cloud data.
According to an embodiment of the present invention, S42 includes the steps of:
s421: obtaining from depth map connected domain analysisTo the target area
Figure 523567DEST_PATH_IMAGE034
Figure 892232DEST_PATH_IMAGE035
Forming a circumscribed rectangle and a Mask graph;
s422: within the range of the circumscribed rectangle, from the leftmost side of the circumscribed rectangle
Figure 823058DEST_PATH_IMAGE034
Starting, traversing the pixels of the target depth map from left to right in S31, stopping traversing from left to right when the pixel points are in the corresponding positions of the Mask map and the pixel values are not zero, and reconstructing the point cloud of the pixel points to obtain the space coordinates
Figure 593568DEST_PATH_IMAGE036
Since the point cloud z value corrected by the plane does not affect the position of the edge, only the point cloud z value is taken
Figure 813196DEST_PATH_IMAGE037
Storing into left edge mapping point set
Figure 669157DEST_PATH_IMAGE038
(ii) a y axis from
Figure 383166DEST_PATH_IMAGE039
Starting, traversing the pixels of the target depth map in the S31 step by step from top to bottom, extracting the mapping points at the left edge until the y axis is traversed
Figure 742603DEST_PATH_IMAGE040
Extracting the left edge point set of the target;
s423: within the range of the circumscribed rectangle, from the rightmost side of the circumscribed rectangle
Figure 133133DEST_PATH_IMAGE041
Starting, traversing S31 pixels of the target depth map from right to left, and when the pixel points are in the corresponding positions of the Mask map and the pixel values are not zeroAnd stopping traversing from right to left, and reconstructing the point cloud of the pixel point to obtain the space coordinate
Figure 210811DEST_PATH_IMAGE036
Taking out
Figure 728511DEST_PATH_IMAGE037
Storing into the right edge mapping point set
Figure 208034DEST_PATH_IMAGE042
(ii) a y axis from
Figure 769465DEST_PATH_IMAGE039
Starting, traversing the pixels of the target depth map S31 line by line from top to bottom, extracting the mapping point at the right edge until the y axis is traversed
Figure 600018DEST_PATH_IMAGE043
Extracting the right edge point set of the target;
s424: mapping the left and right edges of S422 and S423 into a set of points
Figure 780463DEST_PATH_IMAGE044
And
Figure 989859DEST_PATH_IMAGE042
are combined to obtain a complete target edge mapping point set
Figure 597558DEST_PATH_IMAGE045
(ii) a Obtaining only including
Figure 40041DEST_PATH_IMAGE037
Set of points of coordinates
Figure 758598DEST_PATH_IMAGE045
The minimum circumscribed rectangle can obtain the length and width of the target.
Compared with the prior art, the packing box geometric dimension measuring method based on the structured light camera has the following advantages:
according to the packing box geometric dimension measuring method based on the structured light camera, the point cloud mapping two-dimensional plane is used, the length and the width of the packing box to be measured are calculated through the detection algorithm of extracting the edges and then solving the minimum circumscribed rectangle, the detection accuracy can be guaranteed while the detection time efficiency is considered, and the aim of measuring the geometric dimension of the packing box quickly and accurately can be achieved.
Drawings
FIG. 1 is a flow chart of a packing box geometric dimension measuring method based on a structured light camera of the invention;
FIG. 2 is a schematic view of the measuring system of the present invention measuring the packing box to be measured;
FIG. 3 is a schematic diagram showing the positional relationship between the original table plane and the formed measurement reference plane after the table plane is subjected to the plane correction in step S1;
FIG. 4 is a comparison of the results before and after coarse target extraction by pixel difference in step S31;
FIG. 5 is a comparison before and after the morphological denoising, in step S32, of the target extracted in S31;
FIG. 6 is the Mask map acquired in step S421;
FIG. 7 is the left edge point set obtained in step S422;
FIG. 8 is the right edge point set obtained in step S423;
FIG. 9 is the complete edge point set synthesized in step S424;
FIG. 10 is the original point cloud sampled from a packing box to be measured;
FIG. 11 is the result of thinning the point cloud by the voxel down-sampling method in step S43;
FIG. 12 is the point cloud after down-sampling in step S43;
FIG. 13 is the result of applying statistical filtering to FIG. 12 in step S43;
in the figure: 1. structured light camera, 2, support, 3, packing box to be measured.
The implementation and advantages of the functions of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In the following description, for purposes of explanation, numerous implementation details are set forth in order to provide a thorough understanding of various embodiments of the present invention. It should be understood, however, that these implementation details should not be taken to limit the invention. That is, in some embodiments of the invention, such implementation details are not necessary. In addition, some conventional structures and components are shown in simplified schematic form in the drawings.
It should be noted that all directional indicators (such as up, down, left, right, front, back, etc.) in the embodiments of the present invention are only used to explain the relative positional relationship, motion, and the like between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly.
In addition, descriptions involving "first", "second", and the like in the present invention are for descriptive purposes only; they do not indicate a particular order or sequence, do not limit the invention, and merely distinguish components or operations described with the same technical terms, without indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may be combined with each other, but only where a person skilled in the art can realize the combination; where combinations are contradictory or cannot be realized, they should be considered not to exist and fall outside the protection scope of the present invention.
For a further understanding of the contents, features and effects of the present invention, reference will now be made to the following examples, which are illustrated in the accompanying drawings and described in the following detailed description:
referring to fig. 1, the invention discloses a packing box geometric dimension measuring method based on a structured light camera, comprising the following steps:
s1: correcting the plane of the workbench to obtain a measurement reference plane; the measurement reference plane is parallel to the imaging plane of the structured light camera, and the centroid of the measurement reference plane coincides with the centroid of the original point cloud of the workbench plane;
when the measurement system is installed, an ideal state usually cannot be achieved, and a certain included angle exists between the imaging plane of the structured light camera 1 and the worktable plane, i.e. between the XOY plane of the camera coordinate system with the structured light camera 1 as origin and the worktable plane. To facilitate subsequent data processing, this tilt angle needs to be corrected so that the worktable plane is corrected to be parallel to the imaging plane of the structured light camera 1.
S11: establishing a coordinate system and an XOY plane of the structured light camera by taking the structured light camera as an original point;
before the geometric dimensions of the packing box 3 to be measured are measured, the measurement system is set up as shown in FIG. 2. The invention adopts a structured light camera 1 as the acquisition device of the measurement system; the structured light camera 1 is fixed vertically downward on an extending bracket 2. An Orbbec Astra monocular structured light camera is adopted as the structured light camera 1; it acquires a color image and a depth image simultaneously, supports real-time measurement and display, has an effective working distance of 0.6 m to 8 m with an optimal working distance of 1 m, and a depth image field of view of H 58.4°, V 45.5°. Based on this measurement system, the structured light camera 1 is mounted on the bracket 2 at a height of 1.4 m above the worktable plane, which enables packing box dimension measurement within a frustum-shaped volume with a 660 × 480 mm top face, a 1500 × 1100 mm bottom face, and a height of 800 mm.
S12: acquiring a plane depth map of a workbench and generating point cloud;
referring to FIG. 2, the depth map of the worktable plane is acquired by the structured light camera without the packing box 3 to be measured placed, and the corresponding depth map is used to generate point cloud data containing only the worktable plane points $P = (x, y, z)$.
S13: fitting the workbench plane point cloud to obtain plane equation parameters to obtain a workbench plane normal vector;
since the worktable plane correction is a pre-operation performed before measuring the dimensions of the packing box 3, it does not affect the time complexity of the detection algorithm itself, and time can be traded for accuracy. The measurement system therefore fits the plane with a random sample consensus (RANSAC) algorithm configured with a high iteration count, from which the plane equation parameters $A$, $B$, $C$, $D$ are obtained. Point cloud fitting constructs the worktable plane equation:

$Ax + By + Cz + D = 0$

whose normal vector is:

$\vec{n}_1 = (A, B, C)$

the normal vector $\vec{n}_1$ is normalized (made dimensionless), the normalized normal vector is denoted $\hat{n}_1$, and its unit vector direction is defined as pointing along the positive z-axis.
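By way of illustration only, and not as part of the patent disclosure, this plane fit might be sketched as follows in Python, assuming the Open3D library in place of the PCL routine used by the system; the name `table_points`, the 10 mm distance threshold, and the iteration count are hypothetical:

```python
import numpy as np
import open3d as o3d

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(table_points)  # N x 3 worktable plane point cloud (mm)

# High iteration count: this runs once before measurement, so time is traded for accuracy.
(A, B, C, D), inliers = pcd.segment_plane(distance_threshold=10.0,
                                          ransac_n=3,
                                          num_iterations=5000)
n1 = np.array([A, B, C])        # fitted plane normal (Open3D returns it unit-length)
if n1[2] < 0:
    n1, D = -n1, -D             # define the normal direction as pointing along the positive z-axis
```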
S14: calculating an included angle and a rotation matrix by a normal vector of a workbench plane and a normal vector of a camera imaging plane;
the equation of the XOY plane of the structured light camera 1 is:

$z = 0$

from which the normal vector of the XOY plane of the structured light camera 1 is:

$\vec{n}_2 = (0, 0, 1)$

the included angle $\theta$ is derived from:

$\cos\theta = \dfrac{\hat{n}_1 \cdot \vec{n}_2}{|\hat{n}_1|\,|\vec{n}_2|}$

namely:

$\theta = \arccos\!\left(\dfrac{\hat{n}_1 \cdot \vec{n}_2}{|\hat{n}_1|\,|\vec{n}_2|}\right)$

the cross product of $\hat{n}_1$ and $\vec{n}_2$ is then taken:

$\vec{n}_3 = \hat{n}_1 \times \vec{n}_2$

the vector $\vec{n}_3$ is the normal vector of the plane formed by $\hat{n}_1$ and $\vec{n}_2$; normalizing it gives the rotation axis $\hat{n}_3$; the rotation matrix is then derived from the Rodrigues rotation formula:

$R = \cos\theta\, I + (1 - \cos\theta)\, \hat{n}_3 \hat{n}_3^{\mathsf{T}} + \sin\theta\, [\hat{n}_3]_{\times}$
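A minimal numerical sketch of this step, illustrative only; `n1` is the normalized table normal obtained above:

```python
import numpy as np

def rotation_to_xoy(n1):
    """Rotation matrix aligning the table normal n1 with the camera z-axis (Rodrigues formula)."""
    n2 = np.array([0.0, 0.0, 1.0])                      # normal of the camera XOY plane
    cos_t = np.clip(np.dot(n1, n2), -1.0, 1.0)
    theta = np.arccos(cos_t)                            # included angle between the planes
    axis = np.cross(n1, n2)
    if np.linalg.norm(axis) < 1e-12:                    # planes already parallel: nothing to rotate
        return np.eye(3)
    axis /= np.linalg.norm(axis)                        # normalized rotation axis n3
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])              # skew-symmetric matrix [n3]x
    # R = cos(theta) I + (1 - cos(theta)) n3 n3^T + sin(theta) [n3]x
    return cos_t * np.eye(3) + (1 - cos_t) * np.outer(axis, axis) + np.sin(theta) * K
```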
s15: calculating to obtain a point cloud of primary plane correction according to the rotation matrix;
with $P = (x, y, z)^{\mathsf{T}}$ denoting the original point cloud coordinates and $P_1 = (x_1, y_1, z_1)^{\mathsf{T}}$ denoting the preliminarily corrected point cloud coordinates, the preliminary correction is expressed as the rotation matrix $R$ multiplied by the original point cloud coordinates:

$P_1 = R\,P$

with the coordinate conversion relation:

$\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} = R \begin{pmatrix} x \\ y \\ z \end{pmatrix}$
s16: calculating the mass centers of the preliminary plane correction point cloud and the original point cloud to calculate a translation matrix;
the point cloud after preliminary plane correction has not only a changed plane angle but also a shifted overall position. To correct only the tilt angle of the point cloud plane without changing its position, the preliminarily corrected point cloud needs to be translated from its own centroid back to the centroid of the original point cloud.
From the point cloud centroid formula:

$\bar{p} = \dfrac{1}{n} \sum_{i=1}^{n} p_i$

the centroid of the original point cloud is obtained as $\bar{P} = (\bar{x}, \bar{y}, \bar{z})$ and the centroid of the preliminarily plane-corrected point cloud as $\bar{P}_1 = (\bar{x}_1, \bar{y}_1, \bar{z}_1)$; the translation amount is then:

$\vec{t} = \bar{P} - \bar{P}_1 = (t_x, t_y, t_z)$

and the derived translation matrix is:

$T = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}$
s17: and fusing the rotation matrix and the translation matrix to obtain a final transformation matrix for plane correction.
If the planar rectification process is rotation and then translation, the transformation matrix can be expressed as:
Figure 6204DEST_PATH_IMAGE051
final corrected point cloud coordinates
Figure 238602DEST_PATH_IMAGE024
Can be calculated by the following formula:
Figure 338145DEST_PATH_IMAGE025
the final coordinate conversion relation is as follows:
Figure 23204DEST_PATH_IMAGE026
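Continuing the sketch above, the rotation-then-translation correction of S15 to S17 reduces to a few lines (illustrative; `points` is a hypothetical N × 3 array of the original cloud):

```python
def correct_plane(points, R):
    """Rotate the cloud onto the XOY orientation, then translate its centroid back (S15-S17)."""
    rotated = points @ R.T                          # preliminary correction P1 = R P
    t = points.mean(axis=0) - rotated.mean(axis=0)  # translation: original centroid - rotated centroid
    return rotated + t                              # final corrected coordinates P' = R P + t
```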
after plane correction, a measurement reference plane parallel to the XOY plane and with its centroid coinciding with the original point cloud centroid is obtained. The correction effect is shown in FIG. 3: the bounding boxes in the figure are axis-aligned bounding boxes, and plane correction of the tilted gray point cloud plane yields the green point cloud plane, which is parallel to the XOY plane and whose centroid coincides with the original point cloud centroid.
S2: obtaining a depth map of the packaging box to be measured:
referring to FIG. 2, during measurement the packing box 3 to be measured is placed within the acquisition range below the structured light camera 1, i.e. within its effective measurement range, corresponding to region (1) in FIG. 2, and the structured light camera 1 is triggered by upper computer software to take a picture. The upper computer software is developed on the Visual Studio platform; the measurement system of the invention is developed with the SDK of the OpenNI2 interface, uses the OpenCV vision library for image algorithm processing, and uses the PCL point cloud library for point cloud algorithm processing. After the structured light camera 1 captures the image, the data are transmitted to a computer for processing; the computer used in the system experiments is an Honor MagicBook 16 Pro notebook (CPU: R7-5800H, GPU: RTX 3050).
S3: positioning the packaging box to be measured according to the depth map obtained in the step S2; the primary work of the measurement of the detection system of the invention is to position the position of the packing case 3 to be measured, the positioning work is realized by image processing based on a 16-bit single-channel depth map, and the positioning algorithm designed by the measurement system of the invention comprises the following steps:
s31: traversing pixel values of each point of the detection area by taking the plane of the workbench as a reference, realizing coarse extraction of a target through pixel difference, and judging whether the target exists according to the number of effective difference pixel points; if not, returning to S2; if yes, the process goes to S32;
referring to fig. 4, the pixel value of the depth map represents the distance from the point to the structured light camera, the pixel value of each point in the detection area is traversed by using the measurement reference plane as the reference plane, and the pixel value within the range of 20mm above the plane is assigned to zero by pixel difference, so that the depth map without the background above the plane 20mm can be obtained. Meanwhile, the total number of pixels of the depth map without the background is counted, and a detection threshold is set to preliminarily judge whether a detection target exists.
S32: performing morphological denoising on the target extracted in the step S31;
referring to fig. 5, the depth map may generate some noise due to a complex environment or structured light camera imaging, and therefore, an image denoising operation is necessary. The morphological operation of OpenCV is based on 8-bit images, so that a 16-bit depth image needs to be converted into an 8-bit gray-scale image, then small noise points are removed by using open operation, holes are filled by using closed operation, and because the upper surfaces of some packing boxes to be measured have the condition of section difference, two areas appear in the 8-bit gray-scale image, and the areas can be connected into a connected domain by using closed operation, so that the positioning error is avoided.
S33: the function of preventing arm interference is realized through the coordinate position of the connected domain, and the position of the packaging box to be measured is positioned through the connected domain;
the analysis of the connected domain mainly has two purposes, namely, the function of preventing arm interference is realized through the connected coordinate position, and the position of the packing box to be measured is positioned through the connected domain. Whether the package is taken by extending hands or not can be judged by the distance of the connected domain from top to bottom and from left to right in the image so as to influence the detection result, and then the range coordinate position of the target is judged by screening the maximum area of the connected domain, please refer to fig. 2, as shown in an area (3) in fig. 2, and simultaneously a Mask image only containing the area of the packing box is generated according to the label value of the target area screened by the connected domain.
The method for preventing the arm interference comprises the following steps:
s331: determining a detection area of the structured light camera on the working platform according to the fixed height of the structured light camera;
s332: processing the depth map of the detection area by S32, then carrying out connected domain analysis, and setting an anti-interference band with a fixed pixel width at the peripheral edge of the image of the detection area;
s333: judging whether the interference prevention belt has a shelter or not according to whether the edge coordinate position of the connected domain is positioned on the interference prevention belt or not; if not, entering S4; if yes, the process returns to S2.
An anti-interference band of 5 pixels is designed at the edge of the detection area; this edge interference band is region (2) in FIG. 2, and connected domain analysis judges whether an arm lies on the interference band. During detection, the packing box 3 to be measured must be placed within the detection area corresponding to region (1) in FIG. 2; detection is not triggered when no packing box is placed or when the packing box sits on the interference band, nor when the placed packing box is being disturbed by an arm.
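A sketch of the localization with the 5-pixel anti-interference band, built on OpenCV's connected component analysis; the helper name and return convention are hypothetical:

```python
import cv2

def locate_box(mask8, band=5):
    """Largest connected domain; rejected if its bounding box touches the edge band."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask8, connectivity=8)
    if n < 2:
        return None                                    # nothing placed in the detection area
    i = 1 + int(stats[1:, cv2.CC_STAT_AREA].argmax())  # label of the largest component
    x, y = stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP]
    w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
    H, W = mask8.shape
    if x < band or y < band or x + w > W - band or y + h > H - band:
        return None                                    # target or arm on the interference band: retry
    return (x, y, w, h), (labels == i)                 # circumscribed rectangle and Mask map
```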
S4: and (3) measuring the geometric dimension:
the measuring system of the invention detects the length, width and height of the packaging box to be measured, and for the packaging box in the logistics industry, the length, width and height of the maximum size of the periphery of the packaging box are actually required. The measuring system divides the measurement into two parts, the length and the width are detected by an improved bounding box algorithm for target point cloud projection, and the height is detected by calculating the distance after the target point cloud filtering downsampling. The method specifically comprises the following steps:
s41: reconstructing point cloud from the depth map data of the packaging box to be measured obtained by positioning the position of the packaging box to be measured in the step S33, and calculating the correction matrix obtained in the step S1 to obtain point cloud to be processed;
obtaining calibrated internal reference of camera from structured light camera
Figure 249918DEST_PATH_IMAGE052
Figure 71243DEST_PATH_IMAGE028
Figure 607267DEST_PATH_IMAGE053
Which respectively represent the proportion of pixels in the image plane along the u, v axis direction, i.e. the pixel aspect ratio,
Figure 779622DEST_PATH_IMAGE030
Figure 275938DEST_PATH_IMAGE054
respectively representing x and y coordinates of the image principal point, and obtaining the following coordinates according to the pinhole imaging principle:
Figure 482928DEST_PATH_IMAGE032
namely the point cloud under the camera coordinate system
Figure 924274DEST_PATH_IMAGE014
And (3) reconstructing the relation:
Figure 318346DEST_PATH_IMAGE033
and then according to the coordinate conversion formula of the plane correction obtained in S17:
Figure 886862DEST_PATH_IMAGE026
and obtaining final point cloud data.
S42: calculating the length and width of the packaging box to be measured by adopting a point cloud mapping two-dimensional plane through a detection algorithm of extracting edges and solving a minimum circumscribed rectangle;
after the target point cloud is obtained by three-dimensional reconstruction of the depth map, the length and width of the target can be measured in two ways: solving the minimum oriented bounding box of the point cloud, or mapping the point cloud to a two-dimensional plane and solving the minimum oriented circumscribed rectangle. Considering the execution efficiency of the detection algorithm, the interior points of the point cloud cannot influence the length and width of the minimum circumscribed rectangle; only the edge points of the target point cloud do. The algorithm is realized by the following steps:
s421: obtaining a target area from a depth map connected domain analysis
Figure 948359DEST_PATH_IMAGE034
Figure 701551DEST_PATH_IMAGE035
The formed circumscribed rectangle and Mask graph, and the obtained Mask graph is shown in FIG. 6;
s422: within the range of the circumscribed rectangle, from the leftmost side of the circumscribed rectangle
Figure 707553DEST_PATH_IMAGE034
Starting, traversing the pixels of the target depth map from left to right S31, stopping traversing from left to right when the pixel points are in the corresponding positions of the Mask map and the pixel values are not zero, and reconstructing the point cloud of the pixel points to obtain the space coordinates
Figure 938814DEST_PATH_IMAGE036
Since the z value of the point cloud corrected by the plane does not affect the position of the edge, only the position of the edge is taken here
Figure 464605DEST_PATH_IMAGE037
Storing into left edge mapping point set
Figure 388698DEST_PATH_IMAGE038
(ii) a y axis from
Figure 881997DEST_PATH_IMAGE039
At the start of the process,traversing the pixels of the target depth map S31 line by line from top to bottom, extracting the mapping points of the left edge until the y-axis is traversed
Figure 916949DEST_PATH_IMAGE040
Then, the left edge point set of the target can be extracted, and the extraction result is shown in fig. 7;
s423: within the range of the circumscribed rectangle, from the rightmost side of the circumscribed rectangle
Figure 297246DEST_PATH_IMAGE041
Starting, traversing S31 pixels of the target depth map from right to left, stopping traversing from right to left when pixel points are in corresponding positions of the Mask map and the pixel values are not zero, and reconstructing point cloud of the pixel points to obtain space coordinates
Figure 657820DEST_PATH_IMAGE036
Get it
Figure 638414DEST_PATH_IMAGE037
Storing into the right edge mapping point set
Figure 477057DEST_PATH_IMAGE042
(ii) a y axis from
Figure 102074DEST_PATH_IMAGE039
Starting, traversing the pixels of the target depth map S31 line by line from top to bottom, extracting the mapping point of the right edge until the y-axis is traversed
Figure 240406DEST_PATH_IMAGE043
Then, the right edge point set of the target is extracted, and the extraction result is shown in fig. 8;
s424: mapping the left and right edges of S422 and S423 into a set of points
Figure 318084DEST_PATH_IMAGE044
And
Figure 85052DEST_PATH_IMAGE042
are combined to obtain a complete target edge mapping point set
Figure 564574DEST_PATH_IMAGE045
(ii) a Obtaining only includes
Figure 142317DEST_PATH_IMAGE037
Set of points of coordinates
Figure 707291DEST_PATH_IMAGE045
The minimum circumscribed rectangle can obtain the length and width of the target, and the synthesis result is shown in fig. 9.
The algorithm designed for the system effectively reduces the time complexity of solving the minimum oriented circumscribed rectangle.
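Illustratively, the row-wise edge scan followed by OpenCV's minimum-area rectangle might look like this sketch (names hypothetical; `mask` is the Mask map of the target and `depth` the S31 target depth map):

```python
import cv2
import numpy as np

def box_length_width(depth, mask, fx, fy, cx, cy):
    """Per-row left/right edge extraction, then minimum circumscribed rectangle on (x, y)."""
    edges = []
    for v in np.nonzero(mask.any(axis=1))[0]:     # rows that intersect the target
        cols = np.nonzero(mask[v])[0]
        for u in (cols[0], cols[-1]):             # leftmost and rightmost mask pixel of the row
            z = float(depth[v, u])
            if z > 0:
                edges.append(((u - cx) * z / fx, (v - cy) * z / fy))  # keep (x, y) only
    rect = cv2.minAreaRect(np.asarray(edges, dtype=np.float32))       # oriented rectangle
    w, h = rect[1]
    return max(w, h), min(w, h)                   # length and width of the box
```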
S43: the height of the packaging box to be measured is obtained by calculating the distance from the target point cloud to the measuring reference plane;
referring to fig. 10 to 13, the height measurement is implemented by calculating the distance from the target point cloud to the reference plane, and since the amount of point cloud data included when the size of the target area is large increases the calculation time, and the depth change of the point set in the neighborhood is small, the measurement system of the present invention uses the voxel down-sampling method to thin out the point cloud, so as to achieve the purpose of maintaining the accuracy and speeding up the calculation time. And after down sampling, statistical filtering is carried out to remove the interference of outliers so as to avoid the instability of the measurement result.
Finally, each point $p_i = (x_i, y_i, z_i)$ of the filtered point cloud is traversed and its distance to the corrected reference plane $Ax + By + Cz + D = 0$ is calculated by the formula:

$d_i = \dfrac{|A x_i + B y_i + C z_i + D|}{\sqrt{A^2 + B^2 + C^2}}$

The maximum of the calculated results is taken as the height value of the packing box.
The present invention is not limited to the above preferred embodiments, and any modification, equivalent replacement or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A packing box geometric dimension measuring method based on a structured light camera is characterized by comprising the following steps:
s1: correcting the workbench plane to obtain a measurement reference plane, wherein the measurement reference plane is parallel to an imaging plane of the structured light camera, and the centroid of the measurement reference plane coincides with the centroid of the original point cloud of the workbench plane;
s2: acquiring a depth map of a packaging box to be measured;
s3: positioning the packaging box to be measured according to the depth map obtained in the step S2:
s31: traversing pixel values of each point of the detection area by taking the plane of the workbench as a reference, realizing coarse extraction of a target through pixel difference, and judging whether the target exists according to the number of effective difference pixel points; if not, returning to S2; if yes, the process goes to S32;
s32: performing morphological denoising on the target extracted in the step S31;
s33: the function of preventing arm interference is realized through the coordinate position of the connected domain, and the position of the packing box to be measured is located through the connected domain;
s4: and (3) measuring the geometric dimension:
s41: reconstructing a point cloud from the depth map data of the packing box to be measured obtained by locating its position in S33, and transforming it with the correction matrix obtained in S1 to obtain the point cloud to be processed;
s42: calculating the length and the width of the packing box to be measured by adopting a point cloud mapping two-dimensional plane through a detection algorithm of extracting edges and solving a minimum circumscribed rectangle;
s43: and calculating the distance from the target point cloud to the measuring reference plane to obtain the height of the packaging box to be measured.
2. The packing box geometric dimension measuring method based on the structured light camera as claimed in claim 1, wherein the method for preventing the arm interference in S33 is as follows:
s331: determining a detection area of the structured light camera on the working platform according to the fixed height of the structured light camera;
s332: processing the detection area depth map by S32, then carrying out connected domain analysis, and setting an anti-interference band with a fixed pixel width at the peripheral edge of the detection area image;
s333: judging whether the interference prevention belt has a shelter or not according to whether the edge coordinate position of the connected domain is positioned on the interference prevention belt or not; if not, entering S4; if yes, the process returns to S2.
3. The packing box geometric dimension measuring method based on the structured light camera as claimed in claim 1, wherein S1 comprises the steps of:
s11: establishing a coordinate system and an XOY plane of the structured light camera by taking the structured light camera as an origin;
s12: acquiring a plane depth map of a workbench and generating point cloud;
s13: fitting the workbench plane point cloud to obtain plane equation parameters to obtain a workbench plane normal vector;
s14: calculating an included angle and a rotation matrix by a normal vector of a workbench plane and a normal vector of a camera imaging plane;
s15: calculating to obtain a point cloud of primary plane correction according to the rotation matrix;
s16: calculating the mass centers of the preliminary plane correction point cloud and the original point cloud to calculate a translation matrix;
s17: and fusing the rotation matrix and the translation matrix to obtain a final transformation matrix for plane correction.
4. The packing box geometric dimension measuring method based on the structured light camera as claimed in claim 3, wherein S13 comprises the following steps:
point cloud fitting constructs the worktable plane equation:

$Ax + By + Cz + D = 0$

the worktable plane normal vector $\vec{n}_1 = (A, B, C)$ is normalized (made dimensionless), the normalized normal vector is denoted $\hat{n}_1$, and its unit vector direction is defined as pointing along the positive z-axis.
5. The packing box geometric dimension measuring method based on the structured light camera as claimed in claim 4, wherein S14 comprises the following steps:
the equation of the camera XOY plane is:

$z = 0$

from which the normal vector of the XOY plane of the structured light camera is:

$\vec{n}_2 = (0, 0, 1)$

the included angle $\theta$ is derived from:

$\cos\theta = \dfrac{\hat{n}_1 \cdot \vec{n}_2}{|\hat{n}_1|\,|\vec{n}_2|}$

namely:

$\theta = \arccos\!\left(\dfrac{\hat{n}_1 \cdot \vec{n}_2}{|\hat{n}_1|\,|\vec{n}_2|}\right)$

the cross product of $\hat{n}_1$ and $\vec{n}_2$ is then taken:

$\vec{n}_3 = \hat{n}_1 \times \vec{n}_2$

the vector $\vec{n}_3$ is the normal vector of the plane formed by $\hat{n}_1$ and $\vec{n}_2$; normalizing it gives the rotation axis $\hat{n}_3$; the rotation matrix is then derived from the Rodrigues rotation formula:

$R = \cos\theta\, I + (1 - \cos\theta)\, \hat{n}_3 \hat{n}_3^{\mathsf{T}} + \sin\theta\, [\hat{n}_3]_{\times}$
6. The packing box geometric dimension measuring method based on the structured light camera as claimed in claim 5, wherein S15 comprises the following steps:
with $P = (x, y, z)^{\mathsf{T}}$ denoting the original point cloud coordinates and $P_1 = (x_1, y_1, z_1)^{\mathsf{T}}$ denoting the preliminarily corrected point cloud coordinates, the preliminary correction is expressed as the rotation matrix $R$ multiplied by the original point cloud coordinates:

$P_1 = R\,P$

with the coordinate conversion relation:

$\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} = R \begin{pmatrix} x \\ y \\ z \end{pmatrix}$
7. The packing box geometric dimension measuring method based on the structured light camera as claimed in claim 6, wherein S16 comprises the following steps:
from the point cloud centroid formula:

$\bar{p} = \dfrac{1}{n} \sum_{i=1}^{n} p_i$

the centroid of the original point cloud is obtained as $\bar{P} = (\bar{x}, \bar{y}, \bar{z})$ and the centroid of the preliminarily plane-corrected point cloud as $\bar{P}_1 = (\bar{x}_1, \bar{y}_1, \bar{z}_1)$; the translation amount is then:

$\vec{t} = \bar{P} - \bar{P}_1 = (t_x, t_y, t_z)$

and the derived translation matrix is:

$T = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}$
8. The packing box geometric dimension measuring method based on the structured light camera as claimed in claim 7, wherein S17 comprises the following steps:
if the plane correction process is first rotation and then translation, the transformation matrix can be expressed as:

$M = T \begin{pmatrix} R & \mathbf{0} \\ \mathbf{0}^{\mathsf{T}} & 1 \end{pmatrix}$

the final corrected point cloud coordinates $P' = (x', y', z')^{\mathsf{T}}$ can be calculated by:

$\begin{pmatrix} P' \\ 1 \end{pmatrix} = M \begin{pmatrix} P \\ 1 \end{pmatrix}$

and the final coordinate conversion relation is:

$P' = R\,P + \vec{t}$
9. The packing box geometric dimension measuring method based on the structured light camera as claimed in claim 8, wherein S41 comprises the following steps:
the calibrated camera intrinsic parameters $f_x$, $f_y$, $c_x$, $c_y$ are obtained from the structured light camera, where $f_x$ and $f_y$ represent the focal length expressed in pixels along the u and v axes of the image plane (their ratio giving the pixel aspect ratio), and $c_x$, $c_y$ represent the x and y coordinates of the image principal point; according to the pinhole imaging principle:

$u = f_x \dfrac{x}{z} + c_x, \qquad v = f_y \dfrac{y}{z} + c_y$

namely, the reconstruction relation from a depth map pixel $(u, v)$ with depth value $d$ to the point cloud $P = (x, y, z)$ in the camera coordinate system:

$x = \dfrac{(u - c_x)\,d}{f_x}, \qquad y = \dfrac{(v - c_y)\,d}{f_y}, \qquad z = d$

the plane-correction coordinate conversion $P' = R\,P + \vec{t}$ obtained in S17 is then applied to obtain the final point cloud data.
10. The packing box geometric dimension measuring method based on the structured light camera as claimed in claim 1, wherein S42 comprises the following steps:
s421: connected domain analysis of the depth map yields the target area bounds $x_{\min}$, $x_{\max}$, $y_{\min}$, $y_{\max}$, which form a circumscribed rectangle and a Mask map;
s422: within the circumscribed rectangle, starting from its leftmost column $x_{\min}$, the pixels of the S31 target depth map are traversed from left to right; when a pixel lies at a position whose corresponding Mask map value is nonzero, the left-to-right traversal stops and the point cloud of that pixel is reconstructed to obtain its space coordinates $(x, y, z)$; since the plane-corrected point cloud z value does not affect the position of the edge, only $(x, y)$ is stored into the left edge mapping point set $E_{\text{left}}$; starting from $y_{\min}$ on the y-axis, the S31 target depth map is traversed row by row from top to bottom, extracting the left edge mapping point of each row until the y-axis reaches $y_{\max}$, thereby extracting the left edge point set of the target;
s423: within the circumscribed rectangle, starting from its rightmost column $x_{\max}$, the pixels of the S31 target depth map are traversed from right to left; when a pixel lies at a position whose corresponding Mask map value is nonzero, the right-to-left traversal stops and the point cloud of that pixel is reconstructed to obtain its space coordinates $(x, y, z)$, of which $(x, y)$ is stored into the right edge mapping point set $E_{\text{right}}$; starting from $y_{\min}$ on the y-axis, the S31 target depth map is traversed row by row from top to bottom, extracting the right edge mapping point of each row until the y-axis reaches $y_{\max}$, thereby extracting the right edge point set of the target;
s424: the left and right edge mapping point sets $E_{\text{left}}$ and $E_{\text{right}}$ of S422 and S423 are merged to obtain the complete target edge mapping point set $E$; solving the minimum circumscribed rectangle of the point set $E$, which contains only $(x, y)$ coordinates, yields the length and width of the target.
CN202211687709.4A 2022-12-27 2022-12-27 Packing box geometric dimension measuring method based on structured light camera Active CN115682941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211687709.4A CN115682941B (en) 2022-12-27 2022-12-27 Packing box geometric dimension measuring method based on structured light camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211687709.4A CN115682941B (en) 2022-12-27 2022-12-27 Packing box geometric dimension measuring method based on structured light camera

Publications (2)

Publication Number Publication Date
CN115682941A CN115682941A (en) 2023-02-03
CN115682941B (en) 2023-03-07

Family

ID=85056207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211687709.4A Active CN115682941B (en) 2022-12-27 2022-12-27 Packing box geometric dimension measuring method based on structured light camera

Country Status (1)

Country Link
CN (1) CN115682941B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830873B (en) * 2018-06-29 2022-02-01 京东方科技集团股份有限公司 Depth image object edge extraction method, device, medium and computer equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416804A (en) * 2018-02-11 2018-08-17 深圳市优博讯科技股份有限公司 Obtain method, apparatus, terminal device and the storage medium of target object volume
CN110017773A (en) * 2019-05-09 2019-07-16 福建(泉州)哈工大工程技术研究院 A kind of package volume measuring method based on machine vision
CN114396875A (en) * 2022-01-18 2022-04-26 安徽工业大学 Rectangular parcel volume measurement method based on vertical shooting of depth camera
CN114993182A (en) * 2022-06-28 2022-09-02 浙江外国语学院 Conveyor belt-based parcel packaging size measurement system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Zhigang et al. Research on dimension measurement of logistics packing boxes based on binocular stereo vision. 2020, vol. 41, pp. 230-236. *

Also Published As

Publication number Publication date
CN115682941A (en) 2023-02-03

Similar Documents

Publication Publication Date Title
CN110017773B (en) Package volume measuring method based on machine vision
CN107203973B (en) Sub-pixel positioning method for center line laser of three-dimensional laser scanning system
CN106839977B (en) Shield dregs volume method for real-time measurement based on optical grating projection binocular imaging technology
US9607406B2 (en) Size measurement device and size measurement method
TWI398796B (en) Pupil tracking methods and systems, and correction methods and correction modules for pupil tracking
EP3020023B1 (en) Systems and methods for producing a three-dimensional face model
CN105574921B (en) Automated texture mapping and animation from images
KR20180014677A (en) System and method for improved scoring of 3d poses and spurious point removal in 3d image data
JP3738456B2 (en) Article position detection method and apparatus
CN113223135B (en) Three-dimensional reconstruction device and method based on special composite plane mirror virtual image imaging
CN109903346A (en) Camera attitude detecting method, device, equipment and storage medium
CN110415363A (en) A kind of object recognition positioning method at random based on trinocular vision
WO2019041794A1 (en) Distortion correction method and apparatus for three-dimensional measurement, and terminal device and storage medium
CN114998328A (en) Workpiece spraying defect detection method and system based on machine vision and readable storage medium
CN115682941B (en) Packing box geometric dimension measuring method based on structured light camera
CN110223356A (en) A kind of monocular camera full automatic calibration method based on energy growth
CN106340062B (en) A kind of generation method and device of three-D grain model file
Petrovai et al. Obstacle detection using stereovision for Android-based mobile devices
CN108010084A (en) A kind of depth camera is rebuild and method, system, the equipment of automatic Calibration
CN209342062U (en) 3D vision guide de-stacking measuring system
CN112070844A (en) Calibration method and device of structured light system, calibration tool diagram, equipment and medium
CN116309573A (en) Defect detection method for printed characters of milk packaging box
Pang et al. An algorithm for extracting the center of linear structured light fringe based on directional template
CN115294277A (en) Three-dimensional reconstruction method and device of object, electronic equipment and storage medium
CN115456945A (en) Chip pin defect detection method, detection device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20230203

Assignee: HONGDA MACHINERY MANUFACTURING (HEYUAN) Co.,Ltd.

Assignor: GUANGDONG POLYTECHNIC NORMAL University

Contract record no.: X2024980002156

Denomination of invention: A method for measuring the geometric dimensions of packaging boxes based on structured light cameras

Granted publication date: 20230307

License type: Common License

Record date: 20240222
