CN116485887A - Unsupervised 3D carton detection method and system - Google Patents

Unsupervised 3D carton detection method and system

Info

Publication number
CN116485887A
Authority
CN
China
Prior art keywords
point cloud
carton
data set
axis data
area
Prior art date
Legal status
Granted
Application number
CN202310084182.4A
Other languages
Chinese (zh)
Other versions
CN116485887B (en)
Inventor
周志刚
陈勇超
李昌昊
孔祥宇
Current Assignee
Hubei Proge Technology Co ltd
Original Assignee
Hubei Proge Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hubei Proge Technology Co., Ltd.
Priority to CN202310084182.4A
Publication of CN116485887A
Application granted
Publication of CN116485887B
Legal status: Active


Classifications

    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods (G06T 7/00 Image analysis; G06T 7/70 Determining position or orientation of objects or cameras)
    • G06Q 10/087 — Inventory or stock management, e.g. order filling, procurement or balancing against orders (G06Q 10/08 Logistics, e.g. warehousing, loading or distribution)
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI] (G06V 10/20 Image preprocessing)
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components (G06V 10/40 Extraction of image or video features)
    • G06V 10/762 — Recognition or understanding using pattern recognition or machine learning using clustering; G06V 10/763 — Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06T 2207/10028 — Range image; depth image; 3D point clouds (G06T 2207/10 Image acquisition modality)
    • Y02P 90/30 — Computing systems specially adapted for manufacturing (Y02P Climate change mitigation technologies in the production or processing of goods)

Abstract

The invention discloses an unsupervised 3D carton detection method and system. An original point cloud is input and the point cloud data are filtered; the carton point cloud is screened out by K-Means clustering. The carton single-channel bird's-eye view is taken as the input image of a Canny edge detection algorithm, and Canny edge detection and contour detection are then carried out to obtain the position area of the target carton. The invention improves the detection efficiency of logistics cartons and the accuracy of carton size detection. Overall performance is improved through a fusion scheme that combines the point cloud with the correspondingly generated bird's-eye view and other data, fusing visual texture information with the spatial geometric information of the point cloud to realize high-precision detection of an object's spatial position. Meanwhile, the carton may move freely within the effective detection range of the sensor, detection can be performed in a dark environment, and no data annotation is needed, laying a solid foundation for intelligent warehousing.

Description

Unsupervised 3D carton detection method and system
Technical Field
The invention relates to the technical field of intelligent warehouse logistics, and in particular discloses an unsupervised 3D carton detection method and system.
Background
In the warehousing and logistics industry, in order to improve the utilization of storage space, the sorting efficiency of goods and the space utilization of trucks, and to reduce operating costs, the spatial position of the target object (a logistics carton) must be known when goods are loaded. The target object is then grasped, carried and placed in a suitable position, so that the space of a container (truck) of specified dimensions is used to the maximum.
Detecting the spatial position of an object is key to automated, intelligent handling of goods in logistics and is of great significance for improving the efficiency of logistics operations. Past research on object spatial position detection has focused mainly on detection in two-dimensional images. However, when severe occlusion and noise occur in the detected picture, such detection algorithms are often inaccurate. More recent three-dimensional detection algorithms can better acquire spatial structure information such as an object's position, size and orientation. Three-dimensional detection performs all-round detection of the measured object by various methods to obtain the three-dimensional coordinates of the whole object. However, existing methods that complete the three-dimensional detection task with monocular or binocular vision are easily affected by object occlusion, viewpoint changes and scale changes, resulting in poor detection accuracy and poor robustness.
Therefore, the susceptibility of existing monocular or binocular vision methods for three-dimensional detection to object occlusion, viewpoint changes and scale changes, with the resulting poor detection accuracy and robustness, is a technical problem to be solved in the prior art.
Disclosure of Invention
The invention provides an unsupervised 3D carton detection method and system, aiming to solve the technical problem that existing methods which complete the three-dimensional detection task with monocular or binocular vision are easily affected by object occlusion, viewpoint changes and scale changes, resulting in poor detection accuracy and robustness.
One aspect of the invention relates to an unsupervised 3D carton detection method comprising the steps of:
processing a point cloud: inputting an original point cloud and filtering the point cloud data; screening out the carton point cloud by K-Means clustering;
processing an image: taking the carton single-channel bird's-eye view as the input image of a Canny edge detection algorithm, and then carrying out Canny edge detection and contour detection to obtain the position area of the target carton.
Further, the step of processing the point cloud includes:
collecting point cloud data, placing a 3D camera right above a shooting area, adjusting the angle of the camera to enable the ground to be perpendicular to a Z axis in a camera coordinate system, and shooting a target area to obtain a background image;
After the background image data are stored, taking the logistics paper box as an object to be detected, and shooting a target area by a 3D camera;
shooting by a 3D camera to obtain a background point cloud data set and a carton point cloud data set, and obtaining a Z-axis data set of the carton surface point cloud and a Z-axis data set of the conveyor belt point cloud according to the shot background point cloud data set and carton point cloud data set;
performing cluster analysis on the obtained Z-axis data set of the carton surface point cloud and Z-axis data set of the conveyor belt point cloud with an unsupervised learning clustering algorithm based on Euclidean distance; after clustering, the colored point cloud is divided into two main classes, a black point cloud subset and a blue point cloud subset; according to the number of points, the blue point cloud subset is set as the carton surface area and the black point cloud subset as the conveyor belt area;
and respectively extracting the Z-axis data set of the point cloud of the surface of the carton obtained after clustering and the Z-axis data set of the point cloud of the conveyor belt according to the number of the point clouds, and carrying out point cloud segmentation to obtain the point cloud of the surface of the carton.
Further, in the step of obtaining a background point cloud data set and a carton point cloud data set through shooting by the 3D camera, and obtaining the Z-axis data set of the carton surface point cloud and the Z-axis data set of the conveyor belt point cloud from the shot background point cloud data set and carton point cloud data set, it is assumed that the X-axis data set of the carton surface point cloud is B_x = {X_b1, X_b2, ..., X_bn}, the Y-axis data set of the carton surface point cloud is B_y = {Y_b1, Y_b2, ..., Y_bn}, the X-axis data set of the conveyor belt point cloud is G_x = {X_g1, X_g2, ..., X_gn}, and the Y-axis data set of the conveyor belt point cloud is G_y = {Y_g1, Y_g2, ..., Y_gn}. Because cartons and conveyor belts generally lie at different heights, the points are divided, according to the Z-axis data characteristics of the point cloud, into the Z-axis data set of the carton surface point cloud B_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_b1, Z_b2, ..., Z_bn} and the Z-axis data set of the conveyor belt point cloud G_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_g1, Z_g2, ..., Z_gn}.
Further, the step of processing the image includes:
preprocessing the three-dimensional point cloud for dimension reduction, and converting the Z-axis data set of the segmented carton surface point cloud into a carton single-channel bird's-eye view according to the depth information;
obtaining edge information on the carton single-channel bird's-eye view with a Canny edge detection algorithm;
after the edge detection processing of the Canny algorithm, obtaining corner points with a contour detection algorithm according to the obtained edge information;
after the corner information is obtained by the contour detection algorithm, obtaining the length and width of the carton with a minimum circumscribed rectangle fitting algorithm to obtain the minimum circumscribed rectangle.
Further, the step of obtaining corner information with a contour detection algorithm and then obtaining the length and width of the carton with a minimum circumscribed rectangle fitting algorithm comprises:
calculating the circumscribed rectangle of a contour area by direct calculation, recording its length, width and area as the minimum circumscribed rectangle RectMin, assigning its area value to a variable AreaMin, and setting the rotation angle α = 0°;
rotating the contour area by an angle θ, obtaining the minimum circumscribed rectangle RectTmp after rotation, and assigning its area value to a variable AreaTmp;
setting the rotation angle α = α + θ and comparing AreaTmp with AreaMin; if AreaTmp is smaller, assigning it to AreaMin, recording the rotation angle at that time as β = α, and updating the rectangle information RectMin = RectTmp;
obtaining the minimum circumscribed rectangle RectMin and the rotation angle β corresponding to RectMin;
rotating the calculated rectangle RectMin back by the angle β to obtain the minimum circumscribed rectangle.
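The rotate-and-compare procedure above can be sketched directly. This minimal NumPy version (the function name, the 1° step and the 90° search range are illustrative assumptions) rotates the contour points in increments of θ, tracks AreaMin/RectMin, and records the angle β at which the minimum occurs:

```python
import numpy as np

def min_bounding_rect(points, step_deg=1.0):
    """Brute-force minimum circumscribed rectangle: rotate the contour
    points in increments of theta (step_deg), track the smallest
    axis-aligned bounding area (AreaMin / RectMin), and record the
    rotation angle beta at which it occurs. Rotating RectMin back by
    beta recovers its pose in the original coordinates."""
    best_area, best = np.inf, None
    for alpha in np.arange(0.0, 90.0, step_deg):  # rectangle repeats every 90 degrees
        t = np.radians(alpha)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        p = points @ rot.T                         # contour rotated by alpha
        (x0, y0), (x1, y1) = p.min(axis=0), p.max(axis=0)
        area = (x1 - x0) * (y1 - y0)               # AreaTmp
        if area < best_area:                       # AreaTmp < AreaMin
            best_area = area
            best = (x1 - x0, y1 - y0, alpha)       # RectMin, beta = alpha
    return best

# Corners of a 120 x 80 carton rotated by 30 degrees
t = np.radians(30.0)
r = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
corners = np.array([[0, 0], [120, 0], [120, 80], [0, 80]], dtype=float) @ r.T
w, h, beta = min_bounding_rect(corners)
```

Feeding in a box of known size rotated by a known angle is a convenient way to verify the search: the recovered width, height and β should match the construction.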
Another aspect of the invention relates to an unsupervised 3D carton inspection system comprising:
the point cloud processing module, used for processing a point cloud: inputting an original point cloud and filtering the point cloud data; screening out the carton point cloud by K-Means clustering;
the image processing module, used for processing an image: taking the carton single-channel bird's-eye view as the input image of a Canny edge detection algorithm, and then carrying out Canny edge detection and contour detection to obtain the position area of the target carton.
Further, the point cloud processing module includes:
the background shooting unit is used for acquiring point cloud data, placing a 3D camera right above a shooting area, adjusting the angle of the camera to enable the ground to be perpendicular to a Z axis in a camera coordinate system, and shooting a target area to obtain a background image;
the carton shooting unit is used for taking the logistics cartons as objects to be detected after the background image data are stored, and shooting the target area by the 3D camera;
the set acquisition unit is used for obtaining a background point cloud data set and a carton point cloud data set through shooting by the 3D camera, and obtaining the Z-axis data set of the carton surface point cloud and the Z-axis data set of the conveyor belt point cloud from the shot background point cloud data set and carton point cloud data set;
the clustering unit is used for performing cluster analysis on the obtained Z-axis data set of the carton surface point cloud and Z-axis data set of the conveyor belt point cloud with an unsupervised learning clustering algorithm based on Euclidean distance; after clustering, the colored point cloud is divided into two main classes, a black point cloud subset and a blue point cloud subset; according to the number of points, the blue point cloud subset is set as the carton surface area and the black point cloud subset as the conveyor belt area;
the extraction unit is used for extracting, according to the number of points, the Z-axis data set of the carton surface point cloud and the Z-axis data set of the conveyor belt point cloud obtained after clustering, and segmenting the point cloud to obtain the carton surface point cloud.
Further, in the set acquisition unit, it is assumed that the X-axis data set of the carton surface point cloud is B_x = {X_b1, X_b2, ..., X_bn}, the Y-axis data set of the carton surface point cloud is B_y = {Y_b1, Y_b2, ..., Y_bn}, the X-axis data set of the conveyor belt point cloud is G_x = {X_g1, X_g2, ..., X_gn}, and the Y-axis data set of the conveyor belt point cloud is G_y = {Y_g1, Y_g2, ..., Y_gn}. Because cartons and conveyor belts generally lie at different heights, the points are divided, according to the Z-axis data characteristics of the point cloud, into the Z-axis data set of the carton surface point cloud B_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_b1, Z_b2, ..., Z_bn} and the Z-axis data set of the conveyor belt point cloud G_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_g1, Z_g2, ..., Z_gn}.
Further, the image processing module includes:
the dividing unit is used for preprocessing the three-dimensional point cloud for dimension reduction, converting the Z-axis data set of the carton surface point cloud obtained by segmentation into a carton single-channel bird's-eye view according to the depth information;
the edge detection calculation unit is used for obtaining edge information on the carton single-channel aerial view by adopting a Canny edge detection algorithm;
the corner acquisition unit is used for acquiring a corner by adopting a contour detection algorithm according to the acquired edge information after edge detection processing is carried out by a Canny edge detection algorithm;
the circumscribed rectangle acquisition unit is used for obtaining corner information with a contour detection algorithm, and then obtaining the length and width of the carton with a minimum circumscribed rectangle fitting algorithm to obtain the minimum circumscribed rectangle.
Further, the circumscribed rectangle acquisition unit includes:
a first assignment subunit, configured to calculate an external rectangle of a certain contour area according to a direct calculation method, record the length, width and area of the external rectangle, obtain a minimum external rectangle RectMin, obtain an area value of the minimum external rectangle RectMin, assign the area value to a variable area min, and set a rotation angle α=0°
A second assignment subunit, configured to rotate the contour region by an angle θ, obtain a rotated minimum bounding rectangle RectTmp, obtain an area value thereof, and assign the area value to the variable AreaTmp
A third assignment subunit for setting a rotation angle α=α+θ, comparing the sizes of the areas tmp and the areas min, assigning a small area value to the areas min, and assigning the rotation angle at this time to β=α, and rectangular information to rectmin=recttmp;
a first obtaining subunit, configured to obtain a smallest circumscribed rectangle RectMin and a rotation angle α corresponding to the rectangle RectMi;
and the second acquisition subunit is used for reversely rotating the calculated rectangle RectMin by a beta angle to obtain the minimum circumscribed rectangle.
The beneficial effects obtained by the invention are as follows:
the invention discloses an unsupervised 3D carton detection method and system, which are implemented by processing point cloud: inputting an original point cloud, and filtering point cloud data; screening out a carton point cloud by using K-Means clustering; processing an image: and taking the carton single-channel aerial view as an input diagram of a Canny edge detection algorithm, and then carrying out contour detection and Canny edge detection to obtain a target carton position area. According to the unsupervised 3D carton detection method and system disclosed by the invention, a visual algorithm is used for replacing manual operation, so that the detection efficiency of logistics cartons is improved, and the accuracy of logistics carton size detection is improved; the whole performance is improved by using the point cloud, the bird's-eye view image and other data correspondingly generated through a fusion scheme, visual texture information and the space geometric information of the point cloud are fused, and high-precision object space position detection is realized; meanwhile, the carton is supported to move randomly within the effective detection range of the sensor, detection can be performed in a dark environment, data do not need to be marked, and a solid foundation is further laid for intelligent storage.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of an unsupervised 3D carton detection method provided by the present invention;
FIG. 2 is a detailed flowchart of an embodiment of the steps of the processing of the point cloud shown in FIG. 1;
FIG. 3 is a detailed flowchart illustrating an embodiment of the steps of processing the image shown in FIG. 1;
FIG. 4 is a detailed flowchart of an embodiment of the steps shown in FIG. 3 for obtaining corner information using a contour detection algorithm, and then obtaining the length and width of the carton using a minimum bounding rectangle fitting algorithm to obtain a minimum bounding rectangle;
fig. 5 is a background view of the conveyor belt (ground) in the present invention, taken by placing a 3D camera directly above the conveyor belt (ground) and photographing the conveyor belt (ground) below;
fig. 6 is a schematic diagram of a paper box obtained when a 3D camera photographs a target area with a logistics paper box as an object to be measured;
FIG. 7 is a schematic view of the point cloud of the present invention with the carton placed on the ground;
FIG. 8 is a graph of K-Means clustering effect of the surface area of the carton and the area of the conveyor belt obtained by judging the number of point clouds according to the invention;
fig. 9 is a schematic diagram of the carton surface point cloud, obtained by extracting, according to the number of points, the Z-axis data set of the carton surface point cloud and the Z-axis data set of the conveyor belt (ground) point cloud after clustering, and segmenting the point cloud;
FIG. 10 is a single-channel bird's-eye view of the carton in the present invention;
FIG. 11 is a schematic diagram of an edge detection result obtained by Canny edge detection according to the present invention;
FIG. 12 is a schematic diagram of the corner detection result, in which corner points are obtained with a contour detection algorithm from the edge information after Canny edge detection;
FIG. 13 is a series of minimum circumscribed rectangular views of the carton of the present invention;
fig. 14 is a functional block diagram of an embodiment of an unsupervised 3D carton inspection system provided by the present invention;
FIG. 15 is a functional block diagram of an embodiment of the point cloud processing module shown in FIG. 14;
FIG. 16 is a functional block diagram of an embodiment of the image processing module shown in FIG. 14;
FIG. 17 is a functional block diagram of an embodiment of the circumscribed rectangle acquisition unit shown in FIG. 16.
Reference numerals illustrate:
10. a point cloud processing module; 20. an image processing module; 11. a background shooting unit; 12. a carton shooting unit; 13. a set acquisition unit; 14. a clustering unit; 15. an extraction unit; 21. a dividing unit; 22. an edge detection calculation unit; 23. a corner acquisition unit; 24. a circumscribed rectangle acquisition unit; 241. a first assignment subunit; 242. a second assignment subunit; 243. a third assignment subunit; 244. a first acquisition subunit; 245. a second acquisition subunit.
Detailed Description
In order to better understand the above technical solutions, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, a first embodiment of the present invention provides an unsupervised 3D carton detection method, which comprises the following steps:
step S100, processing of point cloud: inputting an original point cloud, and filtering point cloud data; and screening out the carton point cloud by using K-Means clustering.
Step S200, processing the image: and taking the carton single-channel aerial view as an input diagram of a Canny edge detection algorithm, and then carrying out contour detection and Canny edge detection to obtain a target carton position area.
Compared with the prior art, the unsupervised 3D carton detection method disclosed in this embodiment processes the point cloud by inputting an original point cloud, filtering the point cloud data and screening out the carton point cloud by K-Means clustering, and processes the image by taking the carton single-channel bird's-eye view as the input image of a Canny edge detection algorithm and then carrying out Canny edge detection and contour detection to obtain the position area of the target carton. This embodiment adopts unsupervised learning, so no data annotation is needed. The geometric information of the point cloud is converted into a two-dimensional space by exploiting the characteristics of the point cloud, and the point cloud data are converted into a bird's-eye view according to their depth information; representing the geometric information of the point cloud as an image reduces the spatial redundancy of the point cloud and the bit information occupied by each point. To reduce the interference of noise on image processing, a K-Means clustering algorithm is adopted to improve the detection accuracy of the separated carton contour and hence of the carton position; finally, detection of the logistics carton position is realized and experimentally verified.
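The depth-to-image conversion described above can be sketched as follows. The grid resolution, the 0-255 normalisation and all names are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def carton_bev(points, res=0.005):
    """Reduce a segmented carton-surface point cloud (N x 3, metres) to a
    single-channel bird's-eye view: X and Y become pixel coordinates and
    the Z (depth) value becomes the pixel intensity. Resolution and
    0-255 normalisation are illustrative assumptions."""
    mins = points.min(axis=0)
    cols = ((points[:, 0] - mins[0]) / res).astype(int)
    rows = ((points[:, 1] - mins[1]) / res).astype(int)
    z = points[:, 2]
    scale = 255.0 / max(z.max() - z.min(), 1e-9)
    bev = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.uint8)
    bev[rows, cols] = np.clip(np.round((z - z.min()) * scale), 0, 255).astype(np.uint8)
    return bev

# Flat carton top 0.30 m above a single belt point (the depth reference)
rng = np.random.default_rng(0)
top = np.stack([rng.random(1000) * 0.5,
                rng.random(1000) * 0.4,
                np.full(1000, 0.30)], axis=1)
pts = np.vstack([top, [[0.25, 0.20, 0.0]]])
bev = carton_bev(pts)
```

Encoding only the segmented carton points this way yields the sparse single-channel image that the Canny stage consumes.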
Further, please refer to fig. 2, fig. 2 is a detailed flow chart of an embodiment of step S100 shown in fig. 1, in this embodiment, step S100 includes:
and S110, collecting point cloud data, placing a 3D camera right above a shooting area, adjusting the camera angle to enable the ground to be perpendicular to a Z axis in a camera coordinate system, and shooting a target area to obtain a background image.
First, point cloud data are collected: a 3D camera is placed directly above the conveyor belt (ground) and photographs the conveyor belt (ground) below to obtain a complete and clear background image, as shown in fig. 5.
Step S120, after the background image data are stored, taking the logistics carton as the object to be measured, and shooting the target area with the 3D camera.
After the background image data are stored, taking the logistics cartons as objects to be detected, and shooting the cartons transported on a conveyor belt (ground) by a 3D camera, as shown in fig. 6.
Step S130, shooting by a 3D camera to obtain a background point cloud data set and a carton point cloud data set, and obtaining a Z-axis data set of the carton surface point cloud and a Z-axis data set of the conveyor belt point cloud according to the shot background point cloud data set and the carton point cloud data set.
A point cloud data set of the conveyor belt (ground) and a point cloud data set of the carton are obtained through shooting by the 3D camera; a schematic diagram of the point cloud of a carton placed on the ground is shown in fig. 7. It is assumed that the X-axis data set of the carton surface point cloud is B_x = {X_b1, X_b2, ..., X_bn}, the Y-axis data set of the carton surface point cloud is B_y = {Y_b1, Y_b2, ..., Y_bn}, the X-axis data set of the conveyor belt point cloud is G_x = {X_g1, X_g2, ..., X_gn}, and the Y-axis data set of the conveyor belt point cloud is G_y = {Y_g1, Y_g2, ..., Y_gn}. Because cartons and conveyor belts generally lie at different heights, the points are divided, according to the Z-axis data characteristics of the point cloud, into the Z-axis data set of the carton surface point cloud B_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_b1, Z_b2, ..., Z_bn} and the Z-axis data set of the conveyor belt point cloud G_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_g1, Z_g2, ..., Z_gn}.
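Under the set definitions above, the height-based Z-axis split can be sketched with NumPy. The fixed threshold and the assumption that camera-frame Z grows with distance from the overhead camera (so the carton top has the smaller Z) are illustrative; in the method itself the split is found automatically by the K-Means step that follows:

```python
import numpy as np

def split_z_sets(cloud, z_split):
    """Split the Z values of the captured cloud into the carton-surface
    set B_z and the conveyor-belt set G_z using the height difference
    between carton and belt. z_split is an assumed threshold; the carton
    top is taken to be nearer the overhead camera (smaller Z)."""
    z = cloud[:, 2]
    b_z = z[z < z_split]    # carton surface points
    g_z = z[z >= z_split]   # conveyor belt points
    return b_z, g_z

# Synthetic scene: belt around 1.00 m, carton top around 0.70 m
rng = np.random.default_rng(0)
belt = np.column_stack([rng.random(500), rng.random(500),
                        rng.normal(1.00, 0.003, 500)])
top = np.column_stack([rng.random(200) * 0.3, rng.random(200) * 0.2,
                       rng.normal(0.70, 0.003, 200)])
b_z, g_z = split_z_sets(np.vstack([top, belt]), z_split=0.85)
```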
Step S140, performing cluster analysis on the obtained Z-axis data set of the carton surface point cloud and Z-axis data set of the conveyor belt point cloud with an unsupervised learning clustering algorithm based on Euclidean distance; after clustering, the colored point cloud is divided into two main classes, a black point cloud subset and a blue point cloud subset; according to the number of points, the blue point cloud subset is set as the carton surface area and the black point cloud subset as the conveyor belt area.
The obtained Z-axis data set of the carton surface point cloud and Z-axis data set of the conveyor belt (ground) point cloud are processed with an unsupervised learning clustering algorithm based on Euclidean distance (K-Means Clustering), which finds similarity in a data set by grouping the observations so that observations in the same cluster are more similar to each other than to observations in other clusters. The n observations are partitioned into k clusters, each observation belonging to the cluster whose mean (the cluster center or centroid) is nearest. As shown in fig. 8, the points fall into two classes after clustering, a black point cloud subset and a blue point cloud subset; according to the number of points, the blue subset (middle area) is determined to be the carton surface area and the black subset (peripheral area) the conveyor belt (ground) area.
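The Euclidean K-Means step can be illustrated with a minimal one-dimensional implementation over the Z values (a stand-in sketch; a library implementation such as scikit-learn's KMeans would serve equally). Cluster membership goes to the nearest mean, and the smaller cluster is taken as the carton surface per the number-of-points criterion above:

```python
import numpy as np

def kmeans_1d(z, k=2, iters=50):
    """Minimal Euclidean K-Means over Z-axis values: each observation
    joins the cluster with the nearest mean, then means are recomputed.
    A stand-in sketch for the clustering step described above."""
    centers = np.linspace(z.min(), z.max(), k)   # simple deterministic init
    for _ in range(iters):
        labels = np.argmin(np.abs(z[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = z[labels == j].mean()
    return labels, centers

# Belt points around 1.00 m, carton-top points around 0.70 m
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(1.00, 0.005, 800),   # conveyor belt
                    rng.normal(0.70, 0.005, 300)])  # carton surface
labels, centers = kmeans_1d(z)
counts = np.bincount(labels, minlength=2)
carton_cluster = int(np.argmin(counts))   # smaller cluster = carton surface
n_carton = int(counts[carton_cluster])
```

With two well-separated height modes the cluster means converge to the belt and carton-top heights, which is what makes the subsequent point-count-based labelling reliable.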
Step S150, extracting, according to the number of points, the Z-axis data set of the carton surface point cloud and the Z-axis data set of the conveyor belt point cloud obtained after clustering, and segmenting the point cloud to obtain the carton surface point cloud.
And respectively extracting the Z-axis data set of the point cloud of the surface of the carton and the Z-axis data set of the point cloud of the conveyor belt (ground) according to the number of the point clouds after clustering, and carrying out point cloud segmentation, wherein the extracted point clouds of the surface of the carton are shown in fig. 9.
It can be seen that the point cloud segmentation effectively removes noise points and cleanly separates the main object (the carton surface) from structures such as the background (the conveyor belt or ground). The algorithm steps are simple, the volume of point cloud data is greatly reduced, and the segmentation effect is markedly better than that of conventional segmentation methods.
Compared with the prior art, the unsupervised 3D carton detection method provided by the embodiment has the advantages that the point cloud data are collected, a 3D camera is placed right above a shooting area, the camera angle is adjusted so that the ground is perpendicular to a Z axis in a camera coordinate system, and a target area is shot to obtain a background image; after the background image data are stored, taking the logistics paper box as an object to be detected, and shooting a target area by a 3D camera; shooting by a 3D camera to obtain a background point cloud data set and a carton point cloud data set, and obtaining a Z-axis data set of the carton surface point cloud and a Z-axis data set of the conveyor belt point cloud according to the shot background point cloud data set and carton point cloud data set; adopting an unsupervised learning clustering algorithm based on Euclidean distance to perform clustering analysis on the obtained Z-axis data set of the point cloud on the surface of the carton and the Z-axis data set of the point cloud of the conveyor belt; after clustering, the color point cloud is divided into two main types, namely a black point cloud subset and a blue point cloud subset, the blue point cloud subset is set as a carton surface area according to the number of the point clouds, and the black point cloud subset is set as a conveyor belt area; and respectively extracting the Z-axis data set of the point cloud of the surface of the carton obtained after clustering and the Z-axis data set of the point cloud of the conveyor belt according to the number of the point clouds, and carrying out point cloud segmentation to obtain the point cloud of the surface of the carton.
According to the unsupervised 3D carton detection method disclosed by the embodiment, a visual algorithm is used for replacing manual operation, so that the detection efficiency of logistics cartons is improved, and the accuracy of logistics carton size detection is improved; the whole performance is improved by using the point cloud, the bird's-eye view image and other data correspondingly generated through a fusion scheme, visual texture information and the space geometric information of the point cloud are fused, and high-precision object space position detection is realized; meanwhile, the carton is supported to move randomly within the effective detection range of the sensor, detection can be performed in a dark environment, data do not need to be marked, and a solid foundation is further laid for intelligent storage.
Preferably, please refer to fig. 3, fig. 3 is a detailed flow chart of an embodiment of step S20 shown in fig. 1, in which step S20 includes:
Step S210, preprocessing dimension reduction operation is carried out on the three-dimensional point cloud, and the Z-axis data set of the point cloud on the surface of the carton obtained through segmentation is converted into a carton single-channel aerial view according to depth information.
In order to accelerate the detection speed of the carton spatial position, preprocessing dimension reduction operation is required to be performed on the three-dimensional point cloud, and a Z-axis data set of the segmented carton surface point cloud is converted into a bird's-eye view image according to depth information, wherein the bird's-eye view image of the carton under the condition of single passage is shown in fig. 10.
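The depth-to-image conversion above can be sketched as follows (a minimal illustration; the grid resolution and the intensity mapping are assumptions, and the carton dimensions are borrowed from the experiment section):

```python
import numpy as np

def to_birdseye(points, grid=0.002):
    """Project (x, y, z) surface points onto the XY plane as a single-channel
    bird's-eye image whose pixel intensity encodes depth (Z)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / grid).astype(int)
    iy = ((y - y.min()) / grid).astype(int)
    img = np.zeros((iy.max() + 1, ix.max() + 1), dtype=np.uint8)
    span = max(z.max() - z.min(), 1e-9)          # avoid division by zero
    img[iy, ix] = (1 + 254 * (z - z.min()) / span).astype(np.uint8)
    return img

# Synthetic carton top: 360 mm x 273 mm at ~223 mm height, as a noisy cloud
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(0, 0.360, 2000),
                       rng.uniform(0, 0.273, 2000),
                       rng.normal(0.223, 0.002, 2000)])
bev = to_birdseye(pts)   # single-channel uint8 bird's-eye view
```

Representing the geometry as an image in this way reduces the spatial redundancy of the point cloud before the 2-D edge and contour stages.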
Step S220, obtaining edge information on the carton single-channel aerial view by adopting a Canny edge detection algorithm.
Edge information is obtained on the aerial view of the carton by adopting a Canny edge detection algorithm, and the edge diagram of the carton surface obtained by performing Canny edge detection is shown in fig. 11. The general approach to edge detection is to convolve the image with an integer-order differential gradient operator template.
The Canny edge detection algorithm steps are as follows:
step S221, gaussian filtering is performed on the gray-scale image.
Step S222, calculating the gradient of the image using a difference template.
And S223, performing non-maximum value inhibition on the gradient amplitude, reserving a local maximum value, and refining the edge.
Step S224, detecting and connecting edges using a double-threshold algorithm.
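The four steps above can be sketched in plain NumPy (a minimal sketch of the classical Canny pipeline; the kernel size, difference template, angle sectors, and thresholds are illustrative assumptions, not the patent's exact parameters):

```python
import numpy as np

def canny(gray, low=20.0, high=60.0):
    """Minimal sketch of steps S221-S224 on a 2-D gray-scale array."""
    h, w = gray.shape
    # S221: 3x3 Gaussian filtering of the gray-scale image
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
    pad = np.pad(gray.astype(float), 1, mode="edge")
    sm = sum(k[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    # S222: gradients from simple central-difference templates
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    gx[:, 1:-1] = sm[:, 2:] - sm[:, :-2]
    gy[1:-1, :] = sm[2:, :] - sm[:-2, :]
    mag = np.hypot(gx, gy)
    ang = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
    # S223: non-maximum suppression along the gradient direction, thinning edges
    nms = np.zeros_like(mag)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]
    # S224: double threshold; weak pixels survive only next to strong ones
    strong = nms >= high
    weak = (nms >= low) & ~strong
    edges = strong.copy()
    for i, j in zip(*np.nonzero(weak)):
        if strong[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].any():
            edges[i, j] = True
    return edges
```

On a vertical step edge, the suppression stage keeps only the one- or two-pixel ridge of maximal gradient magnitude, which is exactly the thinning step S223 describes.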
And step S230, after edge detection processing is carried out by a Canny edge detection algorithm, obtaining corner points by adopting a contour detection algorithm according to the obtained edge information.
After Canny edge detection processing, a contour detection algorithm is adopted to obtain corner points according to the edge information, as shown in fig. 12.
And S240, acquiring angular point information by adopting a contour detection algorithm, and acquiring the length and the width of the carton by using a minimum circumscribed rectangle fitting algorithm to acquire a minimum circumscribed rectangle.
The contour detection algorithm is adopted to obtain angular point information, and the minimum circumscribed rectangle fitting algorithm is used to obtain the length and width of the carton; fig. 13 shows the minimum circumscribed rectangle of the carton after the series of processing steps. The minimum circumscribed rectangle fitting algorithm uses the Graham algorithm based on a plane scanning method to calculate the convex hull of the outline of the carton. There are two general methods for computing the bounding rectangle of an object in an image:
1. The direct calculation method is obtained by calculating the maximum and minimum values of the distribution coordinates of the objects in the image.
2. The image object is rotated at equal intervals within the range of 90 degrees by the equal-interval rotation searching method, the circumscribed rectangle parameters of the outline of the image object in the coordinate system direction are recorded each time, and the minimum circumscribed rectangle is obtained by calculating the circumscribed rectangle area.
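Since the minimum circumscribed rectangle fitting builds on the convex hull of the carton contour, a hull sketch may help; this uses Andrew's monotone chain, a sweep-based O(n log n) variant in the same spirit as the Graham algorithm named above:

```python
def convex_hull(points):
    """Convex hull of 2-D points (tuples) via Andrew's monotone chain."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # sweep left-to-right for the lower chain
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # sweep right-to-left for the upper chain
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # counter-clockwise hull, no duplicates
```

Interior points such as corner noise inside the carton contour are discarded, so the subsequent rectangle search only has to consider hull vertices.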
Compared with the prior art, the unsupervised 3D carton detection method provided by the embodiment has the advantages that the dimension reduction operation is performed on the three-dimensional point cloud in a preprocessing mode, and the Z-axis data set of the point cloud on the surface of the carton obtained through segmentation is converted into a carton single-channel aerial view according to depth information; obtaining edge information on the carton single-channel aerial view by adopting a Canny edge detection algorithm; after edge detection processing is carried out by a Canny edge detection algorithm, according to the obtained edge information, a corner point is obtained by adopting a contour detection algorithm; and acquiring angular point information by adopting a contour detection algorithm, and acquiring the length and the width of the carton by using a minimum circumscribed rectangle fitting algorithm to acquire the minimum circumscribed rectangle. According to the unsupervised 3D carton detection method disclosed by the embodiment, a visual algorithm is used for replacing manual operation, so that the detection efficiency of logistics cartons is improved, and the accuracy of logistics carton size detection is improved; the whole performance is improved by using the point cloud, the bird's-eye view image and other data correspondingly generated through a fusion scheme, visual texture information and the space geometric information of the point cloud are fused, and high-precision object space position detection is realized; meanwhile, the carton is supported to move randomly within the effective detection range of the sensor, detection can be performed in a dark environment, data do not need to be marked, and a solid foundation is further laid for intelligent storage.
Further, please refer to fig. 4, fig. 4 is a detailed flow chart of an embodiment of step S240 shown in fig. 3, in which step S240 includes:
Step S241, calculating the circumscribed rectangle of a certain contour area according to a direct calculation method, recording the length, width and area of the circumscribed rectangle, obtaining the minimum circumscribed rectangle RectMin, obtaining its area value and assigning it to the variable AreaMin, and setting the rotation angle α=0°.
Step S242, the outline area is rotated by an angle θ, the rotated minimum circumscribed rectangle RectTmp is obtained, and the area value is obtained and assigned to the variable AreaTmp.
Step S243, setting a rotation angle α=α+θ, comparing the sizes of AreaTmp and AreaMin, assigning the smaller area value to AreaMin, recording the rotation angle at this time as β=α, and assigning the rectangle information RectMin=RectTmp.
Step S244, a minimum circumscribed rectangle RectMin and the rotation angle β corresponding to the rectangle RectMin are obtained.
Step S245, reversely rotating the calculated rectangle RectMin by a beta angle to obtain the minimum circumscribed rectangle.
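Steps S241-S245 can be sketched as follows (a minimal sketch; the `step_deg` interval and the list-of-tuples contour representation are assumptions):

```python
import math

def min_area_rect(points, step_deg=1.0):
    """Equal-interval rotation search over [0, 90 deg) per steps S241-S245."""
    def aabb(pts):
        # Axis-aligned bounding box via the direct calculation method
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        w, h = max(xs) - min(xs), max(ys) - min(ys)
        return w * h, (w, h)

    # S241: direct calculation at alpha = 0 initializes AreaMin / RectMin
    area_min, rect_min = aabb(points)
    beta = 0.0
    alpha = step_deg
    while alpha < 90.0:
        # S242: rotate the contour by the accumulated angle alpha
        t = math.radians(alpha)
        rot = [(x * math.cos(t) - y * math.sin(t),
                x * math.sin(t) + y * math.cos(t)) for x, y in points]
        # S243: keep the smaller area and record beta = alpha
        area, rect = aabb(rot)
        if area < area_min:
            area_min, rect_min, beta = area, rect, alpha
        alpha += step_deg
    # S244: rect_min is the minimum circumscribed rectangle, beta its angle;
    # S245: rotating the result back by -beta restores the original orientation
    return rect_min, beta
```

The returned width/height pair gives the carton's length and width in the rotated frame; rotating the rectangle back by β (step S245) places it over the original contour.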
Experimental verification and result analysis:
A carton with a length of 360mm, a width of 273mm and a height of 223mm is selected as the object to be measured, and the camera height is 1465mm. Important indexes of model performance are adopted here: average precision (AP, Average Precision), recall (R, Recall), mean intersection over union (MIoU, Mean Intersection over Union), and detection speed (FPS, Frames Per Second). The intersection-over-union evaluation index (IoU, Intersection over Union) measures the degree of overlap between the region position predicted by the system and the region position marked in the original picture, and the average intersection over union MIoU is calculated over multiple experiments. P (Precision): precision refers to the probability of correct detection among all detected targets. R (Recall): recall refers to the probability of correct recognition among all positive samples. To balance the two indexes P and R, the AP index is proposed, as shown in formula (1):
In the formula (1), AP is the accuracy of the evaluation model for a certain type of defect, TP is the number of positive samples predicted as positive, TN is the number of negative samples predicted as negative, FP is the number of negative samples predicted as positive, and FN is the number of positive samples predicted as negative.
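The body of formula (1) did not survive extraction. Under the standard definitions matching the TP/FP/FN glossary above, precision, recall, and the AP that balances them are conventionally written as follows (a reconstruction, not necessarily the patent's exact formula):

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
AP = \int_{0}^{1} P(R)\, \mathrm{d}R
```

Here AP is the area under the precision-recall curve, which rewards models that keep precision high across all recall levels.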
The experiment obtains a final carton surface point cloud area through an unsupervised 3D carton detection algorithm, calculates the cross-over ratio with a manufactured 3D point cloud label, obtains an average cross-over ratio by taking an average value of the cross-over ratio, and measures the accuracy, recall rate and detection speed of the cross-over ratio, wherein the table 1 is an experimental result.
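The intersection-over-union computation against the 3D point cloud label can be sketched as follows (a hedged sketch: representing prediction and label as sets of point indices, and the per-run data, are assumptions):

```python
def iou(pred_idx, label_idx):
    """IoU between the predicted carton-surface point set and the 3D label,
    both treated as sets of point indices."""
    pred, label = set(pred_idx), set(label_idx)
    union = pred | label
    return len(pred & label) / len(union) if union else 0.0

# Average IoU (MIoU) over several hypothetical experiment runs
runs = [(range(0, 950), range(0, 1000)),
        (range(20, 1000), range(0, 1000))]
miou = sum(iou(p, l) for p, l in runs) / len(runs)
```

Averaging the per-run IoU values is exactly how the MIoU figure reported in the table is formed.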
Table 1 table of experimental results
From the experimental data, it can be seen that the unsupervised 3D carton detection technology provided by the embodiment has excellent detection results on the logistics cartons in the logistics industry, the accuracy and recall rate are 100%, the average intersection ratio is 96.1413%, and the detection speed can reach 45 frames per second, so that the detection technology can meet the requirements of effectiveness and high precision in industrial scenes.
Compared with the prior art, the unsupervised 3D carton detection method provided by the embodiment calculates the circumscribed rectangle of a certain outline area according to a direct calculation method, records the length, width and area of the circumscribed rectangle, acquires the minimum circumscribed rectangle RectMin, obtains its area value and assigns it to the variable AreaMin, and sets the rotation angle alpha=0°; rotating the outline area by an angle theta, obtaining a minimum circumscribed rectangle RectTmp after rotation, obtaining an area value and giving the area value to a variable AreaTmp; setting a rotation angle alpha=alpha+theta, comparing the sizes of AreaTmp and AreaMin, assigning the smaller area value to AreaMin, recording the rotation angle at that time as beta=alpha, and assigning rectangular information to RectMin=RectTmp; acquiring a minimum circumscribed rectangle RectMin and the rotation angle beta corresponding to the rectangle RectMin; and reversely rotating the calculated rectangle RectMin by a beta angle to obtain the minimum circumscribed rectangle. According to the unsupervised 3D carton detection method disclosed by the embodiment, a visual algorithm is used for replacing manual operation, so that the detection efficiency of logistics cartons is improved, and the accuracy of logistics carton size detection is improved; the whole performance is improved by using the point cloud, the bird's-eye view image and other data correspondingly generated through a fusion scheme, visual texture information and the space geometric information of the point cloud are fused, and high-precision object space position detection is realized; meanwhile, the carton is supported to move randomly within the effective detection range of the sensor, detection can be performed in a dark environment, data do not need to be marked, and a solid foundation is further laid for intelligent storage.
As shown in fig. 14, fig. 14 is a functional block diagram of an embodiment of an unsupervised 3D carton detection system provided by the present invention, in this embodiment, the unsupervised 3D carton detection system includes a point cloud processing module 10 and an image processing module 20, where the point cloud processing module 10 is configured to process a point cloud: inputting an original point cloud, and filtering point cloud data; screening out a carton point cloud by using K-Means clustering; an image processing module 20, configured to process an image: and taking the carton single-channel aerial view as an input diagram of a Canny edge detection algorithm, and then carrying out contour detection and Canny edge detection to obtain a target carton position area.
In the point cloud processing module 10, the point cloud data is filtered, and then the carton point cloud is screened out by using K-Means clustering.
In the image processing module 20, the single-channel aerial view of the carton is used as an input image of a Canny edge detection algorithm, and then contour detection and Canny edge detection are performed to obtain a target carton position area.
Compared with the prior art, the unsupervised 3D carton detection system provided in this embodiment adopts the point cloud processing module 10 and the image processing module 20, and processes the point cloud: inputting an original point cloud, and filtering point cloud data; screening out a carton point cloud by using K-Means clustering; processing an image: and taking the carton single-channel aerial view as an input diagram of a Canny edge detection algorithm, and then carrying out contour detection and Canny edge detection to obtain a target carton position area. The unsupervised 3D carton detection system provided by the embodiment does not need to mark data; converting the geometrical information of the point cloud into a two-dimensional space by utilizing the characteristics of the point cloud, and converting the point cloud data into a bird's eye view according to the depth information of the point cloud; the geometrical information of the point cloud is represented in an image mode, so that the space redundancy of the point cloud can be reduced, and the occupied bit information of each point can be reduced; in order to reduce the interference of noise on image processing, the detection precision of the contour of the separated paper box is improved by adopting a K-Means clustering algorithm, so that the detection precision of the position of the paper box is improved, and finally, the detection of the position of the logistics paper box is realized and experimental verification is carried out.
Further, please refer to fig. 15, fig. 15 is a schematic diagram of a functional module of an embodiment of the point cloud processing module shown in fig. 14, in this embodiment, the point cloud processing module 10 includes a background shooting unit 11, a carton shooting unit 12, a set acquisition unit 13, a clustering unit 14 and an extraction unit 15, wherein the background shooting unit 11 is configured to collect point cloud data, place a 3D camera directly above a shooting area, adjust a camera angle so that a ground is perpendicular to a Z axis in a camera coordinate system, and shoot a target area to obtain a background image; the carton shooting unit 12 is used for taking the logistics cartons as objects to be detected after storing the background image data, and shooting a target area by a 3D camera; the set acquisition unit 13 is configured to acquire a background point cloud data set and a carton point cloud data set by shooting with a 3D camera, and acquire a Z-axis data set of a carton surface point cloud and a Z-axis data set of a conveyor belt point cloud according to the background point cloud data set and the carton point cloud data set acquired by shooting; the clustering unit 14 is used for performing cluster analysis on the obtained Z-axis data set of the point cloud of the surface of the carton and the Z-axis data set of the point cloud of the conveyor belt by adopting an unsupervised learning clustering algorithm based on Euclidean distance; after clustering, the color point cloud is divided into two main types, namely a black point cloud subset and a blue point cloud subset, the blue point cloud subset is set as a carton surface area according to the number of the point clouds, and the black point cloud subset is set as a conveyor belt area; and the extraction unit 15 is configured to extract the Z-axis data set of the point cloud of the surface of the carton obtained after clustering and the Z-axis data set of the point cloud of the conveyor belt according to the number of the point clouds, and perform point cloud segmentation to obtain the point cloud of the surface of the carton.
In the background shooting unit 11, first, the point cloud data is acquired, a 3D camera is placed right above a conveyor belt (ground), and shooting is performed on a conveyor belt (ground) below, so as to obtain a complete and clear background diagram of the conveyor belt (ground), as shown in fig. 5.
In the case photographing unit 12, after the background image data is stored, the case transported on the conveyor (ground) is photographed by a 3D camera using the logistics case as an object to be measured, as shown in fig. 6.
In the set acquisition unit 13, a conveyor belt (ground) point cloud data set and a carton point cloud data set are obtained by 3D camera shooting, as shown in fig. 7, which is a schematic diagram of the point cloud with the carton placed on the ground. Assume that the X-axis data set of the carton surface point cloud is B_x = {X_b1, X_b2, ..., X_bn}, the Y-axis data set of the carton surface point cloud is B_y = {Y_b1, Y_b2, ..., Y_bn}, the X-axis data set of the conveyor belt point cloud is G_x = {X_g1, X_g2, ..., X_gn}, and the Y-axis data set of the conveyor belt point cloud is G_y = {Y_g1, Y_g2, ..., Y_gn}. Because cartons and conveyor belts often have different heights, the data are divided, according to the Z-axis characteristics of the point cloud, into the Z-axis data set of the carton surface point cloud B_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_b1, Z_b2, ..., Z_bn} and the Z-axis data set of the conveyor belt point cloud G_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_g1, Z_g2, ..., Z_gn}.
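A small sketch of how the Z-axis data sets over the common XY footprint might be formed (all data are synthetic; a simple mid-height split stands in here for the clustering step described next):

```python
import numpy as np

# Hypothetical frame: a scene cloud B of (x, y, z) points over the belt's XY
# footprint, with a carton occupying the middle region; the height difference
# between carton top and belt yields the two Z-axis data sets B_z and G_z.
rng = np.random.default_rng(1)
B = np.column_stack([rng.uniform(0, 1, 800), rng.uniform(0, 1, 800),
                     rng.normal(0.0, 0.003, 800)])           # belt near z = 0
on_box = (B[:, 0] > 0.3) & (B[:, 0] < 0.7) & (B[:, 1] > 0.3) & (B[:, 1] < 0.7)
B[on_box, 2] = rng.normal(0.22, 0.003, on_box.sum())         # carton top

mid = (B[:, 2].min() + B[:, 2].max()) / 2
B_z = B[B[:, 2] >= mid, 2]   # Z-axis data set of the carton surface point cloud
G_z = B[B[:, 2] < mid, 2]    # Z-axis data set of the conveyor belt point cloud
```

The split is clean precisely because the two height distributions are well separated, which is the property the clustering stage relies on.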
In the clustering unit 14, the acquired Z-axis data set of the carton surface point cloud and the Z-axis data set of the conveyor belt (ground) point cloud are processed with an unsupervised learning clustering algorithm based on Euclidean distance (K-Means Clustering), which groups the observations so that observations within the same cluster are more similar to each other than to observations in other clusters. The n observations are divided into k clusters, where each observation is assigned to the cluster with the nearest mean (cluster center or cluster centroid). As shown in fig. 8, the color point cloud is clustered into two classes, namely a black point cloud subset and a blue point cloud subset; according to the number of the point clouds, the blue point subset (middle area) is determined to be the carton surface area, and the black point subset (peripheral area) is determined to be the conveyor belt (ground) area.
In the extracting unit 15, the Z-axis data set of the point cloud on the surface of the cardboard box obtained after clustering and the Z-axis data set of the point cloud on the conveyor belt (ground) are extracted according to the number of the point clouds, and the point clouds are segmented, as shown in fig. 9, to obtain the point clouds on the surface of the cardboard box.
It can be seen that the point cloud segmentation processing can effectively remove noise points and better separate the main object (the carton surface) from background structures such as the conveyor belt or the ground; the algorithm steps are simple, the volume of point cloud data is greatly reduced, and the segmentation effect is remarkably improved compared with conventional segmentation methods.
Compared with the prior art, the point cloud processing module 10 adopts the background shooting unit 11, the carton shooting unit 12, the set acquisition unit 13, the clustering unit 14 and the extraction unit 15, and places a 3D camera right above a shooting area by acquiring point cloud data, adjusts the camera angle to enable the ground to be perpendicular to a Z axis in a camera coordinate system, and shoots a target area to obtain a background image; after the background image data are stored, taking the logistics paper box as an object to be detected, and shooting a target area by a 3D camera; shooting by a 3D camera to obtain a background point cloud data set and a carton point cloud data set, and obtaining a Z-axis data set of the carton surface point cloud and a Z-axis data set of the conveyor belt point cloud according to the shot background point cloud data set and carton point cloud data set; adopting an unsupervised learning clustering algorithm based on Euclidean distance to perform clustering analysis on the obtained Z-axis data set of the point cloud on the surface of the carton and the Z-axis data set of the point cloud of the conveyor belt; after clustering, the color point cloud is divided into two main types, namely a black point cloud subset and a blue point cloud subset, the blue point cloud subset is set as a carton surface area according to the number of the point clouds, and the black point cloud subset is set as a conveyor belt area; and respectively extracting the Z-axis data set of the point cloud of the surface of the carton obtained after clustering and the Z-axis data set of the point cloud of the conveyor belt according to the number of the point clouds, and carrying out point cloud segmentation to obtain the point cloud of the surface of the carton.
According to the unsupervised 3D carton detection system provided by the embodiment, a visual algorithm is used for replacing manual operation, so that the detection efficiency of logistics cartons is improved, and the accuracy of logistics carton size detection is improved; the whole performance is improved by using the point cloud, the bird's-eye view image and other data correspondingly generated through a fusion scheme, visual texture information and the space geometric information of the point cloud are fused, and high-precision object space position detection is realized; meanwhile, the carton is supported to move randomly within the effective detection range of the sensor, detection can be performed in a dark environment, data do not need to be marked, and a solid foundation is further laid for intelligent storage.
Preferably, please refer to fig. 16, fig. 16 is a functional block diagram of an embodiment of the image processing module shown in fig. 14, in which in this embodiment, the image processing module 20 includes a segmentation unit 21, an edge detection calculation unit 22, a corner point obtaining unit 23, and a circumscribed rectangle obtaining unit 24, wherein the segmentation unit 21 is configured to perform preprocessing dimension reduction operation on a three-dimensional point cloud, and the segmented Z-axis data set of the point cloud on the surface of the carton is converted into a single-channel bird's eye view of the carton according to depth information; an edge detection calculation unit 22, configured to obtain edge information on the single-channel aerial view of the carton by using a Canny edge detection algorithm; the corner point obtaining unit 23 is configured to obtain corner points by using a contour detection algorithm according to the obtained edge information after performing edge detection processing by using the Canny edge detection algorithm; and the circumscribed rectangle obtaining unit 24 is used for obtaining corner information by adopting a contour detection algorithm, and obtaining the length and the width of the carton by using a minimum circumscribed rectangle fitting algorithm to obtain a minimum circumscribed rectangle.
In the segmentation unit 21, in order to accelerate the detection speed of the spatial position of the carton, a preprocessing dimension-reduction operation needs to be performed on the three-dimensional point cloud, and the Z-axis data set of the segmented carton surface point cloud is converted into a bird's-eye view image according to the depth information, as shown in fig. 10, which is a bird's-eye view image of the carton under the single-channel condition.
In the edge detection calculation unit 22, edge information is obtained on the bird's eye view of the carton by using a Canny edge detection algorithm, and the carton surface edge map obtained by performing Canny edge detection is shown in fig. 11. Edge detection is achieved by convolving the image with an integer-order differential gradient operator template.
The edge detection by the edge detection calculation unit 22 is specifically as follows: Gaussian filtering is applied to the gray-scale image; the gradient of the image is calculated using a difference template; non-maximum suppression is performed on the gradient amplitude, local maxima are retained, and the edges are thinned; edges are detected and connected using a double-threshold algorithm.
In the corner point obtaining unit 23, after the Canny edge detection processing, a contour detection algorithm is adopted to obtain corner points according to the edge information, as shown in fig. 12.
In the circumscribed rectangle obtaining unit 24, the contour detection algorithm is adopted to obtain the corner information, and the minimum circumscribed rectangle fitting algorithm is used to obtain the length and width of the carton; fig. 13 shows the minimum circumscribed rectangle of the carton after the series of processing steps. The minimum circumscribed rectangle fitting algorithm uses the Graham algorithm based on a plane scanning method to calculate the convex hull of the outline of the carton. There are generally two methods for calculating the bounding rectangle of an object in an image:
1. The direct calculation method is obtained by calculating the maximum and minimum values of the distribution coordinates of the objects in the image.
2. The image object is rotated at equal intervals within the range of 90 degrees by the equal-interval rotation searching method, the circumscribed rectangle parameters of the outline of the image object in the coordinate system direction are recorded each time, and the minimum circumscribed rectangle is obtained by calculating the circumscribed rectangle area.
Compared with the prior art, the unsupervised 3D carton detection system provided by the embodiment is characterized in that the image processing module 20 adopts the segmentation unit 21, the edge detection and calculation unit 22, the corner point acquisition unit 23 and the external rectangle acquisition unit 24, and the Z-axis data set of the surface point cloud of the carton obtained by segmentation is converted into a single-channel aerial view of the carton according to depth information by preprocessing and dimension reduction operation on the three-dimensional point cloud; obtaining edge information on the carton single-channel aerial view by adopting a Canny edge detection algorithm; after edge detection processing is carried out by a Canny edge detection algorithm, according to the obtained edge information, a corner point is obtained by adopting a contour detection algorithm; and acquiring angular point information by adopting a contour detection algorithm, and acquiring the length and the width of the carton by using a minimum circumscribed rectangle fitting algorithm to acquire the minimum circumscribed rectangle. According to the unsupervised 3D carton detection system disclosed by the embodiment, a visual algorithm is used for replacing manual operation, so that the detection efficiency of logistics cartons is improved, and the accuracy of logistics carton size detection is improved; the whole performance is improved by using the point cloud, the bird's-eye view image and other data correspondingly generated through a fusion scheme, visual texture information and the space geometric information of the point cloud are fused, and high-precision object space position detection is realized; meanwhile, the carton is supported to move randomly within the effective detection range of the sensor, detection can be performed in a dark environment, data do not need to be marked, and a solid foundation is further laid for intelligent storage.
Further, referring to fig. 17, fig. 17 is a schematic functional block diagram of an embodiment of the circumscribed rectangle obtaining unit shown in fig. 16, in this embodiment, the circumscribed rectangle obtaining unit 24 includes a first assignment subunit 241, a second assignment subunit 242, a third assignment subunit 243, a first acquiring subunit 244, and a second acquiring subunit 245, where the first assignment subunit 241 is configured to calculate the circumscribed rectangle of a certain contour area according to a direct calculation method, record the length, width, and area of the circumscribed rectangle, acquire the minimum circumscribed rectangle RectMin, obtain its area value and assign it to the variable AreaMin, and set the rotation angle α=0°; a second assignment subunit 242, configured to rotate the contour area by an angle θ, obtain a rotated minimum bounding rectangle RectTmp, obtain its area value, and assign the area value to the variable AreaTmp; a third assignment subunit 243 configured to set a rotation angle α=α+θ, compare the sizes of AreaTmp and AreaMin, assign the smaller area value to AreaMin, record the rotation angle at this time as β=α, and assign rectangular information to RectMin=RectTmp; a first acquiring subunit 244, configured to acquire the smallest circumscribed rectangle RectMin and the rotation angle β corresponding to the rectangle RectMin; the second acquiring subunit 245 is configured to reversely rotate the calculated rectangle RectMin by the angle β to obtain the minimum circumscribed rectangle.
Experimental verification and result analysis:
A carton with a length of 360mm, a width of 273mm and a height of 223mm is selected as the object to be measured, and the camera height is 1465mm. Important indexes of model performance are adopted here: average precision (AP, Average Precision), recall (R, Recall), mean intersection over union (MIoU, Mean Intersection over Union), and detection speed (FPS, Frames Per Second). The intersection-over-union evaluation index (IoU, Intersection over Union) measures the degree of overlap between the region position predicted by the system and the region position marked in the original picture, and the average intersection over union MIoU is calculated over multiple experiments. P (Precision): precision refers to the probability of correct detection among all detected targets. R (Recall): recall refers to the probability of correct recognition among all positive samples. To balance the two indexes P and R, the AP index is proposed, as shown in formula (1):
In formula (1), AP is the precision of the evaluation model for a certain class of targets, TP is the number of positive samples predicted to be positive, TN is the number of negative samples predicted to be negative, FP is the number of negative samples predicted to be positive, and FN is the number of positive samples predicted to be negative.
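As a hedged illustration of these definitions (the AP formula itself is not reproduced in this text), precision, recall, and the intersection over union of two axis-aligned boxes can be computed as follows; the function names are assumptions of this sketch:

```python
def precision_recall(tp, fp, fn):
    """P = TP/(TP+FP): correctness among detections;
    R = TP/(TP+FN): coverage of all positive samples."""
    return tp / (tp + fp), tp / (tp + fn)

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2):
    overlap area divided by the area of the union."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union
```

Averaging the per-experiment IoU values over repeated runs gives the MIoU reported in the experiments below.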
In the experiment, the final carton surface point cloud area is obtained through the unsupervised 3D carton detection algorithm, the intersection over union with the manufactured 3D point cloud label is calculated, the mean intersection over union is obtained by averaging, and the precision, recall and detection speed are measured. Table 2 below shows the experimental results.
Table 2 Experimental results
From the experimental data, it can be seen that the unsupervised 3D carton detection technology provided by this embodiment achieves excellent detection results on logistics cartons in the logistics industry: the precision and recall are both 100%, the mean intersection over union is 96.1413%, and the detection speed reaches 45 frames per second, so the detection technology can meet the requirements of effectiveness and high precision in industrial scenes.
Compared with the prior art, in the unsupervised 3D carton detection system provided in this embodiment, the circumscribed rectangle acquiring unit 24 adopts the first assignment subunit 241, the second assignment subunit 242, the third assignment subunit 243, the first acquiring subunit 244 and the second acquiring subunit 245 to: calculate the circumscribed rectangle of a certain contour area by the direct calculation method, record the length, width and area of the circumscribed rectangle, obtain the minimum circumscribed rectangle RectMin, assign its area value to the variable AreaMin, and set the rotation angle α=0°; rotate the contour area by an angle θ, obtain the rotated minimum circumscribed rectangle RectTmp, and assign its area value to the variable AreaTmp; set the rotation angle α=α+θ, compare AreaTmp with AreaMin, assign the smaller area value to AreaMin, record the rotation angle at that time as β=α, and update the rectangle information as RectMin=RectTmp; acquire the minimum circumscribed rectangle RectMin and its corresponding rotation angle β; and reversely rotate the calculated rectangle RectMin by the angle β to obtain the final minimum circumscribed rectangle.
According to the unsupervised 3D carton detection system disclosed in this embodiment, a vision algorithm replaces manual operation, which improves both the efficiency and the accuracy of logistics carton size detection; the overall performance is improved through a fusion scheme that combines the point cloud with the correspondingly generated bird's-eye view image and other data, fusing visual texture information with the spatial geometric information of the point cloud to realize high-precision detection of object spatial position; meanwhile, the carton may be placed arbitrarily within the effective detection range of the sensor, detection can be performed in a dark environment, and no data labeling is needed, which further lays a solid foundation for intelligent warehousing.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. An unsupervised 3D carton detection method is characterized by comprising the following steps:
processing point cloud: inputting an original point cloud, and filtering point cloud data; screening out a carton point cloud by using K-Means clustering;
processing an image: taking the carton single-channel aerial view as the input image of a Canny edge detection algorithm, and then carrying out Canny edge detection and contour detection to obtain the target carton position area.
2. The unsupervised 3D carton inspection method according to claim 1, wherein the step of processing the point cloud comprises:
Collecting point cloud data, placing a 3D camera right above a shooting area, adjusting the angle of the camera to enable the ground to be perpendicular to a Z axis in a camera coordinate system, shooting a target area to obtain a background image, wherein the background is generally the ground or a conveyor belt;
after the background image data are stored, taking the logistics paper box as an object to be detected, and shooting a target area by a 3D camera;
shooting by a 3D camera to obtain a background point cloud data set and a carton point cloud data set, and acquiring a Z-axis data set of a carton surface point cloud and a Z-axis data set of a background point cloud according to the shot background point cloud data set and the shot carton point cloud data set;
performing cluster analysis on the obtained Z-axis data set of the carton surface point cloud and the Z-axis data set of the conveyor belt point cloud by adopting an unsupervised learning clustering algorithm based on Euclidean distance; after clustering, the color point cloud is divided into two main classes, namely a black point cloud subset and a blue point cloud subset; according to the number of point clouds, the blue point cloud subset is set as the carton surface area, and the black point cloud subset is set as the conveyor belt area;
and respectively extracting the Z-axis data set of the point cloud of the surface of the carton and the Z-axis data set of the background point cloud according to the number of the point clouds after clustering, and carrying out point cloud segmentation to obtain the point cloud of the surface of the carton.
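For illustration only, the Euclidean-distance clustering of Z values into a conveyor-belt cluster and a carton-surface cluster described above can be sketched as a two-centroid K-Means in one dimension; the initialization and the iteration count are assumptions of this sketch:

```python
def kmeans_z(z_values, iters=20):
    """Two-cluster K-Means on Z coordinates using Euclidean (absolute)
    distance: one cluster for the conveyor-belt/background height,
    one for the carton-surface height."""
    lo, hi = min(z_values), max(z_values)   # initial centroids
    belt, box = [], []
    for _ in range(iters):
        # Assign each Z value to its nearest centroid.
        belt = [z for z in z_values if abs(z - lo) <= abs(z - hi)]
        box = [z for z in z_values if abs(z - lo) > abs(z - hi)]
        # Recompute centroids as cluster means.
        if belt:
            lo = sum(belt) / len(belt)
        if box:
            hi = sum(box) / len(box)
    return belt, box
```

After convergence, the cluster with the greater centroid (the carton surface sits above the belt) is kept as the carton surface point cloud for segmentation.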
3. The unsupervised 3D carton detection method according to claim 2, wherein in the step of shooting with the 3D camera to obtain the background point cloud data set and the carton point cloud data set, and acquiring the Z-axis data set of the carton surface point cloud and the Z-axis data set of the conveyor belt point cloud according to the shot background point cloud data set and carton point cloud data set, it is assumed that the X-axis data set of the carton surface point cloud is B_x = {X_b1, X_b2, ..., X_bn}, the Y-axis data set of the carton surface point cloud is B_y = {Y_b1, Y_b2, ..., Y_bn}, the X-axis data set of the conveyor belt point cloud is G_x = {X_g1, X_g2, ..., X_gn}, and the Y-axis data set of the conveyor belt point cloud is G_y = {Y_g1, Y_g2, ..., Y_gn}; because the carton and the conveyor belt tend to have different heights, according to the Z-axis data characteristic of the point cloud, the Z-axis data set of the carton surface point cloud is B_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_b1, Z_b2, ..., Z_bn} and the Z-axis data set of the conveyor belt point cloud is G_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_g1, Z_g2, ..., Z_gn}.
4. The unsupervised 3D carton inspection method according to claim 2, wherein the step of processing the image comprises:
preprocessing the three-dimensional point cloud, reducing the dimension, and converting the Z-axis data set of the partitioned carton surface point cloud into a carton single-channel aerial view according to depth information;
obtaining edge information on the carton single-channel aerial view by adopting a Canny edge detection algorithm;
After edge detection processing is carried out by a Canny edge detection algorithm, according to the obtained edge information, a contour detection algorithm is adopted to obtain corner points;
and acquiring angular point information by adopting a contour detection algorithm, and acquiring the length and the width of the carton by using a minimum circumscribed rectangle fitting algorithm to acquire the minimum circumscribed rectangle.
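For illustration only, the dimension-reduction step that turns the segmented carton-surface point cloud into a single-channel bird's-eye view can be sketched as a depth projection onto a 2D grid; the grid resolution, image size, and function name are assumptions of this sketch:

```python
def bird_eye_view(points, res=0.01, shape=(64, 64)):
    """Project carton-surface points (x, y, z) onto a single-channel
    top-down image whose pixel value encodes depth (z)."""
    h, w = shape
    img = [[0.0] * w for _ in range(h)]     # empty single-channel image
    for x, y, z in points:
        u, v = int(x / res), int(y / res)   # grid cell of the point
        if 0 <= u < w and 0 <= v < h:
            img[v][u] = max(img[v][u], z)   # keep the highest return per cell
    return img
```

The resulting image is then fed to the Canny edge detector, and the detected edges to the contour and minimum-circumscribed-rectangle steps above.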
5. The method for unsupervised 3D carton inspection according to claim 4, wherein the step of obtaining the minimum bounding rectangle by obtaining corner information using a contour inspection algorithm and obtaining the length and width of the carton using a minimum bounding rectangle fitting algorithm comprises:
calculating the circumscribed rectangle of a certain contour area according to a direct calculation method, recording the length, the width and the area of the circumscribed rectangle, obtaining the minimum circumscribed rectangle RectMin, obtaining the area value of the minimum circumscribed rectangle RectMin, giving the area value to a variable AreaMin, and setting the rotation angle alpha=0°;
rotating the contour area by an angle θ, obtaining the rotated minimum circumscribed rectangle RectTmp, obtaining its area value and assigning it to the variable AreaTmp;
setting the rotation angle α=α+θ, comparing AreaTmp with AreaMin, assigning the smaller area value to AreaMin, recording the rotation angle at that time as β=α, and updating the rectangle information as RectMin=RectTmp;
acquiring the minimum circumscribed rectangle RectMin and its corresponding rotation angle β;
and reversely rotating the calculated rectangle RectMin by the angle β to obtain the minimum circumscribed rectangle.
6. An unsupervised 3D carton inspection system, comprising:
a point cloud processing module (10) for processing point cloud: inputting an original point cloud, and filtering point cloud data; realizing point cloud data segmentation classification by using K-Means clustering;
an image processing module (20) for processing an image: taking the carton single-channel aerial view as the input image of a Canny edge detection algorithm, and then carrying out Canny edge detection and contour detection to obtain the target carton position area.
7. The unsupervised 3D carton inspection system according to claim 6, wherein the point cloud processing module (10) comprises:
the background shooting unit (11) is used for acquiring point cloud data, placing a 3D camera right above a shooting area, adjusting the angle of the camera so that the ground is perpendicular to a Z axis in a camera coordinate system, and shooting a target area to obtain a background image;
the carton shooting unit (12) is used for taking the logistics cartons as objects to be detected after the background image data are stored, and shooting the target area by the 3D camera;
the set acquisition unit (13) is used for shooting with the 3D camera to obtain the background point cloud data set and the carton point cloud data set, and acquiring the Z-axis data set of the carton surface point cloud and the Z-axis data set of the conveyor belt point cloud according to the shot background point cloud data set and carton point cloud data set;
the clustering unit (14) is used for carrying out cluster analysis on the obtained Z-axis data set of the carton surface point cloud and the Z-axis data set of the conveyor belt point cloud by adopting an unsupervised learning clustering algorithm based on Euclidean distance; after clustering, the color point cloud is divided into two main classes, namely a black point cloud subset and a blue point cloud subset; according to the number of point clouds, the blue point cloud subset is set as the carton surface area, and the black point cloud subset is set as the conveyor belt area;
and the extraction unit (15) is used for respectively extracting the Z-axis data set of the point cloud of the surface of the paper box obtained after clustering and the Z-axis data set of the point cloud of the conveyor belt according to the number of the point clouds, and carrying out point cloud segmentation to obtain the point cloud of the surface of the paper box.
8. The unsupervised 3D carton inspection system according to claim 6, wherein in the set acquisition unit (13), it is assumed that the X-axis data set of the carton surface point cloud is B_x = {X_b1, X_b2, ..., X_bn}, the Y-axis data set of the carton surface point cloud is B_y = {Y_b1, Y_b2, ..., Y_bn}, the X-axis data set of the conveyor belt point cloud is G_x = {X_g1, X_g2, ..., X_gn}, and the Y-axis data set of the conveyor belt point cloud is G_y = {Y_g1, Y_g2, ..., Y_gn}; because the carton and the conveyor belt tend to have different heights, according to the Z-axis data characteristic of the point cloud, the Z-axis data set of the carton surface point cloud is B_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_b1, Z_b2, ..., Z_bn} and the Z-axis data set of the conveyor belt point cloud is G_z = {X_p ∈ (X_B ∩ X_G), Y_p ∈ (Y_B ∩ Y_G) | Z_g1, Z_g2, ..., Z_gn}.
9. The unsupervised 3D carton inspection system according to claim 7, wherein the image processing module (20) comprises:
the splitting unit (21) is used for carrying out preprocessing dimension reduction operation on the three-dimensional point cloud, and the Z-axis data set of the point cloud on the surface of the carton obtained by splitting is converted into a carton single-channel aerial view according to the depth information;
an edge detection calculation unit (22) for obtaining edge information on the carton single-channel aerial view by adopting a Canny edge detection algorithm;
the corner obtaining unit (23) is used for obtaining a corner by adopting a contour detection algorithm according to the obtained edge information after the edge detection processing is carried out by a Canny edge detection algorithm;
and the circumscribed rectangle acquisition unit (24) is used for acquiring angular point information by adopting a contour detection algorithm, and then obtaining the length and the width of the carton by using a minimum circumscribed rectangle fitting algorithm to obtain a minimum circumscribed rectangle.
10. The unsupervised 3D carton inspection system according to claim 9, wherein the circumscribed rectangular acquisition unit (24) comprises:
a first assignment subunit (241) configured to calculate the circumscribed rectangle of a certain contour area by the direct calculation method, record the length, width and area of the circumscribed rectangle, obtain the minimum circumscribed rectangle RectMin, obtain its area value, assign the area value to the variable AreaMin, and set the rotation angle α=0°;
a second assignment subunit (242) configured to rotate the contour area by an angle θ, obtain the rotated minimum circumscribed rectangle RectTmp, obtain its area value, and assign the area value to the variable AreaTmp;
a third assignment subunit (243) configured to set the rotation angle α=α+θ, compare AreaTmp with AreaMin, assign the smaller area value to AreaMin, record the rotation angle at that time as β=α, and update the rectangle information as RectMin=RectTmp;
a first acquiring subunit (244) configured to acquire the minimum circumscribed rectangle RectMin and its corresponding rotation angle β;
and a second acquiring subunit (245) configured to reversely rotate the calculated rectangle RectMin by the angle β to obtain the minimum circumscribed rectangle.
CN202310084182.4A 2023-01-16 2023-01-16 Unsupervised 3D carton detection method and system Active CN116485887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310084182.4A CN116485887B (en) 2023-01-16 2023-01-16 Unsupervised 3D carton detection method and system


Publications (2)

Publication Number Publication Date
CN116485887A true CN116485887A (en) 2023-07-25
CN116485887B CN116485887B (en) 2024-02-02

Family

ID=87218362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310084182.4A Active CN116485887B (en) 2023-01-16 2023-01-16 Unsupervised 3D carton detection method and system

Country Status (1)

Country Link
CN (1) CN116485887B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150254499A1 (en) * 2014-03-07 2015-09-10 Chevron U.S.A. Inc. Multi-view 3d object recognition from a point cloud and change detection
CA2943068A1 (en) * 2015-09-25 2017-03-25 Logical Turn Services Inc. Dimensional acquisition of packages
CN109118500A (en) * 2018-07-16 2019-01-01 重庆大学产业技术研究院 A kind of dividing method of the Point Cloud Data from Three Dimension Laser Scanning based on image
US10262226B1 (en) * 2017-05-16 2019-04-16 State Farm Mutual Automobile Insurance Company Systems and methods regarding 2D image and 3D image ensemble prediction models
US20200175260A1 (en) * 2018-11-30 2020-06-04 Qualcomm Incorporated Depth image based face anti-spoofing
CN111580131A (en) * 2020-04-08 2020-08-25 西安邮电大学 Method for identifying vehicles on expressway by three-dimensional laser radar intelligent vehicle
CN111982009A (en) * 2020-02-26 2020-11-24 深圳市安达自动化软件有限公司 Draw-bar box 3D size detection system and method
CN112418250A (en) * 2020-12-01 2021-02-26 怀化学院 Optimized matching method for complex 3D point cloud
CN112800524A (en) * 2021-02-05 2021-05-14 河北工业大学 Pavement disease three-dimensional reconstruction method based on deep learning
CN113570722A (en) * 2021-07-22 2021-10-29 中铁二十局集团有限公司 Surrounding rock crack information extraction and integrity coefficient rapid determination method
FR3115387A1 (en) * 2020-10-20 2022-04-22 Biomerieux Method for classifying an input image representing a particle in a sample
CN114565644A (en) * 2022-03-02 2022-05-31 湖南中科助英智能科技研究院有限公司 Three-dimensional moving object detection method, device and equipment
CN114638891A (en) * 2022-01-30 2022-06-17 中国科学院自动化研究所 Target detection positioning method and system based on image and point cloud fusion
WO2022188094A1 (en) * 2021-03-11 2022-09-15 华为技术有限公司 Point cloud matching method and apparatus, navigation method and device, positioning method, and laser radar
CN115423862A (en) * 2022-08-19 2022-12-02 华中农业大学 Method for measuring leaf area of seedling under shielding condition
CN115457354A (en) * 2022-07-25 2022-12-09 深圳元戎启行科技有限公司 Fusion method, 3D target detection method, vehicle-mounted device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WENKAI ZHANG: "Application Research of Ship Overload Identification Algorithm Based on Lidar Point Cloud", 2022 2nd International Conference on Electrical Engineering and Mechatronics Technology (ICEEMT) *
刘剑; 龚志恒; 吴成东; 岳恒; 高恩阳: "Research on Human Action Trend Analysis Based on Depth and LLE", Control Engineering of China, no. 06 *
李迁; 肖春蕾; 陈洁; 杨达昌: "Method for Constructing DSM Based on Airborne LiDAR Point Cloud and Building Outlines", Remote Sensing for Land and Resources, no. 02 *

Also Published As

Publication number Publication date
CN116485887B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN111507390B (en) Storage box body identification and positioning method based on contour features
US11227405B2 (en) Determining positions and orientations of objects
CN112686812B (en) Bank card inclination correction detection method and device, readable storage medium and terminal
CN110648367A (en) Geometric object positioning method based on multilayer depth and color visual information
CN111723721A (en) Three-dimensional target detection method, system and device based on RGB-D
CN107392929B (en) Intelligent target detection and size measurement method based on human eye vision model
CN110286124A (en) Refractory brick measuring system based on machine vision
CN112233116B (en) Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description
CN107292869B (en) Image speckle detection method based on anisotropic Gaussian kernel and gradient search
CN108009494A (en) A kind of intersection wireless vehicle tracking based on unmanned plane
CN107886539B (en) High-precision gear visual detection method in industrial scene
CN111382658B (en) Road traffic sign detection method in natural environment based on image gray gradient consistency
CN113554646B (en) Intelligent urban road pavement detection method and system based on computer vision
CN113221648A (en) Fusion point cloud sequence image guideboard detection method based on mobile measurement system
CN111476804A (en) Method, device and equipment for efficiently segmenting carrier roller image and storage medium
CN114358166B (en) Multi-target positioning method based on self-adaptive k-means clustering
CN114463425B (en) Workpiece surface featureless point positioning method based on probability Hough straight line detection
CN110348307B (en) Path edge identification method and system for crane metal structure climbing robot
CN113781413A (en) Electrolytic capacitor positioning method based on Hough gradient method
CN116843742B (en) Calculation method and system for stacking volume after point cloud registration for black coal loading vehicle
Yawen et al. Research on vehicle detection technology based on SIFT feature
CN116485887B (en) Unsupervised 3D carton detection method and system
CN117496401A (en) Full-automatic identification and tracking method for oval target points of video measurement image sequences
CN114187269B (en) Rapid detection method for surface defect edge of small component
CN109934817A (en) The external contouring deformity detection method of one seed pod

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant