CN112949463B - Method and system for establishing and detecting aggregate grading rapid detection model - Google Patents

Method and system for establishing and detecting aggregate grading rapid detection model

Info

Publication number
CN112949463B
CN112949463B (application CN202110219609.8A)
Authority
CN
China
Prior art keywords
aggregate
point cloud
channel
dimensional
depth map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110219609.8A
Other languages
Chinese (zh)
Other versions
CN112949463A (en)
Inventor
Li Wei
Yang Ming
Pei Lili
Hao Xueli
Shi Li
Liu Hanye
Ding Jiangang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University filed Critical Changan University
Priority to CN202110219609.8A priority Critical patent/CN112949463B/en
Publication of CN112949463A publication Critical patent/CN112949463A/en
Application granted granted Critical
Publication of CN112949463B publication Critical patent/CN112949463B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02W CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
    • Y02W30/00 Technologies for solid waste management
    • Y02W30/50 Reuse, recycling or recovery technologies
    • Y02W30/91 Use of waste materials as fillers for mortars or concrete

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of three-dimensional point cloud processing and machine learning, and discloses a method and a system for establishing an aggregate grading rapid detection model and for detection with it. The method collects three-dimensional point cloud data of coarse aggregate, pre-processes the images, extracts 52 three-dimensional characteristic factors that correlate strongly with the aggregate gear, and then performs quality (mass) regression and gear-category prediction to obtain the aggregate grading. The method avoids the errors caused by subjective factors in the traditional sieving method, is fast, efficient and robust, enables automated aggregate grading detection, and provides quality assurance for road construction.

Description

Method and system for establishing and detecting aggregate grading rapid detection model
Technical Field
The invention belongs to the technical field of three-dimensional point cloud processing and machine learning, and particularly relates to a method and a system for establishing and detecting an aggregate grading rapid detection model.
Background
Aggregate accounts for roughly 70-80% of the concrete used in pavement construction; it is an essential constituent of concrete and represents the largest cost item among asphalt pavement materials. Aggregate grading also directly influences the service performance of asphalt pavement and the quality of concrete structural engineering.
The common industrial method for grading mixed aggregate is to pass the material through a mixing plant fitted with several layers of screens whose apertures decrease from top to bottom, after which the gear and quality of the coarse aggregate are measured roughly by hand. In addition, most delivered aggregate uses gap grading: one or several consecutive particle grades are removed from the middle of the mineral-aggregate size range to form a discontinuous grading and raise production efficiency, which causes a loss of particle-grade content. Meanwhile, subjective factors such as screen holes and screen inclination introduce large errors into the grading indexes of the delivered aggregate, seriously affecting the engineering quality of concrete pavement.
At present, machine-learning research at home and abroad focuses mainly on morphological evaluation indexes of aggregate and has not yet produced a systematic aggregate shape classification system. Existing techniques extract only features such as the upper surface and the contour of the aggregate from traditional two-dimensional images; these features correlate poorly with grading parameters such as quality and gear, which makes it difficult to predict grading or realize automatic detection with image processing or machine learning. Moreover, traditional methods rely on single characteristic parameters and do not fully consider the three-dimensional features that affect aggregate grading and shape, so grading accuracy is low.
Disclosure of Invention
The invention aims to provide a method and a system for establishing an aggregate grading rapid detection model and for detection with it, so as to solve the problems of low accuracy and low efficiency of aggregate grading in the prior art.
To achieve this, the invention adopts the following technical scheme:
a method for establishing an aggregate grading rapid detection model comprises the following steps:
step1: acquiring three-dimensional point cloud data of multiple groups of aggregates and three-dimensional point cloud data of a background, and respectively carrying out channel separation on the three-dimensional point cloud data of each group of aggregates and the three-dimensional point cloud data of the background to acquire an original depth map of each group of aggregates in x, y and z channels and a depth map of the background in x, y and z channels;
each group of aggregate comprises at least one aggregate, and gears and quality of all aggregates are collected;
step2: subtracting the original depth map of each group of aggregate in the z channel from the depth map of the background in the z channel respectively to obtain a plurality of z channel depth maps, denoising the plurality of z channel depth maps to obtain a plurality of denoised z channel depth maps;
step3: calculating the area of the connected domain in each denoised z-channel depth map in the step2, and merging the connected domains with the area of the connected domain larger than a first threshold value in each denoised z-channel depth map into a template image to obtain a plurality of template images;
step 4: dividing the template images obtained in the step3 to obtain a plurality of groups of aggregate effective areas, wherein each group of aggregate effective areas comprises I Xr 、I Yr And I Zr Wherein I Zr Representing an effective area obtained by dividing the corresponding z-channel depth map in the step2 by each template image, and I Xr Representing an effective area obtained by segmenting the original depth map of the corresponding aggregate in the step1 in the x channel by each template image, I Yr Representing an effective area obtained by dividing the original depth image of the corresponding aggregate in the step1 in the y channel by each template image;
step 5: converting the multiple groups of aggregate effective areas obtained in the step 4 into aggregate point cloud models according to PCL operators, wherein the aggregate point cloud models comprise multiple aggregate point clouds, and extracting three-dimensional features and two-dimensional features of each aggregate point cloud;
step 6: and (3) taking all three-dimensional features and two-dimensional features of the aggregate point cloud model obtained in the step (5) as feature data sets, establishing an XGBoost model, taking the feature data sets obtained in the step (4) as input, taking the gear and quality of all aggregates obtained in the step (1) as a label set, training the XGBoost model, and taking the trained XGBoost model as an aggregate grading rapid detection model.
Further, the denoising in step2 includes the steps of:
1) Threshold segmentation is carried out on the z-channel depth map;
2) And converting the image subjected to threshold segmentation into a real height image according to the z-axis resolution, denoising the real height image by sequentially executing opening operation and closing operation, and obtaining a denoised z-channel depth image.
Further, in step3, the first threshold is 300 pixels.
Further, each aggregate point cloud in the aggregate point cloud model of step 5 satisfies:
1) The pixel distance between the eight neighborhood point sets is smaller than a second threshold, and the second threshold is 15 pixels;
2) The number of the connected point sets in each aggregate point cloud is larger than a third threshold, and the third threshold is 500 pixels.
A rapid detection method for aggregate grading comprises the following steps:
step I: collecting three-dimensional point cloud data of aggregate to be detected;
step II: acquiring a characteristic dataset of three-dimensional point cloud data of aggregate to be detected according to the steps 1-5 of the method for establishing any aggregate grading rapid detection model;
step III: inputting the characteristic data set of the three-dimensional point cloud data of the aggregate to be detected into the aggregate grading rapid detection model obtained by any one of the aggregate grading rapid detection model building methods, and obtaining the gear and quality of the aggregate to be detected.
The aggregate grading rapid detection system comprises a data acquisition module, a denoising module, a template acquisition module, an aggregate effective area generation module, a feature acquisition module, a model training module and a detection module;
the data acquisition module is used for acquiring three-dimensional point cloud data of a plurality of groups of aggregates and three-dimensional point cloud data of a background, and respectively carrying out channel separation on the three-dimensional point cloud data of each group of aggregates and the three-dimensional point cloud data of the background to acquire an original depth map of each group of aggregates in x, y and z channels and a depth map of the background in x, y and z channels; each group of aggregate comprises at least one aggregate, and gears and quality of all aggregates are collected;
the denoising module is used for subtracting the original depth map of each group of aggregate in the z channel, which is obtained by the data acquisition module, from the depth map of the background in the z channel respectively to obtain a plurality of z channel depth maps, denoising the plurality of z channel depth maps, and obtaining a plurality of denoised z channel depth maps;
the template acquisition module is used for calculating the area of the connected domain in each denoised z-channel depth map in the denoising module, and combining the connected domains with the area of the connected domain larger than a first threshold value in each denoised z-channel depth map into a template image to obtain a plurality of template images;
the aggregate effective area generating module is used for segmenting with the plurality of template images obtained by the template acquisition module to obtain a plurality of groups of aggregate effective areas, wherein each group of aggregate effective areas comprises I_Xr, I_Yr and I_Zr; I_Zr denotes the effective area obtained by segmenting the corresponding z-channel depth map obtained by the denoising module with each template image, I_Xr denotes the effective area obtained by segmenting the corresponding original x-channel aggregate depth map obtained by the data acquisition module with each template image, and I_Yr denotes the effective area obtained by segmenting the corresponding original y-channel aggregate depth map obtained by the data acquisition module with each template image;
the characteristic acquisition module is used for converting a plurality of groups of aggregate effective areas obtained by the aggregate effective area generation module into an aggregate point cloud model according to the PCL operator, wherein the aggregate point cloud model comprises a plurality of aggregate point clouds, and three-dimensional characteristics and two-dimensional characteristics of each aggregate point cloud are extracted;
the model training module is used for taking all three-dimensional features and two-dimensional features of the aggregate point cloud model obtained by the feature obtaining module as feature data sets, establishing an XGBoost model, taking the feature data sets obtained by the aggregate effective area generating module as input, taking the gear and quality of all aggregates as a label set, training the XGBoost model, and taking the trained XGBoost model as an aggregate grading rapid detection model;
the detection module is used for acquiring three-dimensional point cloud data of the aggregate to be detected, acquiring a characteristic data set of the three-dimensional point cloud data of the aggregate to be detected, inputting the characteristic data set of the three-dimensional point cloud data of the aggregate to be detected into the aggregate grading rapid detection model, and acquiring the gear and quality of the aggregate to be detected.
Further, the denoising in the denoising module includes the following steps:
1) Threshold segmentation is carried out on the z-channel depth map;
2) And converting the image subjected to threshold segmentation into a real height image according to the z-axis resolution, denoising the real height image by sequentially executing opening operation and closing operation, and obtaining a denoised z-channel depth image.
Further, the first threshold is 300 pixels.
Further, each aggregate point cloud in the aggregate point cloud model in the feature acquisition module satisfies:
1) The pixel distance between the eight neighborhood point sets is smaller than a second threshold, and the second threshold is 15 pixels;
2) The number of the connected point sets in each aggregate point cloud is larger than a third threshold, and the third threshold is 500 pixels.
Compared with the prior art, the invention has the following technical characteristics:
(1) The invention performs three-dimensional reconstruction from collected three-dimensional information of the aggregate; the extracted features correlate more strongly with the aggregate characteristics, so real-time and accurate grading detection can be realized.
(2) Traditional grading detection methods rely on manual measurement, which is time-consuming and labor-intensive. The invention develops an automatic three-dimensional aggregate detection method that is fast and efficient, greatly shortens the grading prediction time and provides technical support for guaranteeing construction quality.
Drawings
FIG. 1 is a pre-processed aggregate depth image of the present invention;
FIG. 2 is an aggregate point cloud visualization generated by the present invention;
FIG. 3 is a comparison of the grading prediction and the true grading achieved by the method of the present invention;
FIG. 4 is an aggregate three-dimensional feature dataset made in accordance with the present invention;
fig. 5 is the result of the gradation calculation of the present invention.
Detailed Description
First, technical words appearing in the present invention are explained:
XGBoost model: the model is based on an artificial intelligent neural network of a massive parallel boosted tree, and is jointly decided by a plurality of associated decision trees.
Opening and closing operations: the opening operation is erosion followed by dilation, and the closing operation is dilation followed by erosion. Erosion keeps, with the center of structuring element B placed at each point, only the points at which B fits entirely inside A; dilation places structuring element B at every point of A and expands A by the extent of B.
Aggregate: one of the main constituent materials of concrete. Including natural aggregate such as crushed stone, pebble, pumice, natural sand, etc., and artificial aggregate such as cinder, slag, haydite, expanded perlite, etc.
Gear of aggregate: the maximum particle-size class specified in JTG E42-2005 'Test Methods of Aggregate for Highway Engineering', expressed by the particle size in mm.
The mass of the aggregate: in the present invention, the mass of the individual aggregate particles is in kg.
The embodiment discloses a method for establishing an aggregate grading rapid detection model, which comprises the following steps:
step1: acquiring three-dimensional point cloud data of multiple groups of aggregates and three-dimensional point cloud data of a background, and respectively carrying out channel separation on the three-dimensional point cloud data of each group of aggregates and the three-dimensional point cloud data of the background to acquire an original depth map of each group of aggregates in x, y and z channels and a depth map of the background in x, y and z channels;
each group of aggregate comprises at least one aggregate, and gears and quality of all aggregates are collected;
step2: subtracting the original depth map of each group of aggregate in the z channel from the depth map of the background in the z channel to obtain a plurality of z channel depth maps I Z Denoising the plurality of z-channel depth maps to obtain a plurality of denoised z-channel depth maps;
step3: calculating the area of a connected domain in each denoised z-channel depth map, screening out connected domains with the area of the connected domain larger than a first threshold value in each denoised z-channel depth map, merging the connected domains larger than the first threshold value into a template image, and obtaining a plurality of template images;
step 4: dividing the template images obtained in the step3 to obtain a plurality of groups of aggregate effective areas, wherein each group of aggregate effective areas comprises I Xr 、I Yr And I Zr Wherein I Zr Representing an effective area obtained by dividing the corresponding z-channel depth map in the step2 by each template image, and I Xr Representing an effective area obtained by segmenting the original depth map of the corresponding aggregate in the step1 in the x channel by each template image, I Yr Representing an effective area obtained by dividing the original depth image of the corresponding aggregate in the step1 in the y channel by each template image;
step 5: converting a plurality of groups of aggregate effective areas into an aggregate point cloud model according to a PCL operator, wherein the aggregate point cloud model comprises a plurality of aggregate point clouds, and extracting three-dimensional characteristics and two-dimensional characteristics of each aggregate point cloud;
step 6: and (3) taking all three-dimensional features and two-dimensional features of the aggregate point cloud model obtained in the step (5) as feature data sets, establishing an XGBoost model, taking the feature data sets obtained in the step (4) as input, taking the gear and quality of all aggregates in the step (1) as a label set, training the XGBoost model, and taking the trained XGBoost model as an aggregate grading rapid detection model.
Specifically, the three-dimensional point cloud data in step 1 include a depth map, a millimeter-value height map, a gray map, an effective data area, a frame count, a time stamp, an encoder position, an encoder sequence, an X-direction offset, an X-direction resolution, a Y-direction resolution, a Z-direction offset, a Z-direction resolution, an image width, an image height, and a flag indicating whether gray values are present. The depth map, the millimeter-value height map, the gray map and the effective data area are each stored as a two-dimensional image.
Specifically, in step 1 the aggregate and background images are separated by channel to obtain I_Xi, I_Yi and I_Zi (i = o for aggregate, b for background), where I_Xi records the X-axis position information, I_Yi records the Y-axis position information, and I_Zi records the height information.
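As a reading aid, the following is a minimal sketch of this channel separation and background subtraction, assuming the scanner output has already been loaded into per-pixel x/y/z arrays; the function names, the (H, W, 3) layout and the sign convention of the difference are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def split_channels(scan):
    """Return the x, y and z channel images (I_X, I_Y, I_Z) of one scan."""
    # 'scan' is assumed to be an (H, W, 3) array holding x/y/z values per pixel.
    return scan[..., 0], scan[..., 1], scan[..., 2]

def background_difference(z_aggregate, z_background):
    """Difference of the aggregate and background z-channel depth maps."""
    diff = z_aggregate.astype(np.float32) - z_background.astype(np.float32)
    # If the sensor stores camera distance rather than height, flip the sign.
    diff[diff < 0] = 0  # pixels at or below the background carry no aggregate
    return diff
```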
Specifically, the denoising in step2 includes the steps of:
1) Threshold segmentation is carried out on the z-channel depth map I_Z to remove irrelevant noise points other than the background and the target;
2) The image after threshold segmentation is converted into a real height image according to the Z-axis resolution, and the real height image is denoised by sequentially executing opening and closing operations to obtain the denoised z-channel depth map; the resolution of the images generated in this embodiment ranges from 624 x 350 to 3120 x 1750.
Specifically, the opening operation is used to eliminate small interfering objects. Let X and Z be two sets in two-dimensional Euclidean space, representing the target aggregate image and a structural element (window) respectively; the opening operation is then defined as
X ∘ Z = (X ⊖ Z) ⊕ Z,
that is, the image eroded by Z, X ⊖ Z, is dilated by the same structural element Z so as to restore it towards the original image. To keep the three-dimensional feature scale of the aggregate unchanged, a closing operation of the same scale is used to reduce the erosion effect at the aggregate edges.
Preferably, one opening operation and one closing operation are combined into a group, and 7 open-close combinations of different scales are executed to obtain the denoised z-channel depth map I_Zd; the structural element sizes selected for the successive groups are 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 and 3.5 respectively.
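A sketch of this denoising chain is given below using OpenCV's morphology operators. It assumes the structural-element sizes 0.5-3.5 are physical scales that must be converted to pixel kernels through the lateral resolution; that conversion, the threshold value and the elliptical kernel shape are assumptions, not values stated in the patent.

```python
import cv2
import numpy as np

def denoise_z_channel(z_diff, z_resolution, xy_resolution, threshold=1.0):
    # 1) Threshold segmentation: keep only pixels clearly above the background.
    mask = (z_diff > threshold).astype(np.float32)
    height = z_diff * mask * z_resolution              # 2) convert to real heights
    # 3) Seven opening+closing groups with structural elements of increasing scale.
    for size in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5):
        k = max(1, int(round(size / xy_resolution)))   # physical scale -> pixels
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
        height = cv2.morphologyEx(height, cv2.MORPH_OPEN, kernel)   # erosion then dilation
        height = cv2.morphologyEx(height, cv2.MORPH_CLOSE, kernel)  # dilation then erosion
    return height
```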
Specifically, in step3, the first threshold is 300 pixels.
Specifically, the segmentation in the step 4 means that the template image is used as a mask to select a corresponding region in the corresponding image to complete segmentation.
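The template construction and mask segmentation of steps 3-4 can be sketched as follows with OpenCV connected-component analysis; only the 300-pixel area threshold comes from the patent, while the 8-connectivity and the binarization of the denoised map are assumptions.

```python
import cv2
import numpy as np

def build_template(z_denoised, min_area=300):
    """Merge connected domains larger than min_area pixels into one template image."""
    binary = (z_denoised > 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    template = np.zeros_like(binary)
    for label in range(1, n):                          # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] > min_area:
            template[labels == label] = 1
    return template

def apply_template(template, i_x, i_y, i_z):
    """Use the template as a mask to obtain the valid regions I_Xr, I_Yr, I_Zr."""
    return i_x * template, i_y * template, i_z * template
```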
Specifically, in step 5, three-channel xyz data are converted into PLY point cloud format through PCL operator so as to convert multiple groups of aggregate effective areas into aggregate point cloud model.
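The patent performs this conversion with PCL; the snippet below is a library-free stand-in that writes an ASCII PLY file from the three masked channel images, skipping pixels with zero height. The file layout and function name are illustrative.

```python
import numpy as np

def channels_to_ply(i_xr, i_yr, i_zr, path):
    valid = i_zr > 0
    points = np.stack([i_xr[valid], i_yr[valid], i_zr[valid]], axis=1)
    header = (
        "ply\nformat ascii 1.0\n"
        f"element vertex {len(points)}\n"
        "property float x\nproperty float y\nproperty float z\nend_header\n"
    )
    with open(path, "w") as f:          # write a minimal ASCII PLY point cloud
        f.write(header)
        np.savetxt(f, points, fmt="%.4f")
    return points
```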
Specifically, each aggregate point cloud in the aggregate point cloud model in step 5 satisfies:
1) The pixel distance between the eight neighborhood point sets is smaller than a second threshold, and the second threshold is 15 pixels;
2) The number of the connected point sets in each aggregate point cloud is larger than a third threshold, and the third threshold is 500 pixels.
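One possible reading of these two rules is sketched below: point sets whose gaps are below 15 pixels are merged by a morphological closing before connected-component labelling, and components with 500 or fewer points are discarded. Bridging the gaps with a closing kernel is an assumption about how the merging rule is realized, not the patent's stated procedure.

```python
import cv2
import numpy as np

def split_aggregates(valid_mask, merge_dist=15, min_points=500):
    """Split the valid region into individual aggregate point clouds."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (merge_dist, merge_dist))
    bridged = cv2.morphologyEx(valid_mask.astype(np.uint8), cv2.MORPH_CLOSE, kernel)
    n, labels = cv2.connectedComponents(bridged, connectivity=8)
    clouds = []
    for label in range(1, n):
        member = (labels == label) & (valid_mask > 0)
        if member.sum() > min_points:          # keep clouds with > 500 connected points
            clouds.append(member)              # boolean mask of one aggregate's pixels
    return clouds
```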
Specifically, in step 5, acquiring the three-dimensional features of each aggregate point cloud includes the following sub-steps:
Step 1: obtaining the convex hull of the current aggregate point cloud and its convex-hull characteristics;
Step 2: extracting the convex hull center, volume, area, best circumscribing cube and height parameters with the pcl::ConvexHull operator;
Step 3: extracting the three-dimensional features of the aggregate, e.g. the length values of the longest axis L1, the second-longest axis L2 and the third-longest axis L3 of the best circumscribing cube, the major and minor axes of the equivalent ellipse, etc., as shown in Table 1.
TABLE 1
The coordinate system in Step 3 is oriented such that the longest edge of the circumscribing cube is aligned with the x-axis, the second-longest edge with the y-axis, and the shortest edge with the z-axis.
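The convex-hull factors are extracted with pcl::ConvexHull in the patent; the sketch below approximates a few of them (hull center, volume, surface area, and the axis lengths L1 >= L2 >= L3 of a PCA-aligned bounding box) with SciPy and NumPy. Using PCA to orient the best circumscribing box is an assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_features(points):
    """points: (N, 3) array of one aggregate's point cloud."""
    hull = ConvexHull(points)
    center = points[hull.vertices].mean(axis=0)        # center of the hull vertices
    centered = points - points.mean(axis=0)
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    extents = np.ptp(centered @ axes.T, axis=0)        # edge lengths along PCA axes
    L1, L2, L3 = sorted(extents, reverse=True)
    return {"hull_center": center, "hull_volume": hull.volume,
            "hull_area": hull.area, "L1": L1, "L2": L2, "L3": L3}
```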
Specifically, in step 5, the acquiring the two-dimensional characteristic of each aggregate point cloud includes the following sub-steps:
step1: mapping the three-dimensional aggregate point cloud model to a two-dimensional matrix;
step2: the aggregate characteristics of the two-dimensional matrix are extracted by operators such as cvContourArea, cvArcLength of opencv, and the two-dimensional characteristics such as contour length, surface area and the like are obtained, and are specifically shown in table 2.
TABLE 2
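cvContourArea and cvArcLength belong to OpenCV's legacy C API; the sketch below uses their modern cv2 equivalents on the binary mask of one aggregate projected onto the image plane, which is an assumed stand-in for the mapping of step 1.

```python
import cv2
import numpy as np

def contour_features(aggregate_mask):
    """Two-dimensional factors of one projected aggregate mask."""
    contours, _ = cv2.findContours(aggregate_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)       # outer outline of the aggregate
    return {"contour_length": cv2.arcLength(contour, True),
            "projected_area": cv2.contourArea(contour)}
```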
Specifically, in step 6 the feature data set is stored in csv format with one row per aggregate; the first column is the aggregate serial number and columns 2 to 53 hold the 52 extracted three-dimensional feature parameters.
Specifically, during the training in step 6 the data set is divided in a fixed proportion into a training set, a test set and a verification set (the test and verification sets each taking 2 shares) and input into the machine-learning-based XGBoost regression and classification models. After the regression trees are built, the three-dimensional features obtained above are compressed into one-dimensional feature vectors and input into the detection model of the invention for processing. Through a greedy strategy and second-order optimization the predicted values are brought close to the true values while retaining generalization ability; the output of the cross-entropy loss function is optimized by gradient descent, the weights of the network are updated by back propagation, and the parameters of each operation, the model and the hyper-parameters are adjusted continuously until the loss reaches its minimum, after which the prediction results for aggregate gear and quality are obtained.
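A condensed sketch of this training stage is given below: an XGBoost regressor for the aggregate quality (mass) and an XGBoost classifier for the gear, trained on the 52-factor feature table. The file name, label column names, split proportions and hyper-parameters are placeholders, not values from the patent.

```python
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

data = pd.read_csv("aggregate_features.csv")      # assumed file name
X = data.iloc[:, 1:53]                            # 52 three-dimensional feature factors
y_mass = data["mass"]                             # assumed label column names
y_gear = LabelEncoder().fit_transform(data["gear"])

X_tr, X_te, m_tr, m_te, g_tr, g_te = train_test_split(
    X, y_mass, y_gear, test_size=0.2, random_state=0)

mass_model = xgb.XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1)
gear_model = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
mass_model.fit(X_tr, m_tr)
gear_model.fit(X_tr, g_tr)

print("mass R^2:", mass_model.score(X_te, m_te))
print("gear accuracy:", gear_model.score(X_te, g_te))
```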
The embodiment also discloses a rapid detection method for aggregate grading, which comprises the following steps:
step I: collecting three-dimensional point cloud data of aggregate to be detected;
step II: acquiring a characteristic dataset of three-dimensional point cloud data of aggregate to be detected according to the steps 1-5 of the method for establishing the aggregate grading rapid detection model;
step III: and inputting the characteristic data set of the three-dimensional point cloud data of the aggregate to be detected into an aggregate grading rapid detection model to obtain the gear and quality of the aggregate to be detected.
The embodiment also discloses a rapid aggregate grading detection system, which comprises a data acquisition module, a denoising module, a template acquisition module, an aggregate effective area generation module, a characteristic acquisition module, a model training module and a detection module;
the data acquisition module is used for acquiring three-dimensional point cloud data of a plurality of groups of aggregates and three-dimensional point cloud data of a background, and respectively carrying out channel separation on the three-dimensional point cloud data of each group of aggregates and the three-dimensional point cloud data of the background to acquire an original depth map of each group of aggregates in x, y and z channels and a depth map of the background in x, y and z channels; each group of aggregate comprises at least one aggregate, and gears and quality of all aggregates are collected;
the denoising module is used for subtracting the original depth map of each group of aggregate in the z channel, which is obtained by the data acquisition module, from the depth map of the background in the z channel respectively to obtain a plurality of z channel depth maps, denoising the plurality of z channel depth maps, and obtaining a plurality of denoised z channel depth maps;
the template acquisition module is used for calculating the area of the connected domain in each denoised z-channel depth map in the denoising module, and combining the connected domains with the area of the connected domain larger than a first threshold value in each denoised z-channel depth map into a template image to obtain a plurality of template images;
the aggregate effective area generating module is used for segmenting with the plurality of template images obtained by the template acquisition module to obtain a plurality of groups of aggregate effective areas, wherein each group of aggregate effective areas comprises I_Xr, I_Yr and I_Zr; I_Zr denotes the effective area obtained by segmenting the corresponding z-channel depth map obtained by the denoising module with each template image, I_Xr denotes the effective area obtained by segmenting the corresponding original x-channel aggregate depth map obtained by the data acquisition module with each template image, and I_Yr denotes the effective area obtained by segmenting the corresponding original y-channel aggregate depth map obtained by the data acquisition module with each template image;
the characteristic acquisition module is used for converting a plurality of groups of aggregate effective areas obtained by the aggregate effective area generation module into an aggregate point cloud model according to the PCL operator, wherein the aggregate point cloud model comprises a plurality of aggregate point clouds, and three-dimensional characteristics and two-dimensional characteristics of each aggregate point cloud are extracted;
the model training module is used for taking all three-dimensional features and two-dimensional features of the aggregate point cloud model obtained by the feature obtaining module as feature data sets, establishing an XGBoost model, taking the feature data sets obtained by the aggregate effective area generating module as input, taking the gear and quality of all aggregates as a label set, training the XGBoost model, and taking the trained XGBoost model as an aggregate grading rapid detection model;
the detection module is used for acquiring three-dimensional point cloud data of the aggregate to be detected, acquiring a characteristic data set of the three-dimensional point cloud data of the aggregate to be detected, inputting the characteristic data set of the three-dimensional point cloud data of the aggregate to be detected into the aggregate grading rapid detection model, and acquiring the gear and quality of the aggregate to be detected.
Specifically, the denoising in the denoising module includes the following steps:
1) Threshold segmentation is carried out on the z-channel depth map;
2) And converting the image subjected to threshold segmentation into a real height image according to the z-axis resolution, denoising the real height image by sequentially executing opening operation and closing operation, and obtaining a denoised z-channel depth image.
Specifically, the first threshold is 300 pixels.
Specifically, each aggregate point cloud in the aggregate point cloud model in the feature acquisition module satisfies the following conditions:
1) The pixel distance between the eight neighborhood point sets is smaller than a second threshold, and the second threshold is 15 pixels;
2) The number of the connected point sets in each aggregate point cloud is larger than a third threshold, and the third threshold is 500 pixels.
Specifically, the data acquisition module uses a Gocator 2300 sensor fixed to the conveyor belt directly above the aggregate stage, and a smart200 PLC control module controls the three-dimensional camera on the conveyor belt to scan the aggregate and obtain its point cloud data. An s7-200 encoder module synchronizes the step length of the conveyor belt with the acquisition resolution of the camera so that the acquired three-dimensional feature sizes match the true values.
Example 1
This embodiment discloses a method for establishing an aggregate grading rapid detection model. The aggregate characteristic data set was built at the following scale: 2200 aggregates from Guangdong and 2200 from Gansu were collected, and a three-dimensional characteristic data set covering the four gears 4.75, 9.6, 13.2 and 16 mm was produced; the data set is stored in csv format and contains the 52 three-dimensional features, the quality, the gear and the needle-flake flag.
The label set was built as follows: the 4400 aggregates were passed in turn through the 16, 13.2, 9.6 and 4.75 mm square-hole sieves specified in JTG E42-2005 'Test Methods of Aggregate for Highway Engineering', and each particle that failed to pass a sieve was labeled with that aperture as its gear. The aggregates were then checked with a needle gauge at the specified particle size: particles whose length exceeds the corresponding spacing on the needle gauge were marked as needle-shaped. The particles that passed the needle gauge were then checked one by one with a flake gauge, and those whose thickness is smaller than the corresponding slot width were marked as flake-shaped. In this embodiment needle-shaped and flake-shaped particles are not distinguished and are labeled collectively as needle-flake particles; the resulting data set is shown in fig. 4.
Fig. 1 shows the denoising process of this embodiment: the image quality is already clearly improved after 3 groups of opening and closing operations, and after all 7 groups the noise points are essentially removed with essentially no loss of information. Fig. 1 (a) is the original image; fig. 1 (b) and (c) are the first opening and closing operations with structural element size 0.5; fig. 1 (d) and (e) are the second opening and closing operations with size 1.0; fig. 1 (f) and (g) are the third opening and closing operations with size 1.5; fig. 1 (h) and (i) are the fourth opening and closing operations with size 2.0; fig. 1 (j) is the fifth opening operation with size 2.5; and the remaining panels, through fig. 1 (m), show the later opening and closing operations with structural element sizes increasing to 3.5.
Fig. 3 compares the predicted grading with the actual grading for this embodiment: the error is within 5% and the computation time is within 10 seconds, so that, compared with the existing manual grading detection method that takes several hours, the method meets the engineering requirement of rapid aggregate grading detection.
To illustrate the effectiveness of the method of the present invention, the inventors performed a rapid aggregate grading test based on three-dimensional feature factors. The operating system was 64-bit Windows 10 and the CPU was an Intel(R) Core(TM) i5-9400 @ 2.90 GHz.
According to the invention, three-dimensional point cloud data of the background and of the aggregate are collected continuously for grading prediction. The three channels of the two depth images are separated and the two images are differenced to remove the background; successive opening and closing operations then denoise the result, giving an aggregate binary image free of interference noise. The connected domains and the number of aggregates are calculated and counted to form the three-channel effective-area template; the three-channel image is converted into a 3D point cloud model, the generated template is used to extract the effective point cloud area, and the aggregate point clouds within it are segmented and connected. The three-dimensional features of each aggregate are then extracted and input into the trained classification and regression models to obtain the predicted quality and gear of each aggregate. Finally the grading is obtained by statistics; the calculation result is shown in fig. 5, where the first column is the aggregate gear, the second the proportion of aggregate in each gear, the third the passing rate, and the fourth the sieve residue rate.
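The final statistics of fig. 5 can be sketched as follows: the predicted per-aggregate masses are grouped by predicted gear and converted into the mass share, passing rate and sieve residue rate per sieve size. Treating the gear as the smallest sieve a particle does not pass is an assumption drawn from the labeling procedure above, and the variable names are illustrative.

```python
import numpy as np

def grading_table(gears, masses, sieve_sizes=(4.75, 9.6, 13.2, 16.0)):
    gears = np.asarray(gears, dtype=float)
    masses = np.asarray(masses, dtype=float)
    total = masses.sum()
    rows = []
    for size in sorted(sieve_sizes):
        share = masses[gears == size].sum() / total    # mass share of this gear
        passing = masses[gears < size].sum() / total   # mass that passes this sieve
        rows.append((size, share, passing, 1.0 - passing))
    return rows  # (gear, proportion, passing rate, sieve residue rate)
```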

Claims (9)

1. The method for establishing the aggregate grading rapid detection model is characterized by comprising the following steps of:
step1: acquiring three-dimensional point cloud data of multiple groups of aggregates and three-dimensional point cloud data of a background, and respectively carrying out channel separation on the three-dimensional point cloud data of each group of aggregates and the three-dimensional point cloud data of the background to acquire an original depth map of each group of aggregates in x, y and z channels and a depth map of the background in x, y and z channels;
each group of aggregate comprises at least one aggregate, and gears and quality of all aggregates are collected;
step2: subtracting the original depth map of each group of aggregate in the z channel from the depth map of the background in the z channel respectively to obtain a plurality of z channel depth maps, denoising the plurality of z channel depth maps to obtain a plurality of denoised z channel depth maps;
step3: calculating the area of the connected domain in each denoised z-channel depth map in the step2, and merging the connected domains with the area of the connected domain larger than a first threshold value in each denoised z-channel depth map into a template image to obtain a plurality of template images;
step 4: dividing the template images obtained in the step3 to obtain a plurality of groups of aggregate effective areas, wherein each group of aggregate effective areas comprises I Xr 、I Yr And I Zr Wherein I Zr Representing an effective area obtained by dividing the corresponding z-channel depth map in the step2 by each template image, and I Xr Representing an effective area obtained by segmenting the original depth map of the corresponding aggregate in the step1 in the x channel by each template image, I Yr Representing an effective area obtained by dividing the original depth image of the corresponding aggregate in the step1 in the y channel by each template image;
step 5: converting the multiple groups of aggregate effective areas obtained in the step 4 into aggregate point cloud models according to PCL operators, wherein the aggregate point cloud models comprise multiple aggregate point clouds, and extracting three-dimensional features and two-dimensional features of each aggregate point cloud;
step 6: and (3) taking all three-dimensional features and two-dimensional features of the aggregate point cloud model obtained in the step (5) as feature data sets, establishing an XGBoost model, taking the feature data sets obtained in the step (4) as input, taking the gear and quality of all aggregates obtained in the step (1) as a label set, training the XGBoost model, and taking the trained XGBoost model as an aggregate grading rapid detection model.
2. The method for building an aggregate gradation rapid inspection model according to claim 1, wherein the denoising in step2 comprises the steps of:
1) Threshold segmentation is carried out on the z-channel depth map;
2) And converting the image subjected to threshold segmentation into a real height image according to the z-axis resolution, denoising the real height image by sequentially executing opening operation and closing operation, and obtaining a denoised z-channel depth image.
3. The method for building an aggregate gradation rapid inspection model according to claim 1, wherein the first threshold value in the step3 is 300 pixels.
4. The method for building an aggregate gradation rapid inspection model according to claim 1, wherein each aggregate point cloud in the aggregate point cloud model of step 5 satisfies:
1) The pixel distance between the eight neighborhood point sets is smaller than a second threshold, and the second threshold is 15 pixels;
2) The number of the connected point sets in each aggregate point cloud is larger than a third threshold, and the third threshold is 500 pixels.
5. The rapid aggregate grading detection method is characterized by comprising the following steps of:
step I: collecting three-dimensional point cloud data of aggregate to be detected;
step II: acquiring a characteristic dataset of three-dimensional point cloud data of aggregate to be detected according to steps 1-5 of the method for establishing an aggregate grading rapid detection model according to any one of claims 1-4;
step III: inputting the characteristic data set of the three-dimensional point cloud data of the aggregate to be detected into the aggregate grading rapid detection model obtained by the method for establishing the aggregate grading rapid detection model according to any one of claims 1-4, and obtaining the gear and quality of the aggregate to be detected.
6. The aggregate grading rapid detection system is characterized by comprising a data acquisition module, a denoising module, a template acquisition module, an aggregate effective area generation module, a characteristic acquisition module, a model training module and a detection module;
the data acquisition module is used for acquiring three-dimensional point cloud data of a plurality of groups of aggregates and three-dimensional point cloud data of a background, and respectively carrying out channel separation on the three-dimensional point cloud data of each group of aggregates and the three-dimensional point cloud data of the background to acquire an original depth map of each group of aggregates in x, y and z channels and a depth map of the background in x, y and z channels; each group of aggregate comprises at least one aggregate, and gears and quality of all aggregates are collected;
the denoising module is used for subtracting the original depth map of each group of aggregate in the z channel, which is obtained by the data acquisition module, from the depth map of the background in the z channel respectively to obtain a plurality of z channel depth maps, denoising the plurality of z channel depth maps, and obtaining a plurality of denoised z channel depth maps;
the template acquisition module is used for calculating the area of the connected domain in each denoised z-channel depth map in the denoising module, and combining the connected domains with the area of the connected domain larger than a first threshold value in each denoised z-channel depth map into a template image to obtain a plurality of template images;
the aggregate effective area generating module is used for segmenting with the plurality of template images obtained by the template acquisition module to obtain a plurality of groups of aggregate effective areas, wherein each group of aggregate effective areas comprises I_Xr, I_Yr and I_Zr; I_Zr denotes the effective area obtained by segmenting the corresponding z-channel depth map obtained by the denoising module with each template image, I_Xr denotes the effective area obtained by segmenting the corresponding original x-channel aggregate depth map obtained by the data acquisition module with each template image, and I_Yr denotes the effective area obtained by segmenting the corresponding original y-channel aggregate depth map obtained by the data acquisition module with each template image;
the characteristic acquisition module is used for converting a plurality of groups of aggregate effective areas obtained by the aggregate effective area generation module into an aggregate point cloud model according to the PCL operator, wherein the aggregate point cloud model comprises a plurality of aggregate point clouds, and three-dimensional characteristics and two-dimensional characteristics of each aggregate point cloud are extracted;
the model training module is used for taking all three-dimensional features and two-dimensional features of the aggregate point cloud model obtained by the feature obtaining module as feature data sets, establishing an XGBoost model, taking the feature data sets obtained by the aggregate effective area generating module as input, taking the gear and quality of all aggregates as a label set, training the XGBoost model, and taking the trained XGBoost model as an aggregate grading rapid detection model;
the detection module is used for acquiring three-dimensional point cloud data of the aggregate to be detected, acquiring a characteristic data set of the three-dimensional point cloud data of the aggregate to be detected, inputting the characteristic data set of the three-dimensional point cloud data of the aggregate to be detected into the aggregate grading rapid detection model, and acquiring the gear and quality of the aggregate to be detected.
7. The aggregate gradation rapid detection system of claim 6, wherein the denoising in the denoising module comprises the steps of:
1) Threshold segmentation is carried out on the z-channel depth map;
2) And converting the image subjected to threshold segmentation into a real height image according to the z-axis resolution, denoising the real height image by sequentially executing opening operation and closing operation, and obtaining a denoised z-channel depth image.
8. The rapid aggregate gradation detection system of claim 6, wherein the first threshold is 300 pixels.
9. The rapid aggregate grading detection system according to claim 6, wherein each aggregate point cloud in the aggregate point cloud model in the feature acquisition module satisfies:
1) The pixel distance between the eight neighborhood point sets is smaller than a second threshold, and the second threshold is 15 pixels;
2) The number of the connected point sets in each aggregate point cloud is larger than a third threshold, and the third threshold is 500 pixels.
CN202110219609.8A 2021-02-26 2021-02-26 Method and system for establishing and detecting aggregate grading rapid detection model Active CN112949463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110219609.8A CN112949463B (en) 2021-02-26 2021-02-26 Method and system for establishing and detecting aggregate grading rapid detection model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110219609.8A CN112949463B (en) 2021-02-26 2021-02-26 Method and system for establishing and detecting aggregate grading rapid detection model

Publications (2)

Publication Number Publication Date
CN112949463A CN112949463A (en) 2021-06-11
CN112949463B true CN112949463B (en) 2023-08-04

Family

ID=76246581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110219609.8A Active CN112949463B (en) 2021-02-26 2021-02-26 Method and system for establishing and detecting aggregate grading rapid detection model

Country Status (1)

Country Link
CN (1) CN112949463B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004113834A2 (en) * 2003-06-17 2004-12-29 Troxler Electronic Laboratories, Inc. Method of determining a dimension of a sample of a construction material and associated apparatus
CN109523552A (en) * 2018-10-24 2019-03-26 青岛智能产业技术研究院 Three-dimension object detection method based on cone point cloud
CN110458119A (en) * 2019-08-15 2019-11-15 中国水利水电科学研究院 A kind of aggregate gradation method for quickly identifying of non-contact measurement
CN110689008A (en) * 2019-09-17 2020-01-14 大连理工大学 Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025642B (en) * 2016-01-27 2018-06-22 百度在线网络技术(北京)有限公司 Vehicle's contour detection method and device based on point cloud data

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004113834A2 (en) * 2003-06-17 2004-12-29 Troxler Electronic Laboratories, Inc. Method of determining a dimension of a sample of a construction material and associated apparatus
CN109523552A (en) * 2018-10-24 2019-03-26 青岛智能产业技术研究院 Three-dimension object detection method based on cone point cloud
CN110458119A (en) * 2019-08-15 2019-11-15 中国水利水电科学研究院 A kind of aggregate gradation method for quickly identifying of non-contact measurement
CN110689008A (en) * 2019-09-17 2020-01-14 大连理工大学 Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Neural network model for calculating the particle size of road aggregate based on multiple feature factors; 裴莉莉; 孙朝云; 户媛姣; 李伟; 高尧; 郝雪丽; Journal of South China University of Technology (Natural Science Edition), No. 06; full text *

Also Published As

Publication number Publication date
CN112949463A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN110779937B (en) Casting product internal defect intelligent detection device
CN102914545B (en) Gear defect detection method and system based on computer vision
CN106355166A (en) Monitoring video and remote sensing image-based dust-haze spreading path drawing and source determination method
CN105388162B (en) Raw material silicon chip surface scratch detection method based on machine vision
CN110648364B (en) Multi-dimensional space solid waste visual detection positioning and identification method and system
CN111062915A (en) Real-time steel pipe defect detection method based on improved YOLOv3 model
CN106529410A (en) Haze diffusion path mapping and source determination method based on surveillance video
CN108921201A (en) Dam defect identification and classification method based on feature combination and CNN
WO2022166232A1 (en) Rock identification method, system and apparatus, terminal, and readable storage medium
CN104463199A (en) Rock fragment size classification method based on multiple features and segmentation recorrection
CN108828608B (en) Laser radar background data filtering method in vehicle detection method
CN105069395B (en) Roadmarking automatic identifying method based on Three Dimensional Ground laser scanner technique
CN114022474A (en) Particle grading rapid detection method based on YOLO-V4
CN117392097A (en) Additive manufacturing process defect detection method and system based on improved YOLOv8 algorithm
CN115205255A (en) Stone automatic grading method and system based on deep learning
CN116415843A (en) Multi-mode remote sensing auxiliary mine ecological environment evaluation method for weak network environment
CN112949463B (en) Method and system for establishing and detecting aggregate grading rapid detection model
CN111797730B (en) Automatic analysis method for cement clinker lithofacies
CN117269954B (en) Real-time identification method for multiple hidden diseases of ground penetrating radar road based on YOLO
CN116612132A (en) 3D point cloud target segmentation method based on aggregate characteristics
CN110517220A (en) A kind of surface of aggregate quantity detection method based on laser three-D data
CN116434054A (en) Intensive remote sensing ground object extraction method based on line-plane combination
CN102054278A (en) Object tracking method based on grid contraction
CN114494240A (en) Ballastless track slab crack measurement method based on multi-scale cooperation deep learning
Seyedin et al. Designing and programming an efficient software for sizing and counting various particles using image processing technique

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Li Wei

Inventor after: Yang Ming

Inventor after: Pei Lili

Inventor after: Hao Xueli

Inventor after: Shi Li

Inventor after: Liu Hanye

Inventor after: Ding Jiangang

Inventor before: Li Wei

Inventor before: Yang Ming

Inventor before: Pei Lili

Inventor before: Hao Xueli

Inventor before: Shi Li

Inventor before: Liu Hanye

Inventor before: Ding Jiangang

GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210611

Assignee: Shaanxi Xinglang Keao Network Technology Co.,Ltd.

Assignor: CHANG'AN University

Contract record no.: X2023980049357

Denomination of invention: Establishment, Detection Method and System of a Rapid Detection Model for Aggregate Grading

Granted publication date: 20230804

License type: Common License

Record date: 20231205