CN114677331A - Pipe defect detection method and device based on fusion of gray level image and depth data - Google Patents
- Publication number
- CN114677331A (application CN202210203782.3A)
- Authority
- CN
- China
- Prior art keywords
- defect
- depth
- defect candidate
- candidate region
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
Abstract
The invention discloses a pipe defect detection method and device based on fusion of gray level images and depth data, and relates to the technical field of machine vision detection. The method comprises the following steps: acquiring surface characteristic information of a sample to be detected, wherein the surface feature information comprises depth data and a grayscale image; obtaining a defect candidate region with depth information according to the depth data; obtaining a defect candidate region with gray-scale features according to the grayscale image and a trained target detection model; and performing fusion matching on the defect candidate region with depth information and the defect candidate region with gray-scale features to obtain a defect detection result. The invention can integrate multi-dimensional characteristics such as depth and color difference, realizes quantification of defect severity on the basis of qualitative analysis, and greatly improves the accuracy of defect detection.
Description
Technical Field
The invention relates to the technical field of machine vision detection, in particular to a pipe defect detection method and device based on fusion of gray level images and depth data.
Background
Surface defects are a key factor determining the quality of pipe products, so effective defect detection is very important for grading products and reducing customer complaints. Most surface defect detection methods commonly used at the present stage rely on an industrial camera to acquire a two-dimensional grayscale image of the surface and use a specific detection algorithm to locate and identify defects. Such methods only use the reflection characteristics of light to analyze defects qualitatively and do not consider the depth dimension of the defects, so the identified defects suffer from practical problems such as inaccurate detection and undetermined severity.
The surface profile of a pipe is arc-shaped, unlike traditional flat products such as hot-rolled strip steel and medium plate. The irregular surface profile increases the difficulty of defect detection, and existing detection equipment is insufficient to support intelligent detection tasks. In addition, although depth information can be measured by a distance sensor, the arc-shaped surface prevents the depth of a surface defect from being confirmed directly, so the raw depth information cannot be used directly for defect depth measurement.
Disclosure of Invention
The invention provides a pipe defect detection method and device based on fusion of gray level images and depth data, and aims to solve the problems in the prior art that identified defects are detected inaccurately, their severity is ambiguous, detection is difficult due to the irregular surface profile, and existing detection equipment is insufficient to support intelligent detection tasks.
In order to solve the technical problems, the invention provides the following technical scheme:
in one aspect, the invention provides a pipe defect detection method based on fusion of gray level images and depth data, which is implemented by electronic equipment and comprises the following steps:
s1, obtaining surface characteristic information of the sample to be detected; wherein the surface feature information comprises depth data and a grayscale image.
S3, obtaining defect candidate area with gray feature according to gray image and trained target detection model
S4, defect candidate area with depth informationAnd defect candidate region having gradation characteristicsPerforming fusion matching to obtain a defect detection result; the defect detection result comprises position coordinates, size, defect category and defect depth of the defect; the defect classes include a primary defect class and a secondary defect class.
Optionally, the acquiring the surface feature information of the sample to be detected in S1 includes:
and acquiring the depth data and the gray level image of the sample to be detected by using a 3D camera.
The depth data includes surface three-dimensional characteristics of the sample to be detected.
The grayscale image includes the color, texture, and morphological characteristics of the sample to be detected.
Optionally, the obtaining of the defect candidate region with depth information according to the depth data in S2 includes:
and fitting to obtain a profile equation of the surface of the sample to be detected according to the depth data, projecting the depth data by taking the profile equation of the surface obtained by fitting as a reference surface to obtain a surface relative depth feature, and performing threshold segmentation processing on the surface relative depth feature to obtain a defect candidate region with depth information.
The defect candidate region having the depth information is a region exceeding a predetermined depth.
Alternatively, the profile equation of the surface of the sample to be detected is shown in the following formula (1):
Ax^2 + Bz^2 + Cxz + Dx + Ez + F = 0    (1)
wherein A, B, C, D, E and F are contour equation parameters; x is a transverse position distribution coordinate in the depth data; and z is the surface depth value of the sample to be detected in the depth data.
Optionally, the fitting process of the profile equation parameters includes:
and processing the depth data line by line, taking effective data in each line of depth data as sampling point data of the surface profile, and fitting the sampling point data through an RANSAC algorithm to obtain profile equation parameters.
The effective data are the depth values measured from the surface of the sample to be detected, i.e. the data greater than 0.
Optionally, projecting the depth data with a contour equation of the surface obtained by fitting as a reference plane, and obtaining the relative depth feature of the surface includes:
The sampling points X(x_1, x_2, ... x_w) in the x direction of the depth data are input into the fitted surface profile equation in turn to obtain the z-direction reference-surface depth values Z'(z'_1, z'_2, ... z'_w), and the projected surface relative depth feature is calculated according to the following formula (2):

ẑ_i = z_i - z'_i,  i = 1, 2, ... w    (2)

wherein ẑ_i is the projected surface relative depth feature; z_i is the surface depth value of the pipe at sampling point x_i; and w is the number of valid sampling points.
Optionally, the performing a threshold segmentation process on the surface relative depth feature to obtain a defect candidate region with depth information includes:
and selecting an independent connected region with the absolute value of the surface relative depth characteristic larger than a preset detection threshold value as a defect candidate region with depth information.
The defect candidate area with the depth information includes position coordinates, size and defect depth of the defect candidate area.
Optionally, the obtaining of the defect candidate region with gray scale features according to the gray scale image and the trained target detection model in S3 includes:
and inputting the gray level image into the trained target detection model to obtain a defect candidate area with gray level characteristics.
The training data of the target detection model is a real defect picture.
The defect candidate region having the gradation feature includes the position coordinates, the size, and the defect type of the defect candidate region.
Optionally, in S4, performing fusion matching on the defect candidate region with depth information and the defect candidate region with gray-scale features to obtain a defect detection result comprises:
The primary defect category of each defect candidate region with depth information is the depth-class defect.
The defect candidate regions with depth information are traversed one by one; if a defect candidate region with gray-scale features exists at the corresponding position, the secondary category of the depth candidate region is identified as the category of that gray-scale candidate region.
If no defect candidate region with gray-scale features corresponds to the position of the depth candidate region, the secondary defect category of the depth candidate region is marked as to-be-classified.
For the defect candidate regions with gray-scale features that are not matched with any depth candidate region, the primary defect category is the color-difference defect, the depth value of the defect is 0, and the secondary defect category is the category identified by the target detection model.
On the other hand, the invention provides a pipe defect detection device based on fusion of gray level images and depth data, which is applied to a pipe defect detection method based on fusion of gray level images and depth data, and comprises the following steps:
the acquisition module is used for acquiring surface characteristic information of a sample to be detected; wherein the surface feature information comprises depth data and a grayscale image.
A depth defect candidate region module, configured to obtain a defect candidate region with depth information according to the depth data.
A gray defect candidate region module, configured to obtain a defect candidate region with gray-scale features according to the grayscale image and the trained target detection model.
An output module, configured to perform fusion matching on the defect candidate region with depth information and the defect candidate region with gray-scale features to obtain a defect detection result; the defect detection result comprises the position coordinates, size, defect category and defect depth of each defect; the defect categories include a primary defect category and a secondary defect category.
Optionally, the obtaining module is further configured to:
and acquiring the depth data and the gray level image of the sample to be detected by using a 3D camera.
The depth data includes surface three-dimensional characteristics of the sample to be detected.
The grayscale image includes the color, texture, and morphological characteristics of the sample to be detected.
Optionally, the depth defect candidate region module is further configured to:
and fitting to obtain a profile equation of the surface of the sample to be detected according to the depth data, projecting the depth data by taking the profile equation of the surface obtained by fitting as a reference surface to obtain a surface relative depth feature, and performing threshold segmentation processing on the surface relative depth feature to obtain a defect candidate region with depth information.
The defect candidate region having the depth information is a region exceeding a predetermined depth.
Alternatively, the profile equation of the surface of the sample to be detected is shown in the following formula (1):
Ax^2 + Bz^2 + Cxz + Dx + Ez + F = 0    (1)
wherein A, B, C, D, E and F are contour equation parameters; x is a transverse position distribution coordinate in the depth data; and z is the surface depth value of the sample to be detected in the depth data.
Optionally, the depth defect candidate region module is further configured to:
and processing the depth data line by line, taking effective data in each line of depth data as sampling point data of the surface profile, and fitting the sampling point data through an RANSAC algorithm to obtain profile equation parameters.
The effective data are the depth values measured from the surface of the sample to be detected, i.e. the data greater than 0.
Optionally, the depth defect candidate region module is further configured to:
The sampling points X(x_1, x_2, ... x_w) in the x direction of the depth data are input into the fitted surface profile equation in turn to obtain the z-direction reference-surface depth values Z'(z'_1, z'_2, ... z'_w), and the projected surface relative depth feature is calculated according to the following formula (2):

ẑ_i = z_i - z'_i,  i = 1, 2, ... w    (2)

wherein ẑ_i is the projected surface relative depth feature; z_i is the surface depth value of the pipe at sampling point x_i; and w is the number of valid sampling points.
Optionally, the depth defect candidate region module is further configured to:
and selecting an independent connected region with the absolute value of the surface relative depth characteristic larger than a preset detection threshold value as a defect candidate region with depth information.
The defect candidate area with the depth information includes position coordinates, size and defect depth of the defect candidate area.
Optionally, the gray defect candidate region module is further configured to:
and inputting the gray level image into the trained target detection model to obtain a defect candidate area with gray level characteristics.
The training data of the target detection model is a real defect picture.
The defect candidate region having the gradation feature includes the position coordinates, the size, and the defect type of the defect candidate region.
Optionally, the output module is further configured to:
The primary defect category of each defect candidate region with depth information is the depth-class defect.
The defect candidate regions with depth information are traversed one by one; if a defect candidate region with gray-scale features exists at the corresponding position, the secondary category of the depth candidate region is identified as the category of that gray-scale candidate region.
If no defect candidate region with gray-scale features corresponds to the position of the depth candidate region, the secondary defect category of the depth candidate region is marked as to-be-classified.
For the defect candidate regions with gray-scale features that are not matched with any depth candidate region, the primary defect category is the color-difference defect, the depth value of the defect is 0, and the secondary defect category is the category identified by the target detection model.
In one aspect, an electronic device is provided, and the electronic device includes a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the pipe defect detection method based on grayscale image and depth data fusion.
In one aspect, a computer-readable storage medium is provided, where at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the pipe defect detection method based on grayscale image and depth data fusion.
The technical scheme provided by the embodiment of the invention has at least the following beneficial effects:
In the scheme, aiming at the problems existing in surface defect detection of conventional pipes, the grayscale image and the depth data acquired by the 3D camera are fused on the basis of general grayscale-image defect detection, so that a two-dimensional grayscale image for detecting defect targets is obtained and the three-dimensional depth characteristics of the surface are output synchronously. The depth data are fitted to obtain a pipe surface profile equation, and the depth data are projected onto this profile datum plane to obtain the relative depth information of the pipe surface. Finally, multi-level labels are defined for the defect categories by combining the candidate regions extracted from the depth data with the candidate regions identified on the grayscale image, and a quantified defect detection result is output. This avoids the problem that defect severity cannot be quantified when the grayscale image is used alone; the added depth-dimension features markedly improve the accuracy of defect judgment, raise defect detection to a more intelligent level, and provide strong support for reducing labor cost and grading products intelligently. The detected defect data are more reliable, providing a strong guarantee for intelligent classification of products.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a pipe defect detection method based on fusion of a gray image and depth data according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of relative depth acquisition of the surface of a pipe provided by an embodiment of the present invention;
FIG. 3 is a diagram illustrating a fusion process of defect candidate regions under two types of data detection according to an embodiment of the present invention;
FIG. 4 is a block diagram of a pipe defect detection apparatus based on fusion of gray scale images and depth data according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, an embodiment of the present invention provides a pipe defect detection method based on fusion of a grayscale image and depth data, which can be implemented by an electronic device. The processing flow of the method may include the following steps:
S1, acquiring the surface characteristic information of the sample to be detected.
Wherein the surface feature information comprises depth data and a grayscale image.
Optionally, the acquiring the surface feature information of the sample to be detected in S1 includes:
and acquiring the depth data and the gray level image of the sample to be detected by using a 3D camera.
The depth data includes surface three-dimensional characteristics of the sample to be detected.
The grayscale image includes the color, texture, and morphological characteristics of the sample to be detected.
In a possible implementation, the pipe surface feature information may be collected by using 6 annular 3D cameras, and the pipe surface feature information may include a grayscale image with characteristics of color, texture, shape, and the like, and depth data with three-dimensional characteristics of the surface.
Optionally, a color filter may be added to the camera to shield ambient light from interference.
The 3D camera synchronously acquires the grayscale image and the depth data at the same moment, and the size and shooting position of the two kinds of data are consistent. The grayscale image size is 2560 x 1080 with a data range of [0, 255]. The depth data matrix size is 2560 x 1080; the data take the camera target surface as the zero point, the depth value is the distance from the captured pipe surface to the camera target surface, and the depth measurement precision is 0.1 mm.
Optionally, the obtaining, according to the depth data, the defect candidate region with depth information in S2 includes:
and S21, fitting according to the depth data to obtain a profile equation of the surface of the sample to be detected.
Alternatively, the profile equation of the surface of the sample to be detected is shown in the following formula (1):
Ax2+Bz2+Cxz+Dx+Ez+F=0 (1)
wherein A, B, C, D, E and F are contour equation parameters; x is a transverse position distribution coordinate in the depth data; and z is the surface depth value of the sample to be detected in the depth data.
Optionally, the fitting process of the profile equation parameters includes:
and processing the depth data line by line, taking effective data in each line of depth data as sampling point data of the surface profile, and fitting the sampling point data through an RANSAC algorithm to obtain profile equation parameters.
The effective data is the depth value from the surface of the sample to be detected and is data larger than 0.
In a possible embodiment, as shown in fig. 2, the fitting process may handle the depth data line by line, where each line of depth data serves as sampling point data of the surface profile and the number of sampling points is 2560. The depth value of an area where no pipe is captured is a specific negative number (for example, -100000), while the depth value of the captured pipe area is a number greater than 0. The effective data of the pipe area are taken as sampling points, where X(x_1, x_2, ... x_w) are the transverse position distribution coordinates of the pipe surface, Z(z_1, z_2, ... z_w) are the corresponding surface depth values at those points, and w is the number of effective data points. The sampling point data are fitted with the RANSAC (Random Sample Consensus) algorithm to obtain the profile equation parameters A, B, C, D, E and F.
In the RANSAC algorithm fitting process, 16 sampling points are randomly selected from the w effective data points in each round; 100 rounds are iterated in turn and the best fitting parameters are retained.
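As an illustration, the line-wise RANSAC conic fit described above can be sketched as follows. This is a minimal sketch under stated assumptions, not the patented implementation: the function names and the algebraic-residual inlier test with its tolerance are assumptions, while the 16-point samples and 100 rounds follow the text.

```python
import numpy as np

def fit_conic(x, z):
    """Least-squares fit of A*x^2 + B*z^2 + C*x*z + D*x + E*z + F = 0:
    the parameter vector is the smallest right singular vector of the
    design matrix, normalized to unit length."""
    M = np.column_stack([x**2, z**2, x * z, x, z, np.ones_like(x)])
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    p = vt[-1]
    return p / np.linalg.norm(p)

def ransac_conic(x, z, n_sample=16, n_iter=100, tol=0.05, rng=None):
    """RANSAC loop as described in the text: 16 random sampling points per
    round, 100 rounds, keeping the parameters with the most inliers.
    The inlier test (|algebraic residual| < tol) is an assumed choice."""
    rng = np.random.default_rng(rng)
    best_p, best_inliers = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(x), size=n_sample, replace=False)
        p = fit_conic(x[idx], z[idx])
        M = np.column_stack([x**2, z**2, x * z, x, z, np.ones_like(x)])
        residual = np.abs(M @ p)
        inliers = int((residual < tol).sum())
        if inliers > best_inliers:
            best_p, best_inliers = p, inliers
    return best_p, best_inliers
```

For a row of w effective points, `fit_conic` alone is a plain least-squares fit; the RANSAC wrapper suppresses defect pixels, which act as outliers relative to the ideal arc.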
And S22, projecting the depth data by taking the contour equation of the surface obtained by fitting as a datum plane to obtain the relative depth characteristic of the surface.
Optionally, projecting the depth data with a profile equation of the surface obtained by fitting as a reference plane, and obtaining the relative depth feature of the surface includes:
The sampling points X(x_1, x_2, ... x_w) in the x direction of the depth data are input into the fitted surface profile equation in turn to obtain the z-direction reference-surface depth values Z'(z'_1, z'_2, ... z'_w), and the projected surface relative depth feature is calculated according to the following formula (2):

ẑ_i = z_i - z'_i,  i = 1, 2, ... w    (2)

wherein ẑ_i is the projected surface relative depth feature; z_i is the surface depth value of the pipe at sampling point x_i; and w is the number of valid sampling points.
In one possible embodiment, the invalid data is a particular negative number set by the camera.
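A hedged sketch of the projection step: for each x the fitted conic of formula (1) is solved as a quadratic in z to obtain the reference depth z', and the relative depth is taken as measured depth minus reference depth. The root-selection rule (root nearest the measured depth) and the assumption B != 0 are illustrative choices, not stated in the text.

```python
import numpy as np

def reference_depth(p, x, z_measured):
    """Solve A*x^2 + B*z^2 + C*x*z + D*x + E*z + F = 0 for z at each x
    (a quadratic in z, assuming B != 0) and pick the root closest to
    the measured depth -- an assumed selection rule."""
    A, B, C, D, E, F = p
    a = B
    b = C * x + E
    c = A * x**2 + D * x + F
    disc = np.sqrt(np.maximum(b**2 - 4 * a * c, 0.0))  # clamp numeric noise
    z1 = (-b + disc) / (2 * a)
    z2 = (-b - disc) / (2 * a)
    return np.where(np.abs(z1 - z_measured) < np.abs(z2 - z_measured), z1, z2)

def relative_depth(p, x, z):
    """Relative depth feature: measured surface depth minus the
    reference-surface depth at the same x."""
    return z - reference_depth(p, x, z)
```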
And S23, performing threshold segmentation processing on the surface relative depth features to obtain defect candidate areas with depth information.
The defect candidate region having the depth information is a region exceeding a predetermined depth.
Optionally, the performing a threshold segmentation process on the surface relative depth feature to obtain a defect candidate region with depth information includes:
and selecting an independent connected region with the absolute value of the surface relative depth characteristic larger than a preset detection threshold value as a defect candidate region with depth information.
The defect candidate area with the depth information includes position coordinates, size and defect depth of the defect candidate area.
In one possible embodiment, the detection threshold on the surface relative depth is set according to actual requirements, for example to 0.25 mm. Each independent connected region whose relative depth absolute value is greater than the detection threshold is defined as a defect candidate region with depth information, including the position coordinates, size and defect depth of the candidate region.
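The threshold segmentation step above can be sketched as a simple connected-component search on the relative-depth map. The 8-connectivity and the dictionary output format are assumptions; the |relative depth| > threshold criterion and the 0.25 mm example threshold follow the text.

```python
import numpy as np
from collections import deque

def depth_defect_candidates(rel_depth, threshold=0.25):
    """Threshold the relative-depth map at |d| > threshold and group the
    surviving pixels into independent connected regions (8-connectivity,
    an assumed choice). Each region is returned with its bounding box
    and its most extreme relative depth as the defect depth."""
    mask = np.abs(rel_depth) > threshold
    labels = np.zeros(mask.shape, dtype=int)
    regions = []
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # pixel already belongs to an earlier region
        labels[seed] = len(regions) + 1
        queue, pixels = deque([seed]), [seed]
        while queue:  # breadth-first flood fill
            r, c = queue.popleft()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                            and mask[rr, cc] and not labels[rr, cc]):
                        labels[rr, cc] = len(regions) + 1
                        queue.append((rr, cc))
                        pixels.append((rr, cc))
        rows = [p[0] for p in pixels]
        cols = [p[1] for p in pixels]
        depths = [rel_depth[p] for p in pixels]
        regions.append({
            "bbox": (min(rows), min(cols), max(rows), max(cols)),
            "depth": max(depths, key=abs),
        })
    return regions
```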
S3, obtaining a defect candidate region with gray-scale features according to the grayscale image and the trained target detection model.
Optionally, the obtaining of the defect candidate region with gray scale features according to the gray scale image and the trained target detection model in S3 includes:
and inputting the gray level image into the trained target detection model to obtain a defect candidate area with gray level characteristics.
The training data of the target detection model is a real defect picture.
The defect candidate region having the gradation feature includes the position coordinates, the size, and the defect type of the defect candidate region.
In a possible implementation manner, defects are identified on the grayscale image with a target detection model; the model can be selected from, but is not limited to, the YOLOv5, RetinaNet and EfficientDet models, and is trained on real defect picture samples collected in advance.
The sample data contain 9 defect categories in total: folding, scab, tearing, scratch, collapse, delamination, stain, iron scale and oil mark, with more than 1000 samples per category. The sample set is divided into a training set, a verification set and a test set at a ratio of 8:1:1, used for training the model and verifying model indexes. The training process may use flipping, cropping, scaling, mixing, adding noise and similar operations for sample augmentation.
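The 8:1:1 split above can be sketched as follows; the English class names are illustrative renderings of this translation, and the per-class splitting details are an assumption.

```python
import random

# Illustrative English renderings of the 9 defect categories.
CLASSES = ["folding", "scab", "tearing", "scratch", "collapse",
           "delamination", "stain", "iron scale", "oil mark"]

def split_samples(samples, seed=0):
    """Shuffle a sample list and divide it into train/val/test at 8:1:1."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * 0.8)
    n_val = int(len(shuffled) * 0.1)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```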
After the grayscale image is input into the trained model, the recognized defect candidate regions are output, including not only the position coordinates and size of each candidate region but also the corresponding defect category.
S4, performing fusion matching on the defect candidate region with depth information and the defect candidate region with gray-scale features to obtain a defect detection result; the defect detection result comprises the position coordinates, size, defect category and defect depth of each defect; the defect categories include a primary defect category and a secondary defect category.
Optionally, in S4, performing fusion matching on the defect candidate region with depth information and the defect candidate region with gray-scale features to obtain a defect detection result comprises:
The primary defect category of each defect candidate region with depth information is the depth-class defect.
The defect candidate regions with depth information are traversed one by one; if a defect candidate region with gray-scale features exists at the corresponding position, the secondary category of the depth candidate region is identified as the category of that gray-scale candidate region.
If no defect candidate region with gray-scale features corresponds to the position of the depth candidate region, the secondary defect category of the depth candidate region is marked as to-be-classified.
For the defect candidate regions with gray-scale features that are not matched with any depth candidate region, the primary defect category is the color-difference defect, the depth value of the defect is 0, and the secondary defect category is the category identified by the target detection model.
In one possible implementation, as shown in fig. 3, the category corresponding to each defect candidate box is represented by a multi-level label, and the final category is defined according to the following steps. The defect candidate regions with depth information are traversed one by one, and their primary category is defined as the depth-class defect; if a gray-scale defect candidate region is found at the corresponding position, its identified category is used as the secondary category of the depth candidate region (including folding, scab, tearing, scratch, collapse and delamination); if no corresponding gray-scale candidate region is found, the secondary category is defined as to-be-classified. Then the gray-scale defect candidate regions not matched with any depth candidate region are traversed; their primary category is defined as the color-difference defect, the depth value is 0, and the secondary category (including stain, iron scale and oil mark) is the category obtained by the model identification. Finally, the complete information of the pipe surface defects (position coordinates, size, defect category and defect depth) is obtained.
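A sketch of the multi-level label fusion described above. The text only says a gray-scale candidate "corresponds to the position" of a depth candidate without naming a criterion; the bounding-box IoU test and its threshold used here are illustrative assumptions.

```python
def box_iou(a, b):
    """Intersection-over-union of (rmin, cmin, rmax, cmax) boxes."""
    r0, c0 = max(a[0], b[0]), max(a[1], b[1])
    r1, c1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, r1 - r0) * max(0, c1 - c0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def fuse_candidates(depth_regions, gray_regions, iou_thresh=0.1):
    """Multi-level label fusion: every depth candidate gets the primary
    label 'depth defect'; a positionally matching gray-scale candidate
    supplies its secondary label, otherwise 'to be classified'.
    Unmatched gray-scale candidates become 'color-difference defect'
    entries with defect depth 0."""
    results, matched = [], set()
    for d in depth_regions:
        secondary = "to be classified"
        for i, g in enumerate(gray_regions):
            if i not in matched and box_iou(d["bbox"], g["bbox"]) > iou_thresh:
                secondary = g["category"]
                matched.add(i)
                break
        results.append({"bbox": d["bbox"], "depth": d["depth"],
                        "primary": "depth defect", "secondary": secondary})
    for i, g in enumerate(gray_regions):
        if i not in matched:
            results.append({"bbox": g["bbox"], "depth": 0.0,
                            "primary": "color-difference defect",
                            "secondary": g["category"]})
    return results
```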
In the embodiment of the invention, aiming at the problems of surface defect detection of existing pipes, the grayscale image and the depth data acquired by the 3D camera are fused on the basis of general grayscale-image defect detection, so that a two-dimensional grayscale image for detecting defect targets is obtained and the three-dimensional depth characteristics of the surface are output synchronously. The depth data are fitted to obtain a pipe surface profile equation, and the depth data are projected onto this profile datum plane to obtain the relative depth information of the pipe surface. Finally, multi-level labels are defined for the defect categories by combining the candidate regions extracted from the depth data with the candidate regions identified on the grayscale image, and a quantified defect detection result is output. This avoids the problem that defect severity cannot be quantified when only grayscale images are used; the added depth-dimension features markedly improve the accuracy of defect judgment, raise defect detection to a more intelligent level, and provide strong support for reducing labor cost and grading products intelligently. The detected defect data are more reliable, providing a strong guarantee for intelligent classification of products.
As shown in fig. 4, an embodiment of the present invention provides a pipe defect detecting apparatus 400 based on fusion of grayscale images and depth data, where the apparatus 400 is applied to implement a pipe defect detecting method based on fusion of grayscale images and depth data, and the apparatus 400 includes:
an obtaining module 410, configured to obtain surface feature information of a sample to be detected; wherein the surface feature information comprises depth data and a grayscale image.
A depth defect candidate region module 420, configured to obtain defect candidate regions with depth information according to the depth data.
A gray defect candidate region module 430, configured to obtain defect candidate regions with grayscale features according to the grayscale image and the trained target detection model.
An output module 440, configured to perform fusion matching on the defect candidate regions with depth information and the defect candidate regions with grayscale features to obtain a defect detection result; the defect detection result comprises the position coordinates, size, defect category and defect depth of the defects; the defect categories comprise a primary defect category and a secondary defect category.
Optionally, the obtaining module 410 is further configured to:
and acquiring the depth data and the gray level image of the sample to be detected by using a 3D camera.
The depth data includes surface three-dimensional characteristics of the sample to be detected.
The grayscale image includes the color, texture, and morphological characteristics of the sample to be detected.
Optionally, the depth defect candidate region module 420 is further configured to:
and fitting to obtain a profile equation of the surface of the sample to be detected according to the depth data, projecting the depth data by taking the profile equation of the surface obtained by fitting as a reference surface to obtain a surface relative depth feature, and performing threshold segmentation processing on the surface relative depth feature to obtain a defect candidate region with depth information.
The defect candidate region having the depth information is a region exceeding a predetermined depth.
Optionally, the profile equation of the surface of the sample to be detected is shown in the following formula (1):
Ax² + Bz² + Cxz + Dx + Ez + F = 0   (1)
wherein A, B, C, D, E and F are contour equation parameters; x is a transverse position distribution coordinate in the depth data; and z is the surface depth value of the sample to be detected in the depth data.
Optionally, the depth defect candidate region module 420 is further configured to:
The depth data are processed line by line; the effective data in each line of depth data are taken as sampling point data of the surface profile, and the profile equation parameters are obtained by fitting the sampling point data with the RANSAC algorithm.
The effective data are depth values measured from the surface of the sample to be detected that are greater than 0.
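As an illustration only, the line-by-line RANSAC fit described above can be sketched as follows. The function names, iteration count and inlier tolerance are illustrative and not specified by the patent; the sketch fits the general conic of formula (1) to one line of (x, z) sampling points and rejects defect pixels as outliers:

```python
import numpy as np

def fit_conic(pts):
    # Least-squares fit of A*x^2 + B*z^2 + C*x*z + D*x + E*z + F = 0.
    # The equation is homogeneous in its parameters, so the parameter vector
    # is the null-space direction of the design matrix (up to scale).
    x, z = pts[:, 0], pts[:, 1]
    M = np.column_stack([x**2, z**2, x * z, x, z, np.ones_like(x)])
    _, _, vt = np.linalg.svd(M)
    p = vt[-1]
    return p / np.linalg.norm(p)

def ransac_conic(pts, n_iter=200, tol=0.05, seed=0):
    # RANSAC: repeatedly fit a conic to 5 random sampling points and keep the
    # hypothesis under which the most points have near-zero algebraic residual.
    rng = np.random.default_rng(seed)
    x, z = pts[:, 0], pts[:, 1]
    M = np.column_stack([x**2, z**2, x * z, x, z, np.ones_like(x)])
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 5, replace=False)]
        p = fit_conic(sample)
        inliers = np.abs(M @ p) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers for the final profile-equation parameters.
    return fit_conic(pts[best_inliers]), best_inliers
```

For a round pipe the inlier sampling points lie on (part of) an ellipse; points displaced by surface defects fail the tolerance test and are excluded from the final fit.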
Optionally, the depth defect candidate region module 420 is further configured to:
Sampling points X = (x1, x2, … xw) in the x direction of the depth data are input in turn into the profile equation of the surface to obtain the z-direction reference-surface depth values Z′ = (z′1, z′2, … z′w), and the projected surface relative depth feature is calculated according to the following formula (2):

Δzi = zi − z′i, i = 1, 2, …, w   (2)

wherein Δzi is the projected surface relative depth feature at sampling point xi; zi is the surface depth value of the pipe at sampling point xi; z′i is the reference-surface depth value; and w is the number of valid sampling point data.
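For a fitted parameter set, formula (1) read as a quadratic in z yields the reference-surface depth z′ at each sampling point x, and the relative depth of formula (2) is the measured depth minus this reference. A minimal sketch (function names and the root-selection convention are assumptions, not from the patent):

```python
import numpy as np

def reference_depth(x, p, branch=1.0):
    # Read A*x^2 + B*z^2 + C*x*z + D*x + E*z + F = 0 as the quadratic
    # B*z^2 + (C*x + E)*z + (A*x^2 + D*x + F) = 0 and solve for z.
    # `branch` (+1 or -1) selects the quadratic root on the camera side.
    A, B, C, D, E, F = p
    b = C * x + E
    c = A * x**2 + D * x + F
    disc = np.sqrt(np.maximum(b**2 - 4.0 * B * c, 0.0))  # clip noise below 0
    return (-b + branch * disc) / (2.0 * B)

def relative_depth(x, z, p, branch=1.0):
    # Formula (2): relative depth = measured depth - reference-surface depth.
    return z - reference_depth(x, p, branch)
```

B is nonzero when the pipe cross-section is fitted as an ellipse, so the division is well defined; a pit on the surface shows up as a relative depth of the opposite sign from a bump.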
Optionally, the depth defect candidate region module 420 is further configured to:
and selecting an independent connected region with the absolute value of the surface relative depth characteristic larger than a preset detection threshold value as a defect candidate region with depth information.
The defect candidate area with the depth information includes position coordinates, size and defect depth of the defect candidate area.
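The threshold segmentation described above can be sketched as a connected-component search over the relative-depth map. This is a minimal pure-Python/NumPy sketch assuming 4-connectivity; the candidate-record field names are illustrative:

```python
import numpy as np
from collections import deque

def depth_defect_candidates(rel_depth, thresh):
    # Pixels whose |relative depth| exceeds the detection threshold are
    # grouped into 4-connected regions; each independent region becomes one
    # candidate with position coordinates, size and defect depth.
    mask = np.abs(rel_depth) > thresh
    seen = np.zeros_like(mask)
    h, w = mask.shape
    candidates = []
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                queue, pixels = deque([(r, c)]), []
                seen[r, c] = True
                while queue:                      # BFS over one region
                    i, j = queue.popleft()
                    pixels.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            queue.append((ni, nj))
                rows = [p[0] for p in pixels]
                cols = [p[1] for p in pixels]
                candidates.append({
                    "x": min(cols), "y": min(rows),
                    "w": max(cols) - min(cols) + 1,
                    "h": max(rows) - min(rows) + 1,
                    "depth": float(max(abs(rel_depth[p]) for p in pixels)),
                })
    return candidates
```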
Optionally, the gray defect candidate area module 430 is further configured to:
and inputting the gray level image into the trained target detection model to obtain a defect candidate area with gray level characteristics.
The training data of the target detection model is a real defect picture.
The defect candidate region with the grayscale feature includes the position coordinates, the size and the defect category of the defect candidate region.
Optionally, the output module 440 is further configured to:
The primary defect category of the defect candidate regions with depth information is the depth-type defect.
The defect candidate regions with depth information are traversed one by one; if, among the defect candidate regions with grayscale features, there is a defect candidate region corresponding to the position of the current defect candidate region, the category identified for that grayscale defect candidate region is taken as the secondary defect category of the current defect candidate region.
If, among the defect candidate regions with grayscale features, there is no defect candidate region corresponding to the position of the current defect candidate region, the secondary defect category of the current defect candidate region is to-be-classified.
For the defect candidate regions with grayscale features that are not matched to any defect candidate region with depth information, the primary defect category is the color-difference defect, the defect depth value is 0, and the secondary defect category is the category identified on the grayscale image.
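The fusion-matching rule can be sketched as follows. The patent only requires a "corresponding position"; IoU-based box matching is one plausible realization, and the threshold, field names and label strings below are assumptions:

```python
def iou(a, b):
    # Boxes as (x, y, w, h); intersection-over-union in [0, 1].
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / (aw * ah + bw * bh - inter) if inter else 0.0

def fuse(depth_regions, gray_regions, iou_thresh=0.3):
    # Depth candidates get primary label "depth"; the secondary label comes
    # from a position-matched grayscale candidate, else "to-be-classified".
    # Unmatched grayscale candidates become "color-difference" with depth 0.
    results, matched = [], set()
    for d in depth_regions:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gray_regions):
            v = iou(d["box"], g["box"])
            if v > best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            secondary = gray_regions[best]["category"]
        else:
            secondary = "to-be-classified"
        results.append({"box": d["box"], "depth": d["depth"],
                        "primary": "depth", "secondary": secondary})
    for i, g in enumerate(gray_regions):
        if i not in matched:
            results.append({"box": g["box"], "depth": 0.0,
                            "primary": "color-difference",
                            "secondary": g["category"]})
    return results
```

Each output record then carries the complete defect information named in the text: position coordinates and size (the box), the two-level category, and the defect depth.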
In the embodiment of the invention, aiming at the problems of existing pipe surface defect detection, on the basis of the general grayscale-image defect detection means, fusion detection is performed on the grayscale image and the depth data acquired by the 3D camera, so that not only is a two-dimensional grayscale image for detecting the defect target obtained, but the three-dimensional depth features of the surface are also output synchronously. A pipe surface profile equation is obtained by fitting the depth data, and the depth data are projected onto the profile datum plane to obtain the relative depth information of the pipe surface. Finally, the candidate regions extracted from the depth data and the candidate regions identified on the grayscale image are combined, multi-level labels are defined for the defect categories, and a quantized defect detection result is output. This avoids the problem that the severity of defects cannot be quantified when the grayscale image is used alone; the added depth-dimension features markedly improve the accuracy of defect judgment, raise defect detection to a more intelligent level, and provide powerful support for reducing labor cost and intelligently grading products. The method effectively improves defect detection accuracy, the detected defect data are more reliable, and the maximum guarantee is provided for intelligent classification of products.
Fig. 5 is a schematic structural diagram of an electronic device 500 according to an embodiment of the present invention. The electronic device 500 may vary considerably with configuration or performance, and may include one or more processors (CPUs) 501 and one or more memories 502, where the memory 502 stores at least one instruction that is loaded and executed by the processor 501 to implement the following method:
S1, obtaining surface feature information of a sample to be detected; wherein the surface feature information comprises depth data and a grayscale image.
S2, obtaining defect candidate regions with depth information according to the depth data.
S3, obtaining defect candidate regions with grayscale features according to the grayscale image and the trained target detection model.
S4, performing fusion matching on the defect candidate regions with depth information and the defect candidate regions with grayscale features to obtain a defect detection result; the defect detection result comprises the position coordinates, size, defect category and defect depth of the defects; the defect categories comprise a primary defect category and a secondary defect category.
In an exemplary embodiment, a computer readable storage medium, such as a memory including instructions executable by a processor in a terminal, is also provided to perform the above-described method for pipe defect detection based on grayscale image and depth data fusion. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. A pipe defect detection method based on fusion of gray level images and depth data is characterized by comprising the following steps:
S1, obtaining surface feature information of a sample to be detected; wherein the surface feature information comprises depth data and a grayscale image;
S2, obtaining defect candidate regions with depth information according to the depth data;
S3, obtaining defect candidate regions with grayscale features according to the grayscale image and the trained target detection model;
S4, performing fusion matching on the defect candidate regions with depth information and the defect candidate regions with grayscale features to obtain a defect detection result; the defect detection result comprises the position coordinates, size, defect category and defect depth of the defects; the defect categories comprise a primary defect category and a secondary defect category.
2. The method according to claim 1, wherein the obtaining of the surface characteristic information of the sample to be detected in S1 comprises:
acquiring depth data and a gray image of a sample to be detected by using a 3D camera;
the depth data comprises surface three-dimensional characteristics of a sample to be detected;
the grayscale image includes color, texture, and morphological characteristics of the sample to be detected.
3. The method according to claim 1, wherein the obtaining of the defect candidate region with depth information from the depth data in S2 comprises:
fitting to obtain a profile equation of the surface of the sample to be detected according to the depth data, projecting the depth data by taking the profile equation of the surface obtained by fitting as a reference surface to obtain a surface relative depth feature, and performing threshold segmentation processing on the surface relative depth feature to obtain a defect candidate region with depth information;
wherein the defect candidate region with depth information is a region exceeding a specified depth size.
4. The method according to claim 3, wherein the equation of the profile of the surface of the sample to be detected is represented by the following formula (1):
Ax² + Bz² + Cxz + Dx + Ez + F = 0   (1)
wherein A, B, C, D, E and F are contour equation parameters; x is a transverse position distribution coordinate in the depth data; and z is the surface depth value of the sample to be detected in the depth data.
5. The method of claim 4, wherein the fitting process of the profile equation parameters comprises:
processing the depth data line by line, taking effective data in each line of depth data as sampling point data of a surface profile, and fitting the sampling point data through an RANSAC algorithm to obtain profile equation parameters;
the effective data are depth values measured from the surface of the sample to be detected that are greater than 0.
6. The method of claim 3, wherein projecting the depth data with reference to a profile equation of the surface obtained by fitting comprises:
sampling points X = (x1, x2, … xw) in the x direction of the depth data are input in turn into the profile equation of the surface to obtain the z-direction reference-surface depth values Z′ = (z′1, z′2, … z′w), and the projected surface relative depth feature is calculated according to the following formula (2):

Δzi = zi − z′i, i = 1, 2, …, w   (2)

wherein Δzi is the projected surface relative depth feature at sampling point xi; zi is the surface depth value of the pipe at sampling point xi; and w is the number of valid sampling point data.
7. The method of claim 3, wherein the thresholding the surface relative depth features to obtain defect candidate regions with depth information comprises:
selecting an independent connected region with the absolute value of the surface relative depth characteristic larger than a preset detection threshold value as a defect candidate region with depth information;
the defect candidate area with the depth information comprises position coordinates, size and defect depth of the defect candidate area.
8. The method according to claim 1, wherein the obtaining of the defect candidate region with gray scale features according to the gray scale image and the trained target detection model in S3 comprises:
inputting the gray level image into a trained target detection model to obtain a defect candidate area with gray level characteristics;
training data of the target detection model are real defect pictures;
the defect candidate area with the gray feature comprises position coordinates, size and defect type of the defect candidate area.
9. The method according to claim 1, wherein the performing fusion matching on the defect candidate regions with depth information and the defect candidate regions with grayscale features in S4 to obtain a defect detection result comprises:
the primary defect category of the defect candidate regions with depth information is the depth-type defect;
the defect candidate regions with depth information are traversed one by one; if, among the defect candidate regions with grayscale features, there is a defect candidate region corresponding to the position of the current defect candidate region, the category identified for that grayscale defect candidate region is taken as the secondary defect category of the current defect candidate region;
if, among the defect candidate regions with grayscale features, there is no defect candidate region corresponding to the position of the current defect candidate region, the secondary defect category of the current defect candidate region is to-be-classified;
for the defect candidate regions with grayscale features that are not matched to any defect candidate region with depth information, the primary defect category is the color-difference defect, the defect depth value is 0, and the secondary defect category is the category identified on the grayscale image.
10. A pipe defect detection device based on fusion of gray level images and depth data is characterized by comprising:
The acquisition module is used for acquiring surface characteristic information of a sample to be detected; wherein the surface feature information comprises depth data and a grayscale image;
a depth defect candidate region module, configured to obtain defect candidate regions with depth information according to the depth data;
a gray defect candidate region module, configured to obtain defect candidate regions with grayscale features according to the grayscale image and the trained target detection model;
an output module, configured to perform fusion matching on the defect candidate regions with depth information and the defect candidate regions with grayscale features to obtain a defect detection result; the defect detection result comprises the position coordinates, size, defect category and defect depth of the defects; the defect categories comprise a primary defect category and a secondary defect category.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210203782.3A CN114677331A (en) | 2022-03-02 | 2022-03-02 | Pipe defect detection method and device based on fusion of gray level image and depth data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114677331A true CN114677331A (en) | 2022-06-28 |
Family
ID=82073215
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114677331A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN116645364A (en) * | 2023-07-18 | 2023-08-25 | 金乡县金沪合金钢有限公司 | Alloy steel casting air hole defect detection method based on image data
CN116645364B (en) * | 2023-07-18 | 2023-10-27 | 金乡县金沪合金钢有限公司 | Alloy steel casting air hole defect detection method based on image data
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||