CN115147448A - Image enhancement and feature extraction method for automatic welding - Google Patents
- Publication number
- CN115147448A CN115147448A CN202210549755.1A CN202210549755A CN115147448A CN 115147448 A CN115147448 A CN 115147448A CN 202210549755 A CN202210549755 A CN 202210549755A CN 115147448 A CN115147448 A CN 115147448A
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- template
- foreground
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/194 — Segmentation; Edge detection involving foreground-background segmentation
- G06T5/20 — Image enhancement or restoration using local operators
- G06T5/70 — Denoising; Smoothing
- G06T7/13 — Edge detection
- G06T7/136 — Segmentation; Edge detection involving thresholding
- G06T7/155 — Segmentation; Edge detection involving morphological operators
- G06T7/187 — Segmentation; Edge detection involving region growing; region merging; connected component labelling
- G06V10/762 — Image or video recognition using pattern recognition or machine learning using clustering
- G06T2207/20104 — Interactive definition of region of interest [ROI]
- G06T2207/30152 — Solder
Abstract
The invention relates to an image enhancement and feature extraction method for automatic welding, comprising the following steps. Image enhancement of the weldment image: using an RGB separation and normalization method, the normalized Red-, Green- and Blue-channel images are enhanced separately to obtain enhanced R-, G- and B-channel images, which are then recombined by RGB synthesis into a reconstructed weldment image. Foreground segmentation of the weldment image: a region of interest is extracted from the image, and the foreground is then cut out to obtain an RGB foreground weldment image. Feature extraction on the enhanced weldment image: edge detection is performed on the RGB foreground weldment image, and the weld centre line is then extracted to obtain edge and weld seam information. The invention achieves efficient extraction of the weldment image edges and the weld seam.
Description
Technical Field
The invention relates to an image enhancement and feature extraction method for automatic welding, and belongs to the technical field of image processing.
Background
With the development of welding robots and computer vision, automatic welding technology has received important support. In automatic welding, computer vision mainly takes the form of weldment image processing, covering acquisition, extraction, detection and tracking of weldment images.
Weld-seam information extraction and edge detection are key steps of automatic welding: efficient and accurate extraction and detection can precisely locate the welding path of a weldment and benefit the subsequent welding process. Traditional edge detection methods are strongly affected by interference factors such as illumination and are difficult to apply in complicated and changeable welding environments. Weldment identification and accurate weld positioning are therefore the basis of automatic welding, and a dedicated weldment image acquisition and processing system is needed to obtain the edge information and/or weld position information of a weldment, which is of great significance for guiding path planning, part welding and other operations of a welding robot.
Disclosure of Invention
In order to solve the technical problems, the invention provides an image enhancement and feature extraction method for automatic welding, which has the following specific technical scheme:
an image enhancement and feature extraction method for automatic welding comprises the following steps:
step 1: carrying out image enhancement on the image of the welding piece: respectively enhancing the Red channel normalized image, the Green channel normalized image and the Blue channel normalized image by using an RGB separation and normalization method to obtain an enhanced R channel image, an enhanced G channel image and an enhanced B channel image, and then performing RGB synthesis to obtain a reconstructed weldment image;
step 2: segmenting the weldment image: firstly, extracting a target region of the image as a mark, and then performing image foreground segmentation to obtain the region of interest of the weldment image;
step 3: performing feature extraction on the enhanced weldment image: performing edge detection on the weldment image obtained in step 2, and then extracting the weld centre line to obtain edge and/or weld seam information.
Further, step 1 adopts a weldment image enhancement algorithm based on a Retinex model, and the specific process is as follows:
Retinex theory performs image enhancement on a single-channel grey-scale image, so grey-scale conversion is usually required before enhancement;
in the Retinex model, an observed image I(x, y) is decomposed into two parts: one is the incident light on the object, namely the illumination image, corresponding to the low-frequency part of the image; the other is the light reflected by the object, namely the reflectance image, corresponding to the high-frequency part of the image. The expression is as follows:
I(x,y)=R(x,y)*L(x,y) (1)
where I (x, y) is the observed image signal, R (x, y) is the reflected component of the image, and L (x, y) is the incident component of the image;
firstly, extracting gray level images of three channels of R, G and B, then respectively using a Retinex algorithm to enhance each channel, and finally synthesizing a color RGB image.
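As an illustration of this per-channel procedure, the following pure-Python sketch implements a minimal single-scale Retinex on one channel, using a 3 × 3 box blur as a stand-in for the Gaussian surround that estimates the illumination L(x, y). The function names and the blur choice are illustrative assumptions, not the patent's implementation.

```python
import math

def box_blur(img):
    """3x3 box blur with edge replication; a stand-in estimate of the
    illumination L(x, y) (the low-frequency part of the image)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    s += img[yy][xx]
                    n += 1
            out[y][x] = s / n
    return out

def single_scale_retinex(channel):
    """Enhance one channel: r(x, y) = log I(x, y) - log L(x, y),
    following the decomposition I = R * L of formula (1)."""
    L = box_blur(channel)
    h, w = len(channel), len(channel[0])
    return [[math.log(channel[y][x] + 1.0) - math.log(L[y][x] + 1.0)
             for x in range(w)] for y in range(h)]
```

Applying `single_scale_retinex` to each of the R, G and B channels and recombining the results corresponds to the synthesis step described above.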
Further, step 2 adopts a GrabCut-based weldment image segmentation algorithm. The specific process is as follows: firstly, a target region is selected by threshold segmentation, and this region is then applied as a foreground mark in the GrabCut automatic segmentation algorithm, thereby achieving automatic segmentation of the image and extracting the region of interest of the weldment image.
Further, the specific process of selecting the target area by threshold segmentation comprises the following steps:
1) Image binarization: search for the threshold that minimizes the intra-class variance, iterating several times to obtain the most suitable threshold, at which the inter-class variance between the foreground and background images is largest, and binarize the image with this threshold;
2) Mathematical morphology processing: mathematical morphology comprises dilation, erosion, opening and closing. Dilation and erosion are implemented by translating a structuring element over the image: erosion turns the inner boundary of a region into background, shrinking the region inward, while dilation turns the outer boundary of a region into object, expanding the region outward. A dilation algorithm is chosen to process the foreground image, so that the dilated foreground covers the foreground target region;
3) Connected-component labelling: neighbouring pixels with the same pixel value are found and marked.
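A minimal sketch of step 1) — exhaustively searching for the threshold that maximizes the inter-class variance (equivalently, minimizes the intra-class variance) — might look like this in pure Python; `otsu_threshold` is an illustrative name, not taken from the patent:

```python
def otsu_threshold(gray):
    """Exhaustive search for the threshold T maximizing the inter-class
    variance between foreground and background (8-bit grey levels)."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b = sum(hist[:t + 1])          # background weight
        w_o = total - w_b                # target weight
        if w_b == 0 or w_o == 0:
            continue
        u_b = sum(i * hist[i] for i in range(t + 1)) / w_b
        u_o = sum(i * hist[i] for i in range(t + 1, 256)) / w_o
        var = w_b * w_o * (u_b - u_o) ** 2  # between-class variance (up to scale)
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels at or below the returned threshold are taken as background, the rest as foreground, giving the binary mask that the morphology and labelling steps then refine.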
Further, in the conventional GrabCut algorithm, the set of all pixels in an image is denoted I and the number of pixels n. To describe the relationship between adjacent elements of the image, an unordered pair of adjacent elements is written {i, j} (i, j ∈ I), and the set of such pairs is written P. A binary vector A = {A_1, A_2, ..., A_i, ..., A_n} is defined, where A_i is the label of the i-th pixel of I; in the binary segmentation task, A_i = 0 denotes background and A_i = 1 denotes foreground. A corresponding undirected graph ξ = <u, v> is created from the input image, where a node u corresponds to a pixel i ∈ I and v is the set of edges of the undirected graph. Besides these nodes and edges, a foreground terminal node s and a background terminal node t are added to the undirected graph to assist in separating the information of the whole graph. The foreground node s, background node t and image pixel nodes u satisfy:
u=I∪{s,t} (2.1)
since segmentation is usually performed on a specific target region, a constraint (generally a soft constraint in the form of a Markov random field, MRF) needs to be applied to the boundaries and regions of the above undirected graph ξ, which can be described by the following energy function E(A):
E(A,k,θ,I)=U(A,k,θ,I)+V(A,I) (2.2)
where U(A, k, θ, I) applies a penalty to each pixel for belonging to the foreground or the background; its specific form is the negative log-likelihood of the pixel belonging to the foreground or background. V(A, I) is the boundary penalty term, i.e. the penalty on an undirected-graph edge formed by two pixels. GrabCut seeks the cut of minimum energy: in the s-t graph model of Fig. 3, the green cutting path is chosen so that the total energy of the red, yellow and blue curves it crosses is minimal, i.e. the cut runs along the thin curves;
from the perspective of the mathematical model, the mathematical expression of U (A, k, theta, I) is
Wherein D (A) i ,k i Theta, i) is
Wherein the parameter θ of the GMM mainly comprises
θ={π(A,K),μ(A,K),σ(A,k),A=0,1;k=1,2,...,K} (2.5)
Wherein K refers to a gaussian component; for the RGB color space, typically K =5, and K gaussian component full covariance GMMs in the following formula are used to model the foreground and background in the image;
wherein pi j Represents a weight, g j (x,μ j ,σ j ) Is composed of
Wherein d is the data dimension, and it can be seen from the above formula that each Gaussian component in GMM has weight pi, mean vector mu and covariance matrix sigma, in RGB imageIn (2), the mean value mu is a 3 x 1 vector, and sigma is a 3 x 3 matrix; after GMM modeling, a vector k = { k) can be obtained 1 ,...,k i ,...,k n In which k is i (k i E {1, K }) is a gaussian component corresponding to the pixel i, and after modeling, any pixel in the image is divided into a foreground or a background;
penalty term V for the connected edge between pixels p and q:
as can be seen from equation (2.5), when the difference between the I and j elements in the neighborhood is very small, the I and j elements probably belong to the same region, and the penalty term V (a, I) is larger; when the difference between the two pixels is large, the two pixels do not belong to the same region but are positioned at the edge part, and the punishment items V (A, I) are small at the moment, so that the two pixels are convenient to divide;
the GrabCut automatic segmentation algorithm comprises the following steps:
(1) Input the image; mark the foreground region F in the image using the region of interest obtained by threshold segmentation, and mark the region outside it as the background region B;
(2) Set the label of each pixel in the foreground region F to A_i = 1 and the label of each pixel in the background region B to A_i = 0;
(3) Cluster the pixels in F and B separately with the K-means clustering algorithm according to the chosen value of K;
(4) Initialize the GMMs of K Gaussian components from F (A_i = 1) and B (A_i = 0), estimating the corresponding GMM parameters (π, μ, σ) from the clustered pixels;
(5) Feed each pixel in F into the two GMMs to obtain the corresponding probabilities p(i | F) and p(i | B), from which the region penalty term U is computed;
(6) Compute the distance between adjacent pixels in F, from which the boundary penalty term V is computed;
(7) Following the energy-minimization principle, compute the minimum energy min E(A, k, θ, I) by the min-cut/max-flow method; feed the result back into the foreground region F and reassign pixel labels in F as in step (2);
(8) Repeat steps (4) to (7) until the final segmentation result is obtained.
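Step (4) — estimating the GMM parameters (π, μ, σ) from the clustered pixels — can be sketched as follows. This pure-Python helper (an illustrative name, not the patent's code) computes the weight, the 3 × 1 mean vector and the 3 × 3 covariance matrix of each Gaussian component from RGB pixels and their K-means component assignments:

```python
def gmm_params(pixels, comp, K):
    """Estimate (pi, mu, sigma) for each of K Gaussian components from RGB
    pixels (3-tuples) and their component assignments comp (step (4))."""
    n = len(pixels)
    params = []
    for k in range(K):
        members = [p for p, c in zip(pixels, comp) if c == k]
        pi_k = len(members) / n                        # component weight pi
        if not members:
            params.append((0.0, None, None))
            continue
        mu = tuple(sum(p[d] for p in members) / len(members) for d in range(3))
        # full 3x3 covariance matrix sigma of the component
        sigma = [[sum((p[a] - mu[a]) * (p[b] - mu[b]) for p in members) / len(members)
                  for b in range(3)] for a in range(3)]
        params.append((pi_k, mu, sigma))
    return params
```

The resulting (π, μ, σ) triples are exactly what the negative log-likelihood term D of the region penalty consumes in the following iterations.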
Further, step 3 adopts a weldment image feature extraction algorithm based on an improved adaptive Canny operator. The specific process is as follows:
step 3.1: weldment image edge detection based on the improved adaptive Canny operator: first, an eight-direction 5 × 5 Sobel detection template is constructed to replace the original detection template of the Canny operator, widening the detection range of the algorithm and effectively improving edge-detection accuracy; then adaptive Gaussian filtering replaces the fixed Gaussian filtering to improve the denoising effect; finally, the gradient-magnitude histogram and the inter-class variance are introduced to complete automatic threshold selection;
step 3.2: extracting the weld centre line based on Z-S thinning:
step 3.2.1: image refinement
During the cyclic traversal of the image, each iteration applies two sub-operations to thin the image, as follows:
(1) for a pixel p_0 with value 1, if p_0 satisfies the following conditions, mark p_0 as a deletion candidate; after the scan is complete, delete the marked pixels, i.e. set p_0 = 0:
2 ≤ N(p_0) ≤ 6,  S(p_0) = 1,  p_2 · p_4 · p_6 = 0,  p_4 · p_6 · p_8 = 0
where N(p_0) is the number of non-zero pixels in the 8-neighbourhood of p_0, and S(p_0) is the number of times the pixel value changes from 0 to 1 while going clockwise once around the 8-neighbourhood of p_0;
(2) for a pixel p_0 with value 1, if p_0 simultaneously satisfies
2 ≤ N(p_0) ≤ 6,  S(p_0) = 1,  p_2 · p_4 · p_8 = 0,  p_2 · p_6 · p_8 = 0
set p_0 = 0.
The operation is repeated until no deletable pixel remains in the image, at which point the thinning process ends;
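The neighbourhood functions N(p_0) and S(p_0) used in the thinning conditions can be sketched as follows (pure Python; the clockwise ordering p_2 … p_9 starting from the pixel directly above p_0 is one common convention of the Zhang-Suen algorithm, assumed here):

```python
def neighbours(img, y, x):
    """8-neighbourhood of p0 = img[y][x] in clockwise order p2..p9,
    starting from the pixel directly above p0."""
    return [img[y - 1][x], img[y - 1][x + 1], img[y][x + 1], img[y + 1][x + 1],
            img[y + 1][x], img[y + 1][x - 1], img[y][x - 1], img[y - 1][x - 1]]

def n_count(nb):
    """N(p0): number of non-zero pixels among the 8 neighbours."""
    return sum(nb)

def s_count(nb):
    """S(p0): number of 0 -> 1 transitions going clockwise once around p0."""
    return sum((a, b) == (0, 1) for a, b in zip(nb, nb[1:] + nb[:1]))
```

A pixel survives a sub-iteration unless both counting conditions and the corresponding p_2/p_4/p_6/p_8 products are satisfied.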
step 3.2.2: weld centerline extraction
Traverse every pixel of the input image, aligning the centre of the structuring element with the pixel currently visited; then, within the area covered by the structuring element, find the maximum of the corresponding pixels of the input image and replace the current pixel value with that maximum. Since the input image is binary, pixel values are only 0 and 1, and a maximum of 1 turns the point into white foreground.
Therefore, if the structuring element covers only background pixels, all covered pixels are 0 and the input image is unchanged; if it covers only foreground pixels, all covered pixels are 1 and the image is likewise unchanged. Only when the structuring element lies on the edge of a foreground object do both pixel values 0 and 1 appear in the covered area, and the current pixel value 0 is then replaced by 1. In other words, for a tiny break in a foreground object, if the structuring element is at least as large as the break, the break becomes connected; that is, separate lines or broken edges are joined together.
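The dilation described above — replacing each pixel with the maximum under a 3 × 3 structuring element so that one-pixel breaks are bridged — can be sketched as:

```python
def dilate(img):
    """Binary dilation with a 3x3 structuring element: each output pixel is
    the maximum of the input pixels covered by the element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(img[yy][xx]
                            for yy in range(max(y - 1, 0), min(y + 2, h))
                            for xx in range(max(x - 1, 0), min(x + 2, w)))
    return out
```

On the row `[1, 1, 0, 1, 1]` the single-pixel gap is filled, exactly the joining behaviour described for broken edges.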
Further, the specific process in the step 3.1 is as follows:
step 3.1.1 building template based on eight-direction Sobel operator
The Sobel operator computes the brightness gradient of the image with a discrete difference operator; applying the Sobel operator at any point of the image yields the corresponding gradient vector.
The template based on the eight-direction Sobel operator extends the traditional 3 × 3 Sobel template to a 5 × 5 template. The flow of the eight-direction edge detection algorithm is: for each pixel of the image, convolve the eight directional templates with the corresponding image neighbourhood, then take the largest grey value among the results as the output value of the current pixel.
For the 5 × 5 template, the weight of each template position is derived as follows:
Let the coordinates of a pixel in the template be (m, n) and the centre pixel of the template be (i, j). The Euclidean distance from (m, n) to (i, j) is
d(m, n) = √[(m − i)² + (n − j)²]    (4.1)
Let g(m, n) be the real-valued weight at (m, n), computed as
ln g(m, n) = −[d(m, n)² − u] ln 2    (4.2)
where u is an adjustment coefficient that depends on the template size; for a 5 × 5 template, u = 3.
The weight w(m, n) of each position is then
w(m, n) = ⌈g(m, n)⌉    (4.3)
In formula (4.3), to simplify the calculation, g(m, n) from formula (4.2) is rounded to give the template element, where ⌈ ⌉ denotes the round-up operation.
The weight is related to the distance between the template pixel (m, n) and the template centre pixel (i, j): plotting the distance to the centre pixel on the abscissa against the weight of the point on the ordinate shows the weight decreasing with distance;
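The weight derivation of formulas (4.2) and (4.3) can be reproduced directly; with u = 3 this gives a centre weight of 2³ = 8, 4 on the horizontal/vertical neighbours, 2 on the diagonals, and 1 on the outer ring (the helper name is illustrative):

```python
import math

def template_weights(size=5, u=3):
    """Weights of the eight-direction Sobel template per (4.2)-(4.3):
    ln g(m, n) = -(d(m, n)^2 - u) ln 2, i.e. g = 2 ** (u - d^2); w = ceil(g)."""
    c = size // 2  # centre pixel (i, j)
    w = [[0] * size for _ in range(size)]
    for m in range(size):
        for n in range(size):
            d2 = (m - c) ** 2 + (n - c) ** 2  # squared Euclidean distance d^2
            w[m][n] = math.ceil(2.0 ** (u - d2))
    return w
```

The resulting symmetric weight pattern is then combined with the directional sign masks to form the eight 5 × 5 convolution templates.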
step 3.1.2 adaptive Gaussian Filtering
The basic principle of denoising by the adaptive Gaussian filter is as follows:
the two-dimensional gaussian function is expressed as follows:
in the formula (4.4), the mean is 0 and the variance is σ 2 。
Discretizing the two-dimensional continuous Gaussian function to obtain a Gaussian template, wherein the expression is as follows:
equation (4.5) may determine each pixel value in the neighborhood.
In the gaussian template expression, the weight of the gaussian template is easily affected by the variance σ 2 In Gaussian filtering, σ 2 When the image is too small, the neighborhood can be degraded into the point operation of the image, and the denoising capability is very poor; sigma 2 Too large, the gaussian filter degenerates to the mean template, losing image detail, and therefore the appropriate σ needs to be selected 2 Values that preserve image detail while de-noising the image,
for equation (4.5), taking k =1 and σ =1, we get a gaussian template of order 3 × 3, the expression is as follows:
the expression of the variance D in a certain region of the image is as follows:
wherein:
S i,j representing a neighborhood around the center point (i, j).
If the variance D is large, the sigma of the selected Gaussian template 2 N is smaller; if the variance D is smaller, the opposite is true;
adaptive Gaussian filtering, i.e. the Gaussian template parameter σ is selected autonomously according to the value of the variance D in the region 2 And a Gaussian template n to achieve the image denoising effect, and a parameter sigma 2 The expression of (c) is as follows:
during filtering, m is specified 1 、m 2 Using a 3X 3 template, m 3 、m 4 A 5 × 5 mode is used;
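A sketch of the selection logic: the local variance D is computed over a neighbourhood, then σ² and the template size are chosen piecewise. The threshold values m_1 … m_4 and the concrete (σ², size) pairs below are hypothetical placeholders — they do not survive in the extracted patent text — illustrating only the stated rule that larger local variance (more detail) gets a smaller σ² and smaller template:

```python
def local_variance(img, y, x, r=1):
    """Variance D of the grey values in the neighbourhood S_{i,j} around (y, x)."""
    vals = [img[yy][xx]
            for yy in range(max(y - r, 0), min(y + r + 1, len(img)))
            for xx in range(max(x - r, 0), min(x + r + 1, len(img[0])))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def choose_sigma(D, m=(10.0, 50.0, 200.0, 1000.0)):
    """Piecewise choice of (sigma^2, template size) from the local variance D.
    Thresholds m1..m4 and the (sigma^2, size) pairs are hypothetical."""
    if D >= m[3]:
        return 0.5, 3   # strong detail: small sigma^2, small template
    if D >= m[2]:
        return 0.8, 3
    if D >= m[1]:
        return 1.2, 5
    return 2.0, 5       # flat region: larger sigma^2, larger template
```

Per-pixel, the filter would call `local_variance`, pick the template with `choose_sigma`, and convolve with the corresponding discretized Gaussian of formula (4.5).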
step 3.1.3 adaptive threshold selection
Let the total number of pixels in the image be N, the grey-scale range of the image be [0, L − 1], and the number of pixels with grey level i be N_i. The probability of grey level i is then
P_i = N_i / N    (4.10)
Assume that the pixels with grey values in [0, T] constitute the background and those in [T + 1, L − 1] constitute the target, with 0 ≤ T ≤ L − 1. The average grey values of the background and the target are
u_b(T) = Σ_{i=0}^{T} i P_i / w_b(T),  u_o(T) = Σ_{i=T+1}^{L−1} i P_i / w_o(T)
where
w_b(T) = Σ_{i=0}^{T} P_i,  w_o(T) = Σ_{i=T+1}^{L−1} P_i
The mean grey value of the whole image is defined as
u = u_b(T) w_b(T) + u_o(T) w_o(T)    (4.13)
and the inter-class variance of the background and target pixels as
σ²(T) = w_b(T) [u_b(T) − u]² + w_o(T) [u_o(T) − u]²    (4.14)
The pixels of the edge map obtained by the Canny operator after non-maximum suppression are divided into three intervals D_1, D_2, D_3. D_1 comprises the pixels with gradient magnitudes t_1, t_2, ..., t_k, representing non-edge points of the original image; D_2 comprises the pixels with gradient magnitudes t_{k+1}, t_{k+2}, ..., t_m, representing points of the original image that still need to be decided as edge or non-edge; D_3 comprises the pixels with gradient magnitudes t_{m+1}, t_{m+2}, ..., t_l, representing edge points of the original image.
Let the total number of pixels in the original image be N and the number of pixels with grey gradient t_j be n_j. The probability of gradient t_j is then
P_j = n_j / N
The expected gradient magnitude over each interval and over the whole image is computed analogously to the grey-level definitions above; with class weights w_1, w_2, w_3 and class expectations u_1, u_2, u_3 for D_1, D_2, D_3, the three-class inter-class variance is
σ²(k, m) = w_1 (u_1 − u)² + w_2 (u_2 − u)² + w_3 (u_3 − u)²    (4.24)
For a given input image, t_j and P_j are obtained from the gradient histogram of the image; the number of gradient levels l is usually 64. Formula (4.24) reduces to a binary quadratic function of k and m, with k ranging over [1, l] and m over [k + 1, l]. σ²(k, m) describes the inter-class variance, which in the mathematical-statistical sense is a good criterion for testing the separation between classes; hence the t_k and t_m corresponding to the maximum of σ²(k, m) are the boundary points of the three classes D_1, D_2, D_3 and are also the high and low thresholds of the Canny operator.
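The double-threshold search can be sketched as a brute-force maximization of the three-class inter-class variance σ²(k, m) over the gradient histogram (function name illustrative; with l = 64 levels the search covers only a few thousand candidate pairs):

```python
def double_threshold(hist):
    """Brute-force search for (k, m) maximizing the three-class inter-class
    variance sigma^2(k, m) over a gradient-magnitude histogram."""
    l = len(hist)
    total = sum(hist)
    u_all = sum(i * hist[i] for i in range(l)) / total
    best_var, best_km = -1.0, (1, 2)
    for k in range(1, l - 1):
        for m in range(k + 1, l):
            var, ok = 0.0, True
            for lo, hi in ((0, k), (k, m), (m, l)):
                cnt = sum(hist[lo:hi])
                if cnt == 0:          # skip splits that empty a class
                    ok = False
                    break
                w = cnt / total       # class weight
                u = sum(i * hist[i] for i in range(lo, hi)) / cnt  # class mean
                var += w * (u - u_all) ** 2
            if ok and var > best_var:
                best_var, best_km = var, (k, m)
    return best_km  # D1 = [0, k), D2 = [k, m), D3 = [m, l)
```

The returned pair (k, m) gives the low and high hysteresis thresholds fed to the Canny edge tracker.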
The beneficial effects of the invention are as follows:
The Retinex-model-based weldment image enhancement algorithm raises the overall brightness of the processed image and increases image sharpness while retaining all details, which is a great advantage when segmenting the weldment image. For weldment images with a complex background, the invention introduces the target region obtained by threshold segmentation as a foreground mark for the GrabCut algorithm, so that foreground segmentation of the image proceeds automatically.
For the extraction of image edges and/or weld seams, the invention introduces the eight-direction Sobel operator, adaptive Gaussian filtering, adaptive threshold selection and the like on top of the Canny operator, improving the detection performance of the algorithm so that the edges and/or weld seams of workpieces are detected well. Considering that the weld seam of a workpiece has a certain width, the invention provides a weld centre-line extraction algorithm to obtain weld and/or edge information of single-pixel width.
For weld images in complex scenes or on a simple experimental bench, the method of the invention accurately extracts uninterrupted weld information of single-pixel width.
Drawings
Figure 1 is a schematic diagram of a weldment image processing algorithm of the present invention,
figure 2 is a flow chart of the GrabCut-based weldment image segmentation algorithm of the present invention,
figure 3 is an s-t graph model in the GrabCut algorithm of the present invention,
figure 4 is a flow chart of the GrabCut automated segmentation algorithm of the present invention,
figure 5 is a schematic view of an eight-directional template of the present invention,
figure 6 is an eight-way Sobel operator convolution template of the present invention,
figure 7 is a graphical representation of the template weight versus distance for the eight-direction template of the present invention,
figure 8 is a diagram of a test image of the present invention and its effects,
the method comprises the following steps of (a) original image, (b) enhancement effect, (c) image segmentation, (d) eight-direction operator effect, (e) self-adaptive Gaussian filtering effect, (f) edge detection effect, (g) expansion and corrosion treatment effect, and (h) final effect.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the image enhancement and feature extraction method for automatic welding of the present invention includes the following steps:
step 1: image enhancement of the weldment image. Retinex theory performs image enhancement on a single-channel grey-scale image, so grey-scale conversion is usually required before enhancement. In the Retinex model, an observed image I(x, y) is decomposed into two parts: one is the incident light on the object, namely the illumination image, corresponding to the low-frequency part of the image; the other is the light reflected by the object, namely the reflectance image, corresponding to the high-frequency part of the image. The expression is as follows:
I(x,y)=R(x,y)*L(x,y) (1)
where I (x, y) is the observed image signal, R (x, y) is the reflected component of the image, and L (x, y) is the incident component of the image.
Firstly, extracting gray level images of R, G and B channels, then respectively enhancing each channel by using a Retinex algorithm, namely respectively enhancing a Red channel normalized image, a Green channel normalized image and a Blue channel normalized image by using an RGB separation and normalization method to obtain an enhanced R channel image, a G channel image and a B channel image, and finally synthesizing a color RGB image to obtain a reconstructed weldment image.
Step 2: segmenting the weldment image: firstly, extracting a target region of the image as a mark, and then performing image foreground segmentation to obtain the region of interest of the weldment image;
Step 3: feature extraction on the enhanced weldment image. Edge detection is performed on the RGB foreground weldment image obtained in step 2, and the weld centerline is then extracted to obtain edge and/or weld seam information.
The implementation of step 2 is described below with reference to FIGS. 2-4.
Step 2 adopts a weldment image segmentation algorithm based on GrabCut (see FIG. 2): a target region is first selected through threshold segmentation and then supplied to the GrabCut automatic segmentation algorithm as a foreground mark, so that automatic segmentation of the image is realized and the region of interest of the weldment image is extracted.
The specific process of selecting the target area by threshold segmentation comprises the following steps:
1) Image binarization: search for the threshold that minimizes the intra-class variance, iterating several times to obtain the most appropriate threshold, and use it for binary segmentation of the image; at this threshold the inter-class variance between the foreground and background images is largest;
2) Mathematical morphology processing: mathematical morphology comprises dilation, erosion, opening and closing. Dilation and erosion are realized by translating a structuring element over the image: erosion turns the inner boundary of a region into background, shrinking the region inward, while dilation turns the outer boundary of a region into object, expanding the region outward. A dilation algorithm is selected to process the foreground image so that the dilated foreground image covers the foreground target region;
3) And (3) connected domain marking: the neighboring pixels with the same pixel value are found and marked.
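The three marker-selection steps above (Otsu binarization, dilation, connected-component labelling) can be sketched in plain NumPy. This is an illustrative sketch: the exhaustive Otsu search, the square structuring element and the 4-connected flood fill are standard textbook versions, not the patent's specific implementation.

```python
import numpy as np

def otsu_threshold(gray):
    # exhaustive search for the threshold maximizing the between-class variance
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b, w_o = p[:t + 1].sum(), p[t + 1:].sum()
        if w_b == 0 or w_o == 0:
            continue
        u_b = (np.arange(t + 1) * p[:t + 1]).sum() / w_b
        u_o = (np.arange(t + 1, 256) * p[t + 1:]).sum() / w_o
        var = w_b * w_o * (u_b - u_o) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def dilate(binary, k=3):
    # morphological dilation with a k x k square structuring element
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant")
    out = np.zeros_like(binary)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def label_components(binary):
    # 4-connected component labelling by iterative flood fill
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and not labels[i, j]:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < binary.shape[0] and 0 <= x < binary.shape[1]
                            and binary[y, x] and not labels[y, x]):
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current
```
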
In the traditional GrabCut algorithm, the set of all pixels in an image is defined as I and the number of pixels as n. To describe the relationship between adjacent elements in the image, an unordered pair of adjacent elements is written {i, j} (i, j ∈ I), and the set of such pairs is written P. A binary vector A = {A_1, A_2, ..., A_i, ..., A_n} is set, in which A_i is the label of the i-th pixel in I; in the binary division task, A_i = 0 represents background and A_i = 1 represents foreground. A corresponding undirected graph ξ = <u, v> is created from the input image, where a node u corresponds to a pixel i ∈ I of the image and v is the set of edges in the undirected graph. Besides the nodes and edges, a foreground node s and a background node t are added to the undirected graph to assist in separating the information of the whole graph. The relationship among the foreground node s, the background node t and the image pixel nodes u satisfies:
u=I∪{s,t} (2.1)
Since segmentation is usually performed on a specific target region, a constraint, generally a soft constraint in the form of a Markov random field (MRF), needs to be applied to the boundaries and regions of the above undirected graph ξ; it can be described by the following energy function E(A):
E(A,k,θ,I)=U(A,k,θ,I)+V(A,I) (2.2)
wherein U(A, k, θ, I) is the region term, which applies a penalty according to whether a pixel belongs to the foreground or the background; its specific form is the negative log-likelihood of the pixel belonging to the foreground or background. V(A, I) is the boundary term, i.e. the penalty on an edge of the undirected graph formed by two pixels. GrabCut seeks the cut with minimized energy: in FIG. 3, the green cutting path passes where the sum of the red, yellow and blue curves has the lowest energy, i.e. the cut runs along the thin curve.
From the perspective of the mathematical model, the mathematical expression of U(A, k, θ, I) is

U(A, k, θ, I) = Σ_i D(A_i, k_i, θ, i) (2.3)

wherein D(A_i, k_i, θ, i), the negative log-likelihood of pixel i under its assigned Gaussian component, is

D(A_i, k_i, θ, i) = -log π(A_i, k_i) + (1/2) log det σ(A_i, k_i) + (1/2) [z_i - μ(A_i, k_i)]^T σ(A_i, k_i)^(-1) [z_i - μ(A_i, k_i)] (2.4)

where z_i is the color value of pixel i.
Wherein the parameter θ of the GMM mainly comprises
θ={π(A,K),μ(A,K),σ(A,k),A=0,1;k=1,2,...,K} (2.5)
wherein K is the number of Gaussian components; for the RGB color space, typically K = 5, and a full-covariance GMM of K Gaussian components is used to model the foreground and the background in the image:

P(x) = Σ_{j=1}^{K} π_j g_j(x, μ_j, σ_j)

wherein π_j represents a weight and g_j(x, μ_j, σ_j) is

g_j(x, μ_j, σ_j) = 1 / ((2π)^(d/2) |σ_j|^(1/2)) · exp(-(1/2) (x - μ_j)^T σ_j^(-1) (x - μ_j))

where d is the data dimension. From the above formula, each Gaussian component in the GMM has a weight π, a mean vector μ and a covariance matrix σ; in an RGB image, the mean μ is a 3 × 1 vector and σ is a 3 × 3 matrix. After GMM modeling, a vector k = {k_1, ..., k_i, ..., k_n} is obtained, in which k_i (k_i ∈ {1, ..., K}) is the Gaussian component corresponding to pixel i; after modeling, every pixel in the image is assigned to the foreground or the background;
The penalty term V for the connected edge between pixels p and q is

V(A, I) = γ Σ_{{p,q} ∈ P} [A_p ≠ A_q] · exp(-β ‖z_p - z_q‖²)

where γ is a weighting constant and β is set from the expected contrast between neighboring pixels.
It can be seen from the expression of V(A, I) that when the difference between adjacent elements i and j is very small, they most probably belong to the same region and the penalty term V(A, I) is larger; when the difference between the two pixels is larger, they most probably do not belong to the same region but lie at an edge, and the penalty term V(A, I) is then smaller, which facilitates segmentation;
The GrabCut automatic segmentation algorithm comprises the following steps (see FIG. 4):
(1) An image is input; the region of interest obtained by threshold segmentation marks the foreground region F, and the region outside this rectangular region is marked as the background region B;
(2) The labels of pixels in the foreground region F are set to A_i = 1, and the labels of pixels in the background region B are set to A_i = 0;
(3) The pixels in F and B are clustered separately by the K-means algorithm according to the set K value;
(4) GMMs of K Gaussian components are initialized based on F (A_i = 1) and B (A_i = 0), and the corresponding GMM parameters (π, μ, σ) are obtained;
(5) Each pixel in F is substituted into the two GMMs to obtain the corresponding probabilities p(i|F) and p(i|B), from which the region penalty term U is obtained;
(6) The distance between two adjacent pixels in F is calculated, from which the boundary penalty term V is obtained;
(7) The minimum of the energy, min E(A, k, θ, I), is calculated by the min-cut/max-flow method according to the energy minimization principle; the result is fed back into the foreground region F, and labels are reassigned to the pixels in F as in step (2);
(8) Steps (4) to (7) are repeated until the final segmentation result is obtained.
The implementation of step 3 is described below with reference to FIGS. 5-7.
Step 3, adopting an improved self-adaptive Canny operator-based weldment image feature extraction algorithm, and specifically comprising the following steps:
Step 3.1: weldment image edge detection based on an improved adaptive Canny operator. Firstly, an eight-direction 5 × 5 Sobel detection template is constructed to replace the original detection template in the Canny operator, expanding the detection range of the algorithm and effectively improving edge detection precision; adaptive Gaussian filtering then replaces ordinary Gaussian filtering, improving the denoising effect; finally, a gradient magnitude histogram and the inter-class variance are introduced to complete automatic threshold selection.
Step 3.1.1: building the template based on the eight-direction Sobel operator (see FIG. 5)
The Sobel operator calculates the brightness gradient of the image based on a discrete difference operator; applying the Sobel operator at any point of the image yields the corresponding gradient vector.
The eight-direction Sobel template extends the traditional 3 × 3 Sobel operator to a 5 × 5 template. The flow of the eight-direction edge detection algorithm is as follows: for each pixel in the image, the eight direction templates are convolved with the corresponding image neighborhood, and the largest of the resulting responses is taken as the output value of the current pixel (see FIG. 6).
In a 5 × 5 template, the derivation steps of the weight of each position in the template are as follows:
Let the coordinates of a pixel in the template be (m, n) and the center pixel of the template be (i, j). The Euclidean distance from (m, n) to (i, j) is d(m, n), expressed as follows:

d(m, n) = sqrt((m - i)² + (n - j)²) (4.1)
let g (m, n) be the real number weight at (m, n), and the calculation formula is as follows:
ln g(m, n) = -[d(m, n)² - u] ln 2 (4.2)
In the above equation, u is an adjustment coefficient that depends on the size of the template; for the 5 × 5 template, u = 3.
the weight w (m, n) for each position is expressed as follows:
w(m, n) = ⌈g(m, n)⌉ (4.3)

In formula (4.3), to simplify the calculation, g(m, n) from formula (4.2) is rounded up and used as the element of the template; "⌈ ⌉" denotes the ceiling operation.
The weight is related to the distance from the template pixel at (m, n) to the template center pixel at (i, j); in FIG. 7 the abscissa is the distance between the template pixel and the center pixel and the ordinate is the weight of that point.
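The weight derivation and the convolve-and-take-maximum flow can be sketched as follows. The weight magnitudes come from formula (4.2)/(4.3) (g = 2^(u − d²), rounded up); the sign pattern used to turn the magnitudes into eight directional templates (positive/negative on either side of a line through the center) is an illustrative assumption, since the patent does not print the eight templates themselves.

```python
import numpy as np

def weight_matrix(size=5, u=3.0):
    # |w| from ln g = -(d^2 - u) ln 2  =>  g = 2 ** (u - d^2), w = ceil(g)
    c = size // 2
    ax = np.arange(size) - c
    xx, yy = np.meshgrid(ax, ax)
    g = 2.0 ** (u - (xx ** 2 + yy ** 2))
    return np.ceil(g), xx, yy

def direction_templates(size=5, u=3.0):
    # one signed template per 45-degree direction; the sign assignment
    # (which side of the dividing line through the center) is illustrative
    w, xx, yy = weight_matrix(size, u)
    templates = []
    for k in range(8):
        theta = k * np.pi / 4.0
        proj = xx * np.cos(theta) + yy * np.sin(theta)
        templates.append(w * np.sign(np.round(proj, 6)))
    return templates

def eight_direction_response(gray, templates):
    # convolve every pixel neighborhood with all eight templates
    # and keep the maximum absolute response as the output value
    size = templates[0].shape[0]
    pad = size // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    h, w = gray.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + size, j:j + size]
            out[i, j] = max(abs((patch * t).sum()) for t in templates)
    return out
```
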
Step 3.1.2 adaptive Gaussian Filtering
The basic principle of denoising by the adaptive Gaussian filter is as follows:
The two-dimensional Gaussian function is expressed as follows:

G(x, y) = 1 / (2πσ²) · exp(-(x² + y²) / (2σ²)) (4.4)

In formula (4.4), the mean is 0 and the variance is σ².
Discretizing the two-dimensional continuous Gaussian function yields the Gaussian template; for a (2k + 1) × (2k + 1) template its elements are

H(i, j) = 1 / (2πσ²) · exp(-[(i - k - 1)² + (j - k - 1)²] / (2σ²)) (4.5)

Formula (4.5) determines each value in the template neighborhood.
In the Gaussian template expression, the template weights are governed by the variance σ². In Gaussian filtering, if σ² is too small, the neighborhood operation degenerates into a point operation on the image and the denoising capability is very poor; if σ² is too large, the Gaussian filter degenerates into a mean template and image detail is lost. It is therefore necessary to select an appropriate σ² value that preserves image detail while denoising the image.
For formula (4.5), taking k = 1 and σ = 1 gives a 3 × 3 Gaussian template, which after normalization is approximately (1/16)·[1 2 1; 2 4 2; 1 2 1].
The variance D over a certain region of the image is expressed as follows:

D = (1/M) Σ_{(x,y)∈S_{i,j}} [f(x, y) - μ_S]²

wherein μ_S = (1/M) Σ_{(x,y)∈S_{i,j}} f(x, y), M is the number of pixels in the region, and S_{i,j} represents the neighborhood around the center point (i, j).
If the variance D is large, a smaller Gaussian template parameter σ² and template size n are selected; if the variance D is small, the opposite holds.
Adaptive Gaussian filtering thus selects the Gaussian template parameter σ² and template size n autonomously according to the value of the variance D in the region, achieving the image denoising effect; the parameter σ² is expressed as follows:
During filtering, it is specified that for m_1 and m_2 a 3 × 3 template is used, and for m_3 and m_4 a 5 × 5 template is used.
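The adaptive selection can be sketched as below. The patent defines intervals m_1..m_4 but does not print their numeric values, so the two thresholds and the three (n, σ) pairs in this sketch are illustrative assumptions; only the rule "large local variance D → smaller σ² and template, small D → larger" comes from the text.

```python
import numpy as np

def gaussian_template(n, sigma):
    # discretized, normalized n x n Gaussian template (formula 4.5)
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def adaptive_gaussian_filter(gray, m=(50.0, 200.0)):
    """Pick sigma and template size per pixel from the local variance D.

    The thresholds in `m` and the (n, sigma) pairs below are illustrative;
    the patent specifies intervals m1..m4 without numeric values.
    """
    img = gray.astype(float)
    h, w = img.shape
    out = np.empty_like(img)
    pad = 2
    padded = np.pad(img, pad, mode="edge")
    for i in range(h):
        for j in range(w):
            region = padded[i:i + 5, j:j + 5]
            d = region.var()            # local variance D around (i, j)
            if d > m[1]:                # strong detail/edge: small sigma, 3x3
                n, sigma = 3, 0.5
            elif d > m[0]:              # moderate detail: 3x3, larger sigma
                n, sigma = 3, 1.0
            else:                       # smooth area: 5x5, strong smoothing
                n, sigma = 5, 2.0
            k = gaussian_template(n, sigma)
            half = n // 2
            patch = region[pad - half:pad + half + 1, pad - half:pad + half + 1]
            out[i, j] = (patch * k).sum()
    return out
```
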
step 3.1.3 adaptive threshold selection
Let the total number of pixels in the image be N and the gray-scale range of the image be [0, L-1], with N_i pixels at gray level i. The probability of gray level i is then

p_i = N_i / N
Assuming that pixels with gray values in [0, T] constitute the background and pixels with gray values in [T+1, L-1] constitute the target, where 0 ≤ T ≤ L-1, the gray means of the background and the target are respectively

u_b(T) = Σ_{i=0}^{T} i·p_i / w_b(T), u_o(T) = Σ_{i=T+1}^{L-1} i·p_i / w_o(T)

wherein w_b(T) = Σ_{i=0}^{T} p_i and w_o(T) = Σ_{i=T+1}^{L-1} p_i are the proportions of background and target pixels. The gray mean of the whole image is defined as:
u = u_b(T)·w_b(T) + u_o(T)·w_o(T) (4.13)
the inter-class variance of pixels of the image background and the object is defined as:
σ²(T) = w_b(T)·[u_b(T) - u]² + w_o(T)·[u_o(T) - u]² (4.14)
The pixels in the edge map obtained by the Canny operator after non-maximum suppression are divided into three intervals D_1, D_2, D_3. D_1 contains the pixels with gradient magnitudes {t_1, t_2, ..., t_k}, representing non-edge points in the original image; D_2 contains the pixels with gradient magnitudes {t_{k+1}, t_{k+2}, ..., t_m}, representing points to be determined as edge or non-edge points; D_3 contains the pixels with gradient magnitudes {t_{m+1}, t_{m+2}, ..., t_l}, representing edge points in the original image.
Let the total number of pixels in the original image be N and the number of pixels with gray gradient t_j be n_j; the probability of gradient t_j in the image is then

P_j = n_j / N
The expected gradient magnitude over the whole image is

u = Σ_{j=1}^{l} t_j P_j

The expected gradient magnitudes in the three classes D_1, D_2, D_3 are

u_1(k) = Σ_{j=1}^{k} t_j P_j / w_1(k), u_2(k, m) = Σ_{j=k+1}^{m} t_j P_j / w_2(k, m), u_3(m) = Σ_{j=m+1}^{l} t_j P_j / w_3(m)

with, simultaneously,

w_1(k) = Σ_{j=1}^{k} P_j, w_2(k, m) = Σ_{j=k+1}^{m} P_j, w_3(m) = Σ_{j=m+1}^{l} P_j, w_1 + w_2 + w_3 = 1

The between-class variance is then

σ²(k, m) = w_1·[u_1 - u]² + w_2·[u_2 - u]² + w_3·[u_3 - u]² (4.24)
For a given input image, t_j and P_j can be obtained from the gradient histogram of the image; the number of gradient levels l is usually 64. Formula (4.24) reduces to a binary quadratic function of k and m, with k taking values in [1, l] and m in [k+1, l]. σ²(k, m) describes the between-class variance, which in the mathematical and statistical sense is a good criterion of the separation between classes; therefore the values of t_k and t_m corresponding to the maximum of σ²(k, m) are the demarcation points of the three classes D_1, D_2, D_3, and are also the low and high thresholds of the Canny operator.
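The two-threshold search just described can be sketched as an exhaustive maximization of the between-class variance over the quantized gradient histogram. The quantization into 64 levels follows the text; the function name and the linspace binning are illustrative choices.

```python
import numpy as np

def dual_thresholds(grad_mag, levels=64):
    # quantize gradient magnitudes into `levels` gradient levels t_j
    g = grad_mag.ravel().astype(float)
    t = np.linspace(g.min(), g.max(), levels)
    idx = np.clip(np.searchsorted(t, g), 0, levels - 1)
    p = np.bincount(idx, minlength=levels) / g.size    # P_j = n_j / N
    u = (t * p).sum()                                  # overall expected magnitude
    best = (-1.0, 1, 2)
    # exhaustive search over all (k, m), k < m, maximizing sigma^2(k, m)
    for k in range(levels - 1):
        for m in range(k + 1, levels):
            w1, w2, w3 = p[:k + 1].sum(), p[k + 1:m + 1].sum(), p[m + 1:].sum()
            var = 0.0
            for wc, lo, hi in ((w1, 0, k + 1), (w2, k + 1, m + 1), (w3, m + 1, levels)):
                if wc > 0:
                    uc = (t[lo:hi] * p[lo:hi]).sum() / wc
                    var += wc * (uc - u) ** 2          # between-class variance term
            if var > best[0]:
                best = (var, k, m)
    _, k, m = best
    return t[k], t[m]   # low and high thresholds for the Canny operator
```
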
Step 3.2: weld centerline extraction based on Zhang-Suen (Z-S) thinning:
step 3.2.1: image refinement
While cyclically traversing the image, each iteration applies two sub-steps of the thinning operation, as follows:
(1) For a pixel p_0 with value 1, if p_0 satisfies the following conditions, p_0 is marked; after the scan is completed, the marked pixels are deleted, i.e. p_0 = 0:

2 ≤ N(p_0) ≤ 6; S(p_0) = 1; p_2·p_4·p_6 = 0; p_4·p_6·p_8 = 0

In the formulas, N(p_0) represents the number of non-0 points in the 8-neighborhood of p_0, S(p_0) represents the number of times the pixel value changes from 0 to 1 when going clockwise once around the 8-neighborhood of p_0, and p_2, p_3, ..., p_9 denote the 8 neighbors of p_0 taken clockwise starting from the pixel directly above p_0;
(2) For a pixel p_0 with value 1, if p_0 simultaneously satisfies

2 ≤ N(p_0) ≤ 6; S(p_0) = 1; p_2·p_4·p_8 = 0; p_2·p_6·p_8 = 0

then let p_0 = 0.
The above operations are repeated until there are no deletable pixels in the image, and the refinement process ends.
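The two-sub-step iteration can be sketched directly. The neighbor ordering (p_2 above p_0, clockwise) and the standard Zhang-Suen deletion conditions are assumed here, since the original formulas were not reproduced in this publication text.

```python
import numpy as np

def zhang_suen_thin(binary):
    # iterative Zhang-Suen thinning of a 0/1 image (borders left untouched)
    img = (binary > 0).astype(np.uint8)

    def neighbours(y, x):
        # p2..p9 clockwise, starting from the pixel directly above p0
        return [img[y - 1, x], img[y - 1, x + 1], img[y, x + 1], img[y + 1, x + 1],
                img[y + 1, x], img[y + 1, x - 1], img[y, x - 1], img[y - 1, x - 1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            marks = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    p = neighbours(y, x)
                    n = sum(p)                                  # N(p0)
                    s = sum(p[i] == 0 and p[(i + 1) % 8] == 1   # S(p0): 0 -> 1
                            for i in range(8))                  # transitions
                    if step == 0:
                        cond = p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0
                    else:
                        cond = p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0
                    if 2 <= n <= 6 and s == 1 and cond:
                        marks.append((y, x))
            for y, x in marks:      # delete marked pixels after the full scan
                img[y, x] = 0
                changed = True
    return img
```
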
Step 3.2.2: weld centerline extraction
Each pixel of the input image is traversed, aligning the center point of the structuring element with the pixel currently being traversed; within the area covered by the current structuring element, the maximum value of all pixels of the input image in the corresponding region is found, and the pixel value is replaced with this maximum. The input image is a binary image whose pixel values are only 0 and 1, so whenever the maximum is 1 the point becomes part of a white foreground object.
Therefore, if the structuring element currently covers only background pixels, all pixels there are 0 and the input image is not changed; if it covers only foreground pixels, all pixels there are 1 and the image is likewise not changed. Only when the structuring element lies on the edge of a foreground object do the two different pixel values 0 and 1 both appear in the covered area, and the current pixel value 0 is then replaced with 1. In other words, for a tiny fracture in the foreground object, if the structuring element is of suitable size, the fracture is connected; that is, multi-line edges or broken edges are joined together.
Referring to fig. 8, the present invention verifies the image effect of each step.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.
Claims (7)
1. An image enhancement and feature extraction method for automatic welding, characterized by comprising the following steps:
step 1: carrying out image enhancement on the image of the welding piece: respectively enhancing the Red channel normalized image, the Green channel normalized image and the Blue channel normalized image by using an RGB separation and normalization method to obtain an enhanced R channel image, an enhanced G channel image and an enhanced B channel image, and then performing RGB synthesis to obtain a reconstructed weldment image;
step 2: carrying out image segmentation on the weldment image: firstly, extracting a target area of the image as a mark, and then carrying out image foreground segmentation to obtain the region of interest of the weldment image;
step 3: carrying out feature extraction on the enhanced weldment image: performing edge detection on the weldment image obtained in step 2, and then performing weld centerline extraction to obtain edge and/or weld seam information.
2. The image enhancement and feature extraction method for automatic welding according to claim 1, characterized in that: step 1 adopts a weldment image enhancement algorithm based on the Retinex model, the specific process being as follows:
the Retinex theory completes the enhancement of the image based on a single-channel gray image, and gray level conversion is usually required before the image enhancement;
in the Retinex model, the observed image I (x, y) is divided into two parts, one of which is the incident part of the object, i.e. the illumination image, corresponding to the low-frequency part of the image; the other part is reflected illumination of the object, namely a reflected image, corresponding to the high-frequency part of the image, and the expression is as follows:
I(x,y)=R(x,y)*L(x,y) (1)
where I (x, y) is the observed image signal, R (x, y) is the reflected component of the image, and L (x, y) is the incident component of the image;
firstly, extracting gray level images of R, G and B channels, then respectively enhancing each channel by using a Retinex algorithm, and finally synthesizing a color RGB image.
3. The image enhancement and feature extraction method for automated welding according to claim 1, wherein: step 2 adopts a weldment image segmentation algorithm based on GrabCut, the specific process being as follows: firstly, a target area is selected through threshold segmentation and then applied to the GrabCut automatic segmentation algorithm as a foreground mark, thereby realizing automatic segmentation of the image and extracting the region of interest of the weldment image.
4. The image enhancement and feature extraction method for automated welding according to claim 3, wherein: the specific process of selecting the target area by threshold segmentation comprises the following steps:
1) Image binarization: an optimal threshold is searched by the maximum inter-class variance method, i.e. the threshold minimizing the intra-class variance is sought, iterating several times to obtain the most appropriate threshold, which is used for binary segmentation of the image; at this threshold the inter-class variance of the foreground and background images is maximum;
2) Mathematical morphology processing: mathematical morphology comprises dilation, erosion, opening and closing, the dilation and erosion being realized by translating a structuring element over the image, wherein erosion turns the inner boundary of a region into background to shrink the region inward and dilation turns the outer boundary of a region into object to expand the region outward; a dilation algorithm is selected to process the foreground image, and the dilated foreground image covers the foreground target region;
3) Connected domain marking: neighboring pixels having the same pixel value are found and marked.
5. The image enhancement and feature extraction method for automated welding according to claim 3, wherein: in the traditional GrabCut algorithm, the set of all pixels in an image is defined as I and the number of pixels as n; to describe the relationship between adjacent elements in the image, an unordered pair of adjacent elements i, j is written {i, j} (i, j ∈ I), and the set of such pairs is written P; a binary vector A = {A_1, A_2, ..., A_i, ..., A_n} is set, in which A_i is the label of the i-th pixel in I; in the binary division task, A_i = 0 represents background and A_i = 1 represents foreground; a corresponding undirected graph ξ = <u, v> is created from the input image, where a node u corresponds to a pixel i ∈ I of the image and v is the set of edges in the undirected graph; besides the nodes and edges, a foreground node s and a background node t are added to the undirected graph to assist in separating the information of the whole graph; the relationship among the foreground node s, the background node t and the image pixel nodes u satisfies:
u=I∪{s,t} (2.1)
since segmentation is usually performed on a specific target region, a constraint, generally a soft constraint in the form of a Markov random field (MRF), needs to be applied to the boundaries and regions of the above undirected graph ξ, which can be described by the following energy function E(A):
E(A,k,θ,I)=U(A,k,θ,I)+V(A,I) (2.2)
wherein U(A, k, θ, I) is the region term, applying a penalty according to whether a pixel belongs to the foreground or the background, its specific form being the negative log-likelihood of the pixel belonging to the foreground or background; V(A, I) is the boundary term penalty, i.e. the penalty on an edge of the undirected graph formed by two pixels; GrabCut seeks the cut with minimized energy, i.e. the green cutting path passes where the sum of the red, yellow and blue curves has the lowest energy, cutting along the thin curve;
from the perspective of the mathematical model, the mathematical expression of U(A, k, θ, I) is

U(A, k, θ, I) = Σ_i D(A_i, k_i, θ, i) (2.3)

wherein D(A_i, k_i, θ, i), the negative log-likelihood of pixel i under its assigned Gaussian component, is

D(A_i, k_i, θ, i) = -log π(A_i, k_i) + (1/2) log det σ(A_i, k_i) + (1/2) [z_i - μ(A_i, k_i)]^T σ(A_i, k_i)^(-1) [z_i - μ(A_i, k_i)] (2.4)

where z_i is the color value of pixel i;
Wherein the parameter θ of the GMM mainly comprises
θ={π(A,K),μ(A,K),σ(A,k),A=0,1;k=1,2,...,K} (2.5)
wherein K is the number of Gaussian components; for the RGB color space, typically K = 5, and a full-covariance GMM of K Gaussian components is used to model the foreground and the background in the image:

P(x) = Σ_{j=1}^{K} π_j g_j(x, μ_j, σ_j)

wherein π_j represents a weight and g_j(x, μ_j, σ_j) is

g_j(x, μ_j, σ_j) = 1 / ((2π)^(d/2) |σ_j|^(1/2)) · exp(-(1/2) (x - μ_j)^T σ_j^(-1) (x - μ_j))

wherein x is the pixel color vector and d is the data dimension; from the above formula, each Gaussian component in the GMM has a weight π, a mean vector μ and a covariance matrix σ; in an RGB image, the mean μ is a 3 × 1 vector and σ is a 3 × 3 matrix; after GMM modeling, a vector k = {k_1, ..., k_i, ..., k_n} is obtained, in which k_i (k_i ∈ {1, ..., K}) is the Gaussian component corresponding to pixel i; after modeling, every pixel in the image is assigned to the foreground or the background;
the penalty term V for the connected edge between pixels p and q is

V(A, I) = γ Σ_{{p,q} ∈ P} [A_p ≠ A_q] · exp(-β ‖z_p - z_q‖²)

where γ is a weighting constant and β is set from the expected contrast between neighboring pixels;
it can be seen from the expression of V(A, I) that when the difference between adjacent elements i and j is very small, they most probably belong to the same region and the penalty term V(A, I) is larger; when the difference between the two pixels is larger, they most probably do not belong to the same region but lie at an edge, and the penalty term V(A, I) is then smaller, which facilitates segmentation;
the GrabCut automatic segmentation algorithm comprises the following steps:
(1) An image is input; the region of interest obtained by threshold segmentation marks the foreground region F, and the region outside this rectangular region is marked as the background region B;
(2) The labels of pixels in the foreground region F are set to A_i = 1, and the labels of pixels in the background region B are set to A_i = 0;
(3) The pixels in F and B are clustered separately by the K-means algorithm according to the set K value;
(4) GMMs of K Gaussian components are initialized based on F (A_i = 1) and B (A_i = 0), and the corresponding GMM parameters (π, μ, σ) are obtained;
(5) Each pixel in F is substituted into the two GMMs to obtain the corresponding probabilities p(i|F) and p(i|B), from which the region penalty term U is obtained;
(6) The distance between two adjacent pixels in F is calculated, from which the boundary penalty term V is obtained;
(7) The minimum of the energy, min E(A, k, θ, I), is calculated by the min-cut/max-flow method according to the energy minimization principle; the result is fed back into the foreground region F, and labels are reassigned to the pixels in F as in step (2);
(8) Steps (4) to (7) are repeated until the final segmentation result is obtained.
6. The image enhancement and feature extraction method for automatic welding according to claim 1, characterized in that: step 3 adopts a weldment image feature extraction algorithm based on an improved adaptive Canny operator, the specific process being as follows:
step 3.1: weldment image edge detection based on an improved adaptive Canny operator: firstly, an eight-direction 5 × 5 Sobel detection template is constructed to replace the original detection template in the Canny operator, expanding the detection range of the algorithm and effectively improving edge detection precision; adaptive Gaussian filtering then replaces ordinary Gaussian filtering, improving the denoising effect; finally, a gradient magnitude histogram and the inter-class variance are introduced to complete automatic threshold selection;
step 3.2: weld centerline extraction based on Z-S thinning:
step 3.2.1: image refinement
In the process of circularly traversing the image, for each iteration process, two steps of sub-operations are adopted to carry out image thinning operation, and the process is as follows:
(1) For a pixel p_0 with value 1, if p_0 satisfies the following conditions, p_0 is marked; after the scan is completed, the marked pixels are deleted, i.e. p_0 = 0:

2 ≤ N(p_0) ≤ 6; S(p_0) = 1; p_2·p_4·p_6 = 0; p_4·p_6·p_8 = 0

In the formulas, N(p_0) represents the number of non-0 points in the 8-neighborhood of p_0, S(p_0) represents the number of times the pixel value changes from 0 to 1 when going clockwise once around the 8-neighborhood of p_0, and p_2, p_3, ..., p_9 denote the 8 neighbors of p_0 taken clockwise starting from the pixel directly above p_0;
(2) For a pixel p_0 with value 1, if p_0 simultaneously satisfies

2 ≤ N(p_0) ≤ 6; S(p_0) = 1; p_2·p_4·p_8 = 0; p_2·p_6·p_8 = 0

then let p_0 = 0;
Repeating the operation until no deletable pixel exists in the image, and ending the refinement process;
step 3.2.2: weld centerline extraction
each pixel of the input image is traversed, aligning the center point of the structuring element with the pixel currently being traversed; within the area covered by the current structuring element, the maximum value of all pixels of the input image in the corresponding region is found, and the pixel value is replaced with this maximum; the input image is a binary image whose pixel values are only 0 and 1, so whenever the maximum is 1 the point becomes part of a white foreground object;
therefore, if the structuring element currently covers only background pixels, all pixels there are 0 and the input image is not changed; if it covers only foreground pixels, all pixels there are 1 and the image is likewise not changed; only when the structuring element lies on the edge of a foreground object do the two different pixel values 0 and 1 both appear in the covered area, and the current pixel value 0 is then replaced with 1; in other words, for a tiny fracture in the foreground object, if the structuring element is of suitable size, the fracture is connected, i.e. multi-line edges or broken edges are joined together.
7. The image enhancement and feature extraction method for automated welding according to claim 6, wherein the specific process of said step 3.1 is as follows:
step 3.1.1 building template based on eight-direction Sobel operator
the Sobel operator calculates the brightness gradient of the image based on a discrete difference operator; applying the Sobel operator at any point of the image yields the corresponding gradient vector;
the eight-direction Sobel template extends the traditional 3 × 3 Sobel operator to a 5 × 5 template; the flow of the eight-direction edge detection algorithm is as follows: for each pixel in the image, the eight direction templates are convolved with the corresponding image neighborhood, and the largest of the resulting responses is taken as the output value of the current pixel;
in the 5 × 5 template, the derivation steps of the weight of each position in the template are as follows:
let the coordinates of a pixel in the template be (m, n) and the center pixel of the template be (i, j); the Euclidean distance from (m, n) to (i, j) is d(m, n), expressed as follows:

d(m, n) = sqrt((m - i)² + (n - j)²) (4.1)
let g (m, n) be the real number weight at (m, n), and the calculation formula is as follows:
ln g(m, n) = -[d(m, n)² - u] ln 2 (4.2)
in the above equation, u is an adjustment coefficient that depends on the size of the template; for the 5 × 5 template, u = 3;
the weight w (m, n) for each position is expressed as follows:
w(m, n) = ⌈g(m, n)⌉ (4.3)

in formula (4.3), to simplify the calculation, g(m, n) from formula (4.2) is rounded up and used as the element of the template; "⌈ ⌉" denotes the ceiling operation;
the weight is related to the distance from the template pixel at (m, n) to the template center pixel at (i, j); the abscissa is the distance between the template pixel and the center pixel, and the ordinate is the weight of that point;
step 3.1.2 adaptive Gaussian Filtering
The basic principle of denoising by the adaptive Gaussian filter is as follows:
the two-dimensional Gaussian function is expressed as follows:

G(x, y) = 1 / (2πσ²) · exp(-(x² + y²) / (2σ²)) (4.4)

in formula (4.4), the mean is 0 and the variance is σ²;
discretizing the two-dimensional continuous Gaussian function yields the Gaussian template; for a (2k + 1) × (2k + 1) template its elements are

H(i, j) = 1 / (2πσ²) · exp(-[(i - k - 1)² + (j - k - 1)²] / (2σ²)) (4.5)

formula (4.5) determines each value in the template neighborhood;
in the Gaussian template expression, the template weights are governed by the variance σ²; in Gaussian filtering, if σ² is too small, the neighborhood operation degenerates into a point operation on the image and the denoising capability is very poor; if σ² is too large, the Gaussian filter degenerates into a mean template and image detail is lost; it is therefore necessary to select an appropriate σ² value that preserves image detail while denoising the image;
For equation (4.5), taking k = 1 and σ = 1 gives a Gaussian template of order 3 × 3; after normalization its weights are approximately:

    | 0.075  0.124  0.075 |
    | 0.124  0.204  0.124 |
    | 0.075  0.124  0.075 |
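As a sketch, the discrete normalized template of equation (4.5) can be generated for any order and variance like this (the function name is ours; the template is normalized so its weights sum to 1):

```python
import math

def gaussian_template(k=1, sigma=1.0):
    """Discrete normalized Gaussian template of order (2k+1) x (2k+1),
    sampled from the two-dimensional Gaussian of formula (4.4)."""
    size = 2 * k + 1
    # Sample exp(-(x^2 + y^2) / (2 sigma^2)) around the center (k, k).
    t = [[math.exp(-((m - k) ** 2 + (n - k) ** 2) / (2 * sigma ** 2))
          for n in range(size)] for m in range(size)]
    s = sum(sum(row) for row in t)            # normalization constant
    return [[v / s for v in row] for row in t]

if __name__ == "__main__":
    for row in gaussian_template(k=1, sigma=1.0):
        print(["%.4f" % v for v in row])
```

With k = 1 and σ = 1 the center weight is approximately 0.204 and the corners approximately 0.075, matching the normalized 3 × 3 template above.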
The variance D within a certain region of the image is expressed as:

D = (1/|S_{i,j}|) · Σ_{(x,y)∈S_{i,j}} [f(x, y) − μ]^2

where S_{i,j} denotes the neighborhood around the center point (i, j), f(x, y) is the gray value at (x, y), and μ is the mean gray value over S_{i,j}.
If the variance D is large, the region is likely to contain edges or fine detail, so a smaller σ^2 and a smaller template order n are selected for the Gaussian template; if the variance D is small, the opposite choice is made.
Adaptive Gaussian filtering thus selects the Gaussian template parameter σ^2 and the template order n autonomously according to the value of the variance D in the region, achieving the image denoising effect. The parameter σ^2 is chosen from four candidate values m_1, m_2, m_3, m_4 according to the interval in which D falls; during filtering, m_1 and m_2 use a 3 × 3 template, while m_3 and m_4 use a 5 × 5 template.
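The selection rule can be sketched as below. The cut points on D and the four candidate σ^2 values m_1..m_4 are hypothetical placeholders of our own — the source does not disclose the actual numbers — but the direction of the mapping (larger D → smaller σ^2 and the smaller template) follows the text:

```python
def select_gaussian_params(d, cuts=(200.0, 50.0, 10.0)):
    """Map the local variance D to (sigma^2, template order n).
    Larger D (detail-rich region) -> smaller sigma^2 with the 3x3
    template; smaller D (flat region) -> larger sigma^2 with 5x5.
    NOTE: the cut points and the values m1..m4 below are assumed
    placeholders, not the patent's actual parameters."""
    m1, m2, m3, m4 = 0.5, 1.0, 2.0, 4.0   # candidate sigma^2 values (assumed)
    if d >= cuts[0]:
        return m1, 3                      # strong detail: smallest sigma^2, 3x3
    if d >= cuts[1]:
        return m2, 3                      # moderate detail: 3x3
    if d >= cuts[2]:
        return m3, 5                      # mostly flat: 5x5
    return m4, 5                          # flat region: largest sigma^2, 5x5
```

In use, D would be computed per pixel over the neighborhood S_{i,j} and the returned pair would parameterize the Gaussian template applied at that pixel.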
Step 3.1.3 Adaptive Threshold Selection
Let the total number of pixel points in the image be N, the gray-scale range of the image be [0, L − 1], and the number of pixel points at gray level i be n_i. The probability of gray level i is then:

p_i = n_i / N

Assume that pixels with gray values in [0, T] constitute the background and pixels with gray values in [T + 1, L − 1] constitute the target, where 0 ≤ T ≤ L − 1. The gray means of the background and the target are, respectively:

u_b(T) = Σ_{i=0}^{T} i·p_i / w_b(T),    u_o(T) = Σ_{i=T+1}^{L−1} i·p_i / w_o(T)
where w_b(T) = Σ_{i=0}^{T} p_i and w_o(T) = Σ_{i=T+1}^{L−1} p_i denote the proportions of background and target pixels, with w_b(T) + w_o(T) = 1.
The gray mean of the whole image is defined as:
u = u_b(T)·w_b(T) + u_o(T)·w_o(T)   (4.13)
The between-class variance of the image background and target pixels is defined as:

σ^2(T) = w_b(T)·[u_b(T) − u]^2 + w_o(T)·[u_o(T) − u]^2   (4.14)
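The threshold T is the value that maximizes the between-class variance of formula (4.14) — the standard Otsu criterion. A minimal sketch (function name ours):

```python
def otsu_threshold(pixels, levels=256):
    """Exhaustive search for the T maximizing sigma^2(T), formula (4.14)."""
    n = len(pixels)
    p = [0.0] * levels
    for v in pixels:
        p[v] += 1.0 / n                       # gray-level probabilities p_i
    u = sum(i * p[i] for i in range(levels))  # whole-image mean, formula (4.13)
    best_t, best_var = 0, -1.0
    wb = ub_sum = 0.0
    for t in range(levels - 1):
        wb += p[t]                            # background proportion w_b(T)
        ub_sum += t * p[t]
        wo = 1.0 - wb                         # target proportion w_o(T)
        if wb == 0.0 or wo == 0.0:
            continue
        ub = ub_sum / wb                      # background mean u_b(T)
        uo = (u - ub_sum) / wo                # target mean u_o(T)
        var = wb * (ub - u) ** 2 + wo * (uo - u) ** 2   # formula (4.14)
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a bimodal image (e.g. gray values clustered near 50 and 200) the maximizing T lands between the two clusters, separating background from target.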
For the edge image obtained by the Canny operator after non-maximum suppression, the pixels in the image are divided into three intervals D_1, D_2, D_3. D_1 contains the pixels with gradient magnitudes {t_1, t_2, ..., t_k}, representing non-edge points in the original image; D_2 contains the pixels with gradient magnitudes {t_{k+1}, t_{k+2}, ..., t_m}, representing points yet to be determined as edge or non-edge points; D_3 contains the pixels with gradient magnitudes {t_{m+1}, t_{m+2}, ..., t_l}, representing edge points in the original image.
Let the total number of pixel points in the original image be N and the number of pixel points with gray-scale gradient t_j be n_j. The probability that the image gray-scale gradient equals t_j is then:

P_j = n_j / N
The expected gradient magnitude over the whole image is:

μ = Σ_{j=1}^{l} t_j·P_j

The expected gradient magnitudes within the three classes D_1, D_2, D_3 are:

μ_1 = Σ_{j=1}^{k} t_j·P_j / w_1,   μ_2 = Σ_{j=k+1}^{m} t_j·P_j / w_2,   μ_3 = Σ_{j=m+1}^{l} t_j·P_j / w_3

At the same time:

w_1 = Σ_{j=1}^{k} P_j,   w_2 = Σ_{j=k+1}^{m} P_j,   w_3 = Σ_{j=m+1}^{l} P_j,   w_1 + w_2 + w_3 = 1

Then:

μ = w_1·μ_1 + w_2·μ_2 + w_3·μ_3

Namely, the between-class variance of the three classes is:

σ^2(k, m) = w_1·(μ_1 − μ)^2 + w_2·(μ_2 − μ)^2 + w_3·(μ_3 − μ)^2   (4.24)
For a given input image, t_j and P_j can be obtained from the gradient histogram of the image; the number of gradient levels l is usually 64. Formula (4.24) then reduces to a quadratic function of the two variables k and m, where k takes values in [1, l] and m takes values in [k + 1, l]. σ^2(k, m) describes the between-class variance, which in the mathematical-statistical sense is a good criterion for testing the separation between classes. Therefore, the t_k and t_m corresponding to the maximum of σ^2(k, m) are the demarcation points of the three classes D_1, D_2, D_3, and serve as the low and high thresholds of the Canny operator.
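The exhaustive search over (k, m) maximizing formula (4.24) can be sketched as follows. This is a direct, unoptimized implementation under the assumption that the gradient level of pixel j equals its value t_j = j (function name ours):

```python
def canny_thresholds(grad_levels, l=16):
    """Search (k, m) maximizing the three-class between-class variance
    sigma^2(k, m) of formula (4.24). grad_levels holds per-pixel gradient
    levels in 1..l; returns the demarcation level pair (t_k, t_m), used
    as the Canny low and high thresholds."""
    n = len(grad_levels)
    p = [0.0] * (l + 1)                    # P_j, 1-indexed
    for g in grad_levels:
        p[g] += 1.0 / n
    mu = sum(j * p[j] for j in range(1, l + 1))   # overall expectation
    best = (1, 2, -1.0)
    for k in range(1, l):
        for m in range(k + 1, l):          # keep D_3 = {m+1..l} non-empty
            var = 0.0
            for lo, hi in ((1, k), (k + 1, m), (m + 1, l)):
                w = sum(p[lo:hi + 1])      # class proportion w_i
                if w > 0.0:
                    mui = sum(j * p[j] for j in range(lo, hi + 1)) / w
                    var += w * (mui - mu) ** 2     # formula (4.24) term
            if var > best[2]:
                best = (k, m, var)
    return best[0], best[1]
```

On a trimodal gradient histogram (e.g. magnitudes clustered at 2, 8, and 14 over l = 16 levels), the maximizing (k, m) separates the three clusters, giving the non-edge / undetermined / edge partition.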
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210549755.1A CN115147448A (en) | 2022-05-20 | 2022-05-20 | Image enhancement and feature extraction method for automatic welding |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115147448A true CN115147448A (en) | 2022-10-04 |
Family
ID=83406789
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115147448A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115330645A (en) * | 2022-10-17 | 2022-11-11 | 南通惠尔风机有限公司 | Welding image enhancement method |
CN115570228A (en) * | 2022-11-22 | 2023-01-06 | 苏芯物联技术(南京)有限公司 | Intelligent feedback control method and system for welding pipeline gas supply |
CN116168027A (en) * | 2023-04-24 | 2023-05-26 | 山东交通学院 | Intelligent woodworking machine cutting method based on visual positioning |
CN116433657A (en) * | 2023-06-08 | 2023-07-14 | 金乡县明耀玻璃有限公司 | Toughened glass scratch area image enhancement method based on computer vision |
CN117237233A (en) * | 2023-11-10 | 2023-12-15 | 巴苏尼制造(江苏)有限公司 | Reinforcing method for weld joint image |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||