CN111292346B - Method for detecting contour of casting box body in noise environment - Google Patents
- Publication number
- CN111292346B CN111292346B CN202010049720.2A CN202010049720A CN111292346B CN 111292346 B CN111292346 B CN 111292346B CN 202010049720 A CN202010049720 A CN 202010049720A CN 111292346 B CN111292346 B CN 111292346B
- Authority
- CN
- China
- Prior art keywords
- image
- casting box
- mapping
- space
- contour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30116—Casting
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a method for detecting the contour of a casting box body in a noise environment, which belongs to the technical field of image edge detection and comprises the following steps: step 1, inputting a noise-containing image to be detected; step 2, performing noise reduction on the input noise image using bilateral filtering; step 3, constructing a random structure forest; step 4, performing preliminary contour detection on the noise-reduced image using the trained random structure forest; step 5, binarizing the preliminary contour detection result; step 6, fitting the pouring gate of the casting box body through Hough circle transform; and step 7, outputting the final detection result image. The main purpose of the invention is to accurately detect the straight-line contour of the casting box body while accurately fitting its circular pouring gate and precisely locating the centre of the circular pouring gate.
Description
Technical Field
The invention belongs to the technical field of image edge detection, and particularly relates to a method for detecting the contour of a casting box body in a noise environment.
Background
In the technical field of image edge detection, accurately detecting the edge information of a noise-containing image and obtaining a clear edge map of an object provides favourable conditions for subsequent operations. Over the past decades, much work has been done on object edge detection; traditional methods can be divided by image type into grey-scale image contour detection, RGB-D image contour detection and colour image contour detection. Contour detection in grey-scale images mostly exploits the abrupt change between edge grey values and background grey values, described as roof-type or step-type changes, which can be modelled mathematically by first and second derivatives: first-derivative operators include the Roberts, Sobel, Prewitt, Kirsch and Canny operators, while second-derivative operators include the Laplacian and LoG operators.
A colour image carries richer chromaticity and brightness information than a grey-scale image. Edge contours in a colour image can be regarded as pixels where the colour changes abruptly, and contour detection methods fall mainly into two classes: colour-component output-fusion methods and vector methods. An output-fusion method processes each colour channel of the colour image with a grey-scale edge detection method, fuses the results obtained for each component, and finally outputs the fused edges; however, this approach ignores the correlation between components, easily loses true edges, and no perfect fusion scheme exists at present. A vector method treats each pixel of the colour image as a three-dimensional vector, so the whole image is a two-dimensional, three-component vector field; this preserves the vector character of the colour image well, but problems such as discontinuous detected edges and missed detections occur easily.
In recent years, some new edge detection algorithms have been proposed. Om Prakash Verma et al. propose an optimal fuzzy system for edge detection of colour images based on a bacterial foraging search algorithm. Han Fangfang et al. propose an algorithm for edge detection of high-speed moving objects in noisy environments. Piotr Dollár et al. propose a fast edge detection method based on structured forests.
Although these methods achieve good contour-detection results, the casting box body sits in a noise environment and its contour contains a circular pouring-gate contour in addition to straight lines, so accurate detection and positioning of the casting box body cannot be solved by any single edge detection algorithm; yet such accurate detection and positioning is a precondition for the accurate operation of a casting robot.
Disclosure of Invention
1. Technical problem to be solved by the invention
The invention aims to overcome the defect of inaccurate detection of the contour of the casting box body in the prior art, and provides a method for detecting the contour of the casting box body in a noise environment.
2. Technical solution
In order to achieve the above purpose, the technical scheme provided by the invention is as follows:
the invention discloses a method for detecting the outline of a casting box body in a noise environment, which comprises the following steps:
step 1, inputting a noise-containing image to be detected;
step 2, noise reduction processing is carried out on the input noise image by using bilateral filtering;
step 3, constructing a random structure forest;
step 4, performing preliminary contour detection on the image after noise reduction by using the trained random structure forest;
step 5, binarizing the preliminary contour detection result;
step 6, fitting a pouring gate of the casting box body through Hough circle transformation;
and 7, outputting a final detection result image.
As a further improvement of the invention, the specific steps of the step 2 are as follows:
2a) Generating a distance template by using a two-dimensional Gaussian function, and generating a value-range template by using a one-dimensional Gaussian function; the distance template coefficients are generated by the following formula:

d(i, j, k, l) = exp(−((i − k)² + (j − l)²) / (2σ_d²))

wherein (k, l) is the centre coordinate of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σ_d is the standard deviation of the Gaussian function;

2b) the generation formula of the value-range template coefficients is:

r(i, j, k, l) = exp(−(f(i, j) − f(k, l))² / (2σ_r²))

wherein f(x, y) represents the pixel value of the image at point (x, y), (k, l) is the centre coordinate of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σ_r is the standard deviation of the Gaussian function;

2c) multiplying the two templates yields the template formula of the bilateral filter:

w(i, j, k, l) = d(i, j, k, l) · r(i, j, k, l) = exp(−((i − k)² + (j − l)²) / (2σ_d²) − (f(i, j) − f(k, l))² / (2σ_r²)).
as a further improvement of the invention, the specific steps of the step 3 are as follows:
3a) Establishing a decision tree: firstly, the input image data are sampled. Let N denote the number of training samples and M the number of features; N samples are drawn with replacement, column sampling is then carried out, and m sub-features are selected from the M features (m < M). Each decision tree then recursively routes the sampled data into left and right subtrees until a leaf node is reached. Each node of a decision tree f_t(x) is associated with a binary split function:

h(x, θ_j) ∈ {0, 1}

wherein x is the input vector and {θ_j} are independent identically distributed random variables, j denoting the j-th node in the tree; if h(x, θ_j) = 1, the input is routed to the left child of node j, otherwise to the right child, until a leaf node is reached; the predicted output y of the input element is stored in the leaf node, i.e. the output distribution is y ∈ Y;
3b) Training each decision tree using a recursive approach: for the training set S_j ⊂ X × Y of a given node j, the goal is to find by training an optimal θ_j such that the data set obtains a good classification result; an information gain criterion needs to be defined here:

θ_j = argmax_θ I_j

wherein: the criterion for selecting the split parameter θ_j is to maximise the information gain I_j; the data set is then used for recursive training at the left and right child nodes, and training stops when one of the following conditions is satisfied: a) the set maximum depth is reached, b) the information gain or the training-set size reaches a threshold, c) the number of samples falling into a node is less than the set threshold.

The information gain formula is defined as follows:

I_j = H(S_j) − Σ_{k∈{L,R}} (|S_j^k| / |S_j|) · H(S_j^k)

wherein: H(S) = −Σ_y p_y log(p_y) represents the Shannon information entropy, and p_y is the probability that an element labelled y appears in the set S;
3c) Random forest structured output: all structured labels y ∈ Y of the leaf nodes are mapped to a discrete label set c ∈ C, where C = {1, ..., k}; the mapping relation is defined as follows:

Π : y ∈ Y → c ∈ C = {1, 2, ..., k}

The mapping process is divided into two stages. First the Y space is mapped to the Z space, i.e. Y → Z, where the mapping z = Π(y) is a binary vector with one component for each unordered pixel pair of the 16 × 16 segmentation mask y; Z is then m-dimensionally sampled, the mapping after sampling being defined as Π_φ : Y → Z, and the given set Z is mapped to the discrete label set C. Before mapping from the Z space to the C space, the dimensionality of Z is reduced to 5 by principal component analysis (PCA); PCA extracts the most representative features from the sample features, reducing the n samples in the 256-dimensional sampled space down to 5 dimensions. Finally the n output labels y_1, ..., y_n ∈ Y are combined to form an ensemble model.
As a further improvement of the invention, the specific steps of the step 4 are as follows:
4a) Extracting the integral channels of the input image: the 3 colour channels, 1 gradient map and gradient histograms in 4 different directions give 8 channel features in total; since feature filters of different directions and scales differ in their sensitivity to edges, 13 channels of information can be extracted from an image block, namely the 3 LUV colour channels plus, on each of 2 scales, 1 gradient-magnitude channel and 4 gradient-histogram channels; self-similarity features are then obtained, and the resulting features form a feature matrix of shape (16 × 16, 13);
4b) Defining a mapping function Π : Y → Z: the j-th pixel of the mask y is denoted y(j), 1 ≤ j ≤ 256, so that for each pair j1 ≠ j2 it can be checked whether y(j1) = y(j2) holds; a large binary vector mapping z = Π(y) can thus be defined which encodes each feature-point pair y(j1) = y(j2) with j1 ≠ j2;
4c) The final casting-box contour image is obtained through the edge map y′ ∈ Y′.
As a further improvement of the invention, the specific steps of the step 6 are as follows:
6a) For the input binarized casting-box contour image, a point in the coordinate space can be mapped to a corresponding locus curve or surface in the parameter space; for a known circle equation, the general equation in rectangular coordinates is:

(x − a)² + (y − b)² = r²

wherein: (a, b) is the centre coordinate and r is the radius of the circle;

6b) transforming the image-space equation (x − a)² + (y − b)² = r² yields the parameter-space equation:

(a − x)² + (b − y)² = r²;

6c) the position in the parameter space where the most circles intersect is found; the circle corresponding to that intersection point is the circle passing through the points in the image space, thereby realizing the detection of the circular pouring gate.
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following remarkable effects:
the accurate detection of casting box contour is the prerequisite and the basis of the accurate operation of casting robot, but because casting box is in under the noise environment, and the contour of casting box both contains sharp edge and also contains circular runner simultaneously, brings the difficulty for the contour detection. Aiming at the technical problems, the invention provides a detection method for the outline of the casting box body in a noise environment, which can accurately detect the straight outline of the casting box body, accurately fit a circular gate of the box body, and accurately position the center of a circle of the circular gate.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a graph comparing the effect of the invention on the profile of a casting box with the effect of the conventional method:
(a) the ground-truth edge map of the image,
(b) the detection result of the Canny algorithm without bilateral filtering,
(c) the detection result of the Canny algorithm after bilateral filtering,
(d) the detection result of the Laplacian algorithm after bilateral filtering,
(e) the detection result of the random structure forest after bilateral filtering,
(f) the binarized detection result of the random structure forest after bilateral filtering,
(g) the fitted result obtained through the Hough circle transform;
FIG. 3 is a graph comparing the accuracy rate curve of the casting box contour detection result with the conventional algorithm;
FIG. 4 is a graph comparing recall curves of the contour detection results of the casting box body by the conventional algorithm.
Detailed Description
For a further understanding of the present invention, the present invention will be described in detail below with reference to the drawings and examples.
Example 1
As shown in fig. 1, the present embodiment provides a method for detecting a contour of a casting box in a noise environment, including the following steps:
step 1, inputting a noise-containing image to be detected;
the noise-containing cast box image stored in advance in the computer space was read out using phton 3.5 development software in the computer.
And 2, carrying out noise reduction treatment on the input casting box body image by using bilateral filtering, wherein the specific steps are as follows:
2a) A distance template is generated using a two-dimensional Gaussian function, and a value-range template is generated using a one-dimensional Gaussian function. The generation formula of the distance template coefficients is:

d(i, j, k, l) = exp(−((i − k)² + (j − l)²) / (2σ_d²))

wherein, (k, l) is the centre coordinate of the template window; (i, j) are the coordinates of the other coefficients of the template window; σ_d is the standard deviation of the Gaussian function.

2b) The generation formula of the value-range template coefficients is:

r(i, j, k, l) = exp(−(f(i, j) − f(k, l))² / (2σ_r²))

wherein the function f represents the image to be processed and f(x, y) represents the pixel value of the image at point (x, y); (k, l) is the centre coordinate of the template window; (i, j) are the coordinates of the other coefficients of the template window; σ_r is the standard deviation of the Gaussian function.

2c) Multiplying the two templates yields the template formula of the bilateral filter:

w(i, j, k, l) = d(i, j, k, l) · r(i, j, k, l) = exp(−((i − k)² + (j − l)²) / (2σ_d²) − (f(i, j) − f(k, l))² / (2σ_r²))
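As an illustrative sketch only (not the patent's reference implementation), the distance template of step 2a), the value-range template of step 2b) and their product from step 2c) can be combined into a small NumPy bilateral filter; the function name, window radius and default σ values below are assumptions for illustration:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_d=1.0, sigma_r=25.0):
    """Bilateral filter built from the two templates of steps 2a)-2c):
    a spatial (distance) Gaussian template multiplied by an intensity
    (value-range) Gaussian template, normalised per window."""
    img = img.astype(np.float64)
    h, w = img.shape
    # 2a) distance template: depends only on offsets from the window centre
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    dist_t = np.exp(-(dx**2 + dy**2) / (2 * sigma_d**2))
    out = np.zeros_like(img)
    pad = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # 2b) value-range template: depends on pixel-value differences
            range_t = np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
            # 2c) combined bilateral template, normalised over the window
            wgt = dist_t * range_t
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out
```

Because the range template collapses to near zero across large intensity jumps, noise is smoothed while strong edges (such as the box contour) are preserved.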
step 3, constructing a random structure forest, which comprises the following specific steps:
3a) Establishing a decision tree: firstly, the input image data are sampled. Let N denote the number of training samples and M the number of features; N samples are drawn with replacement, column sampling is then performed, and m sub-features are selected from the M features (m < M); each decision tree then recursively routes the sampled data into left and right subtrees until a leaf node is reached. Each node of a decision tree f_t(x) is associated with a binary split function:

h(x, θ_j) ∈ {0, 1}

where x is the input vector and {θ_j} are independent identically distributed random variables, j denoting the j-th node in the tree. If h(x, θ_j) = 1, x is routed to the left child of node j, otherwise to the right child, and the process ends at a leaf node. The predicted output y of the input element is stored in the leaf node, i.e. the output distribution is y ∈ Y. The split function h(x, θ_j) can be very complex, but the usual practice is to compare a single feature dimension of the input x with a threshold: θ = (k, τ) and h(x, θ) = [x(k) < τ], where [·] denotes the indicator function; another common method is θ = (k1, k2, τ) and h(x, θ) = [x(k1) − x(k2) < τ].
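The two common split-function choices described above can be sketched in a few lines; the function names `split_threshold`, `split_difference` and `route` are illustrative assumptions, not names from the patent:

```python
import numpy as np

def split_threshold(x, theta):
    # theta = (k, tau): h(x, theta) = [x(k) < tau]
    k, tau = theta
    return 1 if x[k] < tau else 0

def split_difference(x, theta):
    # theta = (k1, k2, tau): h(x, theta) = [x(k1) - x(k2) < tau]
    k1, k2, tau = theta
    return 1 if x[k1] - x[k2] < tau else 0

def route(x, theta, split=split_threshold):
    # h(x, theta) = 1 sends x to the left child, 0 to the right child
    return 'left' if split(x, theta) == 1 else 'right'
```

Applied recursively from the root, such splits route each sample down to a single leaf node, where the stored structured label is read off.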
3b) Training each decision tree using a recursive approach: for the training set S_j ⊂ X × Y of a given node j, the goal is to find by training an optimal θ_j such that the data set obtains a good classification result. An information gain criterion needs to be defined here:

θ_j = argmax_θ I_j

wherein: the criterion for selecting the split parameter θ_j is to maximise the information gain I_j; the data set is then used for recursive training at the left and right child nodes, and training stops when one of the following conditions is satisfied: a) the set maximum depth is reached; b) the information gain or the training-set size reaches a threshold; c) the number of samples falling into a node is less than the set threshold.

The information gain formula is defined as follows:

I_j = H(S_j) − Σ_{k∈{L,R}} (|S_j^k| / |S_j|) · H(S_j^k)

wherein: H(S) = −Σ_y p_y log(p_y) represents the Shannon information entropy, and p_y is the probability that an element labelled y appears in the set S.
3c) Random forest structured output: the structured output space is typically of highly complex dimensionality, so all structured labels y ∈ Y of the leaf nodes can be mapped to a discrete label set c ∈ C, where C = {1, ..., k}; the mapping relation is defined as follows:

Π : y ∈ Y → c ∈ C = {1, 2, ..., k}

The information gain here depends on computing a similarity measure over Y; for a structured output space, however, similarity over Y is difficult to compute, so a temporary mapping from Y to a space Z, in which distances are easier to measure, is defined. The mapping process is divided into two stages: first the Y space is mapped to the Z space, i.e. Y → Z, where the mapping z = Π(y) is a binary vector with one component for each unordered pixel pair of the 16 × 16 segmentation mask y. Computing the full z for every y is still costly, so Z is m-dimensionally sampled for dimensionality reduction, the sampled mapping being defined as Π_φ : Y → Z. The randomness added during the sampling of Z ensures sufficient diversity of the trees.
Before mapping from the Z space to the C space, the dimensionality of Z is reduced to 5 by principal component analysis (PCA), which extracts the most representative features of the samples; the n samples in the 256-dimensional sampled space are thus reduced to 5 dimensions. There are two ways to map the given set Z to the discrete label set C: a) cluster Z into k clusters using the k-means clustering method; b) quantize Z by PCA in log2(k) dimensions and assign the discrete label c according to the quadrant in which z falls. Both methods perform similarly, but the latter is faster; the PCA quantization method with k = 2 is used herein.
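The PCA quantization just described (option b, with k = 2) can be sketched as follows; this is an illustrative reconstruction using an SVD-based PCA, and the function name and details are assumptions rather than the patent's reference code:

```python
import numpy as np

def pca_quantize(Z, k=2):
    """Assign each row of Z a discrete label in {0, ..., k-1} by
    projecting onto the top log2(k) principal components and reading
    off the quadrant (sign pattern) of the projection."""
    Zc = Z - Z.mean(axis=0)
    # principal directions via SVD of the centred data matrix
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    d = int(np.log2(k))                 # number of PCA dimensions kept
    proj = Zc @ Vt[:d].T                # shape (n, log2 k)
    bits = (proj >= 0).astype(int)      # quadrant -> bit pattern
    return (bits * (2 ** np.arange(d))).sum(axis=1)
```

With k = 2 a single principal component is kept and the label is simply the sign of the projection, which is why this variant is faster than running k-means on Z.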
To obtain a unique output result, the n output labels y_1, ..., y_n ∈ Y need to be combined to form an ensemble model. The m-dimensionally sampled mapping function Π_φ can be used to compute z_i = Π_φ(y_i) for each label; taking z_k = Π_φ(y_k) as the centre, the y_k whose z_k has the minimum total distance to all other z_i is selected as the output label. The ensemble model depends on m and the selected mapping function Π_φ.
And 4, performing preliminary contour detection on the noise-reduced casting box body by using a trained random structure forest, wherein the specific steps are as follows:
4a) Extracting the integral channels of the input casting-box image: the 3 colour channels, 1 gradient map and gradient histograms in 4 different directions give 8 channel features in total; since feature filters of different directions and scales differ in their sensitivity to edges, 13 channels of information can be extracted from an image block, namely the 3 LUV colour channels plus, on each of 2 scales, 1 gradient-magnitude channel and 4 gradient-histogram channels; self-similarity features are then obtained, and the resulting features form a feature matrix of shape (16 × 16, 13).

4b) Defining a mapping function Π : Y → Z: the j-th pixel of the mask y is denoted y(j), 1 ≤ j ≤ 256, so that for each pair j1 ≠ j2 it can be checked whether y(j1) = y(j2) holds; a large binary vector mapping z = Π(y) can thus be defined which encodes each feature-point pair y(j1) = y(j2) with j1 ≠ j2.

4c) Fusing the output results of multiple uncorrelated decision trees makes the output of the random structure forest more robust. Efficiently fusing multiple segmentation masks y ∈ Y is very difficult, so the edge map y′ ∈ Y′ is adopted herein to obtain the final casting-box contour image.
And 5, performing binarization processing on the contour image detected by the random structure forest; the optimal threshold is found through repeated experiments. Pixels whose grey value is below the threshold are set to 0 and pixels above the threshold are set to 255, obtaining a binarized image that reflects the global and local characteristics of the image, thereby separating the image contour from the background.
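The fixed-threshold binarization of step 5 is a one-liner in NumPy; the threshold is found experimentally in the patent, so it is left as a free parameter here (the function name is an illustrative assumption):

```python
import numpy as np

def binarize(gray, threshold):
    # step 5: pixels at or below the threshold -> 0, above it -> 255
    return np.where(gray > threshold, 255, 0).astype(np.uint8)
```

The resulting 0/255 image is what the Hough circle transform of step 6 consumes.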
And 6, fitting a pouring gate of the casting box body through Hough circle transformation, wherein the concrete steps are as follows:
6a) When Hough transformation is used for curve detection, the most important is to write a transformation formula from an image coordinate space to a parameter space. And mapping one point in the coordinate space of the input binarized casting box contour image into a corresponding track curve or curved surface in the parameter space. For a known circular equation, the general equation for its rectangular coordinates is:
(x − a)² + (y − b)² = r²

wherein: (a, b) is the centre coordinate and r is the radius of the circle.

6b) Transforming the image-space equation (x − a)² + (y − b)² = r² yields the parameter-space equation:

(a − x)² + (b − y)² = r²
6c) And finding the position with the largest circle intersection point in the parameter space, wherein the circle corresponding to the intersection point is the circle passing through all points in the image space, so that the detection of the circular gate is realized.
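Steps 6a)-6c) amount to voting in an (a, b) accumulator: each edge pixel (x, y) draws the parameter-space circle (a − x)² + (b − y)² = r², and the accumulator maximum is the fitted centre. A minimal sketch, assuming the gate radius r is known in advance (the full Hough circle transform would also sweep over r):

```python
import numpy as np

def hough_circle(edge_img, radius):
    """Vote in (a, b) parameter space for a known radius; the cell
    with the most intersecting parameter-space circles is the centre
    of the circle passing through the edge points (steps 6a-6c)."""
    h, w = edge_img.shape
    acc = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for x, y in zip(xs, ys):
        # candidate centres lie on a circle of the same radius
        # around the edge pixel: (a - x)^2 + (b - y)^2 = r^2
        a = np.round(x + radius * np.cos(thetas)).astype(int)
        b = np.round(y + radius * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
        acc[b[ok], a[ok]] += 1
    b0, a0 = np.unravel_index(acc.argmax(), acc.shape)
    return a0, b0
```

On a binarized contour image, the accumulator peak gives the pouring-gate centre that step 6c) marks in the output image.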
And 7, outputting a final casting box contour detection result image.
As shown in fig. 2, the detection results of graphs (b), (c), (d), (e), (f) and (g) are compared from different angles. It is easy to see that many cluttered lines exist in the four graphs (b), (c), (d) and (e); the more such lines, the weaker the algorithm's anti-interference capability against noise, and the fewer the lines, the stronger it is.
From graphs (b) and (c) in fig. 2 it can be found that the Canny algorithm detects the box edges poorly whether or not noise reduction is applied. Graph (d) shows the Laplacian detection result; the Laplacian algorithm produces clearer edges than the Canny algorithm and has a certain anti-interference capability against noise. Graph (e) is the edge detection result of the random structure forest algorithm after bilateral filtering; compared with the previous two algorithms, the noise on the gate surface of the box body is essentially removed after detection by this algorithm, and the surrounding environmental noise is also improved to a certain extent, so its anti-interference capability is stronger than that of the other algorithms. Graph (f) is the binarized version of graph (e), and graph (g), on the basis of graph (f), accurately detects and locates the pouring gate of the casting box through the Hough circle transform and marks the centre point of the gate.
As can be seen from the comparison in fig. 3 of the edge-detection accuracy for the casting-box pouring gate under different algorithms, the accuracy of the present algorithm is higher than that of the other two algorithms and more stable across different images, and its detection results are better for pouring gates at different angles. From the recall curves shown in fig. 4, the present algorithm also has a higher recall, indicating that its detected result is closer to the true edge map.
The invention and its embodiments have been described above by way of illustration and not limitation; what is shown in the accompanying drawings is only one embodiment of the invention, and the actual structure is not limited thereto. Therefore, if persons of ordinary skill in the art, enlightened by the invention, devise structural modes and embodiments similar to this technical scheme without creative effort and without departing from the gist of the invention, these shall all fall within the protection scope of the invention.
Claims (4)
1. A method for detecting the contour of a casting box body in a noise environment, characterized by comprising the following steps:
step 1, inputting a noise-containing image to be detected;
step 2, noise reduction processing is carried out on the input noise image by using bilateral filtering;
step 3, constructing a random structure forest;
step 4, performing preliminary contour detection on the image after noise reduction by using the trained random structure forest;
step 5, binarizing the preliminary contour detection result;
step 6, fitting a pouring gate of the casting box body through Hough circle transformation;
step 7, outputting a final detection result image;
the specific steps of the step 3 are as follows:
3a) Establishing a decision tree: first, sample the input image data. Let N denote the number of training samples and M the number of features; draw N samples with replacement, then perform column sampling by selecting m sub-features from the M features (m < M). Each decision tree then classifies the sampled data recursively into left and right subtrees until a leaf node is reached. Each decision tree f_t(x) is associated with a binary split function:
h(x, θ_j) ∈ {0, 1}
where x is the input vector and {θ_j} are independent, identically distributed random variables, j denoting the j-th node in the tree. If h(x, θ_j) = 1, the input element is classified into the left child of node j; otherwise it is classified into the right child. When the input element reaches a leaf node, the decision tree predicts the output y, which is stored in that leaf node, i.e. the output distribution is y ∈ Y;
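The recursive left/right routing of step 3a) can be sketched as follows. This is a didactic illustration, not the patent's implementation: the dictionary-based tree, the threshold form of θ_j, and the "edge"/"non-edge" leaf labels are all my assumptions.

```python
# Routing an input x through binary split functions h(x, theta_j) ∈ {0, 1}.
def route(x, node):
    """Recursively send x into the left or right subtree until a leaf
    node is reached; the leaf stores the stored prediction y."""
    if "leaf" in node:
        return node["leaf"]
    feat, thresh = node["theta"]            # theta_j parameterizes the split
    h = 1 if x[feat] < thresh else 0        # binary split function h(x, theta_j)
    return route(x, node["left"] if h == 1 else node["right"])

# A hand-built two-level tree (hypothetical values, for illustration only).
tree = {"theta": (0, 0.5),
        "left": {"leaf": "edge"},
        "right": {"theta": (1, 0.3),
                  "left": {"leaf": "edge"},
                  "right": {"leaf": "non-edge"}}}

print(route([0.2, 0.9], tree))  # h = 1 at the root, so the left leaf: "edge"
```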
3b) Training each decision tree using a recursive approach: for a given node j with training set S_j ⊂ X × Y, the goal is to find through training an optimal θ_j so that the data set obtains a good classification result; an information gain criterion is defined for this purpose. The segmentation parameter θ_j is chosen so as to maximize the information gain I_j, after which the split data sets are used for recursive training at the left and right child nodes. Training stops when one of the following conditions is satisfied: a) the set maximum depth is reached; b) the information gain or the training-set size reaches a threshold; c) the number of samples falling into the node is less than the set threshold. The information gain formula is defined as follows:
I_j = H(S_j) − Σ_{k ∈ {L, R}} (|S_j^k| / |S_j|) · H(S_j^k)
wherein: H(S) = −Σ_y p_y log(p_y) denotes the Shannon information entropy, and p_y is the probability that an element labeled y appears in the set S;
3c) Random forest structured output: map all structured labels y ∈ Y at the leaf nodes to a discrete label set c ∈ C, where C = {1, ..., k}; the mapping relationship is defined as follows:
Π : y ∈ Y → c ∈ C = {1, 2, ..., k}
The mapping process is divided into two stages. First the Y space is mapped to an intermediate Z space, i.e. Y → Z, where the mapping z = Π(y) is defined as a long binary vector encoding, for each pair of pixels of the segmentation mask y, whether the two pixels share the same label; Z is then sampled down to m = 256 dimensions, giving the sampled mapping Π_m : Y → Z. Next, the given set Z is mapped to the discrete label set C. Before mapping from the Z space to the C space, principal component analysis (PCA) is used to reduce the dimension of Z to 5; PCA extracts the most representative sample features, reducing the n samples from the 256-dimensional space to 5 dimensions. Finally, the n output labels y_1, ..., y_n ∈ Y are combined to form the ensemble model.
2. The method for detecting the contour of a casting box in a noisy environment according to claim 1, wherein the specific steps of step 2 are as follows:
2a) Generating a distance template by using a two-dimensional Gaussian function and a value-range template by using a one-dimensional Gaussian function, the distance template coefficients being generated by the following formula:
d(i, j, k, l) = exp( −((i − k)² + (j − l)²) / (2σ_d²) )
wherein (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σ_d is the standard deviation of the Gaussian function;
2b) The generation formula of the value-range template coefficients is as follows:
r(i, j, k, l) = exp( −‖f(i, j) − f(k, l)‖² / (2σ_r²) )
wherein f(x, y) denotes the pixel value of the image at point (x, y), (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σ_r is the standard deviation of the Gaussian function;
2c) Multiplying the two templates yields the template formula of the bilateral filter:
w(i, j, k, l) = d(i, j, k, l) · r(i, j, k, l) = exp( −((i − k)² + (j − l)²) / (2σ_d²) − ‖f(i, j) − f(k, l)‖² / (2σ_r²) )
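The combined distance-times-range template of claim 2 can be sketched with plain NumPy. This is a didactic per-pixel version on a grayscale image in [0, 1] (parameter values are my assumptions), not the patent's code:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_d=2.0, sigma_r=0.1):
    """Weight each window pixel by
    w = exp(-((i-k)^2+(j-l)^2)/(2*sigma_d^2) - (f(i,j)-f(k,l))^2/(2*sigma_r^2))
    and replace the center with the normalized weighted mean."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode="edge")
    ii, jj = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dist_w = np.exp(-(ii**2 + jj**2) / (2 * sigma_d**2))  # distance template
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-(win - img[y, x])**2 / (2 * sigma_r**2))  # range template
            w = dist_w * range_w            # multiply the two templates
            out[y, x] = (w * win).sum() / w.sum()
    return out

# A noisy step edge: the filter smooths each side but preserves the step.
step = np.repeat([0.0, 1.0], 8)[None, :].repeat(8, axis=0)
noisy = step + 0.05 * np.random.default_rng(1).standard_normal(step.shape)
smoothed = bilateral_filter(noisy)
```

The edge-preserving behavior is why the method uses bilateral filtering (rather than a plain Gaussian) for noise reduction before contour detection.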
3. the method for detecting the contour of a casting box in a noisy environment according to claim 1, wherein the specific steps of step 4 are as follows:
4a) Extracting integral channels of the input image: the 3 color channels, 1 gradient-magnitude map and gradient histograms in 4 different directions give 8 channel features in total; since feature filters of different directions and scales have different sensitivity to edges, 13 channels of information can be extracted from an image block, namely the 3 LUV color channels plus, at each of 2 scales, 1 gradient-magnitude channel and 4 gradient-histogram channels; self-similarity features are then obtained, and the resulting features form a feature matrix of shape (16 × 16, 13);
4b) Defining a mapping function Π: Y → Z, where y(j) (1 ≤ j ≤ 256) denotes the j-th pixel of the mask y; for any pair j_1 ≠ j_2 it can then be checked whether y(j_1) = y(j_2) holds, so a large binary vector mapping z = Π(y) can be defined that encodes, for each pair j_1 ≠ j_2, whether the pixel pair satisfies y(j_1) = y(j_2);
4c) The final casting box contour image is obtained through the edge mapping y′ ∈ Y′.
4. The method for detecting the contour of a casting box in a noisy environment according to claim 1, wherein the specific steps of step 6 are as follows:
6a) For the input binarized casting box contour image, a point in the coordinate space can be mapped to a corresponding locus curve or surface in the parameter space; for a known circle, the general equation in rectangular coordinates is:
(x − a)² + (y − b)² = r²
wherein: (a, b) is the center coordinate and r is the radius of the circle;
6b) Transforming the image-space equation (x − a)² + (y − b)² = r² yields the parameter-space equation:
(a − x)² + (b − y)² = r²;
6c) Finding the position in the parameter space where the most circles intersect; the circle corresponding to that intersection point is the circle passing through all the points in the image space, thereby realizing detection of the circular pouring gate.
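Step 6 can be illustrated with a minimal Hough-circle accumulator in NumPy. For simplicity this sketch assumes a known, fixed radius r, which reduces the parameter space to the center (a, b); the patent's Hough circle transform additionally searches over r.

```python
import numpy as np

def hough_circle_fixed_r(edge_img, r):
    """Vote in (a, b) parameter space: each edge point (x, y) draws the
    circle (a - x)^2 + (b - y)^2 = r^2 into the accumulator; the peak is
    the center through which the most edge-point circles pass."""
    h, w = edge_img.shape
    acc = np.zeros((h, w), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        a = np.round(x - r * np.cos(thetas)).astype(int)
        b = np.round(y - r * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
        np.add.at(acc, (b[ok], a[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)[::-1]  # (a, b)

# Synthetic binarized gate: a circle of radius 10 centered at (25, 20).
img = np.zeros((40, 50), dtype=np.uint8)
t = np.linspace(0, 2 * np.pi, 360)
img[np.round(20 + 10 * np.sin(t)).astype(int),
    np.round(25 + 10 * np.cos(t)).astype(int)] = 1
print(hough_circle_fixed_r(img, 10))  # expect a center near (25, 20)
```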
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010049720.2A CN111292346B (en) | 2020-01-16 | 2020-01-16 | Method for detecting contour of casting box body in noise environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111292346A CN111292346A (en) | 2020-06-16 |
CN111292346B true CN111292346B (en) | 2023-05-12 |
Family
ID=71029047
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010049720.2A Active CN111292346B (en) | 2020-01-16 | 2020-01-16 | Method for detecting contour of casting box body in noise environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111292346B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111967526B (en) * | 2020-08-20 | 2023-09-22 | 东北大学秦皇岛分校 | Remote sensing image change detection method and system based on edge mapping and deep learning |
CN113793269B (en) * | 2021-10-14 | 2023-10-31 | 安徽理工大学 | Super-resolution image reconstruction method based on improved neighborhood embedding and priori learning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220664A (en) * | 2017-05-18 | 2017-09-29 | 南京大学 | A kind of oil bottle vanning counting method based on structuring random forest |
WO2018107492A1 (en) * | 2016-12-16 | 2018-06-21 | 深圳大学 | Intuitionistic fuzzy random forest-based method and device for target tracking |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018107492A1 (en) * | 2016-12-16 | 2018-06-21 | 深圳大学 | Intuitionistic fuzzy random forest-based method and device for target tracking |
CN107220664A (en) * | 2017-05-18 | 2017-09-29 | 南京大学 | A kind of oil bottle vanning counting method based on structuring random forest |
Non-Patent Citations (2)
Title |
---|
A survey of computer-aided detection and diagnosis systems in medical imaging; Zheng Guangyuan et al.; Journal of Software, No. 05; full text *
Sea-sky-line detection based on structured forest edge detection and Hough transform; Xu Liangyu et al.; Journal of Shanghai University (Natural Science Edition), No. 01; full text *
Also Published As
Publication number | Publication date |
---|---|
CN111292346A (en) | 2020-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109299720B (en) | Target identification method based on contour segment spatial relationship | |
CN111626146B (en) | Merging cell table segmentation recognition method based on template matching | |
CN110543822A (en) | finger vein identification method based on convolutional neural network and supervised discrete hash algorithm | |
CN107145829B (en) | Palm vein identification method integrating textural features and scale invariant features | |
CN116205919B (en) | Hardware part production quality detection method and system based on artificial intelligence | |
CN103093215A (en) | Eye location method and device | |
CN112597812A (en) | Finger vein identification method and system based on convolutional neural network and SIFT algorithm | |
CN110532825B (en) | Bar code identification device and method based on artificial intelligence target detection | |
CN111292346B (en) | Method for detecting contour of casting box body in noise environment | |
CN111445511B (en) | Method for detecting circle in image | |
CN117689655B (en) | Metal button surface defect detection method based on computer vision | |
CN109766850B (en) | Fingerprint image matching method based on feature fusion | |
CN112101058B (en) | Automatic identification method and device for test paper bar code | |
CN104573701B (en) | A kind of automatic testing method of Tassel of Corn | |
CN103942526B (en) | Linear feature extraction method for discrete data point set | |
CN112818983B (en) | Method for judging character inversion by using picture acquaintance | |
CN107729863B (en) | Human finger vein recognition method | |
CN112258532B (en) | Positioning and segmentation method for callus in ultrasonic image | |
CN110532826B (en) | Bar code recognition device and method based on artificial intelligence semantic segmentation | |
CN109829511B (en) | Texture classification-based method for detecting cloud layer area in downward-looking infrared image | |
CN105761237B (en) | Chip x-ray image Hierarchical Segmentation based on mean shift | |
CN109460763B (en) | Text region extraction method based on multilevel text component positioning and growth | |
CN116206208A (en) | Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence | |
CN112258534B (en) | Method for positioning and segmenting small brain earthworm parts in ultrasonic image | |
CN112258536B (en) | Integrated positioning and segmentation method for calluses and cerebellum earthworm parts |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||