CN111292346A - Method for detecting contour of casting box body in noise environment - Google Patents
Method for detecting contour of casting box body in noise environment
- Publication number
- CN111292346A (application CN202010049720.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- casting box
- box body
- mapping
- space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30116—Casting
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method for detecting the contour of a casting box body in a noise environment, belonging to the technical field of image edge detection and comprising the following steps: step 1, inputting a to-be-detected image containing noise; step 2, performing noise reduction on the input noisy image with bilateral filtering; step 3, constructing a random structure forest; step 4, performing preliminary contour detection on the denoised image with the trained random structure forest; step 5, binarizing the preliminary contour detection result; step 6, fitting the pouring gate of the casting box body through Hough circle transformation; and step 7, outputting the final detection result image. The main purpose of the method is to accurately detect the straight-line profile of the casting box body while accurately fitting its circular pouring gate and precisely locating the center of the gate.
Description
Technical Field
The invention belongs to the technical field of image edge detection, and particularly relates to a method for detecting the outline of a casting box body in a noise environment.
Background
In the technical field of image edge detection, accurately detecting the edge information of a noise-contaminated image yields a clear edge map of the object and provides favorable conditions for subsequent operations. Over the past decades a great deal of work has been done on object edge detection; based on the type of image, the traditional methods can be classified into gray image contour detection, RGB-D image contour detection and color image contour detection. Contour detection in gray images mostly exploits the abrupt change between the edge gray value and the background gray value; such abrupt changes, known as roof or step edges, can be modeled mathematically with first and second derivatives. The Roberts, Sobel, Prewitt, Kirsch and Canny operators all use the first derivative to realize contour detection, while the Laplacian and LoG operators use the second derivative.
A color image carries richer chrominance and luminance information than a gray image. The edge contour of a color image can be regarded as the set of pixels where the color changes abruptly, and contour detection methods fall mainly into two groups: color-component output fusion methods and vector methods. An output fusion method processes each color channel of the color image with a gray-image edge detection method, then fuses the results obtained for the individual components to produce the output edge. A vector method treats each pixel of the color image as a three-dimensional vector, so that the whole image is a two-dimensional, three-component vector field; this preserves the vector character of the color image well, but the detected edges are prone to discontinuities, missed detections and similar problems.
In recent years several new edge detection algorithms have been proposed. Om Prakash Verma et al. proposed an optimal fuzzy system for color image edge detection based on bacterial foraging optimization. Other researchers have proposed an algorithm for edge detection of high-speed moving target images in a noisy environment. Piotr Dollár et al. proposed a fast edge detection method based on structured forests.
Although these methods perform well in contour detection, the casting box body lies in a noisy environment and, besides straight-line edges, its contour contains a circular pouring gate; accurate detection and positioning of the casting box body therefore cannot be achieved with a single edge detection algorithm alone, yet such accurate detection and positioning is the prerequisite for precise operation of a casting robot.
Disclosure of Invention
1. Technical problem to be solved by the invention
The invention aims to overcome the problem of inaccurate detection of the casting box profile in the prior art, and provides a method for detecting the profile of a casting box body in a noise environment.
2. Technical scheme
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
the invention discloses a method for detecting the profile of a casting box body in a noise environment, which comprises the following steps (a minimal illustrative sketch of the whole pipeline is given after the list of steps):
step 1, inputting a to-be-detected image containing noise;
step 2, performing noise reduction processing on the input noise image by using bilateral filtering;
step 3, constructing a random structure forest;
step 4, carrying out preliminary contour detection on the denoised image by using the trained random structure forest;
step 5, carrying out binarization processing on the preliminary contour detection result;
step 6, fitting a pouring gate of the casting box body through Hough circle transformation;
and 7, outputting a final detection result image.
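The following is a minimal end-to-end sketch of these seven steps, provided purely for illustration. It assumes Python with OpenCV (including the opencv-contrib ximgproc module) and uses a pretrained structured edge model as a stand-in for the random structure forest that the invention constructs and trains in steps 3 and 4; all file names, thresholds and parameter values are illustrative assumptions rather than values prescribed by the invention.

```python
import cv2
import numpy as np

# Step 1: input the noise-containing image to be detected (file name is illustrative).
noisy = cv2.imread("casting_box_noisy.png")

# Step 2: bilateral filtering for noise reduction.
denoised = cv2.bilateralFilter(noisy, d=9, sigmaColor=75, sigmaSpace=75)

# Steps 3-4: preliminary contour detection with a structured forest
# (a pretrained opencv-contrib model stands in for the forest trained by the invention).
detector = cv2.ximgproc.createStructuredEdgeDetection("model.yml.gz")
rgb = cv2.cvtColor(denoised, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
edges = (detector.detectEdges(rgb) * 255).astype(np.uint8)

# Step 5: binarization of the preliminary contour result (threshold is illustrative).
_, binary = cv2.threshold(edges, 60, 255, cv2.THRESH_BINARY)

# Step 6: fit the circular pouring gate with the Hough circle transform.
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30, minRadius=20, maxRadius=200)

# Step 7: output the final detection result with the gate and its center marked.
result = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
if circles is not None:
    for a, b, r in np.round(circles[0]).astype(int):
        cv2.circle(result, (int(a), int(b)), int(r), (0, 0, 255), 2)
        cv2.circle(result, (int(a), int(b)), 2, (0, 255, 0), 3)
cv2.imwrite("casting_box_result.png", result)
```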
As a further improvement of the invention, the specific steps of step 2 are as follows:

2a) using a two-dimensional Gaussian function to generate a distance template and a one-dimensional Gaussian function to generate a value domain template, wherein the distance template coefficients are generated by the following formula:

d(i, j, k, l) = exp( -((i - k)^2 + (j - l)^2) / (2σ_d^2) )

where (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σ_d is the standard deviation of the Gaussian function;

2b) the value domain template coefficients are generated by the following formula:

r(i, j, k, l) = exp( -(f(i, j) - f(k, l))^2 / (2σ_r^2) )

where f(x, y) represents the pixel value of the image at point (x, y), (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σ_r is the standard deviation of the Gaussian function;

2c) multiplying the two templates to obtain the template formula of the bilateral filter:

w(i, j, k, l) = d(i, j, k, l) · r(i, j, k, l) = exp( -((i - k)^2 + (j - l)^2) / (2σ_d^2) - (f(i, j) - f(k, l))^2 / (2σ_r^2) )
as a further improvement of the invention, the specific steps of step 3 are as follows:
3a) establishing a decision tree: firstly, input image data is sampled, N is assumed to represent the number of training samples, M represents the number of features, replaced sampling is adopted for N samples, then column sampling is carried out, M sub-features (M & lt M) are selected from M features, then the sampled data is classified into left and right sub-trees by a recursive mode for each decision tree till leaf nodes and decision trees ft(x) Each node j of (a) is associated with a binary partition function:
h(x,θj)∈{0,1}
where x is the input vector, { θ }jIs an independent and equally distributed random variable, j represents the jth node in the tree if h (x, theta)j) If the output distribution is Y belonged to Y, classifying x into a left node of a node j, classifying the other side into a right node, and predicting and outputting Y to store the input elements in leaf nodes through a decision tree, wherein the output distribution is Y belonged to Y;
3b) each decision tree is trained using a recursive approach:training set S on given node jjE.x Y, the goal is to find an optimal theta by trainingjSo that the data set gets good classification results, an information gain criterion needs to be defined here:
wherein: selecting a segmentation parameter θjIs to make the information gain IjAt maximum, the data set is used to perform recursive training at the left and right nodes, and the training is stopped when one of the following conditions is met: a) reaching a set maximum depth, b) the information gain or training set scale both reaching a threshold, c) the number of samples falling into a node being less than a set threshold,
the information gain formula is defined as follows:
wherein: hentropy(S)=-∑ypy log(py) Representing Shannon information entropy, pyIs the probability that the element labeled y appears in the set s;
3c) and (3) random forest structured output: mapping all the structural labels Y e Y of the leaf nodes to a discrete label set C e C, wherein C is { 1.
Π:y∈Y→c∈C{1,2,...,k}
The mapping process is divided into two stages, namely mapping Y space to Z space, namely Y → Z, wherein the mapping relation Z ═ Π (Y) is defined asDimension vector representing each pair of pixel codes of the segmentation mask yAnd m-dimensional sampling is performed on Z, and the mapping after sampling is defined as:y → Z, then mapping the given set Z to the discrete label set C, before mapping from Z space to C space, adopting Principal Component Analysis (PCA) to reduce the dimension of Z to 5 dimension, the PCA extracting the most representative feature in the sample features, for n samples in 256-dimensional spaceReducing to 5 dimensions, and finally outputting n output labels y1,...,ynE.g. y are combined to form a set model.
As a further improvement of the invention, the specific steps of step 4 are as follows:

4a) extracting the integral channels of the input image: the color channels, the gradient map and the gradient histograms in 4 different directions give 8 channel features in total, and since feature filters of different directions and scales have different sensitivities to edges, 13 channels of information can be extracted from an image block, namely the LUV color channels, gradient magnitude channels at 2 scales and 4 oriented gradient histogram channels; self-similarity features are then computed, and the resulting features form a feature matrix of shape (16 × 16, 13);

4b) defining the mapping function Π: y → z, where y(j) (1 ≤ j ≤ 256) denotes the j-th pixel of the mask y, so that for every pair j_1 ≠ j_2 it can be determined whether y(j_1) = y(j_2) holds; a large binary vector mapping function z = Π(y) is defined that encodes the feature point pair relation y(j_1) = y(j_2) for every pair j_1 ≠ j_2;

4c) the final casting box contour image is obtained through the edge map y′ ∈ Y′.
As a further improvement of the invention, the specific steps of step 6 are as follows:

6a) for the input binarized casting box contour image, each point in the coordinate space can be mapped to a corresponding trajectory curve or surface in the parameter space; for a known circle, the general equation in rectangular coordinates is:

(x − a)^2 + (y − b)^2 = r^2

wherein (a, b) are the coordinates of the circle center and r is the radius of the circle;

6b) transforming the image space equation (x − a)^2 + (y − b)^2 = r^2 gives the parameter space equation:

(a − x)^2 + (b − y)^2 = r^2;

6c) the position in the parameter space where the most circles intersect is found; the circle corresponding to this intersection point is the circle passing through all the points in image space, thereby realizing the detection of the circular pouring gate.
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following remarkable effects:
the accurate detection of the casting box body profile is the premise and the basis of the accurate operation of a casting robot, but the casting box body is in a noise environment, and the profile of the casting box body comprises a straight line edge and a circular pouring gate, so that the difficulty is brought to the profile detection. Aiming at the technical problem, the invention provides a method for detecting the profile of a casting box body in a noise environment, which can accurately fit a circular pouring gate of the casting box body while accurately detecting the linear profile of the casting box body and accurately position the circle center of the circular pouring gate.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a comparison graph of the detection effect of the invention on the contour of the casting box body and the detection effect of the traditional method:
(a) the true edge map of the image,
(b) the detection result image of the Canny algorithm without bilateral filtering,
(c) the detection result image of the Canny algorithm after bilateral filtering,
(d) the detection result image of the Laplacian algorithm after bilateral filtering,
(e) the detection result image of the random structure forest after bilateral filtering,
(f) the result image after bilateral filtering, random structure forest detection and binarization,
(g) the fitted image obtained through Hough circle transformation;
FIG. 3 is a graph comparing the accuracy curves of the detection results of the invention and the traditional algorithm on the contour of the casting box body;
FIG. 4 is a graph comparing the recall rate of the contour detection result of the casting box according to the present invention and the traditional algorithm.
Detailed Description
For a further understanding of the present invention, reference will now be made in detail to the following examples and accompanying drawings.
Example 1
As shown in fig. 1, the embodiment provides a method for detecting the profile of a casting box body in a noisy environment, which includes the following steps:
step 1, inputting a to-be-detected image containing noise;
the casting box image containing noise stored in the computer space in advance is read out by applying the phthon3.5 development software in the computer.
Step 2, performing noise reduction on the input casting box body image by using bilateral filtering, specifically comprising the following steps:

2a) A distance template is generated with a two-dimensional Gaussian function and a value domain template with a one-dimensional Gaussian function. The distance template coefficients are generated as follows:

d(i, j, k, l) = exp( -((i - k)^2 + (j - l)^2) / (2σ_d^2) )

where (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σ_d is the standard deviation of the Gaussian function.

2b) The value domain template coefficients are generated as follows:

r(i, j, k, l) = exp( -(f(i, j) - f(k, l))^2 / (2σ_r^2) )

where f(x, y) denotes the pixel value of the image to be processed at point (x, y), (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σ_r is the standard deviation of the Gaussian function.

2c) Multiplying the two templates gives the template formula of the bilateral filter:

w(i, j, k, l) = d(i, j, k, l) · r(i, j, k, l) = exp( -((i - k)^2 + (j - l)^2) / (2σ_d^2) - (f(i, j) - f(k, l))^2 / (2σ_r^2) )
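A minimal sketch of this noise-reduction step (assuming Python with OpenCV; the file names, neighbourhood diameter and the two standard deviations below are illustrative values to be tuned, not values prescribed by the invention):

```python
import cv2

# Read the pre-stored noisy casting box image (file name is illustrative).
noisy = cv2.imread("casting_box_noisy.png")

# Bilateral filtering: d is the pixel neighbourhood diameter, sigmaColor plays the
# role of the value domain standard deviation (sigma_r) and sigmaSpace the role of
# the distance domain standard deviation (sigma_d).
denoised = cv2.bilateralFilter(noisy, d=9, sigmaColor=75, sigmaSpace=75)

cv2.imwrite("casting_box_denoised.png", denoised)
```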
and 3, constructing a random structure forest, and specifically comprising the following steps:
3a) establishing a decision tree: firstly, input image data is sampled, N is assumed to represent the number of training samples, M represents the number of features, replaced sampling is adopted for N samples, then column sampling is carried out, M sub-features (M & lt M) are selected from the M features, and then the sampled data is classified into left and right sub-trees in a recursive mode for each decision tree until leaf nodes. Decision tree ft(x) Each node j of (a) is associated with a binary partition function:
h(x,θj)∈{0,1}
where x is the input vector, { θ }jJ represents the jth node in the tree, if h (x, theta j) is 1, x is classified into the left side node of the node j, and the other side is classified into the right side node until the leaf node, and the process is ended. And predicting output Y of the input elements through the decision tree and storing the output Y in leaf nodes, namely the output distribution is Y belonging to Y. Dividing function h (x, theta)j) It is very complicated, but it is common practice to compare the input x of a single feature dimension with a threshold when θ ═ k, τ and h (x, θ) ═ x (k) < τ],[·]Representing an indicator function; another common method is θ ═ k1, k2, τ and h (x, θ) [ [ x (k1) -x (k2) < τ]。
3b) Each decision tree is trained using a recursive approach: given the training set S_j ⊂ X × Y at node j, the goal is to find by training an optimal θ_j so that the data set is well classified. An information gain criterion is therefore defined:

θ_j = argmax_θ I_j

wherein the segmentation parameter θ_j is selected so that the information gain I_j is maximized; the data set is then trained recursively at the left and right child nodes, and training stops when one of the following conditions is met: a) the set maximum depth is reached; b) the information gain or the training set size reaches its threshold; c) the number of samples falling into a node is smaller than a set threshold.

The information gain formula is defined as follows:

I_j = H_entropy(S_j) − Σ_{k∈{L,R}} ( |S_j^k| / |S_j| ) · H_entropy(S_j^k)

wherein H_entropy(S) = −Σ_y p_y log(p_y) denotes the Shannon information entropy and p_y is the probability that an element labeled y appears in the set S.
3c) Random forest structured output: the structured output space is typically high-dimensional and complex, so all structured labels y ∈ Y of the leaf nodes are mapped to a discrete label set c ∈ C, where C = {1, ..., k}:

Π: y ∈ Y → c ∈ C = {1, 2, ..., k}

The calculation of the information gain depends on a similarity measure on Y, but for a structured output space similarity on Y is difficult to compute; a temporary mapping from Y to a space Z, in which distances are comparatively easy to measure, is therefore defined, and the mapping is carried out in two stages. First the Y space is mapped to the Z space, Y → Z, where z = Π(y) is a binary vector with one entry for every pair of pixels of the segmentation mask y, each entry encoding whether the two pixels carry the same label. Because computing the full z for every y is still expensive, Z is sampled down to m dimensions; the sampled mapping is defined as Π_φ: Y → Z. Randomness is added in the sampling of Z to guarantee sufficient diversity of the trees.

Before mapping from the Z space to the C space, principal component analysis (PCA) is used to reduce Z to 5 dimensions; PCA extracts the most representative features among the sample features, reducing the n sampled vectors z_1, ..., z_n in the 256-dimensional space to 5 dimensions. The mapping of a given set Z to the discrete label set C can then be achieved in two ways: a) clustering Z into k clusters with the k-means method; b) quantizing Z with a PCA of log_2(k) dimensions and assigning the discrete label c according to the quadrant into which z falls. Both methods behave similarly, but the latter is faster; a principal component analysis quantization with k = 2 is used herein.

To obtain a unique output result, the n output labels y_1, ..., y_n ∈ Y must be combined into an ensemble model. Using the m-dimensional sampled mapping Π_φ, z_i = Π_φ(y_i) is computed for each label i, and the label y_k whose z_k = Π_φ(y_k) has the minimum total distance to all the other z_i is selected as the output label. The ensemble model thus depends on m and on the chosen mapping Π_φ.
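A minimal sketch of this structured-label mapping (plain NumPy; the mask size, number of sampled pixel pairs, toy masks and the PCA implementation are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_encoding(mask, m=256):
    """Sampled mapping Pi_phi: encode, for m random pixel pairs (j1, j2) of the
    16x16 segmentation mask y, whether y(j1) == y(j2)."""
    flat = mask.ravel()
    j1 = rng.integers(0, flat.size, size=m)
    j2 = rng.integers(0, flat.size, size=m)
    return (flat[j1] == flat[j2]).astype(np.float64)

def pca_reduce(Z, dims=5):
    """Project the sampled vectors onto their first `dims` principal components."""
    Zc = Z - Z.mean(axis=0)
    _, _, vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ vt[:dims].T

# Toy set of n segmentation masks (16x16 integer segment labels).
masks = [rng.integers(0, 3, size=(16, 16)) for _ in range(10)]
Z = np.stack([pair_encoding(y) for y in masks])
Z5 = pca_reduce(Z)  # n x 5 representation used when evaluating candidate splits

# Ensemble output: choose the label whose encoding is closest to all the others.
dists = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1).sum(axis=1)
output_mask = masks[int(np.argmin(dists))]
```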
Step 4, performing preliminary contour detection on the noise-reduced casting box image with the trained random structure forest, specifically comprising the following steps:

4a) Extracting the integral channels of the input casting box image: feature filters of different directions and scales have different sensitivities to edges, so 13 channels of information are extracted from each image block, namely the LUV color channels, gradient magnitude channels at 2 scales and 4 oriented gradient histogram channels; self-similarity features are then computed, and the resulting features form a feature matrix of shape (16 × 16, 13).

4b) Defining the mapping function Π: y → z. A mapping function is defined in which y(j) (1 ≤ j ≤ 256) denotes the j-th pixel of the mask y; for every pair j_1 ≠ j_2 it can therefore be determined whether y(j_1) = y(j_2) holds, and a large binary vector mapping function z = Π(y) is defined that encodes the feature point pair relation y(j_1) = y(j_2) for every pair j_1 ≠ j_2.

4c) The output of the random structure forest is made more robust by fusing the output results of several uncorrelated decision trees. Efficiently fusing several segmentation masks y ∈ Y is very difficult, so the edge map y′ ∈ Y′ is used herein to obtain the final casting box contour image.
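A minimal sketch of applying a trained structured forest to the denoised image (this uses the structured edge detector from opencv-contrib as a stand-in; the model file and image file names are illustrative assumptions, and the invention trains its own random structure forest rather than loading a pretrained one):

```python
import cv2
import numpy as np

# Load a trained structured edge model (file name is an illustrative assumption).
detector = cv2.ximgproc.createStructuredEdgeDetection("structured_forest_model.yml.gz")

denoised = cv2.imread("casting_box_denoised.png")
rgb = cv2.cvtColor(denoised, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0

# Edge probability map in [0, 1]: the preliminary contour detection result.
edges = detector.detectEdges(rgb)
cv2.imwrite("casting_box_edges.png", (edges * 255).astype(np.uint8))
```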
Step 5, binarizing the contour image detected by the random structure forest; the optimal threshold is found through repeated experiments. Pixels whose gray value is below the threshold are set to 0 and pixels above it to 255, giving a binary image that reflects the overall and local characteristics of the image and further separates the image contour from the background.
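A minimal sketch of this binarization step (the threshold value 60 and the file names are purely illustrative; the invention determines the optimal threshold experimentally):

```python
import cv2

edges = cv2.imread("casting_box_edges.png", cv2.IMREAD_GRAYSCALE)

# Gray values below the threshold become 0, values at or above it become 255.
_, binary = cv2.threshold(edges, 60, 255, cv2.THRESH_BINARY)
cv2.imwrite("casting_box_binary.png", binary)
```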
Step 6, fitting a pouring gate of the casting box body through Hough circle transformation, and specifically comprising the following steps:
6a) When detecting curves with the Hough transformation, the key step is to write the transformation from the image coordinate space to the parameter space. Each point in the coordinate space of the input binarized casting box contour image can be mapped to a corresponding trajectory curve or surface in the parameter space. For a known circle, the general equation in rectangular coordinates is:

(x − a)^2 + (y − b)^2 = r^2

wherein (a, b) are the coordinates of the circle center and r is the radius of the circle.

6b) Transforming the image space equation (x − a)^2 + (y − b)^2 = r^2 gives the parameter space equation:

(a − x)^2 + (b − y)^2 = r^2

6c) The position in the parameter space where the most circles intersect is found; the circle corresponding to this intersection point is the circle passing through all the points in image space, thereby realizing the detection of the circular pouring gate.
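A minimal sketch of this gate-fitting step with OpenCV's Hough circle transform (the accumulator resolution dp, minimum center distance, Canny/accumulator thresholds and radius bounds are illustrative assumptions to be tuned for the actual casting box images):

```python
import cv2
import numpy as np

binary = cv2.imread("casting_box_binary.png", cv2.IMREAD_GRAYSCALE)

# Hough circle transform on the binarized contour image.
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30, minRadius=20, maxRadius=200)

result = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
if circles is not None:
    for a, b, r in np.round(circles[0]).astype(int):
        cv2.circle(result, (int(a), int(b)), int(r), (0, 0, 255), 2)  # fitted gate
        cv2.circle(result, (int(a), int(b)), 2, (0, 255, 0), 3)       # gate center
cv2.imwrite("casting_box_result.png", result)
```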
Step 7, outputting the final casting box contour detection result image.
As shown in fig. 2, images b, c, d, e, f and g are compared from different angles. It is easy to see that several of these images contain many cluttered lines; the more such lines, the weaker the noise immunity of the algorithm, and the fewer such lines, the stronger the noise immunity.
From images b and c in fig. 2 it can be seen that the Canny algorithm detects the box edges poorly whether or not noise reduction is applied. Image d shows the detection result of the Laplacian algorithm, whose edges are clearer than those detected by the Canny algorithm and which shows some resistance to noise. The detection result of the present algorithm shows that the noise on the sprue face of the box body is essentially removed and the noise of the surrounding environment is also reduced to some extent, so the present algorithm resists noise more strongly than the other algorithms. Image f is the result of binarizing image e, and image g shows that, on the basis of image f, the pouring gate of the casting box is accurately detected and located through Hough circle transformation and the center point of the gate is marked.
Fig. 3 compares the accuracy of pouring gate edge detection for the casting box under the different algorithms. It can be seen that the accuracy of the present algorithm is higher than that of the other two algorithms and is also more stable across different images, giving better detection results for casting box pouring gates viewed from different angles. The recall curves in fig. 4 show that the present algorithm also has a higher recall, indicating that the detected result is closer to the true edge map.
The invention and its embodiments have been described above schematically, and the description is not limiting; what is shown in the drawings is only one embodiment of the invention, and the actual structure is not limited thereto. Therefore, if a person skilled in the art, enlightened by this disclosure and without departing from the spirit of the invention, devises structural modes and embodiments similar to this technical solution without inventive effort, they shall fall within the protection scope of the invention.
Claims (5)
1. A detection method for the profile of a casting box body in a noise environment is characterized by comprising the following steps:
step 1, inputting a to-be-detected image containing noise;
step 2, performing noise reduction processing on the input noise image by using bilateral filtering;
step 3, constructing a random structure forest;
step 4, carrying out preliminary contour detection on the denoised image by using the trained random structure forest;
step 5, carrying out binarization processing on the preliminary contour detection result;
step 6, fitting a pouring gate of the casting box body through Hough circle transformation;
and 7, outputting a final detection result image.
2. The method for detecting the profile of the casting box body in the noise environment as claimed in claim 1, wherein the specific steps of the step 2 are as follows:
2a) using a two-dimensional Gaussian function to generate a distance template and a one-dimensional Gaussian function to generate a value domain template, wherein the distance template coefficients are generated by the following formula:

d(i, j, k, l) = exp( -((i - k)^2 + (j - l)^2) / (2σ_d^2) )

where (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σ_d is the standard deviation of the Gaussian function;

2b) the value domain template coefficients are generated by the following formula:

r(i, j, k, l) = exp( -(f(i, j) - f(k, l))^2 / (2σ_r^2) )

where f(x, y) represents the pixel value of the image at point (x, y), (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σ_r is the standard deviation of the Gaussian function;

2c) multiplying the two templates to obtain the template formula of the bilateral filter:

w(i, j, k, l) = d(i, j, k, l) · r(i, j, k, l) = exp( -((i - k)^2 + (j - l)^2) / (2σ_d^2) - (f(i, j) - f(k, l))^2 / (2σ_r^2) )
3. the method for detecting the profile of the casting box body in the noise environment as claimed in claim 1, wherein the specific steps of the step 3 are as follows:
3a) establishing a decision tree: the input image data is first sampled; let N denote the number of training samples and M the number of features, the N samples are drawn with replacement, column sampling is then performed and m sub-features (m < M) are selected from the M features, and for each decision tree the sampled data are split recursively into left and right sub-trees until the leaf nodes are reached; each node j of a decision tree f_t(x) is associated with a binary partition function:

h(x, θ_j) ∈ {0, 1}

where x is the input vector and {θ_j} are independent, identically distributed random variables, j denoting the j-th node in the tree; if h(x, θ_j) = 1, x is classified into the left child node of node j, otherwise into the right child node, and the prediction output y for the input element is stored in the leaf node reached through the decision tree, i.e. the output distribution is y ∈ Y;

3b) each decision tree is trained using a recursive approach: given the training set S_j ⊂ X × Y at node j, the goal is to find by training an optimal θ_j so that the data set is well classified, for which an information gain criterion is defined:

θ_j = argmax_θ I_j

wherein the segmentation parameter θ_j is selected so that the information gain I_j is maximized; the data set is then trained recursively at the left and right child nodes, and training stops when one of the following conditions is met: a) the set maximum depth is reached, b) the information gain or the training set size reaches its threshold, c) the number of samples falling into a node is smaller than a set threshold, and the information gain formula is defined as follows:

I_j = H_entropy(S_j) − Σ_{k∈{L,R}} ( |S_j^k| / |S_j| ) · H_entropy(S_j^k)

wherein H_entropy(S) = −Σ_y p_y log(p_y) denotes the Shannon information entropy and p_y is the probability that an element labeled y appears in the set S;

3c) random forest structured output: all the structured labels y ∈ Y of the leaf nodes are mapped to a discrete label set c ∈ C, where C = {1, ..., k}:

Π: y ∈ Y → c ∈ C = {1, 2, ..., k}

the mapping process is divided into two stages: first the Y space is mapped to the Z space, Y → Z, where z = Π(y) is a binary vector with one entry for every pair of pixels of the segmentation mask y, encoding whether the two pixels carry the same label, and Z is sampled down to m dimensions, the sampled mapping being defined as Π_φ: Y → Z; the given set Z is then mapped to the discrete label set C, and before mapping from the Z space to the C space, principal component analysis (PCA) is used to reduce Z to 5 dimensions, PCA extracting the most representative features among the sample features and reducing the n sampled vectors z_1, ..., z_n in the 256-dimensional space to 5 dimensions; finally the n output labels y_1, ..., y_n ∈ Y are combined to form an ensemble model.
4. The method for detecting the profile of the casting box body in the noise environment as claimed in claim 3, wherein the specific steps of the step 4 are as follows:
4a) extracting the integral channels of the input image: the color channels, the gradient map and the gradient histograms in 4 different directions give 8 channel features in total, and since feature filters of different directions and scales have different sensitivities to edges, 13 channels of information can be extracted from an image block, namely the LUV color channels, gradient magnitude channels at 2 scales and 4 oriented gradient histogram channels; self-similarity features are then computed, and the resulting features form a feature matrix of shape (16 × 16, 13);

4b) defining the mapping function Π: y → z, where y(j) (1 ≤ j ≤ 256) denotes the j-th pixel of the mask y, so that for every pair j_1 ≠ j_2 it can be determined whether y(j_1) = y(j_2) holds; a large binary vector mapping function z = Π(y) is defined that encodes the feature point pair relation y(j_1) = y(j_2) for every pair j_1 ≠ j_2;

4c) the final casting box contour image is obtained through the edge map y′ ∈ Y′.
5. The method for detecting the profile of the casting box body in the noise environment as claimed in claim 1, wherein the specific steps of the step 6 are as follows:
6a) for the input binarized casting box contour image, each point in the coordinate space can be mapped to a corresponding trajectory curve or surface in the parameter space; for a known circle, the general equation in rectangular coordinates is:

(x − a)^2 + (y − b)^2 = r^2

wherein (a, b) are the coordinates of the circle center and r is the radius of the circle;

6b) transforming the image space equation (x − a)^2 + (y − b)^2 = r^2 gives the parameter space equation: (a − x)^2 + (b − y)^2 = r^2;

6c) the position in the parameter space where the most circles intersect is found; the circle corresponding to this intersection point is the circle passing through all the points in image space, thereby realizing the detection of the circular pouring gate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010049720.2A CN111292346B (en) | 2020-01-16 | 2020-01-16 | Method for detecting contour of casting box body in noise environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010049720.2A CN111292346B (en) | 2020-01-16 | 2020-01-16 | Method for detecting contour of casting box body in noise environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111292346A true CN111292346A (en) | 2020-06-16 |
CN111292346B CN111292346B (en) | 2023-05-12 |
Family
ID=71029047
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010049720.2A Active CN111292346B (en) | 2020-01-16 | 2020-01-16 | Method for detecting contour of casting box body in noise environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111292346B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111967526A (en) * | 2020-08-20 | 2020-11-20 | 东北大学秦皇岛分校 | Remote sensing image change detection method and system based on edge mapping and deep learning |
CN113793269A (en) * | 2021-10-14 | 2021-12-14 | 安徽理工大学 | Super-resolution image reconstruction method based on improved neighborhood embedding and prior learning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220664A (en) * | 2017-05-18 | 2017-09-29 | 南京大学 | A kind of oil bottle vanning counting method based on structuring random forest |
WO2018107492A1 (en) * | 2016-12-16 | 2018-06-21 | 深圳大学 | Intuitionistic fuzzy random forest-based method and device for target tracking |
- 2020-01-16: application CN202010049720.2A filed in China; granted as CN111292346B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018107492A1 (en) * | 2016-12-16 | 2018-06-21 | 深圳大学 | Intuitionistic fuzzy random forest-based method and device for target tracking |
CN107220664A (en) * | 2017-05-18 | 2017-09-29 | 南京大学 | A kind of oil bottle vanning counting method based on structuring random forest |
Non-Patent Citations (2)
Title |
---|
- Xu Liangyu et al.: "Sea-sky line detection based on structured forest edge detection and Hough transform", Journal of Shanghai University (Natural Science Edition) *
- Zheng Guangyuan et al.: "A survey of computer-aided detection and diagnosis systems for medical imaging", Journal of Software *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111967526A (en) * | 2020-08-20 | 2020-11-20 | 东北大学秦皇岛分校 | Remote sensing image change detection method and system based on edge mapping and deep learning |
CN111967526B (en) * | 2020-08-20 | 2023-09-22 | 东北大学秦皇岛分校 | Remote sensing image change detection method and system based on edge mapping and deep learning |
CN113793269A (en) * | 2021-10-14 | 2021-12-14 | 安徽理工大学 | Super-resolution image reconstruction method based on improved neighborhood embedding and prior learning |
CN113793269B (en) * | 2021-10-14 | 2023-10-31 | 安徽理工大学 | Super-resolution image reconstruction method based on improved neighborhood embedding and priori learning |
Also Published As
Publication number | Publication date |
---|---|
CN111292346B (en) | 2023-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110334762B (en) | Feature matching method based on quad tree combined with ORB and SIFT | |
CN111626146B (en) | Merging cell table segmentation recognition method based on template matching | |
CN107145829B (en) | Palm vein identification method integrating textural features and scale invariant features | |
CN110826408B (en) | Face recognition method by regional feature extraction | |
Akhtar et al. | Optical character recognition (OCR) using partial least square (PLS) based feature reduction: an application to artificial intelligence for biometric identification | |
CN108073940B (en) | Method for detecting 3D target example object in unstructured environment | |
CN111292346B (en) | Method for detecting contour of casting box body in noise environment | |
CN111091071B (en) | Underground target detection method and system based on ground penetrating radar hyperbolic wave fitting | |
CN104573701B (en) | A kind of automatic testing method of Tassel of Corn | |
Li et al. | Finely Crafted Features for Traffic Sign Recognition | |
CN108694411B (en) | Method for identifying similar images | |
Srivastava et al. | Drought stress classification using 3D plant models | |
Hristov et al. | A software system for classification of archaeological artefacts represented by 2D plans | |
CN109902690A (en) | Image recognition technology | |
CN112258536A (en) | Integrated positioning and dividing method for corpus callosum and lumbricus cerebellum | |
CN109829511B (en) | Texture classification-based method for detecting cloud layer area in downward-looking infrared image | |
Faska et al. | A powerful and efficient method of image segmentation based on random forest algorithm | |
CN115965613A (en) | Cross-layer connection construction scene crowd counting method based on cavity convolution | |
CN108154107A (en) | A kind of method of the scene type of determining remote sensing images ownership | |
CN112541471A (en) | Shielded target identification method based on multi-feature fusion | |
Rao et al. | Texture classification based on local features using dual neighborhood approach | |
CN117058390B (en) | High-robustness circular pointer type dial plate image state segmentation method | |
Rao et al. | Texture classification based on statistical Properties of local units | |
CN112258535B (en) | Integrated positioning and segmentation method for corpus callosum and lumbricus in ultrasonic image | |
Sadjadi | Object recognition using coding schemes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |