CN111292346A - Method for detecting contour of casting box body in noise environment - Google Patents

Method for detecting contour of casting box body in noise environment

Info

Publication number
CN111292346A
Authority
CN
China
Prior art keywords
image
casting box
box body
mapping
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010049720.2A
Other languages
Chinese (zh)
Other versions
CN111292346B (en)
Inventor
鲍士水
黄友锐
许欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University of Science and Technology
Original Assignee
Anhui University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University of Science and Technology filed Critical Anhui University of Science and Technology
Priority to CN202010049720.2A
Publication of CN111292346A
Application granted
Publication of CN111292346B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/12: Edge-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20024: Filtering details
    • G06T2207/20028: Bilateral filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30108: Industrial image inspection
    • G06T2207/30116: Casting
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for detecting the contour of a casting box body in a noise environment, belonging to the technical field of image edge detection and comprising the following steps: step 1, inputting a to-be-detected image containing noise; step 2, performing noise reduction on the input noise image using bilateral filtering; step 3, constructing a random structure forest; step 4, performing preliminary contour detection on the denoised image with the trained random structure forest; step 5, binarizing the preliminary contour detection result; step 6, fitting the pouring gate of the casting box body through Hough circle transformation; and step 7, outputting the final detection result image. The main purpose of the method is to accurately detect the straight-line contour of the casting box body while accurately fitting its circular pouring gate and precisely locating the center of that gate.

Description

Method for detecting contour of casting box body in noise environment
Technical Field
The invention belongs to the technical field of image edge detection, and particularly relates to a method for detecting the outline of a casting box body in a noise environment.
Background
In the technical field of image edge detection, accurately detecting the edge information of a noise-containing image yields a clear edge image of the object and provides favorable conditions for subsequent operations. Over the past decades a great deal of work has been done on object edge detection, and the traditional approaches can be classified by image type into gray-image contour detection, RGB-D image contour detection and color-image contour detection. Contour detection for gray images mostly exploits the abrupt change between the edge gray value and the background gray value; such abrupt changes, called roof or step changes, can be captured with first or second derivatives in the mathematical model. The Roberts, Sobel, Prewitt, Kirsch and Canny operators all use the first derivative to detect contours, while the Laplacian and LoG operators use the second derivative.
A color image carries richer chrominance and luminance information than a gray image. The edge contour of a color image can be regarded as the set of pixels where the color changes abruptly, and contour detection methods fall into two main classes: color-component output fusion methods and vector methods. An output fusion method processes each color channel of the color image with a gray-image edge detection method, then fuses the results obtained for the individual components to produce the output edge. A vector method treats each pixel of the color image as a three-dimensional vector, so the whole image becomes a two-dimensional, three-component vector field; this preserves the vector nature of the color image well, but the detected edges are still prone to discontinuities, missed detections and similar problems.
In recent years, some new edge detection algorithms have been proposed. Om Prakash Verma et al. propose an optimal fuzzy system for color image edge detection based on a bacterial foraging algorithm. Han et al. propose an algorithm for edge detection of images of high-speed moving targets in a noisy environment. Piotr Dollár et al. propose a fast edge detection method based on structured forests.
Although these methods perform well in contour detection, the casting box body sits in a noisy environment and its contour contains a circular pouring-gate contour in addition to straight lines, so accurate detection and positioning of the casting box body cannot be achieved by any single edge detection algorithm alone; yet such accurate detection and positioning is the prerequisite for accurate operation of a casting robot.
Disclosure of Invention
1. Technical problem to be solved by the invention
The invention aims to overcome the defect of inaccurate detection of the profile of the casting box in the prior art, and provides a method for detecting the profile of the casting box in a noise environment.
2. Technical scheme
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
the invention discloses a method for detecting the profile of a casting box body in a noise environment, which comprises the following steps:
step 1, inputting a to-be-detected image containing noise;
step 2, performing noise reduction processing on the input noise image by using bilateral filtering;
step 3, constructing a random structure forest;
step 4, carrying out preliminary contour detection on the denoised image by using the trained random structure forest;
step 5, carrying out binarization processing on the preliminary contour detection result;
step 6, fitting a pouring gate of the casting box body through Hough circle transformation;
and step 7, outputting a final detection result image.
As a further improvement of the invention, the specific steps of step 2 are as follows:
2a) a distance template is generated with a two-dimensional Gaussian function and a value-domain template with a one-dimensional Gaussian function, the distance template coefficients being generated as:

d(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σ_d²))

where (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other template coefficients, and σ_d is the standard deviation of the Gaussian function;
2b) the value-domain template coefficients are generated as:

r(i, j, k, l) = exp(-(f(i, j) - f(k, l))² / (2σ_r²))

where f(x, y) represents the pixel value of the image at point (x, y), (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other template coefficients, and σ_r is the standard deviation of the Gaussian function;
2c) multiplying the two templates gives the template formula of the bilateral filter:

w(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σ_d²) - (f(i, j) - f(k, l))² / (2σ_r²))
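As an illustrative aid (not part of the claimed method), the following minimal NumPy sketch evaluates the distance, value-domain and combined bilateral templates of steps 2a) to 2c) for one window center; the function name, window size and standard deviations are assumed example choices.

```python
import numpy as np

def bilateral_weights(f, k, l, half, sigma_d, sigma_r):
    """Evaluate the combined bilateral template w(i, j, k, l) around center (k, l).

    f       : 2-D grayscale image as a float array (the window must lie inside it)
    (k, l)  : center coordinates of the template window
    half    : half-width of the window, i.e. the window covers (2*half + 1)^2 pixels
    sigma_d : standard deviation of the distance (spatial) Gaussian
    sigma_r : standard deviation of the value-domain (range) Gaussian
    """
    i, j = np.mgrid[k - half:k + half + 1, l - half:l + half + 1]
    # distance template d(i, j, k, l)
    d = np.exp(-((i - k) ** 2 + (j - l) ** 2) / (2.0 * sigma_d ** 2))
    # value-domain template r(i, j, k, l)
    r = np.exp(-((f[i, j] - f[k, l]) ** 2) / (2.0 * sigma_r ** 2))
    # the bilateral template is the element-wise product of the two
    return d * r
```

For example, bilateral_weights(img, 50, 50, 3, sigma_d=3.0, sigma_r=25.0) would return the 7 × 7 weight window centered at pixel (50, 50).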
as a further improvement of the invention, the specific steps of step 3 are as follows:
3a) establishing a decision tree: first, the input image data are sampled; let N denote the number of training samples and M the number of features; the N samples are drawn with replacement, column sampling is then carried out to select m sub-features (m < M) from the M features, and for each decision tree the sampled data are recursively split into left and right sub-trees until the leaf nodes are reached; each node j of a decision tree f_t(x) is associated with a binary splitting function:

h(x, θ_j) ∈ {0, 1}

where x is the input vector, {θ_j} are independent and identically distributed random variables, and j denotes the j-th node in the tree; if h(x, θ_j) = 1, x is sent to the left child of node j, otherwise to the right child; each input element is propagated by the decision tree to a leaf node, where its predicted output y is stored, so the output distribution is y ∈ Y;
3b) each decision tree is trained recursively: given the training set S_j ⊂ X × Y at node j, the goal is to find an optimal θ_j by training so that the data set is well classified, for which an information gain criterion is defined:

I_j = H(S_j) - Σ_{k∈{L,R}} (|S_j^k| / |S_j|) · H(S_j^k)

wherein:

S_j^L = {(x, y) ∈ S_j : h(x, θ_j) = 1}

S_j^R = S_j \ S_j^L

the splitting parameter θ_j is chosen so as to maximize the information gain I_j; the data set is then used for recursive training at the left and right child nodes, and training stops when one of the following conditions is met: a) the set maximum depth is reached, b) the information gain or the training-set size reaches its threshold, c) the number of samples falling into a node is smaller than a set threshold,
the information gain formula being defined as follows:

I_j = H_entropy(S_j) - Σ_{k∈{L,R}} (|S_j^k| / |S_j|) · H_entropy(S_j^k)

wherein H_entropy(S) = -Σ_y p_y log(p_y) denotes the Shannon information entropy and p_y is the probability that an element labeled y appears in the set S;
3c) random forest structured output: all structured labels y ∈ Y of the leaf nodes are mapped to a discrete label set c ∈ C, where C = {1, 2, ..., k}:

Π: y ∈ Y → c ∈ C = {1, 2, ..., k}

the mapping process is divided into two stages; first the Y space is mapped to the Z space, i.e. Y → Z, where z = Π(y) is defined as a binary vector of dimension C(256, 2), one entry for each pair of pixel codes of the segmentation mask y, and Z is sampled down to m dimensions, the sampled mapping being defined as:

Π_φ: Y → Z

the given set Z is then mapped to the discrete label set C; before mapping from the Z space to the C space, principal component analysis (PCA) is used to reduce Z to 5 dimensions, PCA extracting the most representative features from the sample features; the n samples in the 256-dimensional space,

x_1, x_2, ..., x_n ∈ R^256,

are reduced to 5 dimensions, and finally the n output labels y_1, ..., y_n ∈ Y are combined to form an ensemble model.
As a further improvement of the invention, the specific steps of step 4 are as follows:
4a) extracting the integral channels of the input image: the 3 color channels, the gradient magnitude map and the gradient histograms in 4 different directions give 8 basic channel features; since feature filters at different orientations and scales respond to edges with different sensitivity, 13 channels of information can be extracted from each image block, namely the LUV color channels, the gradient magnitude at 2 scales, and 4 oriented gradient histogram channels per scale; self-similarity features are then computed, and the resulting features form a feature matrix of shape (16 × 16, 13);
4b) defining the mapping function Π: y → z, where y(j) (1 ≤ j ≤ 256) denotes the j-th pixel of the mask y, so that for every pair j1 ≠ j2 it can be determined whether y(j1) = y(j2) holds; a large binary vector mapping z = Π(y) is defined that encodes y(j1) = y(j2) for each pair j1 ≠ j2;
4c) obtaining the final casting box contour image from the edge map y′ ∈ Y′.
As a further improvement of the invention, the specific steps of step 6 are as follows:
6a) for the input binarized casting box contour image, a point in coordinate space can be mapped to a corresponding trajectory curve or surface in parameter space; for a circle, the general equation in rectangular coordinates is:

(x - a)² + (y - b)² = r²

wherein (a, b) are the coordinates of the center of the circle and r is its radius;
6b) transforming the image-space equation (x - a)² + (y - b)² = r² gives the parameter-space equation:

(a - x)² + (b - y)² = r²

6c) finding the position in parameter space where the most circles intersect; the circle corresponding to that intersection point is the circle passing through all of the points in image space, thereby realizing detection of the circular pouring gate.
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following remarkable effects:
the accurate detection of the casting box body profile is the premise and the basis of the accurate operation of a casting robot, but the casting box body is in a noise environment, and the profile of the casting box body comprises a straight line edge and a circular pouring gate, so that the difficulty is brought to the profile detection. Aiming at the technical problem, the invention provides a method for detecting the profile of a casting box body in a noise environment, which can accurately fit a circular pouring gate of the casting box body while accurately detecting the linear profile of the casting box body and accurately position the circle center of the circular pouring gate.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a comparison graph of the detection effect of the invention on the contour of the casting box body and the detection effect of the traditional method:
(a) the real edge map of the image,
(b) the Canny algorithm detection result without bilateral filtering,
(c) the Canny algorithm detection result after bilateral filtering,
(d) the Laplacian algorithm detection result after bilateral filtering,
(e) the random structure forest detection result after bilateral filtering,
(f) the result after bilateral filtering, random structure forest detection and binarization,
(g) the fitted result obtained through Hough circle transformation;
FIG. 3 is a graph comparing the accuracy curves of the detection results of the invention and the traditional algorithm on the contour of the casting box body;
FIG. 4 is a graph comparing the recall rate of the contour detection result of the casting box according to the present invention and the traditional algorithm.
Detailed Description
For a further understanding of the present invention, reference will now be made in detail to the following examples and accompanying drawings.
Example 1
As shown in fig. 1, the embodiment provides a method for detecting the profile of a casting box body in a noisy environment, which includes the following steps:
step 1, inputting a to-be-detected image containing noise;
the casting box image containing noise stored in the computer space in advance is read out by applying the phthon3.5 development software in the computer.
Step 2, performing noise reduction on the input casting box image by using bilateral filtering, with the following specific steps:
2a) A distance template is generated with a two-dimensional Gaussian function and a value-domain template with a one-dimensional Gaussian function. The distance template coefficients are generated as:

d(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σ_d²))

where (k, l) is the center coordinate of the template window; (i, j) are the coordinates of the other template coefficients; σ_d is the standard deviation of the Gaussian function.
2b) The value-domain template coefficients are generated as:

r(i, j, k, l) = exp(-(f(i, j) - f(k, l))² / (2σ_r²))

where f(x, y) represents the pixel value of the image to be processed at point (x, y); (k, l) is the center coordinate of the template window; (i, j) are the coordinates of the other template coefficients; σ_r is the standard deviation of the Gaussian function.
2c) Multiplying the two templates gives the template formula of the bilateral filter:

w(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σ_d²) - (f(i, j) - f(k, l))² / (2σ_r²))
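In practice, steps 1 and 2 of this embodiment can be reproduced with OpenCV's built-in bilateral filter. The sketch below is illustrative only; the file names and the parameter values d = 9, sigmaColor = 75, sigmaSpace = 75 are assumptions, not values taken from the patent.

```python
import cv2

# Step 1: read the noise-containing casting box image stored on disk
# (the file name is an assumed example).
noisy = cv2.imread("casting_box_noisy.png")

# Step 2: bilateral filtering. d is the pixel neighbourhood diameter,
# sigmaColor plays the role of sigma_r (value domain) and sigmaSpace of sigma_d (distance);
# the values used here are illustrative, not the inventors' settings.
denoised = cv2.bilateralFilter(noisy, d=9, sigmaColor=75, sigmaSpace=75)

cv2.imwrite("casting_box_denoised.png", denoised)
```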
and 3, constructing a random structure forest, and specifically comprising the following steps:
3a) establishing a decision tree: firstly, input image data is sampled, N is assumed to represent the number of training samples, M represents the number of features, replaced sampling is adopted for N samples, then column sampling is carried out, M sub-features (M & lt M) are selected from the M features, and then the sampled data is classified into left and right sub-trees in a recursive mode for each decision tree until leaf nodes. Decision tree ft(x) Each node j of (a) is associated with a binary partition function:
h(x,θj)∈{0,1}
where x is the input vector, { θ }jJ represents the jth node in the tree, if h (x, theta j) is 1, x is classified into the left side node of the node j, and the other side is classified into the right side node until the leaf node, and the process is ended. And predicting output Y of the input elements through the decision tree and storing the output Y in leaf nodes, namely the output distribution is Y belonging to Y. Dividing function h (x, theta)j) It is very complicated, but it is common practice to compare the input x of a single feature dimension with a threshold when θ ═ k, τ and h (x, θ) ═ x (k) < τ],[·]Representing an indicator function; another common method is θ ═ k1, k2, τ and h (x, θ) [ [ x (k1) -x (k2) < τ]。
3b) Each decision tree is trained using a recursive approach: training set S on given node jjE.x Y, the goal is to find an optimal theta by trainingjSo that the data set gets good classification results. There is a need to define an information gain criterionThen:
Figure BDA0002370704900000062
wherein:
Figure BDA0002370704900000063
Figure BDA0002370704900000064
selecting a segmentation parameter θjIs to make the information gain IjAt maximum, the data set is used to perform recursive training at the left and right nodes, and the training is stopped when one of the following conditions is met: a) reaching the set maximum depth; b) the information gain or the training set scale reaches the threshold value; c) the number of samples falling into a node is less than a set threshold.
The information gain formula is defined as follows:
Figure BDA0002370704900000065
wherein: hentropy(S)=-∑ypylog(py) Representing Shannon information entropy, pyIs the probability of occurrence in the set s of elements labeled y.
3c) Random forest structured output: the structured output space is generally high-dimensional and complex, so all structured labels y ∈ Y of the leaf nodes can be mapped to a discrete label set c ∈ C, where C = {1, 2, ..., k}:

Π: y ∈ Y → c ∈ C = {1, 2, ..., k}

Computing the information gain relies on measuring similarity over Y, but for a structured output space similarity over Y is hard to compute, so an intermediate mapping from Y to a space Z is defined, in which distances are relatively easy to measure. The mapping process is therefore divided into two stages. First the Y space is mapped to the Z space, i.e. Y → Z, where z = Π(y) is defined as a binary vector of dimension C(256, 2), one entry for each pair of pixel codes of the segmentation mask y. Because computing z for every y is still expensive, Z is sampled down to m dimensions, and the sampled mapping is defined as:

Π_φ: Y → Z

Randomness is injected into the sampling of Z, which ensures sufficient diversity among the trees.
Before mapping from the Z space to the C space, the dimension of Z is reduced to 5 by principal component analysis (PCA), which extracts the most representative features from the sample features; the n samples in the 256-dimensional space,

x_1, x_2, ..., x_n ∈ R^256,

are reduced to 5 dimensions. There are two ways to map a given set Z to the discrete label set C: a) cluster Z into k clusters with the k-means method; b) quantize Z with a PCA of log2(k) dimensions, assigning the discrete label c according to the quadrant into which z falls. The two methods behave similarly, but the latter is faster; a PCA quantization with k = 2 is used herein.
In order to obtain a single output result, the n output labels y_1, ..., y_n ∈ Y must be combined into an ensemble model. Using the m-dimensional sampled mapping Π_φ, z_i = Π_φ(y_i) is computed for each label i, and the y_k whose z_k has the smallest total distance to all the other z_i (the medoid) is chosen as the output label. The ensemble model thus depends on m and on the chosen mapping Π_φ. A sketch of this y → z → c mapping is given below.
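The following rough sketch illustrates the y → z → c mapping of step 3c). It samples m pixel pairs of each 16 × 16 mask, records whether the two pixels share a segment label, reduces the result with PCA and quantizes with k = 2; the pair count m, the random seed and the use of scikit-learn are assumptions made for illustration, not the inventors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def masks_to_z(masks, m=256):
    """Map each 16x16 segmentation mask y to a sampled binary vector z = Pi_phi(y).

    masks : array of shape (n, 16, 16) with integer segment labels.
    Each coordinate of z records, for one randomly chosen pixel pair (j1, j2),
    whether y(j1) == y(j2); only m of the C(256, 2) possible pairs are sampled.
    """
    flat = masks.reshape(len(masks), -1)              # (n, 256)
    j1 = rng.integers(0, flat.shape[1], size=m)
    j2 = rng.integers(0, flat.shape[1], size=m)
    return (flat[:, j1] == flat[:, j2]).astype(float)

def z_to_discrete_labels(z, n_components=5):
    """Reduce z with PCA; with k = 2 the sign of the first principal
    component already yields the discrete label c."""
    z_low = PCA(n_components=n_components).fit_transform(z)
    return (z_low[:, 0] > 0).astype(int)
```

For example, c = z_to_discrete_labels(masks_to_z(masks)) assigns one of two discrete labels to every structured label y, which is what the information gain above is then computed on.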
Step 4, performing preliminary contour detection on the noise-reduced casting box image by using the trained random structure forest, with the following specific steps:
4a) Extracting the integral channels of the input casting box image: feature filters at different orientations and scales respond to edges with different sensitivity, so 13 channels of information can be extracted from each image block, namely the LUV color channels, the gradient magnitude at 2 scales, and 4 oriented gradient histogram channels per scale; self-similarity features are then computed, and the resulting features form a feature matrix of shape (16 × 16, 13). A simplified sketch of this channel extraction is given after step 4c).
4b) Defining the mapping function Π: y → z. Here y(j) (1 ≤ j ≤ 256) denotes the j-th pixel of the mask y, so for every pair j1 ≠ j2 it can be determined whether y(j1) = y(j2) holds; a large binary vector mapping z = Π(y) is defined that encodes y(j1) = y(j2) for each pair j1 ≠ j2.
4c) The output of the random structure forest becomes more robust by fusing the outputs of multiple uncorrelated decision trees. Efficiently fusing multiple segmentation masks y ∈ Y is difficult, so the edge map y′ ∈ Y′ is used herein to obtain the final casting box contour image.
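The sketch below illustrates the channel idea of step 4a) at a single scale (3 LUV channels, 1 gradient magnitude channel and 4 oriented gradient channels, i.e. 8 of the 13 channels). The second scale, the self-similarity features and all parameter choices are omitted, so this is an assumption-laden simplification rather than the full feature extractor.

```python
import cv2
import numpy as np

def patch_channels(patch_bgr):
    """Single-scale channel features for one image patch (e.g. a 16x16 BGR patch)."""
    luv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2Luv).astype(np.float32)
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx ** 2 + gy ** 2)                   # gradient magnitude channel
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # orientation folded into [0, pi)
    # split the magnitude into 4 orientation channels (a crude oriented histogram)
    oriented = [mag * ((ang >= q * np.pi / 4) & (ang < (q + 1) * np.pi / 4))
                for q in range(4)]
    channels = [luv[..., c] for c in range(3)] + [mag] + oriented
    return np.stack(channels, axis=-1)                 # shape (H, W, 8)
```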
Step 5, binarizing the contour image detected by the random structure forest, with the optimal threshold found through repeated experiments. Pixels whose gray value is below the threshold are set to 0 and pixels at or above it are set to 255, yielding a binary image that reflects the global and local characteristics of the image and segments the image contour from the background, as sketched below.
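A sketch of this thresholding step using OpenCV; the file names and the threshold value 60 are only assumed examples, since the patent selects the optimal threshold experimentally.

```python
import cv2

# soft edge map produced by the structured forest, stored as an 8-bit grayscale image
edge_map = cv2.imread("forest_edge_map.png", cv2.IMREAD_GRAYSCALE)

# pixels below the threshold become 0, pixels at or above it become 255;
# 60 is an assumed example value, not the experimentally tuned threshold
_, binary = cv2.threshold(edge_map, 60, 255, cv2.THRESH_BINARY)
cv2.imwrite("casting_box_binary.png", binary)
```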
Step 6, fitting a pouring gate of the casting box body through Hough circle transformation, and specifically comprising the following steps:
6a) When detecting curves with the Hough transform, the key step is to write the transformation from the image coordinate space to the parameter space. A point in the coordinate space of the input binarized casting box contour image can be mapped to a corresponding trajectory curve or surface in the parameter space. For a circle, the general equation in rectangular coordinates is:

(x - a)² + (y - b)² = r²

wherein (a, b) are the coordinates of the center of the circle and r is its radius.
6b) Transforming the image-space equation (x - a)² + (y - b)² = r² gives the parameter-space equation:

(a - x)² + (b - y)² = r²

6c) Finding the position in parameter space where the most circles intersect; the circle corresponding to that intersection point is the circle passing through all of the points in image space, which realizes detection of the circular pouring gate. An illustrative sketch of this step follows.
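Step 6 corresponds closely to OpenCV's Hough circle transform. The sketch below is illustrative; every numeric parameter (dp, minDist, param1, param2 and the radius bounds) and the file names are assumptions rather than the inventors' settings.

```python
import cv2
import numpy as np

binary = cv2.imread("casting_box_binary.png", cv2.IMREAD_GRAYSCALE)

# vote in the (a, b, r) parameter space and keep the strongest circle;
# all numeric parameters here are assumed example values
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30, minRadius=20, maxRadius=200)

result = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
if circles is not None:
    a, b, r = [int(v) for v in np.round(circles[0, 0])]
    cv2.circle(result, (a, b), r, (0, 0, 255), 2)   # fitted pouring-gate circle
    cv2.circle(result, (a, b), 2, (0, 255, 0), 3)   # mark the center point
cv2.imwrite("casting_box_result.png", result)
```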
Step 7, outputting the final casting box contour detection result image.
As shown in Fig. 2, images (b) through (g) are compared from different angles. Many disordered lines can be seen in images (b) to (e); the more such lines, the weaker the noise immunity of the algorithm, and the fewer such lines, the stronger its noise immunity.
From images (b) and (c) in Fig. 2 it can be seen that the Canny algorithm detects the box edges poorly whether or not noise reduction is applied. Image (d) shows the Laplacian result; the edges it detects are clearer than those of the Canny algorithm and it has some resistance to noise. Image (e) shows that after detection by the proposed algorithm the noise on the pouring-gate surface of the box is essentially removed and the noise of the surrounding environment is also reduced to some extent, so the proposed algorithm resists noise more strongly than the other algorithms. Image (f) is the result of binarizing image (e), and image (g), obtained from image (f) by the Hough circle transform, accurately detects and locates the pouring gate of the casting box and marks its center point.
Fig. 3 compares the accuracy of casting-box pouring-gate edge detection under different algorithms. The accuracy of the proposed algorithm is higher than that of the other two algorithms and remains more stable across different images, giving better detection results for pouring gates viewed from different angles. The recall ratios shown in Fig. 4 indicate that the proposed algorithm has a higher recall, meaning that its detection result is closer to the real edge map.
The present invention and its embodiments have been described above schematically and without limitation; what is shown in the drawings is only one embodiment of the invention, and the actual structure is not limited thereto. Therefore, if a person skilled in the art, having received this teaching and without departing from the spirit of the invention, designs structural modes and embodiments similar to this technical solution without inventive effort, they shall fall within the scope of protection of the invention.

Claims (5)

1. A detection method for the profile of a casting box body in a noise environment is characterized by comprising the following steps:
step 1, inputting a to-be-detected image containing noise;
step 2, performing noise reduction processing on the input noise image by using bilateral filtering;
step 3, constructing a random structure forest;
step 4, carrying out preliminary contour detection on the denoised image by using the trained random structure forest;
step 5, carrying out binarization processing on the preliminary contour detection result;
step 6, fitting a pouring gate of the casting box body through Hough circle transformation;
and step 7, outputting a final detection result image.
2. The method for detecting the profile of the casting box body in the noise environment as claimed in claim 1, wherein the specific steps of the step 2 are as follows:
2a) a distance template is generated with a two-dimensional Gaussian function and a value-domain template with a one-dimensional Gaussian function, the distance template coefficients being generated as:

d(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σ_d²))

where (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other template coefficients, and σ_d is the standard deviation of the Gaussian function;
2b) the value-domain template coefficients are generated as:

r(i, j, k, l) = exp(-(f(i, j) - f(k, l))² / (2σ_r²))

where f(x, y) represents the pixel value of the image at point (x, y), (k, l) is the center coordinate of the template window, (i, j) are the coordinates of the other template coefficients, and σ_r is the standard deviation of the Gaussian function;
2c) multiplying the two templates gives the template formula of the bilateral filter:

w(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σ_d²) - (f(i, j) - f(k, l))² / (2σ_r²))
3. the method for detecting the profile of the casting box body in the noise environment as claimed in claim 1, wherein the specific steps of the step 3 are as follows:
3a) establishing a decision tree: first, the input image data are sampled; let N denote the number of training samples and M the number of features; the N samples are drawn with replacement, column sampling is then carried out to select m sub-features (m < M) from the M features, and for each decision tree the sampled data are recursively split into left and right sub-trees until the leaf nodes are reached; each node j of a decision tree f_t(x) is associated with a binary splitting function:

h(x, θ_j) ∈ {0, 1}

where x is the input vector, {θ_j} are independent and identically distributed random variables, and j denotes the j-th node in the tree; if h(x, θ_j) = 1, x is sent to the left child of node j, otherwise to the right child; each input element is propagated by the decision tree to a leaf node, where its predicted output y is stored, so the output distribution is y ∈ Y;
3b) each decision tree is trained recursively: given the training set S_j ⊂ X × Y at node j, the goal is to find an optimal θ_j by training so that the data set is well classified, for which an information gain criterion is defined:

I_j = H(S_j) - Σ_{k∈{L,R}} (|S_j^k| / |S_j|) · H(S_j^k)

wherein:

S_j^L = {(x, y) ∈ S_j : h(x, θ_j) = 1} and S_j^R = S_j \ S_j^L

the splitting parameter θ_j is chosen so as to maximize the information gain I_j; the data set is then used for recursive training at the left and right child nodes, and training stops when one of the following conditions is met: a) the set maximum depth is reached, b) the information gain or the training-set size reaches its threshold, c) the number of samples falling into a node is smaller than a set threshold, the information gain formula being defined as follows:

I_j = H_entropy(S_j) - Σ_{k∈{L,R}} (|S_j^k| / |S_j|) · H_entropy(S_j^k)

wherein H_entropy(S) = -Σ_y p_y log(p_y) denotes the Shannon information entropy and p_y is the probability that an element labeled y appears in the set S;
3c) random forest structured output: all structured labels y ∈ Y of the leaf nodes are mapped to a discrete label set c ∈ C, where C = {1, 2, ..., k}:

Π: y ∈ Y → c ∈ C = {1, 2, ..., k}

the mapping process is divided into two stages; first the Y space is mapped to the Z space, i.e. Y → Z, where z = Π(y) is defined as a binary vector of dimension C(256, 2), one entry for each pair of pixel codes of the segmentation mask y, and Z is sampled down to m dimensions, the sampled mapping being defined as:

Π_φ: Y → Z

the given set Z is then mapped to the discrete label set C; before mapping from the Z space to the C space, principal component analysis (PCA) is used to reduce Z to 5 dimensions, PCA extracting the most representative features from the sample features; the n samples in the 256-dimensional space,

x_1, x_2, ..., x_n ∈ R^256,

are reduced to 5 dimensions, and finally the n output labels y_1, ..., y_n ∈ Y are combined to form an ensemble model.
4. The method for detecting the profile of the casting box body in the noise environment as claimed in claim 3, wherein the specific steps of the step 4 are as follows:
4a) extracting the integral channels of the input image: the 3 color channels, the gradient magnitude map and the gradient histograms in 4 different directions give 8 basic channel features; since feature filters at different orientations and scales respond to edges with different sensitivity, 13 channels of information can be extracted from each image block, namely the LUV color channels, the gradient magnitude at 2 scales, and 4 oriented gradient histogram channels per scale; self-similarity features are then computed, and the resulting features form a feature matrix of shape (16 × 16, 13);
4b) defining the mapping function Π: y → z, where y(j) (1 ≤ j ≤ 256) denotes the j-th pixel of the mask y, so that for every pair j1 ≠ j2 it can be determined whether y(j1) = y(j2) holds; a large binary vector mapping z = Π(y) is defined that encodes y(j1) = y(j2) for each pair j1 ≠ j2;
4c) obtaining the final casting box contour image from the edge map y′ ∈ Y′.
5. The method for detecting the profile of the casting box body in the noise environment as claimed in claim 1, wherein the specific steps of the step 6 are as follows:
6a) for the input binarized casting box contour image, a point in coordinate space can be mapped to a corresponding trajectory curve or surface in parameter space; for a circle, the general equation in rectangular coordinates is:

(x - a)² + (y - b)² = r²

wherein (a, b) are the coordinates of the center of the circle and r is its radius;
6b) transforming the image-space equation (x - a)² + (y - b)² = r² gives the parameter-space equation: (a - x)² + (b - y)² = r²;
6c) finding the position in parameter space where the most circles intersect; the circle corresponding to that intersection point is the circle passing through all of the points in image space, thereby realizing detection of the circular pouring gate.
CN202010049720.2A 2020-01-16 2020-01-16 Method for detecting contour of casting box body in noise environment Active CN111292346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010049720.2A CN111292346B (en) 2020-01-16 2020-01-16 Method for detecting contour of casting box body in noise environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010049720.2A CN111292346B (en) 2020-01-16 2020-01-16 Method for detecting contour of casting box body in noise environment

Publications (2)

Publication Number Publication Date
CN111292346A true CN111292346A (en) 2020-06-16
CN111292346B CN111292346B (en) 2023-05-12

Family

ID=71029047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010049720.2A Active CN111292346B (en) 2020-01-16 2020-01-16 Method for detecting contour of casting box body in noise environment

Country Status (1)

Country Link
CN (1) CN111292346B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967526A (en) * 2020-08-20 2020-11-20 东北大学秦皇岛分校 Remote sensing image change detection method and system based on edge mapping and deep learning
CN113793269A (en) * 2021-10-14 2021-12-14 安徽理工大学 Super-resolution image reconstruction method based on improved neighborhood embedding and prior learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220664A (en) * 2017-05-18 2017-09-29 南京大学 A kind of oil bottle vanning counting method based on structuring random forest
WO2018107492A1 (en) * 2016-12-16 2018-06-21 深圳大学 Intuitionistic fuzzy random forest-based method and device for target tracking

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018107492A1 (en) * 2016-12-16 2018-06-21 深圳大学 Intuitionistic fuzzy random forest-based method and device for target tracking
CN107220664A (en) * 2017-05-18 2017-09-29 南京大学 A kind of oil bottle vanning counting method based on structuring random forest

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xu Liangyu et al.: "Sea-sky line detection based on structured forest edge detection and Hough transform", Journal of Shanghai University (Natural Science Edition) *
Zheng Guangyuan et al.: "A survey of computer-aided detection and diagnosis systems for medical imaging", Journal of Software *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967526A (en) * 2020-08-20 2020-11-20 东北大学秦皇岛分校 Remote sensing image change detection method and system based on edge mapping and deep learning
CN111967526B (en) * 2020-08-20 2023-09-22 东北大学秦皇岛分校 Remote sensing image change detection method and system based on edge mapping and deep learning
CN113793269A (en) * 2021-10-14 2021-12-14 安徽理工大学 Super-resolution image reconstruction method based on improved neighborhood embedding and prior learning
CN113793269B (en) * 2021-10-14 2023-10-31 安徽理工大学 Super-resolution image reconstruction method based on improved neighborhood embedding and priori learning

Also Published As

Publication number Publication date
CN111292346B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN111626146B (en) Merging cell table segmentation recognition method based on template matching
CN107145829B (en) Palm vein identification method integrating textural features and scale invariant features
CN110826408B (en) Face recognition method by regional feature extraction
Akhtar et al. Optical character recognition (OCR) using partial least square (PLS) based feature reduction: an application to artificial intelligence for biometric identification
CN108073940B (en) Method for detecting 3D target example object in unstructured environment
CN111292346B (en) Method for detecting contour of casting box body in noise environment
CN111091071B (en) Underground target detection method and system based on ground penetrating radar hyperbolic wave fitting
CN104573701B (en) A kind of automatic testing method of Tassel of Corn
Li et al. Finely Crafted Features for Traffic Sign Recognition
CN108694411B (en) Method for identifying similar images
Srivastava et al. Drought stress classification using 3D plant models
Hristov et al. A software system for classification of archaeological artefacts represented by 2D plans
CN109902690A (en) Image recognition technology
CN112258536A (en) Integrated positioning and dividing method for corpus callosum and lumbricus cerebellum
CN109829511B (en) Texture classification-based method for detecting cloud layer area in downward-looking infrared image
Faska et al. A powerful and efficient method of image segmentation based on random forest algorithm
CN115965613A (en) Cross-layer connection construction scene crowd counting method based on cavity convolution
CN108154107A (en) A kind of method of the scene type of determining remote sensing images ownership
CN112541471A (en) Shielded target identification method based on multi-feature fusion
Rao et al. Texture classification based on local features using dual neighborhood approach
CN117058390B (en) High-robustness circular pointer type dial plate image state segmentation method
Rao et al. Texture classification based on statistical Properties of local units
CN112258535B (en) Integrated positioning and segmentation method for corpus callosum and lumbricus in ultrasonic image
Sadjadi Object recognition using coding schemes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant