CN111292346A - Method for detecting contour of casting box body in noise environment - Google Patents

Method for detecting contour of casting box body in noise environment

Info

Publication number
CN111292346A
Authority
CN
China
Prior art keywords
image
casting box
space
node
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010049720.2A
Other languages
Chinese (zh)
Other versions
CN111292346B (en)
Inventor
鲍士水
黄友锐
许欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University of Science and Technology
Original Assignee
Anhui University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University of Science and Technology filed Critical Anhui University of Science and Technology
Priority to CN202010049720.2A priority Critical patent/CN111292346B/en
Publication of CN111292346A publication Critical patent/CN111292346A/en
Application granted granted Critical
Publication of CN111292346B publication Critical patent/CN111292346B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20028 Bilateral filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30116 Casting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for detecting the contour of a casting box in a noisy environment, which belongs to the technical field of image edge detection and comprises the following steps: step 1, input the noise-containing image to be detected; step 2, use bilateral filtering to denoise the input noisy image; step 3, construct a random structured forest; step 4, use the trained random structured forest to perform preliminary contour detection on the denoised image; step 5, binarize the preliminary contour detection result; step 6, fit the pouring gate of the casting box through the Hough circle transform; and step 7, output the final detection result image. The main purpose of the method is to accurately detect the straight-line contour of the casting box while also accurately fitting its circular pouring gate and precisely locating the center of the gate.

Description

Method for detecting the contour of a casting box in a noisy environment

Technical Field

The invention belongs to the technical field of image edge detection, and more particularly relates to a method for detecting the contour of a casting box in a noisy environment.

Background Art

In the field of image edge detection, the goal is to accurately detect the edge information of a noisy image, obtain a clear edge image of the object, and thereby provide favorable conditions for subsequent operations. Over the past few decades, a great deal of work has been done on object edge detection. Traditional methods can be divided by image type into grayscale image contour detection, RGB-D image contour detection, and color image contour detection. Common contour detection for grayscale images mostly exploits the abrupt change between edge gray values and background gray values; these abrupt changes are called roof or step changes and can be modeled mathematically with first- and second-order derivatives. The Robert, Sobel, Prewitt, Krisch, and Canny operators use first-order derivatives to detect contours, while the Laplacian and LoG operators use second-order derivatives.

Color images are richer in chrominance and luminance information than grayscale images. The edge contour of a color image can be regarded as the set of pixels where the color changes abruptly. Contour detection methods for color images fall mainly into two classes: color-component output fusion methods and vector methods. The output fusion method processes each color channel of the image with a grayscale edge-detection method and then fuses the per-component results to obtain the output edge; however, this kind of algorithm ignores the correlation between the components, which easily causes edge loss, and no well-established fusion method currently exists. The vector method treats each pixel of the color image as a three-dimensional vector, so the entire image becomes a two-dimensional, three-component vector field; this preserves the vector characteristics of the color image well, but it is prone to problems such as discontinuous detected edges and missed detections.

In recent years, several new edge detection algorithms have been proposed. Om Prakash Verma et al. proposed an optimal fuzzy system for color image edge detection based on a bacterial foraging algorithm. Fangfang Han et al. proposed an algorithm for edge detection of high-speed moving target images in noisy environments. Piotr Dollár et al. proposed a fast edge-detection method based on structured forests.

Although the above methods achieve good results in contour detection, the casting box sits in a noisy environment, and its contour contains a circular gate outline in addition to straight lines. Therefore, no single edge-detection algorithm can by itself solve the precise detection and positioning of the casting box, yet such precise detection and positioning is the prerequisite for precise operation of the casting robot.

Summary of the Invention

1. Technical Problem to Be Solved by the Invention

The purpose of the present invention is to overcome the inaccurate contour detection of casting boxes in the prior art and to provide a method for detecting the contour of a casting box in a noisy environment. The technical solution of the present invention accurately detects the straight-line contour of the casting box while also accurately fitting the circular gate of the box and precisely locating the center of the circular gate.

2. Technical Solution

In order to achieve the above object, the technical solution provided by the present invention is as follows:

The method of the present invention for detecting the contour of a casting box in a noisy environment comprises the following steps:

Step 1. Input the noise-containing image to be detected;

Step 2. Use bilateral filtering to denoise the input noisy image;

Step 3. Construct a random structured forest;

Step 4. Use the trained random structured forest to perform preliminary contour detection on the denoised image;

Step 5. Binarize the preliminary contour detection result;

Step 6. Fit the gate of the casting box through the Hough circle transform;

Step 7. Output the final detection result image.

As a further improvement of the present invention, the specific steps of step 2 are as follows:

2a) Use a two-dimensional Gaussian function to generate the distance template and a one-dimensional Gaussian function to generate the range template. The distance template coefficients are generated by:

d(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σd²))

where (k, l) are the center coordinates of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σd is the standard deviation of the Gaussian function;

2b) The range template coefficients are generated by:

r(i, j, k, l) = exp(-‖f(i, j) - f(k, l)‖² / (2σr²))

where f(x, y) denotes the pixel value of the image at point (x, y), (k, l) are the center coordinates of the template window, (i, j) are the coordinates of the other coefficients of the template window, and σr is the standard deviation of the Gaussian function;

2c) Multiplying the two templates gives the bilateral filter template:

w(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σd²) - ‖f(i, j) - f(k, l)‖² / (2σr²))
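For illustration only (not part of the patent text), the combined template above can be sketched in Python; here `f` is a grayscale image given as a list of rows, and the window radius, σd, and σr are free parameters:

```python
import math

def bilateral_weight(i, j, k, l, f, sigma_d, sigma_r):
    # Product of the distance (spatial) Gaussian and the range Gaussian,
    # matching the combined template w(i, j, k, l) above.
    spatial = math.exp(-((i - k) ** 2 + (j - l) ** 2) / (2.0 * sigma_d ** 2))
    value = math.exp(-((f[i][j] - f[k][l]) ** 2) / (2.0 * sigma_r ** 2))
    return spatial * value

def bilateral_filter_pixel(f, k, l, radius, sigma_d, sigma_r):
    # Normalized weighted average over a square window centered at (k, l).
    num = den = 0.0
    for i in range(max(0, k - radius), min(len(f), k + radius + 1)):
        for j in range(max(0, l - radius), min(len(f[0]), l + radius + 1)):
            w = bilateral_weight(i, j, k, l, f, sigma_d, sigma_r)
            num += w * f[i][j]
            den += w
    return num / den
```

On a flat region every range weight equals 1 and the filter leaves the value unchanged; across a strong edge the range Gaussian suppresses the far side, which is why bilateral filtering denoises while preserving edges.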

As a further improvement of the present invention, the specific steps of step 3 are as follows:

3a) Build the decision trees: first, sample the input image data. Let N denote the number of training samples and M the number of features. Draw N samples with replacement, then perform column sampling by selecting m sub-features from the M features (m << M). For each decision tree, recursively assign the sampled data to the left and right subtrees until a leaf node is reached. Each node j of the decision tree ft(x) is associated with a binary split function:

h(x, θj) ∈ {0, 1}

where x is the input vector, {θj} are independent and identically distributed random variables, and j denotes the j-th node of the tree. If h(x, θj) = 1, x is routed to the left child of node j; otherwise it is routed to the right child. An input element traverses the decision tree and its predicted output y is stored in a leaf node, i.e. the output distribution is y ∈ Y;

3b) Train each decision tree recursively: for the training set Sj ⊂ X × Y at a given node j, the goal of training is to find an optimal θj that yields a good classification of the data set. This requires an information gain criterion:

θj = arg max Ij(Sj, Sj^L, Sj^R)

where:

Sj^L = {(x, y) ∈ Sj | h(x, θj) = 1},  Sj^R = Sj \ Sj^L

The criterion for selecting the split parameter θj is to maximize the information gain Ij. The data set is used to train the left and right nodes recursively, and training stops when one of the following conditions is met: a) the set maximum depth is reached; b) the information gain or the training-set size reaches its threshold; c) the number of samples falling into the node is below the set threshold.

The information gain formula is defined as follows:

Ij = H(Sj) - Σk∈{L,R} (|Sj^k| / |Sj|) H(Sj^k)

where Hentropy(S) = -Σy py log(py) is the Shannon information entropy and py is the probability that an element with label y appears in the set S;
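As an illustration (ours, not the patent's), the entropy and gain formulas can be written directly in Python; the label lists stand in for the sets Sj, Sj^L, and Sj^R:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    # H(S) = -sum over y of p_y * log(p_y), with p_y the frequency of label y in S.
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    # I_j = H(S_j) - sum over children k of |S_j^k| / |S_j| * H(S_j^k).
    n = len(parent)
    return shannon_entropy(parent) - sum(
        len(child) / n * shannon_entropy(child) for child in (left, right)
    )
```

A split that separates the two classes perfectly recovers the full parent entropy as gain; a split that leaves both children mixed in the same proportions as the parent has gain 0.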

3c) Structured output of the random forest: map all structured labels y ∈ Y at the leaf nodes to a discrete label set c ∈ C, where C = {1, ..., k}. The mapping is defined as follows:

Π: y ∈ Y → c ∈ C = {1, 2, ..., k}

The mapping process is divided into two stages. First, the space Y is mapped to an intermediate space Z, i.e. Y → Z, where the mapping z = Π(y) is defined as a large binary vector that encodes, for each pair of pixels of the segmentation mask y, whether the two pixels have the same label. Z is then sampled down to m dimensions, and the sampled mapping is defined as Πφ: Y → Z. Next, the given set Z is mapped to the discrete label set C. Before mapping from the Z space to the C space, principal component analysis (PCA) is used to reduce the dimensionality of Z to 5; PCA extracts the most representative features of the samples, reducing the n samples z1, ..., zn of the 256-dimensional space to 5 dimensions. Finally, the n output labels y1, ..., yn ∈ Y are combined to form an ensemble model.

As a further improvement of the present invention, the specific steps of step 4 are as follows:

4a) Extract the integral channels of the input image: 3 color channels, 1 gradient map, and 4 gradient histograms in different directions give 8 channel features. Since feature filters of different directions and scales differ in their sensitivity to edges, 13 channels of information can be extracted on each image patch: the LUV color channels, 1 gradient-magnitude channel at each of 2 scales, and 4 oriented gradient histogram channels. Self-similarity features are then computed, and the resulting feature is a matrix of shape (16×16, 13);

4b) Define the mapping function Π: y → z. Let y(j) (1 ≤ j ≤ 256) denote the j-th pixel of the mask y, so that one can compute whether y(j1) = y(j2) holds for j1 ≠ j2. From this, a large binary vector mapping function z = Π(y) is defined that encodes y(j1) = y(j2) for every pair of feature points with j1 ≠ j2;

4c) Obtain the final casting-box contour image through the edge map y′ ∈ Y′.

As a further improvement of the present invention, the specific steps of step 6 are as follows:

6a) For the input binarized contour image of the casting box, a point in its coordinate space can be mapped to a corresponding trajectory curve or surface in the parameter space. For the known circle equation, the general equation in rectangular coordinates is:

(x - a)² + (y - b)² = r²

where (a, b) are the coordinates of the circle center and r is the radius of the circle;

6b) Transform the image-space equation (x - a)² + (y - b)² = r² to obtain the parameter-space equation:

(a - x)² + (b - y)² = r²;

6c) Find the position in the parameter space where the most circles intersect; the circle corresponding to this intersection point is the circle in the image space that passes through all the points, thereby detecting the circular gate.

3. Beneficial Effects

Compared with the prior art, the technical solution provided by the present invention has the following remarkable effects:

Accurate detection of the casting-box contour is the premise and basis for precise operation of the casting robot. However, the casting box sits in a noisy environment, and its contour contains both straight edges and a circular gate, which makes contour detection difficult. In view of these technical problems, the present invention proposes a method for detecting the contour of a casting box in a noisy environment that accurately detects the straight-line contour of the casting box while also accurately fitting the circular gate of the box and precisely locating the center of the circular gate.

Brief Description of the Drawings

Figure 1 is the flowchart of the present invention;

Figure 2 compares the contour detection results of the present invention on the casting box with those of traditional methods:

(a) ground-truth edge map of the image,

(b) detection result of the Canny algorithm without bilateral filtering,

(c) detection result of the Canny algorithm with bilateral filtering,

(d) detection result of the Laplacian algorithm with bilateral filtering,

(e) detection result of the random structured forest with bilateral filtering,

(f) detection result of the random structured forest with bilateral filtering, after binarization,

(g) fitting result of the Hough circle transform;

Figure 3 compares the precision curves of the present invention and of traditional algorithms on the casting-box contour detection results;

Figure 4 compares the recall curves of the present invention and of traditional algorithms on the casting-box contour detection results.

Detailed Description of the Embodiments

To further explain the content of the present invention, the present invention is described in detail below with reference to the accompanying drawings and an embodiment.

Embodiment 1

As shown in Fig. 1, this embodiment provides a method for detecting the contour of a casting box in a noisy environment, comprising the following steps:

Step 1: input the noise-containing image to be detected.

The noise-containing casting-box image stored in advance on the computer is read out using software developed in Python 3.5.

Step 2: use bilateral filtering to denoise the input casting-box image. The specific steps are as follows:

2a) Use a two-dimensional Gaussian function to generate the distance template and a one-dimensional Gaussian function to generate the range template. The distance template coefficients are generated by:

d(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σd²))

where (k, l) are the center coordinates of the template window; (i, j) are the coordinates of the other coefficients of the template window; σd is the standard deviation of the Gaussian function.

2b) The range template coefficients are generated by:

r(i, j, k, l) = exp(-‖f(i, j) - f(k, l)‖² / (2σr²))

where f(x, y) denotes the pixel value of the image to be processed at point (x, y); (k, l) are the center coordinates of the template window; (i, j) are the coordinates of the other coefficients of the template window; σr is the standard deviation of the Gaussian function.

2c) Multiplying the two templates gives the bilateral filter template:

w(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σd²) - ‖f(i, j) - f(k, l)‖² / (2σr²))

Step 3: construct the random structured forest. The specific steps are as follows:

3a) Build the decision trees: first, sample the input image data. Let N denote the number of training samples and M the number of features. Draw N samples with replacement, then perform column sampling by selecting m sub-features from the M features (m << M). For each decision tree, recursively assign the sampled data to the left and right subtrees until a leaf node is reached. Each node j of the decision tree ft(x) is associated with a binary split function:

h(x, θj) ∈ {0, 1}

where x is the input vector, {θj} are independent and identically distributed random variables, and j denotes the j-th node of the tree. If h(x, θj) = 1, x is routed to the left child of node j; otherwise it is routed to the right child, until a leaf node is reached and the process ends. The input element traverses the decision tree and its predicted output y is stored in a leaf node, i.e. the output distribution is y ∈ Y. The split function h(x, θj) can be very complex, but a common choice is to compare a single feature dimension of the input x with a threshold: θ = (k, τ) and h(x, θ) = [x(k) < τ], where [·] denotes the indicator function. Another common choice is θ = (k1, k2, τ) and h(x, θ) = [x(k1) - x(k2) < τ].
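A minimal sketch (illustrative, not from the patent) of the two threshold parameterisations described above:

```python
def make_split(theta):
    # theta = (k, tau):      h(x) = [x(k) < tau]
    # theta = (k1, k2, tau): h(x) = [x(k1) - x(k2) < tau]
    if len(theta) == 2:
        k, tau = theta
        return lambda x: 1 if x[k] < tau else 0
    k1, k2, tau = theta
    return lambda x: 1 if x[k1] - x[k2] < tau else 0
```

Here h(x, θj) = 1 routes the sample to the left child and 0 to the right, exactly as in the text.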

3b) Train each decision tree recursively: for the training set Sj ⊂ X × Y at a given node j, the goal of training is to find an optimal θj that yields a good classification of the data set. This requires an information gain criterion:

θj = arg max Ij(Sj, Sj^L, Sj^R)

where:

Sj^L = {(x, y) ∈ Sj | h(x, θj) = 1},  Sj^R = Sj \ Sj^L

The criterion for selecting the split parameter θj is to maximize the information gain Ij. The data set is used to train the left and right nodes recursively, and training stops when one of the following conditions is met: a) the set maximum depth is reached; b) the information gain or the training-set size reaches its threshold; c) the number of samples falling into the node is below the set threshold.

The information gain formula is defined as follows:

Ij = H(Sj) - Σk∈{L,R} (|Sj^k| / |Sj|) H(Sj^k)

where Hentropy(S) = -Σy py log(py) is the Shannon information entropy and py is the probability that an element with label y appears in the set S.

3c) Structured output of the random forest: the structured output space is generally high-dimensional and complex, so all structured labels y ∈ Y at the leaf nodes can be mapped to a discrete label set c ∈ C, where C = {1, ..., k}. The mapping is defined as follows:

∏: y ∈ Y → c ∈ C = {1, 2, ..., k}

Computing the information gain relies on measuring similarity over Y; for a structured output space, however, computing similarity over Y directly is difficult. Therefore a temporary mapping from Y to a space Z is defined in which distances are easier to measure, and the mapping process is divided into two stages. First, the space Y is mapped to Z, i.e. Y → Z, where the mapping z = ∏(y) is defined as a large binary vector that encodes, for each pair of pixels of the segmentation mask y, whether the two pixels have the same label. Computing z for every y is still costly, so to reduce the dimensionality, Z is sampled down to m dimensions; the sampled mapping is defined as ∏φ: Y → Z. The randomness injected into the sampling of Z ensures that the trees are sufficiently diverse.

Before mapping from the Z space to the C space, principal component analysis (PCA) is used to reduce the dimensionality of Z to 5; PCA extracts the most representative features of the samples, reducing the n samples z1, ..., zn of the 256-dimensional space to 5 dimensions. There are two ways to map a given set Z to the discrete label set C: a) cluster Z into k clusters using the k-means clustering method; b) quantize Z based on a PCA of log2(k) dimensions and assign the discrete label c according to the quadrant into which z falls. The two methods perform similarly, but the latter is faster. Here the PCA quantization method with k = 2 is used.

To obtain a unique output, the n output labels y1, ..., yn ∈ Y must be combined to form an ensemble model. The m-dimensional sampled mapping function Πφ can be used to compute zi = ∏φ(yi) for each label i. Taking each zk = ∏φ(yk) in turn, the yk whose zk minimizes the sum of distances to all other zi is selected as the output label. The ensemble model depends on m and on the chosen mapping function Πφ.
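The medoid selection just described can be sketched as follows (illustrative only; the function returns the index k of the chosen zk, and Euclidean distance is assumed):

```python
import math

def ensemble_label_index(zs):
    # Select the medoid: the z_k minimizing the sum of distances to all other z_i.
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(range(len(zs)), key=lambda k: sum(dist(zs[k], z) for z in zs))
```

The corresponding yk (with k the returned index) is then used as the ensemble's output label.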

Step 4: use the trained random structured forest to perform preliminary contour detection on the denoised casting-box image. The specific steps are as follows:

4a) Extract the integral channels of the input casting-box image: 3 color channels, 1 gradient map, and 4 gradient histograms in different directions give 8 channel features. Since feature filters of different directions and scales differ in their sensitivity to edges, 13 channels of information can be extracted on each image patch: the LUV color channels, 1 gradient-magnitude channel at each of 2 scales, and 4 oriented gradient histogram channels. Self-similarity features are then computed, and the resulting feature is a matrix of shape (16×16, 13).

4b) Define the mapping function ∏: y → z. A mapping function is defined in which y(j) (1 ≤ j ≤ 256) denotes the j-th pixel of the mask y, so that one can compute whether y(j1) = y(j2) holds for j1 ≠ j2. From this, a large binary vector mapping function z = ∏(y) is defined that encodes y(j1) = y(j2) for every pair of feature points with j1 ≠ j2.

4c) The outputs of multiple uncorrelated decision trees are fused, which makes the output of the random structured forest more robust. Effectively fusing multiple segmentation masks y ∈ Y is very difficult, so the edge map y′ ∈ Y′ is used to obtain the final casting-box contour image.

步骤5，对随机结构森林检测的轮廓图像进行二值化处理，通过反复实验找到最佳阈值。将图像上灰度值低于此阈值的像素点置为0，高于此阈值的置为255，得到可以反映图像整体和局部特征的二值化图像，从而将图像轮廓从背景中分割出来。Step 5. Binarize the contour image detected by the random structured forest, finding the best threshold through repeated experiments. Pixels whose grey value is below the threshold are set to 0 and those above it are set to 255, producing a binary image that reflects both the global and local characteristics of the image and thus segments the contour from the background.
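该阈值二值化步骤可用如下Python示意（说明性示例，`binarize`函数名与阈值取值均为假设）。The thresholding step can be sketched as follows (illustrative; the `binarize` name and the threshold value are assumptions):

```python
import numpy as np

def binarize(edge_map, thresh):
    """Binarize a grey-level contour image: pixels below the threshold
    become 0 (background), pixels at or above it become 255 (contour)."""
    return np.where(edge_map < thresh, 0, 255).astype(np.uint8)

img = np.array([[10, 200], [90, 130]], dtype=np.uint8)
binarize(img, 128)  # -> [[0, 255], [0, 255]]
```

In practice the threshold is tuned experimentally, as the step above describes.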

步骤6,通过Hough圆变换拟合浇铸箱体浇口,具体步骤如下:Step 6: Fit the gate of the casting box through Hough circle transformation. The specific steps are as follows:

6a)Hough变换做曲线检测时，最重要的是写出图像坐标空间到参数空间的变换公式。对输入的二值化后的浇铸箱体轮廓图像，其坐标空间中的一点，在参数空间中就可以映射为相应的轨迹曲线或者曲面。对于已知的圆方程，其直角坐标的一般方程为：6a) When the Hough transform is used for curve detection, the key step is to write out the transformation from image coordinate space to parameter space. For the input binarized contour image of the casting box, a point in image coordinate space maps to a corresponding trajectory curve or surface in parameter space. For a circle, the general equation in rectangular coordinates is:

(x-a)² + (y-b)² = r²

其中：(a,b)为圆心坐标，r为圆的半径。where (a, b) is the centre of the circle and r is its radius.

6b)把图像空间方程(x-a)²+(y-b)²=r²变换得到参数空间方程：6b) Transforming the image-space equation (x-a)² + (y-b)² = r² yields the parameter-space equation:

(a-x)² + (b-y)² = r²

6c)在参数空间找圆交点最多的位置，这个交点对应的圆就是图像空间中经过所有点的那个圆，从而实现圆形浇口的检测。6c) Find the position in parameter space where the most circles intersect; the circle corresponding to this intersection is the circle in image space that passes through all the edge points, thereby detecting the circular gate.
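步骤6a–6c的投票过程可用如下固定半径的最小Hough累加器示意（说明性示例，非专利实现；`hough_circle`函数、360个角度采样与取整方式均为假设）。Steps 6a–6c can be sketched with a minimal fixed-radius Hough accumulator (illustrative, not the patent's implementation; the `hough_circle` helper, the 360 angle samples and the rounding scheme are assumptions):

```python
import numpy as np

def hough_circle(points, r, shape):
    """Minimal Hough accumulator for a single known radius r: every edge
    point (x, y) votes for the candidate centres (a, b) on its own circle
    (a-x)^2 + (b-y)^2 = r^2; the accumulator peak is the fitted centre."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for x, y in points:
        a = np.rint(x - r * np.cos(thetas)).astype(int)
        b = np.rint(y - r * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < shape[0]) & (b >= 0) & (b < shape[1])
        np.add.at(acc, (a[ok], b[ok]), 1)   # unbuffered accumulation of votes
    return np.unravel_index(np.argmax(acc), acc.shape)

# points sampled from a circle of radius 10 centred at (20, 20);
# the accumulator peak should land at (or next to) the true centre
ts = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = [(20 + 10 * np.cos(t), 20 + 10 * np.sin(t)) for t in ts]
centre = hough_circle(pts, 10, (41, 41))
```

A full implementation would also sweep over candidate radii, turning the parameter space into the (a, b, r) volume implied by the equation in step 6b.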

步骤7,输出最终浇铸箱体轮廓检测结果图像。Step 7, output the final casting box contour detection result image.

如图2所示，b、c、d、e、f、g各组图均通过不同角度对比检测结果，其中不难看出b、c、d、e四个图中均存在许多杂乱无章的线条。线条越多表明算法对噪声的抗干扰能力越弱，线条越少表明算法对噪声的抗干扰能力越强。As shown in Figure 2, groups b, c, d, e, f and g compare the detection results from different angles. It is easy to see that panels b, c, d and e all contain many disordered lines; the more such lines there are, the weaker the algorithm's immunity to noise, and the fewer there are, the stronger it is.

通过图2中b图和c图，可以发现Canny算法无论是否经过降噪处理，对箱体边缘检测的效果均不佳。图d是Laplacian检测效果，可以看出Laplacian算法比Canny算法检测的边缘更清晰，对噪声有一定抗干扰能力。图e是经过双边滤波以后的随机结构森林算法对边缘的检测结果，相比前两种算法，经本文算法检测后箱体浇口面噪声基本被去除，周边环境噪声也得到一定改善，表明本文算法相对其他算法对噪声的抗干扰能力更强。图f是对图e进行二值化处理后的效果，图g在图f的基础上通过Hough圆变换精确检测及定位浇铸箱浇口，并标注出浇口的中心点。From panels b and c of Figure 2 it can be seen that the Canny algorithm performs poorly on box edge detection whether or not denoising is applied. Panel d shows the Laplacian result: the Laplacian algorithm detects clearer edges than Canny and has some robustness to noise. Panel e shows the edge detection result of the random structured forest algorithm after bilateral filtering; compared with the first two algorithms, the noise on the gate face of the box is essentially removed and the surrounding environmental noise is also reduced, indicating that the proposed algorithm is more robust to noise than the others. Panel f shows the result of binarizing panel e, and panel g, built on panel f, shows the gate of the casting box accurately detected and located by the Hough circle transform, with the centre point of the gate marked.

图3比较不同算法下浇铸箱浇口边缘检测的正确率，可以看出，本文算法的正确率高于其他两种算法，且对不同图像的正确率较为稳定，对不同角度的浇铸箱浇口均有较好的检测结果。从图4所示的召回率可以看出，本文算法有较高的召回率，说明其检测结果更接近真实边缘图。Figure 3 compares the accuracy of casting box gate edge detection under the different algorithms: the accuracy of the proposed algorithm is higher than that of the other two, remains stable across different images, and gives good detection results for gates viewed from different angles. The recall shown in Figure 4 indicates that the proposed algorithm also achieves higher recall, meaning its detections lie closer to the true edge map.
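图3与图4所用的正确率/召回率可按像素级定义如下示意（说明性示例，专利未给出具体计算方式，逐像素的定义为假设）。The precision/recall behind Figures 3 and 4 can be sketched pixel-wise as follows (illustrative; the patent does not spell out the computation, so the pixel-wise definition is an assumption):

```python
def precision_recall(pred, truth):
    """Pixel-wise precision and recall of a binary edge map against a
    ground-truth edge map, both given as flat sequences of 0/1 values."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

precision_recall([1, 1, 0, 0], [1, 0, 1, 0])  # -> (0.5, 0.5)
```

Higher recall, as reported for the proposed algorithm, means fewer true edge pixels are missed.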

以上示意性地对本发明及其实施方式进行了描述，该描述没有限制性，附图中所示的也只是本发明的实施方式之一，实际的结构并不局限于此。所以，如果本领域的普通技术人员受其启示，在不脱离本发明创造宗旨的情况下，不经创造性劳动设计出与该技术方案相似的结构方式及实施例，均应属于本发明的保护范围。The present invention and its embodiments have been described above schematically; the description is not restrictive, what is shown in the accompanying drawings is only one embodiment of the invention, and the actual structure is not limited to it. Therefore, if a person of ordinary skill in the art, inspired by it and without departing from the purpose of the invention, devises structures and embodiments similar to this technical solution without inventive effort, they shall fall within the protection scope of the present invention.

Claims (5)

1.一种噪声环境下浇铸箱体轮廓的检测方法，其特征在于，包括以下步骤：1. A method for detecting the contour of a casting box in a noisy environment, characterized by comprising the following steps:
步骤1、输入待检测含噪声图像；Step 1. Input the noisy image to be detected;
步骤2、使用双边滤波对输入噪声图像进行降噪处理；Step 2. Use bilateral filtering to denoise the input noisy image;
步骤3、构建随机结构森林；Step 3. Build a random structured forest;
步骤4、使用训练好的随机结构森林对降噪后图像进行初步轮廓检测；Step 4. Use the trained random structured forest to perform preliminary contour detection on the denoised image;
步骤5、对初步轮廓检测结果进行二值化处理；Step 5. Binarize the preliminary contour detection result;
步骤6、通过Hough圆变换拟合浇铸箱体浇口；Step 6. Fit the gate of the casting box by the Hough circle transform;
步骤7、输出最终检测结果图像。Step 7. Output the final detection result image.
2.根据权利要求1所述的一种噪声环境下浇铸箱体轮廓的检测方法，其特征在于，步骤2的具体步骤如下：2. The method for detecting the contour of a casting box in a noisy environment according to claim 1, wherein step 2 comprises:
2a)使用二维高斯函数生成距离模板，使用一维高斯函数生成值域模板，距离模板系数的生成公式为：2a) Use a two-dimensional Gaussian function to generate the distance template and a one-dimensional Gaussian function to generate the range template; the distance template coefficients are generated by:
d(i,j,k,l) = exp( -((i-k)² + (j-l)²) / (2σd²) )
其中,(k,l)为模板窗口的中心坐标,(i,j)为模板窗口的其他系数的坐标,σd为高斯函数的标准差;Among them, (k, l) are the center coordinates of the template window, (i, j) are the coordinates of other coefficients of the template window, and σ d is the standard deviation of the Gaussian function; 2b)值域模板系数的生成公式为:2b) The generation formula of the value domain template coefficient is:
r(i,j,k,l) = exp( -‖f(i,j) - f(k,l)‖² / (2σr²) )
其中f(x,y)表示图像在点(x,y)处的像素值,(k,l)为模板窗口的中心坐标,(i,j)为模板窗口的其他系数的坐标,σr为高斯函数的标准差;where f(x, y) represents the pixel value of the image at point (x, y), (k, l) is the center coordinate of the template window, (i, j) is the coordinate of other coefficients of the template window, σ r is the standard deviation of the Gaussian function; 2c)将上述两个模板相乘就得到了双边滤波器的模板公式:2c) Multiply the above two templates to get the template formula of the bilateral filter:
w(i,j,k,l) = d(i,j,k,l)·r(i,j,k,l) = exp( -((i-k)² + (j-l)²)/(2σd²) - ‖f(i,j) - f(k,l)‖²/(2σr²) )
3.根据权利要求1所述的一种噪声环境下浇铸箱体轮廓的检测方法，其特征在于，步骤3的具体步骤如下：3. The method for detecting the contour of a casting box in a noisy environment according to claim 1, wherein step 3 comprises:
3a)建立决策树：首先对输入的图像数据进行采样。设N表示训练样本的个数，M表示特征数目，对N个样本采用有放回的取样，然后进行列采样，从M个特征中选择m个子特征(m<<M)；之后对每棵决策树用递归的方式将采样的数据归到左右子树，直到叶子节点。决策树ft(x)的每个节点j都会被关联一个二元分割函数：3a) Build the decision trees: first sample the input image data. Let N denote the number of training samples and M the number of features; draw N samples with replacement, then perform column sampling, selecting m sub-features from the M features (m << M). Each decision tree then recursively assigns the sampled data to its left and right subtrees down to the leaf nodes. Each node j of the decision tree ft(x) is associated with a binary split function:
h(x,θj)∈{0,1}
其中x是输入向量，{θj}是独立同分布的随机变量，j代表树中的第j个节点。如果h(x,θj)=1，则将x归类到节点j的左侧节点，否则归类到右侧节点；输入元素经过决策树预测，输出y存放于叶子节点，即输出分布为y∈Y。where x is the input vector, {θj} are independent and identically distributed random variables, and j denotes the j-th node of the tree. If h(x,θj)=1, x is routed to the left child of node j, otherwise to the right child; the prediction y for an input element is stored at a leaf node, i.e. the output distribution is y∈Y.
3b)采用递归的方法训练每个决策树：对于给定节点j上的训练集Sj∈X×Y，目标就是通过训练找到一个最优的θj使得数据集得到好的分类结果，在这里需要定义一个信息增益准则：3b) Train each decision tree recursively: for the training set Sj∈X×Y at a given node j, the goal is to find an optimal θj so that the data set is well classified, for which an information gain criterion is defined:
Ij = I(Sj, SjL, SjR)
其中：SjL={(x,y)∈Sj|h(x,θj)=1}，SjR=Sj\SjL。选择分割参数θj的标准是使得信息增益Ij最大，使用数据集在左右节点进行递归训练，当满足下列条件之一停止训练：a)达到设定的最大深度；b)信息增益或训练集尺度达到门限值；c)落入节点的样本个数少于设定阈值。信息增益公式定义如下：where SjL = {(x,y)∈Sj | h(x,θj)=1} and SjR = Sj\SjL. The split parameter θj is chosen to maximize the information gain Ij; training recurses on the left and right nodes and stops when one of the following holds: a) the set maximum depth is reached; b) the information gain or the training-set size reaches its threshold; c) the number of samples falling into the node is below the set threshold. The information gain is defined as:
Ij = Hentropy(Sj) - ∑k∈{L,R} (|Sjk|/|Sj|)·Hentropy(Sjk)
其中：Hentropy(S)=-∑y py log(py)表示香农信息熵，py是标签为y的元素在集合S中出现的概率。where Hentropy(S) = -∑y py log(py) is the Shannon entropy and py is the probability that an element labelled y appears in the set S.
3c)随机森林结构化输出：将叶子节点的所有结构化标签y∈Y映射到离散的标记集合c∈C，其中C={1,...,k}，其映射关系定义如下：3c) Structured output of the random forest: map all structured labels y∈Y at the leaf nodes to a discrete label set c∈C, where C={1,...,k}; the mapping is defined as:
Π:y∈Y→c∈C={1,2,...,k}
将映射过程分成两个阶段。首先将Y空间映射到Z空间，即Y→Z，其中映射关系z=Π(y)定义为The mapping proceeds in two stages. First the Y space is mapped to the Z space, i.e. Y→Z, where the mapping z=Π(y) is defined as
Figure FDA0002370704890000024
维向量,表示分割掩模y的每对像素编码,且对Z进行m维采样,采样后的映射定义为:
Figure FDA0002370704890000025
再将给定集合Z映射到离散标签集合C,在从Z空间映射到C空间上之前,先采用主成分分析(PCA)将Z的维数降到5维,PCA提取了样本特征中最具有代表性的特征,对于256维空间中的n个样本
Figure FDA0002370704890000026
降至5维,最后对n个输出标签y1,...,yn∈Y进行联合形成一个集合模型。
The mapping process is divided into two stages. First, the Y space is mapped to the Z space, that is, Y→Z, where the mapping relationship z=Π(y) is defined as
Figure FDA0002370704890000024
dimensional vector, representing each pair of pixel codes of the segmentation mask y, and m-dimensional sampling of Z, the sampling mapping is defined as:
Figure FDA0002370704890000025
Then map the given set Z to the discrete label set C. Before mapping from the Z space to the C space, the principal component analysis (PCA) is used to reduce the dimension of Z to 5 dimensions. PCA extracts the most characteristic features of the sample features. Representative features, for n samples in 256-dimensional space
Figure FDA0002370704890000026
Down to 5 dimensions, the n output labels y 1 , . . . , y n ∈ Y are finally combined to form an ensemble model.
4.根据权利要求3所述的一种噪声环境下浇铸箱体轮廓的检测方法，其特征在于，步骤4的具体步骤如下：4. The method for detecting the contour of a casting box in a noisy environment according to claim 3, wherein step 4 comprises:
4a)提取输入图像的积分通道：基本通道为3个颜色通道、1个梯度幅值图和4个不同方向的梯度直方图共8类通道特征；由于不同方向和尺度的特征滤波器对边缘的敏感程度不同，因此在图像块上提取LUV颜色通道、2个尺度上各1个梯度幅值通道和4个方向梯度直方图通道，共13个通道信息；再求取自相似特征，得到形状为(16×16,13)的特征矩阵；4a) Extract the integral channels of the input image: the basic channels are 3 colour channels, 1 gradient-magnitude map and 4 orientation gradient histograms, 8 channel types in total; since feature filters of different orientations and scales respond to edges differently, the 3 LUV colour channels plus, at each of 2 scales, 1 gradient-magnitude channel and 4 orientation gradient-histogram channels are extracted from the image patch, 13 channels in total; self-similarity features are then computed, yielding a feature matrix of shape (16×16, 13);
4b)定义映射函数Π:y→z，用y(j)(1≤j≤256)表示掩膜y的第j个像素点，这样就可以判断在j1≠j2的情况下y(j1)=y(j2)是否成立；由此定义一个大型二元向量映射函数z=Π(y)，对每一对j1≠j2的像素对是否满足y(j1)=y(j2)进行编码；4b) Define the mapping function Π: y→z. Let y(j) (1≤j≤256) denote the j-th pixel of the mask y, so that for any pair j1≠j2 one can check whether y(j1)=y(j2) holds; a large binary vector mapping function z=Π(y) is thereby defined, encoding for every pair j1≠j2 whether y(j1)=y(j2);
4c)通过边缘映射y′∈Y′来得到最终浇铸箱体轮廓图像。4c) Obtain the final casting box contour image through the edge map y′∈Y′.
5.根据权利要求1所述的一种噪声环境下浇铸箱体轮廓的检测方法，其特征在于，步骤6的具体步骤如下：5. The method for detecting the contour of a casting box in a noisy environment according to claim 1, wherein step 6 comprises:
6a)对输入的二值化后的浇铸箱体轮廓图像，其坐标空间中的一点，在参数空间中就可以映射为相应的轨迹曲线或者曲面，对于已知的圆方程，其直角坐标的一般方程为：(x-a)²+(y-b)²=r²，其中(a,b)为圆心坐标，r为圆的半径；6a) For the input binarized contour image of the casting box, a point in image coordinate space maps to a corresponding trajectory curve or surface in parameter space; for a circle, the general equation in rectangular coordinates is (x-a)² + (y-b)² = r², where (a, b) is the centre of the circle and r is its radius;
6b)把图像空间方程(x-a)²+(y-b)²=r²变换得到参数空间方程：(a-x)²+(b-y)²=r²；6b) Transform the image-space equation (x-a)² + (y-b)² = r² into the parameter-space equation (a-x)² + (b-y)² = r²;
6c)在参数空间找圆交点最多的位置，这个交点对应的圆就是图像空间中经过所有点的那个圆，从而实现圆形浇口的检测。6c) Find the position in parameter space where the most circles intersect; the circle corresponding to this intersection is the circle in image space that passes through all the points, thereby detecting the circular gate.
CN202010049720.2A 2020-01-16 2020-01-16 A detection method for casting box profile in noisy environment Active CN111292346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010049720.2A CN111292346B (en) 2020-01-16 2020-01-16 A detection method for casting box profile in noisy environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010049720.2A CN111292346B (en) 2020-01-16 2020-01-16 A detection method for casting box profile in noisy environment

Publications (2)

Publication Number Publication Date
CN111292346A true CN111292346A (en) 2020-06-16
CN111292346B CN111292346B (en) 2023-05-12

Family

ID=71029047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010049720.2A Active CN111292346B (en) 2020-01-16 2020-01-16 A detection method for casting box profile in noisy environment

Country Status (1)

Country Link
CN (1) CN111292346B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967526A (en) * 2020-08-20 2020-11-20 东北大学秦皇岛分校 Remote sensing image change detection method and system based on edge mapping and deep learning
CN113793269A (en) * 2021-10-14 2021-12-14 安徽理工大学 A Super-Resolution Image Reconstruction Method Based on Improved Neighborhood Embedding and Prior Learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220664A (en) * 2017-05-18 2017-09-29 南京大学 A kind of oil bottle vanning counting method based on structuring random forest
WO2018107492A1 (en) * 2016-12-16 2018-06-21 深圳大学 Intuitionistic fuzzy random forest-based method and device for target tracking

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018107492A1 (en) * 2016-12-16 2018-06-21 深圳大学 Intuitionistic fuzzy random forest-based method and device for target tracking
CN107220664A (en) * 2017-05-18 2017-09-29 南京大学 A kind of oil bottle vanning counting method based on structuring random forest

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐良玉等: "基于结构森林边缘检测和Hough变换的海天线检测", 《上海大学学报(自然科学版)》 *
郑光远等: "医学影像计算机辅助检测与诊断系统综述", 《软件学报》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967526A (en) * 2020-08-20 2020-11-20 东北大学秦皇岛分校 Remote sensing image change detection method and system based on edge mapping and deep learning
CN111967526B (en) * 2020-08-20 2023-09-22 东北大学秦皇岛分校 Remote sensing image change detection method and system based on edge mapping and deep learning
CN113793269A (en) * 2021-10-14 2021-12-14 安徽理工大学 A Super-Resolution Image Reconstruction Method Based on Improved Neighborhood Embedding and Prior Learning
CN113793269B (en) * 2021-10-14 2023-10-31 安徽理工大学 Super-resolution image reconstruction method based on improved neighborhood embedding and priori learning

Also Published As

Publication number Publication date
CN111292346B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN111626146B (en) Merging cell table segmentation recognition method based on template matching
CN106355577B (en) Fast Image Matching Method and System Based on Feature State and Global Consistency
He et al. Beyond OCR: Multi-faceted understanding of handwritten document characteristics
CN110298376B (en) An Image Classification Method of Bank Notes Based on Improved B-CNN
US20140301608A1 (en) Chemical structure recognition tool
CN109446894B (en) A Multispectral Image Change Detection Method Based on Probabilistic Segmentation and Gaussian Mixture Clustering
Li et al. A complex junction recognition method based on GoogLeNet model
CN110210297B (en) Method for locating and extracting Chinese characters in customs clearance image
Obaidullah et al. A system for handwritten script identification from Indian document
CN117689655B (en) Metal button surface defect detection method based on computer vision
CN113723330B (en) A method and system for understanding chart document information
CN103955950B (en) Image tracking method utilizing key point feature matching
Hu Research on data acquisition algorithms based on image processing and artificial intelligence
CN106503694A (en) Digit recognition method based on eight neighborhood feature
CN111292346B (en) A detection method for casting box profile in noisy environment
CN110008920A (en) Research on facial expression recognition method
CN107194916A (en) A kind of vision measurement system of feature based Point matching
CN103116890A (en) Video image based intelligent searching and matching method
Hristov et al. A software system for classification of archaeological artefacts represented by 2D plans
CN115690803A (en) Digital image recognition method, device, electronic device and readable storage medium
CN109902690A (en) Image recognition technology
CN102609732B (en) Object recognition method based on generalization visual dictionary diagram
CN108154107B (en) Method for determining scene category to which remote sensing image belongs
CN110737364B (en) Control method for touch writing acceleration under android system
Yue et al. An unsupervised automatic organization method for Professor Shirakawa’s hand-notated documents of oracle bone inscriptions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant