CN111882542A - Full-automatic precise measurement method for high-precision threads based on AA R2Unet and HMM
- Publication number: CN111882542A (application CN202010741349.6A)
- Authority: CN (China)
- Prior art keywords: thread, matrix, HMM, R2Unet, hidden state
- Legal status: Pending
Classifications
- G06T7/0004 — Industrial image inspection
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06T7/13 — Edge detection
- G06T7/143 — Segmentation; edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30108 — Industrial image inspection
- G06T2207/30164 — Workpiece; machine component
Abstract
The invention discloses a fully automatic precise measurement method for high-precision threads based on AA R2Unet and HMM. The method uses an AA R2Unet network to extract thread edges, filtering out foreign matter in the image; an HMM (hidden Markov model) then classifies the thread edge points, straight lines are fitted, and the thread parameters are calculated.
Description
Technical Field
The invention relates to the technical field of thread parameter detection, in particular to a full-automatic precise measuring method for high-precision threads based on AA R2Unet and HMM.
Background
At present, most thread-parameter detection in industry is performed manually, which is time-consuming and can wear the workpieces. In recent years, visual detection technology has developed rapidly, and many vision-based thread detection methods have appeared in the thread detection field, using traditional visual techniques such as the Canny edge-detection algorithm and template matching. However, industrial field environments are often complex and contain many interference factors: foreign matter such as dust, iron scraps and oil stains easily interferes with visual detection and produces abnormal detection results.
Disclosure of Invention
The invention aims to provide a high-precision thread full-automatic precise measurement method based on AA R2Unet and HMM, so as to solve the problems in the background technology.
In order to achieve the purpose, the invention provides the following technical scheme: a full-automatic precise measurement method for high-precision threads based on AA R2Unet and HMM comprises the following steps:
A. AA R2Unet thread edge identification: establishing an AA R2Unet network to extract the thread edge of the acquired thread picture and acquire an image only containing the thread edge;
B. HMM-based classification of thread edge points: calculating the gradient direction of the thread edge point by using the thread edge extracted in the step A and the gray information in the original image, and dividing the thread edge point into a peak-valley type, a linear type and a transition type by taking the gradient direction as an observation object of the HMM;
C. Calculating parameters by fitting straight lines: using the least-squares method, the thread edges classified in step B are fitted to straight lines, and the major diameter and tooth angle of the thread are calculated from these lines.
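As an illustration of step C, the sketch below fits edge points to straight lines by least squares and derives the two thread parameters from the fitted lines. It is a minimal numpy sketch, not the patented implementation: the helper names (`fit_line`, `major_diameter`, `tooth_angle`) and the convention that lines are written as y = k·x + b are assumptions for the example.

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of y = k*x + b to an (N, 2) array of edge points."""
    x, y = points[:, 0], points[:, 1]
    k, b = np.polyfit(x, y, 1)
    return k, b

def major_diameter(b_top, b_bottom, k):
    """Distance between two parallel outer-diameter lines y = k*x + b."""
    return abs(b_top - b_bottom) / np.hypot(k, 1.0)

def tooth_angle(k_left, k_right):
    """Included angle (degrees) between the two fitted thread flanks."""
    a = abs(np.degrees(np.arctan(k_left) - np.arctan(k_right)))
    return min(a, 180.0 - a)
```

For example, flanks with slopes ±tan 60° give a 60° tooth angle, matching a metric thread profile.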
Preferably, in step A, an R2 module and an Attention Augmentation module are added to the Unet. The Unet is overall a symmetrical U-shaped structure comprising 12 units (F1-F12): the left-side units F1-F6 form the contraction path, used for feature extraction, and the right-side units F6-F12 form the expansion path, used for recovering details to achieve accurate prediction. The R2 module comprises a residual learning unit and a recursive convolution.
Preferably, the R2 module includes a residual learning unit and a recursive convolution,
(1) Residual learning unit: assume the input of a neural network unit is x and the expected output is H(x), and define the residual mapping F(x) = H(x) − x. If x is passed directly to the output, the target the unit must learn is this residual mapping F(x). The residual learning unit consists of a series of convolution layers and a shortcut: the input x is carried to the unit's output through the shortcut, so the output of the unit is z = F(x) + x.
(2) And (3) recursive convolution: assuming that the input is x, successive convolutions are performed on the input x, and the current input is added to the output of each convolution as the input for the next convolution.
The R2 module replaces the normal convolution in the residual learning unit with a recursive convolution.
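The residual and recursive-convolution recurrences described above can be sketched as follows. This is a hedged one-dimensional numpy illustration (a `same`-mode `np.convolve` stands in for the convolution layer; a real R2 module uses trained 2-D convolutions), showing only the dataflow z = F(x) + x, with F built from the recursion "next input = conv(current) + x".

```python
import numpy as np

def conv1d(x, w):
    """'Same'-size 1-D convolution standing in for a conv layer."""
    return np.convolve(x, w, mode="same")

def recursive_conv(x, w, steps=2):
    """Recursive convolution: each step convolves the running signal
    and adds the original input x back in before the next step."""
    r = x
    for _ in range(steps):
        r = conv1d(r, w) + x
    return r

def r2_block(x, w, steps=2):
    """R2 unit: residual shortcut around the recursive convolution,
    z = F(x) + x, where F is the recursive convolution."""
    return recursive_conv(x, w, steps) + x
```

With the identity kernel w = [1], two recursion steps yield 3x and the residual shortcut 4x, which makes the two nested additions easy to see.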
Preferably, Attention Augmentation essentially obtains a mapping over a series of key-value pairs through a query. First, a 1 × 1 convolution is applied to the input feature map of size (w, h, c_in) to output the QKV matrix, whose size is (w, h, 2·d_k + d_v), where w, h and 2·d_k + d_v denote the width, length and depth of the matrix, respectively. The QKV matrix is then split along the depth channel into three matrices Q, K, V with depth-channel sizes d_k, d_k and d_v. Next, a multi-head attention structure is adopted: Q, K and V are each divided along the depth channel into N equal matrices for the subsequent calculation. The multi-head mechanism expands the original single attention calculation into several smaller, independent parallel calculations, so the model can learn feature information in different subspaces.
Preferably, the divided Q, K, V matrices are flattened to produce three matrices Flat_Q, Flat_K, Flat_V; that is, Q, K and V keep the depth channel unchanged while the length and width directions are compressed to one dimension, so the first two matrices have size (w·h, d_k) and the last has size (w·h, d_v). Attention Augmentation then retains the original Self-Attention method: matrix multiplication of Flat_Q and Flat_K yields a weight matrix, to which a relative-position-embedding calculation is added. Weight calculations on the Q matrix in the length and width directions give the relative position information of each point on the feature map, preventing shifts of feature positions from degrading the final performance of the model.
Preferably, the position information in the length and width directions is obtained as the inner products of the Q matrix with the weight matrices H and W, denoted S_h and S_w, where H and W are learned during model training and have size (wh, wh, 1). The three resulting matrices are then added and multiplied by a scaling factor 1/√d_k to prevent the calculation result from becoming too large; a softmax function then yields the final feature weight matrix. Finally, the weight matrix is multiplied by the V matrix, the result is reshaped back to the original length and width, and a 1 × 1 convolution produces the final attention feature matrix O.
Preferably, the attention feature matrix O and the result of the normal convolution are concatenated (concat) along the depth direction to obtain the output of Attention Augmentation. The attention feature matrix O is calculated as follows:
O = Softmax((Q·Kᵀ + S_h + S_w) / √d_k) · V
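Assuming the standard attention-augmentation formulation O = Softmax((Q·Kᵀ + S_h + S_w)/√d_k)·V over flattened feature maps, a single-head numpy sketch of the computation might look like this. Matrix shapes follow the text; multi-head splitting, the 1 × 1 convolutions and the reshape back to (w, h) are omitted.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_augmented_features(Q, K, V, S_h, S_w):
    """Single-head sketch of O = softmax((Q K^T + S_h + S_w)/sqrt(d_k)) V.
    Q, K: (w*h, d_k); V: (w*h, d_v); S_h, S_w: (w*h, w*h)
    relative-position terms."""
    d_k = Q.shape[-1]
    logits = (Q @ K.T + S_h + S_w) / np.sqrt(d_k)
    return softmax(logits) @ V
```

With Q all zeros and zero position terms, the attention weights are uniform, so every output row is the mean of the V rows, which is a quick sanity check on the softmax normalization.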
preferably, the HMM model in step B is composed of the following 5 parts:
(1) hidden state sequence:
according to the difference of gradient directions, the thread edge points are divided into three hidden states: peak-valley type, transition type, and straight line type, denoted as M ═ M1,m2,m3}; generally, the difference of the gradient directions of the peak-valley type points is the largest, the difference of the gradient directions of the transition type points is the next largest, and the difference of the gradient directions of the straight line type points is the smallest; the peak valley type points can be used for fitting the outer diameter line and the inner diameter line of the thread and calculating the outer diameter of the thread; linear points are used for fitting two sides of the thread angle to calculate tooth angle parameters; the transitional points are used for eliminating arc line parts on the thread edges, so that the fitting precision of straight lines is improved; the sequence of hidden states is denoted S ═ S1,s2…st},st∈M;
(2) Observation sequences
The gradient direction of each edge point identified by the AA R2Unet method is calculated and denoted θ_t. To eliminate changes in gradient direction caused by the placement angle of the threaded part, and to give the observation feature rotation invariance, the difference between the gradient directions of adjacent edge points is taken as the observation value, denoted a_t. The observation value is calculated as follows:
g_x(k) = S_x * I(k)
g_y(k) = S_y * I(k)
θ_t = arctan(g_y(k) / g_x(k))
a_t = |θ_t − θ_{t−1}|
where S_x and S_y are the gradient templates of the Sobel operator in the horizontal and vertical directions, and I is the image matrix;
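A minimal numpy sketch of the observation computation, assuming 3 × 3 Sobel templates and edge points given as (row, column) pixel coordinates; `arctan2` is used for the gradient direction so that the signs of both components are respected:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def gradient_direction(img, r, c):
    """Gradient direction theta at pixel (r, c):
    g_x = S_x * I, g_y = S_y * I, theta = arctan2(g_y, g_x)."""
    patch = img[r - 1:r + 2, c - 1:c + 2]
    gx = np.sum(SOBEL_X * patch)
    gy = np.sum(SOBEL_Y * patch)
    return np.arctan2(gy, gx)

def observations(img, edge_points):
    """Observation sequence a_t = |theta_t - theta_{t-1}| along the edge."""
    thetas = [gradient_direction(img, r, c) for r, c in edge_points]
    return np.abs(np.diff(thetas))
```

On a horizontal intensity ramp the direction is 0, on a vertical ramp π/2, and along an edge of constant direction all observations a_t are 0, illustrating the rotation-invariant differencing.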
(3) initial probability distribution
The initial probability distribution represents the probability of each hidden state occurring in the initial state, denoted π_i = P(s_1 = m_i). Assume the sample size of the observation sequence is N and hidden state m_i occurs N_i times; then the initial probability of the hidden state is π_i = N_i / N;
(4) State transition matrix
The state transition matrix represents the probability that the current hidden state s_t transitions to the next hidden state. The transition of the hidden state of the HMM satisfies the following Markov property:
P(s_{t+1} | s_t, s_{t−1}, …, s_1) = P(s_{t+1} | s_t)
That is, the probability of transitioning to the next hidden state depends only on the current hidden state. Transitions between hidden states are represented by a transition probability matrix A, where a_ij represents the probability that the current state m_i transitions to the next state m_j;
each hidden state has self-transition, and no direct transition exists between a peak-valley point and a straight line point;
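The transition structure described here can be illustrated with a small hand-filled matrix (the numbers are illustrative placeholders, not trained values): every state can transition to itself, and the entries linking the peak-valley and straight-line states directly are zero, so those transitions must pass through the transition state.

```python
import numpy as np

# States: m1 = peak-valley, m2 = transition, m3 = straight-line.
# Illustrative (untrained) probabilities; each row sums to 1, and
# A[0, 2] = A[2, 0] = 0 encodes "no direct peak-valley <-> straight-line".
A = np.array([
    [0.8, 0.2, 0.0],   # peak-valley -> {peak-valley, transition}
    [0.1, 0.8, 0.1],   # transition  -> any state
    [0.0, 0.2, 0.8],   # straight    -> {transition, straight}
])
assert np.allclose(A.sum(axis=1), 1.0)  # valid stochastic matrix
```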
(5) probability of occurrence of observed value
Since the observation value o_t is a continuous random variable, a continuous hidden Markov model is used to infer the hidden-state sequence. Assuming the observation values corresponding to each hidden state follow a Gaussian distribution, the mean and variance of each state's Gaussian density function are estimated from the T_i observation samples of that state:
μ_i = (1/T_i) Σ_t a_i(t)
σ_i² = (1/T_i) Σ_t (a_i(t) − μ_i)²
in the formula, ai(t) sample data representing a corresponding hidden state; the probability of the sample data appearing in different hidden states is different due to different mean values and variances of Gaussian density functions corresponding to different states; the probability of occurrence of a given observation is as follows:
for a given HMM and observation sequence O ═ O1,o2…otH, it is necessary to deduce the most probable hidden state sequence S ═ S1,s2…st}。
Preferably, the Viterbi algorithm is used to find the optimal hidden-state sequence. In the algorithm, δ_t(i) is defined as the maximum probability of being in hidden state i given the observation sequence, and ψ_t(i) is defined as a backward pointer recording the previous state on the Viterbi path. The recursion of the Viterbi algorithm can be described as follows:
Initialization:
δ_1(i) = π_i · P(o_1 | m_i)
ψ_1(i) = 0
Recursion:
δ_t(j) = max_i [δ_{t−1}(i) · a_ij] · P(o_t | m_j)
ψ_t(j) = argmax_i [δ_{t−1}(i) · a_ij]
Termination:
h_t = argmax_i δ_t(i)
Backtracking:
h_{t−1} = ψ_t(h_t)
the Viterbi algorithm is used to infer the hidden state for each point and merge adjacent points of the same state and then fit the thread outer diameter line to the two side lines of the thread angle.
Preferably, the thread edges are paired as follows:
(1) dividing the thread edge into different angle sets according to the inclination angle of the thread outer diameter line;
(2) Within the same angle set, the midpoint of each thread-edge profile is connected in turn with the midpoints of the other thread-edge profiles; if the connecting line lies inside the thread profile, the edges are placed in the same connected set;
(3) The midpoint of each thread-edge profile in the same connected set is taken in turn, and a straight line perpendicular to the outer-diameter line is drawn through it; if this line intersects the profile of another thread edge in the set, the two thread edges are judged to be successfully paired; otherwise they are not paired.
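Step (1) of the pairing procedure, grouping edges by inclination angle, might be sketched as below. This is an assumption-laden simplification: line segments stand in for edge profiles, angles are compared without wraparound handling near 180°, and steps (2)-(3) are omitted.

```python
import numpy as np

def group_by_angle(segments, tol_deg=5.0):
    """Group edge segments whose inclination angles agree within tol_deg.
    Each segment is ((x1, y1), (x2, y2)); angles are taken modulo 180
    so that direction along the segment does not matter."""
    groups = []
    for seg in segments:
        (x1, y1), (x2, y2) = seg
        ang = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
        for g in groups:
            if abs(g["angle"] - ang) < tol_deg:
                g["segments"].append(seg)
                break
        else:
            groups.append({"angle": ang, "segments": [seg]})
    return groups
```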
Compared with the prior art, the invention has the beneficial effects that: the invention provides a high-precision thread full-automatic precise measurement method based on AA R2Unet and HMM. The method utilizes an AA R2Unet network to extract the thread edge so as to filter foreign matters in the image. And then, classifying the thread edge points by using an HMM (hidden Markov model), fitting a straight line and calculating thread parameters.
Drawings
FIG. 1 is a schematic flow chart of a high-precision full-automatic precise thread measuring method for AA R2Unet and HMM according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an AA R2Unet according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an R2 module according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an Attention augmentation provided by an embodiment of the present invention;
FIG. 5 is a classification diagram of edge points of a thread according to an embodiment of the present invention;
FIG. 6 is a gradient pattern provided by embodiments of the present invention;
FIG. 7 is a diagram of an HMM state transition provided by an embodiment of the invention;
fig. 8 is a schematic diagram of a straight line fitting provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-8, the present invention provides the following technical solution: a fully automatic precise measurement method for high-precision threads based on AA R2Unet and HMM, comprising the following steps:
A. AA R2Unet thread edge identification: establishing an AA R2Unet network to extract the thread edge of the acquired thread picture and acquire an image only containing the thread edge;
B. HMM-based classification of thread edge points: calculating the gradient direction of the thread edge point by using the thread edge extracted in the step A and the gray information in the original image, and dividing the thread edge point into a peak-valley type, a linear type and a transition type by taking the gradient direction as an observation object of the HMM;
C. Calculating parameters by fitting straight lines: using the least-squares method, the thread edges classified in step B are fitted to straight lines, and the major diameter and tooth angle of the thread are calculated from these lines.
Preferably, in step A, an R2 module and an Attention Augmentation module are added to the Unet. The Unet is overall a symmetrical U-shaped structure comprising 12 units (F1-F12): the left-side units F1-F6 form the contraction path, used for feature extraction, and the right-side units F6-F12 form the expansion path, used for recovering details to achieve accurate prediction. The R2 module comprises a residual learning unit and a recursive convolution.
In the invention, Attention Augmentation essentially obtains a mapping over a series of key-value pairs through a query. First, a 1 × 1 convolution is applied to the input feature map of size (w, h, c_in) to output the QKV matrix, whose size is (w, h, 2·d_k + d_v), where w, h and 2·d_k + d_v denote the width, length and depth of the matrix, respectively. The QKV matrix is then split along the depth channel into three matrices Q, K, V with depth-channel sizes d_k, d_k and d_v. Next, a multi-head attention structure is adopted: Q, K and V are each divided along the depth channel into N equal matrices for the subsequent calculation; the multi-head mechanism expands the original single attention calculation into several smaller, independent parallel calculations, so the model can learn feature information in different subspaces. The divided Q, K, V matrices are flattened to produce three matrices Flat_Q, Flat_K, Flat_V; that is, Q, K and V keep the depth channel unchanged while the length and width directions are compressed to one dimension, so the first two matrices have size (w·h, d_k) and the last has size (w·h, d_v). Attention Augmentation then retains the original Self-Attention method: matrix multiplication of Flat_Q and Flat_K yields a weight matrix, to which a relative-position-embedding calculation is added; weight calculations on the Q matrix in the length and width directions give the relative position information of each point on the feature map, preventing shifts of feature positions from degrading the final performance of the model.
The position information in the length and width directions is obtained as the inner products of the Q matrix with the weight matrices H and W, denoted S_h and S_w, where H and W are learned during model training and have size (wh, wh, 1). The three resulting matrices are then added and multiplied by a scaling factor 1/√d_k to prevent the calculation result from becoming too large; a softmax function then yields the final feature weight matrix. Finally, the weight matrix is multiplied by the V matrix, the result is reshaped back to the original length and width, and a 1 × 1 convolution produces the final attention feature matrix O. The attention feature matrix O and the result of the normal convolution are concatenated (concat) along the depth direction to obtain the output of Attention Augmentation. The attention feature matrix O is calculated as follows:
O = Softmax((Q·Kᵀ + S_h + S_w) / √d_k) · V
in the present invention, the HMM model in step B is composed of the following 5 parts:
(1) hidden state sequence:
according to the difference of gradient directions, the thread edge points are divided into three hidden states: peak-valley type, transition type, and straight line type, denoted as M ═ M1,m2,m3}; generally, the difference of the gradient directions of the peak-valley type points is the largest, the difference of the gradient directions of the transition type points is the next largest, and the difference of the gradient directions of the straight line type points is the smallest; the peak valley type points can be used for fitting the outer diameter line and the inner diameter line of the thread and calculating the outer diameter of the thread; linear points are used for fitting two sides of the thread angle to calculate tooth angle parameters; the transitional points are used for eliminating arc line parts on the thread edges, so that the fitting precision of straight lines is improved; the sequence of hidden states is denoted S ═ S1,s2…st},st∈M;
(2) Observation sequences
The gradient direction of each edge point identified by the AA R2Unet method is calculated and denoted θ_t. To eliminate changes in gradient direction caused by the placement angle of the threaded part, and to give the observation feature rotation invariance, the difference between the gradient directions of adjacent edge points is taken as the observation value, denoted a_t. The observation value is calculated as follows:
g_x(k) = S_x * I(k)
g_y(k) = S_y * I(k)
θ_t = arctan(g_y(k) / g_x(k))
a_t = |θ_t − θ_{t−1}|
where S_x and S_y are the gradient templates of the Sobel operator in the horizontal and vertical directions, and I is the image matrix;
(3) initial probability distribution
The initial probability distribution represents the probability of each hidden state occurring in the initial state, denoted π_i = P(s_1 = m_i). Assume the sample size of the observation sequence is N and hidden state m_i occurs N_i times; then the initial probability of the hidden state is π_i = N_i / N;
(4) State transition matrix
The state transition matrix represents the probability that the current hidden state s_t transitions to the next hidden state. The transition of the hidden state of the HMM satisfies the following Markov property:
P(s_{t+1} | s_t, s_{t−1}, …, s_1) = P(s_{t+1} | s_t)
That is, the probability of transitioning to the next hidden state depends only on the current hidden state. Transitions between hidden states are represented by a transition probability matrix A, where a_ij represents the probability that the current state m_i transitions to the next state m_j;
each hidden state has self-transition, and no direct transition exists between a peak-valley point and a straight line point;
(5) probability of occurrence of observed value
Since the observation value o_t is a continuous random variable, a continuous hidden Markov model is used to infer the hidden-state sequence. Assuming the observation values corresponding to each hidden state follow a Gaussian distribution, the mean and variance of each state's Gaussian density function are estimated from the T_i observation samples of that state:
μ_i = (1/T_i) Σ_t a_i(t)
σ_i² = (1/T_i) Σ_t (a_i(t) − μ_i)²
in the formula, ai(t) number of samples representing respective hidden statesAccordingly; because the mean and variance of the gaussian density functions corresponding to different states are different, the probability of the sample data appearing in different hidden states is different. The probability of occurrence of a given observation is as follows:
for a given HMM and observation sequence O ═ O1,o2…otH, it is necessary to deduce the most probable hidden state sequence S ═ S1,s2…st}; to find the optimal hidden state sequence, the Viterbi algorithm is used, in which the definitions are madet(i) Is the maximum probability of a hidden state for a given observation sequence and defines psit(i) Is a backward pointer which records the state on the Viterbi path; the recursion of the Viterbi algorithm can be described as follows:
Initialization:
δ_1(i) = π_i · P(o_1 | m_i)
ψ_1(i) = 0
Recursion:
δ_t(j) = max_i [δ_{t−1}(i) · a_ij] · P(o_t | m_j)
ψ_t(j) = argmax_i [δ_{t−1}(i) · a_ij]
Termination:
h_t = argmax_i δ_t(i)
Backtracking:
h_{t−1} = ψ_t(h_t)
the Viterbi algorithm is used to infer the hidden state for each point and merge adjacent points of the same state, then fit a straight line and calculate the corresponding thread parameters.
The invention provides a fully automatic precise measurement method for high-precision threads based on AA R2Unet and HMM. The method uses an AA R2Unet network to extract the thread edges so as to filter out foreign matter in the image. An HMM (hidden Markov model) is then used to classify the thread edge points, straight lines are fitted, and the thread parameters are calculated.
The invention is not described in detail, but is well known to those skilled in the art.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Claims (10)
1. A high-precision thread full-automatic accurate measurement method based on AA R2Unet and HMM is characterized in that: the method comprises the following steps:
A. AA R2Unet thread edge identification: establishing an AA R2Unet network to extract the thread edge of the acquired thread picture and acquire an image only containing the thread edge;
B. HMM-based classification of thread edge points: calculating the gradient direction of the thread edge point by using the thread edge extracted in the step A and the gray information in the original image, and dividing the thread edge point into a peak-valley type, a linear type and a transition type by taking the gradient direction as an observation object of the HMM;
C. Calculating parameters by fitting straight lines: using the least-squares method, straight lines are fitted to the thread edges classified in step B, the thread edges are paired, and the major diameter and tooth angle of the thread are calculated.
2. The fully automatic precise measurement method for high-precision threads based on AA R2Unet and HMM according to claim 1, characterized in that: in step A, an R2 module and an Attention Augmentation module are added to the Unet; the Unet is overall a symmetrical U-shaped structure comprising 12 units (F1-F12), in which the left-side units F1-F6 form the contraction path, used for feature extraction, and the right-side units F6-F12 form the dilation path, used for recovering details to achieve accurate prediction.
3. The full-automatic precise measurement method for the high-precision threads based on the AA R2Unet and the HMM as claimed in claim 2, wherein the method comprises the following steps: the R2 module includes a residual learning unit and a recursive convolution,
(1) Residual learning unit: assume the input of a neural network unit is x and the expected output is H(x), and define the residual mapping F(x) = H(x) − x. If x is passed directly to the output, the target the unit must learn is this residual mapping F(x). The residual learning unit consists of a series of convolution layers and a shortcut: the input x is carried to the unit's output through the shortcut, so the output of the unit is z = F(x) + x.
(2) recursive convolution: assuming the input is x, successive convolutions are performed on x; the current input x is added to the output of each convolution, and the sum serves as the input of the next convolution.
The R2 module replaces the normal convolution in the residual learning unit with a recursive convolution.
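As a minimal numerical sketch (not the claimed 2-D network), the recursive-convolution and residual-shortcut behaviour described above can be illustrated in one dimension; the kernel, the recursion depth `t` and the helper names are illustrative assumptions:

```python
import numpy as np

def conv1d_same(x, w):
    """'Same'-padded 1-D correlation, standing in for the module's convolutions."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

def recursive_conv(x, w, t=2):
    """Recursive convolution: the original input x is added to the output of
    each convolution, and the sum is fed into the next convolution."""
    y = conv1d_same(x, w)
    for _ in range(t):
        y = conv1d_same(y + x, w)   # input re-injected at every recursion step
    return y

def r2_unit(x, w, t=2):
    """Residual learning unit with the normal convolution replaced by a
    recursive convolution: z = F(x) + x, where the shortcut carries x."""
    return recursive_conv(x, w, t) + x
```

With the identity kernel [0, 1, 0], each recursion step simply re-adds x, which makes the accumulation of the shortcut and the recursive input easy to trace by hand.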
4. The full-automatic precise measurement method for the high-precision threads based on the AA R2Unet and the HMM as claimed in claim 2, wherein: the Attention augmentation essentially obtains a series of key-value mappings through queries. First, an input feature map of size (w, h, c_in) is subjected to a 1×1 convolution, where w, h and c_in denote the width, length and depth of the input feature map. The output is a QKV matrix of size (w, h, 2d_k + d_v), where w, h and 2d_k + d_v denote the width, length and depth of the QKV matrix; the width and length of the QKV matrix are the same as those of the input feature map. The QKV matrix is then split along the depth channel to obtain three matrices Q, K and V with depth-channel sizes d_k, d_k and d_v. Then, adopting the structure of the multi-head attention mechanism, the Q, K and V matrices are each divided along the depth channel into N equal matrices for subsequent calculation; the multi-head attention mechanism expands the original single attention calculation into several smaller, independent parallel calculations, so that the model can learn feature information in different subspaces.
5. The full-automatic precise measurement method for the high-precision threads based on the AA R2Unet and the HMM as claimed in claim 4, wherein: the divided Q, K and V matrices are flattened to generate three matrices Flat_Q, Flat_K and Flat_V, i.e. Q, K and V keep their depth channel unchanged while their length and width are compressed into one dimension; the first two matrices have size (w·h, d_k) and the last has size (w·h, d_v). The Attention augmentation then retains the original self-attention method: the weight matrix is computed by matrix multiplication of Flat_Q and Flat_K, and on this basis a relative-position embedding calculation is added; the relative-position information of each point on the feature map is obtained by weight calculations on the Q matrix in the length and width directions, which prevents shifts of feature position from degrading the final performance of the model.
6. The full-automatic precise measurement method for the high-precision threads based on the AA R2Unet and the HMM as claimed in claim 5, wherein: the relative-position information in the length and width directions is obtained by taking the inner product of the Q matrix with weight matrices H and W respectively, and is denoted S_h and S_w, where the weight matrices H and W are learned during model training and have size (wh, wh, 1). The three resulting matrices are then added and multiplied by a scaling factor to prevent the calculation result from becoming too large; the sum is processed with the softmax function to obtain the final feature-weight matrix; finally, the weight matrix is multiplied by the V matrix, the result is reshaped to the original length and width and subjected to a 1×1 convolution, yielding the final attention feature matrix O.
7. The full-automatic precise measurement method for the high-precision threads based on the AA R2Unet and the HMM as claimed in claim 6, wherein: the attention feature matrix O and the output of the normal convolution are concatenated (concat) along the depth direction to obtain the result of the Attention augmentation; the attention feature matrix O is calculated as follows:

O = Softmax((Q·K^T + S_h + S_w) · λ) · V

where λ is the scaling factor described in claim 6.
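The attention calculation of claims 4-7 can be sketched for a single head on already-flattened matrices; the 1/√d_k scaling is the standard scaled-dot-product choice and, like the function names, is an assumption here (S_h and S_w are taken as given rather than derived from learned embeddings H and W):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax, shifted for numerical stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def aa_attention(Q, K, V, Sh, Sw):
    """Single-head attention step on flattened feature maps.
    Q, K: (w*h, d_k); V: (w*h, d_v); Sh, Sw: (w*h, w*h) relative-position
    logits for the height and width directions."""
    d_k = Q.shape[-1]
    logits = (Q @ K.T + Sh + Sw) / np.sqrt(d_k)  # scaled to keep values small
    weights = softmax(logits)                     # final feature-weight matrix
    return weights @ V                            # attention feature matrix O
```

With zero logits the weights become uniform, so each output row is the mean of the rows of V, which gives a quick sanity check of the softmax normalisation.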
8. The full-automatic precise measurement method for the high-precision threads based on the AA R2Unet and the HMM as claimed in claim 1, wherein: the HMM model in step B consists of the following five parts:
(1) hidden state sequence:
According to differences in gradient direction, thread edge points are divided into three hidden states: peak-valley, transition and straight-line, denoted M = {m1, m2, m3}. In general, peak-valley points show the largest difference in gradient direction, transition points the next largest, and straight-line points the smallest. Peak-valley points are used to fit the outer- and inner-diameter lines of the thread and to calculate the outer diameter; straight-line points are used to fit the two flanks of the thread angle to calculate the thread-angle parameters; transition points are used to eliminate the arc portions of the thread edge, which improves the accuracy of the straight-line fitting. The hidden-state sequence is denoted S = {s1, s2, …, st}, st ∈ M;
(2) Observation sequence
The gradient direction of each edge point identified by the AA R2Unet method is calculated and denoted θ_t. To eliminate changes in gradient direction caused by the placement angle of the threaded part, so that the observation feature is rotation-invariant, the difference of the gradient directions of adjacent edge points is taken as the observed value, denoted a_t. The observed values are calculated as follows:
g_x(k) = S_x * I(k)
g_y(k) = S_y * I(k)
θ_t = arctan(g_y(k) / g_x(k))
a_t = |θ_t − θ_{t−1}|
where S_x and S_y are the gradient templates of the Sobel operator in the horizontal and vertical directions, and I is the image matrix;
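A small sketch of the observation computation, assuming the standard 3×3 Sobel templates and a per-pixel correlation (the boundary handling and the sign convention of the kernels are simplifications):

```python
import numpy as np

# Sobel kernels S_x and S_y (horizontal and vertical gradient templates)
SX = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
SY = SX.T

def gradient_direction(I, y, x):
    """Gradient direction theta at interior pixel (y, x) of image matrix I,
    from the Sobel responses g_x = S_x * I and g_y = S_y * I."""
    patch = I[y - 1:y + 2, x - 1:x + 2].astype(float)
    gx = np.sum(SX * patch)          # correlation at this pixel
    gy = np.sum(SY * patch)
    return np.arctan2(gy, gx)        # arctan2 avoids division by zero

def observation(theta_t, theta_prev):
    """Observed value a_t = |theta_t - theta_{t-1}|: the change in gradient
    direction between adjacent edge points (rotation-invariant)."""
    return abs(theta_t - theta_prev)
```

Using `arctan2` rather than a plain arctan of the ratio keeps the direction well defined when g_x(k) is zero.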
(3) initial probability distribution
The initial probability distribution represents the probability of each hidden state occurring in the initial state, denoted π_i = P(s1 = m_i). Assuming the sample size of the observation sequence is N and hidden-state type m_i occurs N_i times, the initial probability of that hidden-state type is π_i = N_i / N;
(4) State transition matrix
The state transition matrix represents the probability of the current hidden state s_t transitioning to the next hidden state; the hidden-state transitions of the HMM satisfy the Markov property:
P(s_{t+1} | s_t, s_{t−1}, …, s_1) = P(s_{t+1} | s_t)
i.e. the probability of transitioning to the next hidden state depends only on the current hidden state. Transitions between hidden states are represented by the transition probability matrix A, where a_ij represents the probability of transitioning from the current state m_i to the next state m_j;
each hidden state has a self-transition, and there is no direct transition between peak-valley points and straight-line points;
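An illustrative transition matrix with the structure just described (the numeric probabilities are hypothetical; only the zero pattern and the row-stochastic property come from the text):

```python
import numpy as np

# Hidden states: 0 = peak-valley, 1 = transition, 2 = straight-line.
# Every state has a self-transition; the peak-valley <-> straight-line
# entries are 0 because those states never transition directly.
A = np.array([
    [0.8, 0.2, 0.0],   # peak-valley -> {peak-valley, transition}
    [0.1, 0.8, 0.1],   # transition  -> all three states
    [0.0, 0.2, 0.8],   # straight    -> {transition, straight}
])

assert np.allclose(A.sum(axis=1), 1.0)     # rows are probability distributions
assert A[0, 2] == 0.0 and A[2, 0] == 0.0   # forbidden direct transitions
```

In practice the non-zero entries would be estimated from labelled training sequences rather than fixed by hand.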
(5) probability of occurrence of observed value
Because the observed value o_t is a continuous random variable, a continuous hidden Markov model is used to infer the hidden-state sequence; assuming the observed values corresponding to each hidden state follow a Gaussian distribution, the mean and variance of the Gaussian density function are:

μ_i = (1/N_i) Σ_t a_i(t),  σ_i² = (1/N_i) Σ_t (a_i(t) − μ_i)²
where a_i(t) denotes the sample data of the corresponding hidden-state type; because the mean and variance of the Gaussian density function differ between states, the same sample data has a different probability of occurring under each hidden-state type. The probability of a given observation occurring is:

P(o_t | m_i) = (1 / √(2π σ_i²)) · exp(−(o_t − μ_i)² / (2 σ_i²))
For a given HMM and observation sequence O = {o1, o2, …, ot}, the most probable hidden-state sequence S = {s1, s2, …, st} must be inferred.
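The Gaussian emission model of part (5) can be sketched as follows; `estimate_params` is a hypothetical helper computing the maximum-likelihood mean and variance from one state's sample data:

```python
import numpy as np

def estimate_params(samples):
    """Maximum-likelihood mean and variance from the sample data a_i(t)
    of a single hidden-state type."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(), samples.var()

def gaussian_pdf(o, mu, sigma2):
    """Emission density P(o | state): the observation for each hidden state
    is assumed Gaussian with state-specific mean mu and variance sigma2."""
    return np.exp(-(o - mu) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
```

Evaluating the same observation under each state's density gives the per-state emission probabilities used by the decoder.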
9. The full-automatic precise measurement method for the high-precision threads based on the AA R2Unet and the HMM as claimed in claim 8, wherein: to find the optimal hidden-state sequence, the Viterbi algorithm is used, in which δ_t(i) is defined as the maximum probability over hidden-state paths for the given observation sequence, and ψ_t(i) is defined as a backward pointer recording the states on the Viterbi path; the recursion of the Viterbi algorithm can be described as follows:
initialization:
δ_1(i) = P(o_1 | m_i) · π_i
ψ_1(i) = 0
recursion:
δ_t(j) = max_i [δ_{t−1}(i) · a_ij] · P(o_t | m_j)
ψ_t(j) = argmax_i [δ_{t−1}(i) · a_ij]
termination:
h_t = argmax_i δ_t(i)
backtracking:
h_{t−1} = ψ_t(h_t)
The Viterbi algorithm is used to infer the hidden state of each point; adjacent points of the same state are merged, and the thread outer-diameter line and the two flank lines of the thread angle are then fitted.
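A compact sketch of the Viterbi decoding described in claim 9; working in log space to avoid underflow is an implementation choice here, not part of the claim:

```python
import numpy as np

def viterbi(obs_logpdf, A, pi):
    """Most probable hidden-state sequence for a given observation sequence.
    obs_logpdf: (T, N) log emission densities log P(o_t | m_i);
    A: (N, N) transition matrix; pi: (N,) initial distribution.
    delta holds the max log-probability of any path ending in each state;
    psi holds the backward pointers recording that path's previous state."""
    T, N = obs_logpdf.shape
    logA = np.log(np.where(A > 0, A, 1e-300))    # guard the zero entries
    delta = np.log(pi) + obs_logpdf[0]           # initialization
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):                        # recursion
        scores = delta[:, None] + logA           # (previous state, next state)
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_logpdf[t]
    path = [int(delta.argmax())]                 # termination
    for t in range(T - 1, 0, -1):                # backtracking: h_{t-1} = psi_t(h_t)
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

When every observation strongly favours one state, the decoded path should stay in that state throughout.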
10. The full-automatic precise measurement method for the high-precision threads based on the AA R2Unet and the HMM as claimed in claim 1, wherein: the thread edges are paired as follows:
(1) dividing the thread edges into different angle sets according to the inclination angle of the thread outer-diameter line;
(2) within the same angle set, connecting the midpoint of each thread edge profile with the midpoints of the other thread edge profiles in turn; if the connecting line lies inside the thread profile, the two edges are placed in the same connected set;
(3) taking in turn the midpoint of each thread edge profile within the same connected set and drawing through it a straight line perpendicular to the outer-diameter line of the thread edge; if this line intersects the profile of another thread edge in the set, the two thread edges are judged to be successfully paired; otherwise they are not paired.
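Pairing step (3) can be sketched geometrically; the point-within-tolerance test is a discrete stand-in for an exact line-contour intersection, and all names and the tolerance are assumptions:

```python
import numpy as np

def paired(mid, od_angle, other_contour, tol=1.0):
    """Drop a line through the edge midpoint `mid` perpendicular to the
    outer-diameter line (inclination `od_angle`, radians) and report
    whether it passes within `tol` pixels of any point of the other
    edge's contour, i.e. whether the two edges pair successfully."""
    d = np.array([-np.sin(od_angle), np.cos(od_angle)])  # unit perpendicular
    rel = np.asarray(other_contour, dtype=float) - np.asarray(mid, dtype=float)
    dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])   # point-to-line distances
    return bool((dist < tol).any())
```

For a horizontal outer-diameter line (angle 0) the perpendicular is vertical, so a contour point directly above the midpoint pairs while a laterally offset point does not.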
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010741349.6A CN111882542A (en) | 2020-07-29 | 2020-07-29 | Full-automatic precise measurement method for high-precision threads based on AA R2Unet and HMM |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111882542A true CN111882542A (en) | 2020-11-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||