CN107462182A - Cross-section profile deformation detection method based on machine vision and a red line laser - Google Patents

Cross-section profile deformation detection method based on machine vision and a red line laser

Info

Publication number
CN107462182A
CN107462182A CN201710851620.XA
Authority
CN
China
Prior art keywords
red line
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710851620.XA
Other languages
Chinese (zh)
Other versions
CN107462182B (en)
Inventor
康波
李云霞
李夏霖
甘君
唐诗
杨丽萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201710851620.XA priority Critical patent/CN107462182B/en
Publication of CN107462182A publication Critical patent/CN107462182A/en
Application granted granted Critical
Publication of CN107462182B publication Critical patent/CN107462182B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/16Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge

Abstract

The invention discloses a cross-section profile deformation detection method based on machine vision and a red line laser, which mainly comprises two parts: laser light-plane calibration and deformation detection. The light-plane calibration part includes calibrating the camera and calibrating the light plane projected by the red line laser, so that after calibration the three-dimensional coordinates, in the camera coordinate system, of the laser line appearing in a picture can be expressed effectively. The deformation detection part detects, in the photo captured by the camera, the red line projected by the laser, reconstructs the profile according to the calibration result, and finally compares it with the standard cross-section profile, thereby detecting structural deformation.

Description

Cross-section profile deformation detection method based on machine vision and a red line laser
Technical field
The invention belongs to the technical field of image processing and, more specifically, relates to a cross-section profile deformation detection method based on machine vision and a red line laser.
Background technology
Detecting the cross-section profile shape of large-scale structural objects is significant for product or building quality inspection and production safety. For objects such as rails, bridges, tunnels and steel girders, even a slight change in surface shape may indicate a serious quality problem or safety hazard.
Traditional detection is usually manual inspection, which depends on human experience and cannot perceive very slight deformation, so missed detections occur. The alternative relies on large professional instruments, such as laser scanning measurement, which is usually fixed at one point and then moved to the next; such instruments are generally expensive, slow in detection, and inconvenient to carry and operate. Considering cost, precision, detection speed and ease of use, the present invention provides a simple and fast method for detecting object cross-section profile deformation based on machine vision and a red line laser.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and to provide a cross-section profile deformation detection method based on machine vision and a red line laser, in which object cross-section profile deformation is detected by combining a red line laser with machine vision, giving the method good universality.
To achieve the above object, the present invention provides a cross-section profile deformation detection method based on machine vision and a red line laser, characterised in that it comprises the following steps:
(1) Laser light-plane calibration
(1.1) Fix the camera and the red line laser at a certain relative angle, ensuring that the red line projected by the laser onto the object surface lies within the camera field of view;
(1.2) Calibrate the camera using Zhang Zhengyou's calibration method and record the intrinsic parameter matrix M of the camera;
(1.3) Let the camera shoot N groups of pictures, two per group, 2N pictures in total, where the two pictures in each group are shot with the red line laser switched on and switched off respectively; the only difference between the two pictures is whether the red line laser is on;
Take m points on the red line in each group's red line picture, n = m·N points in total; calibrate the extrinsic parameter matrix of each group of pictures using Zhang Zhengyou's calibration method, and combine it with the intrinsic parameter matrix M to obtain the three-dimensional coordinates (x_ck, y_ck, z_ck) of the n points in the camera coordinate system;
(1.4) Substitute the n three-dimensional coordinates (x_ck, y_ck, z_ck) into the following optimization formula to obtain the light-plane coefficient matrix W = [w_1, w_2, w_3, w_4]:
$$\arg\min_{W}\ \sum_{k=1}^{n}\frac{1}{n}\left\|w_{1}x_{ck}+w_{2}y_{ck}+w_{3}z_{ck}+w_{4}\right\|_{2}^{2}+\eta\left\|W\right\|_{2}^{2}$$
where η denotes the regularization parameter;
(2) Image preprocessing
(2.1) Let the camera acquire a three-channel RGB image;
Compute the single-channel red feature map R, whose value at each pixel (i, j) is R(i, j):
$$R(i,j)=\max\!\left(0,\ \frac{255\,D}{r+g+b}\right)$$
$$D=\begin{cases}0, & L\ge 250\\ 5\min(r-g,\ r-b), & 190\le L<250\\ \min(r-g,\ r-b), & 40<L<190\\ \max(r-g,\ r-b), & L\le 40\end{cases}$$
where R(i, j) is the pixel value at pixel (i, j); D is a dependent variable reflecting the proportion of red at pixel (i, j) in the RGB image, a larger value meaning the pixel is redder; L is the lightness of pixel (i, j) in Lab space; and r, g, b are the three color components of pixel (i, j) in RGB space;
(2.2) Filter each pixel (i, j) with a multilevel median filter to obtain the new pixel value P(i, j) of pixel (i, j);
(3) Extract the red line region ROI
(3.1) From the pixel values P(i, j) of the single-channel red feature map R, extract the red line region ROI by horizontal projection and vertical projection:
$$X\_proj(x)=\frac{\sum_{x}P(i,j)}{ncol},\qquad Y\_proj(y)=\frac{\sum_{y}P(i,j)}{nrow}$$
where X_proj(x) denotes the result of the horizontal projection at index x and Y_proj(y) the result of the vertical projection at index y; nrow and ncol are the numbers of rows and columns of the single-channel red feature map R;
(3.2) Convolve X_proj(x) and Y_proj(y):
Xconv = X_proj(x) * h
Yconv = Y_proj(y) * h
where h is the convolution kernel, * is the convolution operator, and Xconv and Yconv are the convolution results;
(3.3) Traverse Xconv from head to tail and from tail to head using a threshold thro1: when traversing from head to tail, record the first position x_1 whose value exceeds thro1; when traversing from tail to head, record the first position y_1 whose value exceeds thro1;
Similarly, apply the same processing to Yconv with a threshold thro2 to obtain the two positions x_2 and y_2;
(3.4) From positions x_1, y_1 and x_2, y_2, obtain the red line region ROI, denoted R(x_2:y_2, x_1:y_1);
(4) Extract the center line of the red line region ROI
On every row of R(x_2:y_2, x_1:y_1), compute the weighted average of the column coordinates over all pixels whose value is not 0:
$$X(i)=\frac{\sum_{j=x_{1}}^{y_{1}} j\,\rho\!\left(P(i,j)\right)}{\sum_{j=x_{1}}^{y_{1}} \rho\!\left(P(i,j)\right)}$$
where i ranges over x_2 ≤ i ≤ y_2, ρ(P(i, j)) is the weighting function of the pixel value P(i, j), and j denotes the column index;
Combine the weighted averages X(i) of all rows of R(x_2:y_2, x_1:y_1) into a vector X, which is the center line of the red line region ROI;
(5) Smooth the center line
(5.1) Normalize the vector X to obtain the vector X';
(5.2) Compute the element-wise product of X' with itself to obtain the vector X'': X'' = X' .* X';
(5.3) Convolve X' and X'' with a kernel whose elements are all 1/τ to obtain mean_o and mean_o2 respectively, where τ is the size of the kernel;
(5.4) Compute the smoothed center-line sequence q:
q = a' .* X' + b'
where a' and b' are vectors;
(6) Substitute the intrinsic parameter matrix M, the light-plane coefficient matrix W and the smoothed sequence q into the following equations:
$$\begin{cases}(q_{i},\,i)^{\prime}=M\,(x_{ci},\,y_{ci},\,z_{ci})^{\prime}\\ w_{1}x_{ci}+w_{2}y_{ci}+w_{3}z_{ci}+w_{4}=0\end{cases}$$
where q_i denotes the i-th element of the smoothed sequence q.
Solving the above equations gives the coordinate sequence (x_ci, y_ci, z_ci) of the object cross-section in the camera coordinate system, x_2 ≤ i ≤ y_2.
Finally, compare the coordinate sequence (x_ci, y_ci, z_ci) with the coordinate sequence computed from the standard cross-section profile; if the coordinate sequence has changed, the object cross-section profile has deformed.
The object of the invention is achieved as follows:
The cross-section profile deformation detection method based on machine vision and a red line laser of the present invention mainly comprises two parts, laser light-plane calibration and deformation detection. The light-plane calibration part includes calibrating the camera and calibrating the light plane projected by the red line laser, so that after calibration the three-dimensional coordinates, in the camera coordinate system, of the laser line appearing in a picture can be expressed effectively. The deformation detection part detects the red line projected by the laser in the photo captured by the camera, reconstructs the profile according to the calibration result, and finally compares it with the standard cross-section profile, thereby detecting structural deformation.
Meanwhile, the cross-section profile deformation detection method based on machine vision and a red line laser of the present invention also has the following beneficial effects:
(1) The present invention adopts a filter-based smoothing method; the filter has linear time complexity and keeps the spatial structure of the original pixel set unchanged while disturbing the pixel values as little as possible, thereby achieving smoothing and denoising;
(2) The present invention first calibrates the camera, using a nonlinear-model calibration method; this method has high precision, reaching sub-pixel accuracy, while the required calibration equipment is simple, only a checkerboard being needed;
(3) In the present invention, in view of the characteristics of the red line itself, the adopted red feature map model can express it accurately.
Brief description of the drawings
Fig. 1 is a flow chart of the cross-section profile deformation detection method based on machine vision and a red line laser according to the invention;
Fig. 2 is a schematic diagram of cross-section profile deformation detection with machine vision and a red line laser;
Fig. 3 is an RGB image;
Fig. 4 is the red feature map;
Fig. 5 is the enhanced red feature map;
Fig. 6 is the ROI interception result;
Fig. 7 is the extraction of the red line center;
Fig. 8 is the smoothing of the red line center;
Fig. 9 shows an RGB image and its corresponding contour image;
Fig. 10 is an enlarged view of the defective part.
Embodiment
Embodiments of the present invention are described below with reference to the accompanying drawings, so that those skilled in the art can better understand the present invention. It should be noted that, in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the main content of the present invention.
Embodiment
Fig. 1 is a flow chart of the cross-section profile deformation detection method based on machine vision and a red line laser according to the invention.
In this embodiment, as shown in Fig. 1, the cross-section profile deformation detection method based on machine vision and a red line laser of the present invention mainly comprises two parts, laser light-plane calibration and deformation detection, which are explained in detail below.
S1. Laser light-plane calibration
S1.1 Camera calibration
Fix the camera and the red line laser at a certain relative angle; the angle can be chosen according to the actual situation, as long as the red line projected by the laser onto the object surface lies within the camera field of view. To reconstruct the object cross-section effectively, the red line cast by the line laser must cut across the cross-section of the object, as shown in Fig. 2.
After the camera and the red line laser are fixed, calibrate the camera using Zhang Zhengyou's calibration method and record the intrinsic parameter matrix M of the camera.
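As a minimal sketch of this camera calibration step (for illustration only, not part of the original disclosure), Zhang's method can be run with OpenCV roughly as follows; the checkerboard geometry (9×6 inner corners, 25 mm squares) and the image folder are assumptions of the sketch.

```python
import glob
import cv2
import numpy as np

# Assumed checkerboard: 9x6 inner corners, 25 mm squares (adjust to the board actually used)
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.jpg"):          # hypothetical folder of checkerboard shots
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# M is the intrinsic parameter matrix referred to in the text; rvecs/tvecs are per-view extrinsics
ret, M, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
print("intrinsic matrix M =\n", M)
```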
S1.2 Light-plane calibration
Fix a checkerboard in front of the camera and take a picture; keeping the camera and the checkerboard in the same position, switch on the red line laser so that its light falls on the checkerboard, and take another picture.
Suppose the camera shoots N groups of pictures, two per group, 2N pictures in total (N is typically 5 to 7), where the two pictures in each group are shot with the red line laser switched on and switched off respectively; the only difference between the two pictures is whether the red line laser is on.
Take m points on the red line in each group's red line picture; since the number of pictures is small, the points are picked by manually magnifying the picture (generally 5 to 8 points), giving n = m·N points in total. Calibrate the extrinsic parameter matrix of each group of pictures using Zhang Zhengyou's calibration method; according to the camera projection formula, first back-calculate the point coordinates in the world coordinate system and then, combining the intrinsic parameter matrix M, compute the point coordinates in the camera coordinate system, thus obtaining the three-dimensional coordinates (x_ck, y_ck, z_ck) of the n points in the camera coordinate system.
Substitute the n three-dimensional coordinates (x_ck, y_ck, z_ck) into the following optimization formula to obtain the light-plane coefficient matrix W = [w_1, w_2, w_3, w_4]:
$$\arg\min_{W}\ \sum_{k=1}^{n}\frac{1}{n}\left\|w_{1}x_{ck}+w_{2}y_{ck}+w_{3}z_{ck}+w_{4}\right\|_{2}^{2}+\eta\left\|W\right\|_{2}^{2}$$
where w_1, w_2, w_3, w_4 are the light-plane equation coefficients and η denotes the regularization parameter, whose value is chosen experimentally according to the actual situation. To avoid the trivial solution, the coefficient w_1 can be fixed to 1; the resulting light-plane equation coefficients are similar to the following:
[w_1, w_2, w_3, w_4] = [1.000, -0.0107, 0.3101, 42.1917]
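A minimal sketch of this regularized plane fit, under the assumption that w_1 is fixed to 1 so that the remaining coefficients follow from a ridge-regression style linear solve (the synthetic test points are for demonstration only):

```python
import numpy as np

def fit_light_plane(pts_cam, eta=1e-3):
    """Fit w1*x + w2*y + w3*z + w4 = 0 to points (x_ck, y_ck, z_ck) in camera
    coordinates, fixing w1 = 1 to avoid the trivial solution; eta plays the
    role of the regularization parameter in the optimization formula above."""
    pts = np.asarray(pts_cam, dtype=float)                    # shape (n, 3)
    n = len(pts)
    A = np.column_stack([pts[:, 1], pts[:, 2], np.ones(n)])   # columns: y, z, 1
    b = -pts[:, 0]                                            # fixed w1*x term moved to the right-hand side
    # Ridge (Tikhonov) solution of (1/n)*||A w - b||^2 + eta*||w||^2
    lhs = A.T @ A / n + eta * np.eye(3)
    rhs = A.T @ b / n
    w2, w3, w4 = np.linalg.solve(lhs, rhs)
    return np.array([1.0, w2, w3, w4])

# Demonstration with synthetic points lying near a plane with the coefficients quoted above
rng = np.random.default_rng(0)
yz = rng.uniform(-50, 50, size=(30, 2))
x = -(-0.0107 * yz[:, 0] + 0.3101 * yz[:, 1] + 42.1917) + rng.normal(0, 1e-3, 30)
print(fit_light_plane(np.column_stack([x, yz])))
```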
S2. Deformation detection
S2.1 Image preprocessing
S2.1.1 Let the camera acquire a three-channel RGB image, as shown in Fig. 3.
Compute the single-channel red feature map R, whose value at each pixel (i, j) is R(i, j):
$$R(i,j)=\max\!\left(0,\ \frac{255\,D}{r+g+b}\right)$$
$$D=\begin{cases}0, & L\ge 250\\ 5\min(r-g,\ r-b), & 190\le L<250\\ \min(r-g,\ r-b), & 40<L<190\\ \max(r-g,\ r-b), & L\le 40\end{cases}$$
where R(i, j) is the pixel value at pixel (i, j); D is a dependent variable reflecting the proportion of red at pixel (i, j) in the RGB image, a larger value meaning the pixel is redder, characterised through red minus green and red minus blue; L is the lightness of pixel (i, j) in Lab space, the actual Lab lightness range being normalized and then scaled to 0-255; r, g, b are the three color components of pixel (i, j) in RGB space. In this embodiment, the single-channel red feature map R is shown in Fig. 4.
S2.1.2 To enhance the red feature map, filter each pixel (i, j) with a multilevel median filter to obtain the new pixel value P(i, j); the enhanced red feature map is shown in Fig. 5.
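A minimal sketch of the red feature map and the subsequent filtering, assuming an 8-bit BGR image as read by OpenCV and using an ordinary square median filter as a stand-in for the multilevel median filter (whose exact structure is not spelled out here):

```python
import cv2
import numpy as np

def red_feature_map(bgr):
    """Single-channel red feature map R(i, j) built from the piecewise definition of D."""
    b, g, r = [c.astype(np.float32) for c in cv2.split(bgr)]
    # For 8-bit images OpenCV already stores the Lab lightness L scaled to 0-255
    L = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)[:, :, 0].astype(np.float32)
    d_min = np.minimum(r - g, r - b)
    d_max = np.maximum(r - g, r - b)
    D = np.where(L >= 250, 0.0,
        np.where(L >= 190, 5.0 * d_min,
        np.where(L > 40, d_min, d_max)))
    R = np.maximum(0.0, 255.0 * D / (r + g + b + 1e-6))
    return np.clip(R, 0, 255).astype(np.uint8)

bgr = cv2.imread("frame.jpg")        # hypothetical captured image
R = red_feature_map(bgr)
P = cv2.medianBlur(R, 5)             # stand-in for the multilevel median filter
```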
S2.2 Extract the red line region ROI
S2.2.1 The region occupied by the red line in a picture is very small, and regions other than the red line are of no use for reconstructing the object cross-section. Therefore, from the pixel values P(i, j) of the single-channel red feature map R, extract the red line region ROI by horizontal projection and vertical projection:
$$X\_proj(x)=\frac{\sum_{x}P(i,j)}{ncol},\qquad Y\_proj(y)=\frac{\sum_{y}P(i,j)}{nrow}$$
where X_proj(x), with 0 ≤ x ≤ ncol, denotes the result of the horizontal projection at index x, and Y_proj(y), with 0 ≤ y ≤ nrow, denotes the result of the vertical projection at index y; nrow and ncol are the numbers of rows and columns of the single-channel red feature map R, and dividing by them removes the influence of picture size on the result.
S2.2.2 Convolve X_proj(x) and Y_proj(y):
Xconv = X_proj(x) * h
Yconv = Y_proj(y) * h
where h is the convolution kernel, generally taken as 10-20; * is the convolution operator; Xconv and Yconv are the convolution results.
S2.2.3 Traverse Xconv from head to tail and from tail to head using a threshold thro1: when traversing from head to tail, record the first position x_1 whose value exceeds thro1; when traversing from tail to head, record the first position y_1 whose value exceeds thro1.
Similarly, apply the same processing to Yconv with a threshold thro2 to obtain the two positions x_2 and y_2.
S2.2.4 From positions x_1, y_1 and x_2, y_2, obtain the red line region ROI, denoted R(x_2:y_2, x_1:y_1), as shown in Fig. 6.
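A sketch of the ROI localisation of steps S2.2.1-S2.2.4 under stated assumptions: h is interpreted as a box (averaging) kernel whose length falls in the quoted 10-20 range, the thresholds default to a fraction of the peak, and the bounds found from the row projection are treated as row limits while those from the column projection are treated as column limits.

```python
import numpy as np

def extract_roi(P, kernel_len=15, thro1=None, thro2=None):
    """Locate the red line region from the filtered red feature map P via
    row/column projections, a smoothing convolution and threshold traversal."""
    nrow, ncol = P.shape
    X_proj = P.sum(axis=1) / ncol          # projection over each row
    Y_proj = P.sum(axis=0) / nrow          # projection over each column

    h = np.ones(kernel_len) / kernel_len   # box kernel standing in for the convolution kernel h
    Xconv = np.convolve(X_proj, h, mode="same")
    Yconv = np.convolve(Y_proj, h, mode="same")

    thro1 = 0.3 * Xconv.max() if thro1 is None else thro1
    thro2 = 0.3 * Yconv.max() if thro2 is None else thro2

    def first_last_above(v, t):
        idx = np.flatnonzero(v > t)        # head-to-tail and tail-to-head hits in one pass
        return idx[0], idx[-1]

    x1, y1 = first_last_above(Xconv, thro1)   # row bounds
    x2, y2 = first_last_above(Yconv, thro2)   # column bounds
    return P[x1:y1 + 1, x2:y2 + 1], (x1, y1, x2, y2)
```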
S2.3 Extract the center line of the red line region ROI
On every row of R(x_2:y_2, x_1:y_1), compute the weighted average of the column coordinates over all pixels whose value is not 0:
$$X(i)=\frac{\sum_{j=x_{1}}^{y_{1}} j\,\rho\!\left(P(i,j)\right)}{\sum_{j=x_{1}}^{y_{1}} \rho\!\left(P(i,j)\right)}$$
where i ranges over x_2 ≤ i ≤ y_2; ρ(P(i, j)) is the weighting function of the pixel value P(i, j), with the expression ρ(P(i, j)) = P(i, j)² / (σ² + P(i, j)²), σ being a hyperparameter with a value in the range 0.5 to 5; j denotes the column index.
Combine the weighted averages X(i) of all rows of R(x_2:y_2, x_1:y_1) into a vector X; the result is the center line of the red line region ROI, as shown in Fig. 7. Throughout this step, each X(i) must remain associated with its row index i.
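A minimal sketch of this weighted center-line extraction, with σ treated as the tunable hyperparameter described above:

```python
import numpy as np

def extract_center_line(roi, sigma=2.0):
    """Per row of the ROI, return the rho-weighted average column coordinate X(i);
    rows whose weights are all zero yield NaN and can be skipped downstream."""
    roi = roi.astype(np.float64)
    rho = roi ** 2 / (sigma ** 2 + roi ** 2)   # weighting function rho(P(i, j))
    rho[roi == 0] = 0.0                        # only pixels with non-zero value contribute
    cols = np.arange(roi.shape[1], dtype=np.float64)
    num = (rho * cols).sum(axis=1)
    den = rho.sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        X = num / den                          # X(i): weighted column coordinate of row i
    return X
```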
S2.4 Smooth the center line
S2.4.1 Normalize the vector X to obtain the vector X';
S2.4.2 Compute the element-wise product of X' with itself to obtain the vector X'': X'' = X' .* X';
S2.4.3 Convolve X' and X'' with a kernel whose elements are all 1/τ to obtain mean_o and mean_o2 respectively, where τ is the size of the kernel and is chosen according to the actual effect; in this example τ = 9.
S2.4.4 Compute the smoothed center-line sequence q:
q = a' .* X' + b'
The final smoothed sequence is q, as shown in Fig. 8.
Here the vectors a' and b' are obtained as follows:
1) Compute the vectors a and b:
a = (mean_o2 - mean_o .* mean_o) / (mean_o2 + ε)
b = mean_o - a .* mean_o
where ε is the smoothness parameter, with a value of 0.1-1.5; .* denotes element-wise multiplication and / denotes element-wise division;
2) Convolve a and b with the same all-1/τ kernel to obtain the vectors a' and b' respectively, where τ is the size of the kernel.
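A sketch of the center-line smoothing following the formulas above; taking the normalization of X to mean min-max scaling into [0, 1] is an assumption of this sketch.

```python
import numpy as np

def smooth_center_line(X, tau=9, eps=0.5):
    """Smooth the center line using the mean_o / mean_o2 / a / b construction
    of step S2.4 and return the sequence q."""
    X = np.asarray(X, dtype=np.float64)
    Xp = (X - X.min()) / (X.max() - X.min() + 1e-12)   # X': normalized center line (assumed min-max)
    Xpp = Xp * Xp                                      # X'' = X' .* X'

    kernel = np.ones(tau) / tau                        # all-1/tau averaging kernel
    mean_o = np.convolve(Xp, kernel, mode="same")
    mean_o2 = np.convolve(Xpp, kernel, mode="same")

    a = (mean_o2 - mean_o * mean_o) / (mean_o2 + eps)  # as written in the text
    b = mean_o - a * mean_o
    a_p = np.convolve(a, kernel, mode="same")          # a'
    b_p = np.convolve(b, kernel, mode="same")          # b'
    return a_p * Xp + b_p                              # smoothed sequence q
```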
S2.5 Establish the three-dimensional coordinates
Substitute the intrinsic parameter matrix M, the light-plane coefficient matrix W and the smoothed sequence q into the following equations:
$$\begin{cases}(q_{i},\,i)^{\prime}=M\,(x_{ci},\,y_{ci},\,z_{ci})^{\prime}\\ w_{1}x_{ci}+w_{2}y_{ci}+w_{3}z_{ci}+w_{4}=0\end{cases}$$
where q_i denotes the i-th element of the smoothed sequence q.
Solving the above equations gives the coordinate sequence (x_ci, y_ci, z_ci) of the object cross-section in the camera coordinate system, x_2 ≤ i ≤ y_2.
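One way to read this step is ray-plane intersection: each center-line pixel (q_i, i) defines a viewing ray through the intrinsic matrix M, and that ray is intersected with the calibrated light plane. The sketch below assumes q_i is expressed in pixel units and uses the homogeneous pixel (q_i, i, 1).

```python
import numpy as np

def reconstruct_section(q, rows, M, W):
    """Back-project each center-line pixel (q_i, i) onto the calibrated light plane
    w1*x + w2*y + w3*z + w4 = 0, returning points (x_ci, y_ci, z_ci) in the camera frame."""
    M_inv = np.linalg.inv(np.asarray(M, dtype=np.float64))
    w1, w2, w3, w4 = W
    normal = np.array([w1, w2, w3])
    pts = []
    for u, v in zip(q, rows):
        ray = M_inv @ np.array([u, v, 1.0])   # viewing-ray direction for pixel (u, v)
        s = -w4 / (normal @ ray)              # depth scale at which the ray meets the plane
        pts.append(s * ray)                   # (x_ci, y_ci, z_ci)
    return np.array(pts)
```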
S2.6 Object cross-section deformation detection
To detect whether the object has deformed, compare the coordinate sequence (x_ci, y_ci, z_ci) with the coordinate sequence computed from the standard cross-section profile; if the coordinate sequence has changed, the cross-section of the monitored object has deformed. In Fig. 9, the left image is the RGB image; in the right image, the solid line is the experimental result obtained with the above steps and the dotted line is the profile of the standard item, and the defect at the position of the rectangular frame can be found by the comparison. Fig. 10 is an enlarged picture of the defective part in Fig. 9.
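The comparison itself is not prescribed in detail; one possible reading is a per-point deviation check against the standard profile with a tolerance, as sketched below (the common sampling of both sequences and the tolerance value are assumptions).

```python
import numpy as np

def detect_deformation(measured, standard, tol=1.0):
    """Flag points of the measured section whose distance to the corresponding
    point of the standard profile exceeds tol (units follow the calibration)."""
    measured = np.asarray(measured, dtype=np.float64)
    standard = np.asarray(standard, dtype=np.float64)
    n = min(len(measured), len(standard))      # assume both are sampled on the same index range
    dist = np.linalg.norm(measured[:n] - standard[:n], axis=1)
    deformed = dist > tol
    return deformed.any(), np.flatnonzero(deformed)
```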
Although illustrative embodiments of the present invention have been described above so that those skilled in the art can understand the present invention, it should be clear that the invention is not restricted to the scope of these embodiments. To those of ordinary skill in the art, various changes within the spirit and scope of the present invention as defined and determined by the appended claims are obvious, and all innovations and creations using the inventive concept fall within the scope of protection.

Claims (3)

1. A cross-section profile deformation detection method based on machine vision and a red line laser, characterised in that it comprises the following steps:
(1) Laser light-plane calibration
(1.1) Fix the camera and the red line laser at a certain relative angle, ensuring that the red line projected by the laser onto the object surface lies within the camera field of view;
(1.2) Calibrate the camera using Zhang Zhengyou's calibration method and record the intrinsic parameter matrix M of the camera;
(1.3) Let the camera shoot N groups of pictures, two per group, 2N pictures in total, where the two pictures in each group are shot with the red line laser switched on and switched off respectively; the only difference between the two pictures is whether the red line laser is on;
Take m points on the red line in each group's red line picture, n = m·N points in total; calibrate the extrinsic parameter matrix of each group of pictures using Zhang Zhengyou's calibration method, and combine it with the intrinsic parameter matrix M to obtain the three-dimensional coordinates (x_ck, y_ck, z_ck) of the n points in the camera coordinate system;
(1.4) Substitute the n three-dimensional coordinates (x_ck, y_ck, z_ck) into the following optimization formula to obtain the light-plane coefficient matrix W = [w_1, w_2, w_3, w_4]:
$$\arg\min_{W}\ \sum_{k=1}^{n}\frac{1}{n}\left\|w_{1}x_{ck}+w_{2}y_{ck}+w_{3}z_{ck}+w_{4}\right\|_{2}^{2}+\eta\left\|W\right\|_{2}^{2}$$
where η denotes the regularization parameter;
(2) Image preprocessing;
(2.1) Let the camera acquire a three-channel RGB image;
Compute the single-channel red feature map R, whose value at each pixel (i, j) is R(i, j):
$$R(i,j)=\max\!\left(0,\ \frac{255\,D}{r+g+b}\right)$$
$$D=\begin{cases}0, & L\ge 250\\ 5\min(r-g,\ r-b), & 190\le L<250\\ \min(r-g,\ r-b), & 40<L<190\\ \max(r-g,\ r-b), & L\le 40\end{cases}$$
where R(i, j) is the pixel value at pixel (i, j); D is a dependent variable reflecting the proportion of red at pixel (i, j) in the RGB image, a larger value meaning the pixel is redder; L is the lightness of pixel (i, j) in Lab space; r, g, b are the three color components of pixel (i, j) in RGB space;
(2.2) Filter each pixel (i, j) with a multilevel median filter to obtain the new pixel value P(i, j) of pixel (i, j);
(3) Extract the red line region ROI
(3.1) From the pixel values P(i, j) of the single-channel red feature map R, extract the red line region ROI by horizontal projection and vertical projection:
$$X\_proj(x)=\frac{\sum_{x}P(i,j)}{ncol},\qquad Y\_proj(y)=\frac{\sum_{y}P(i,j)}{nrow}$$
where X_proj(x) denotes the result of the horizontal projection at index x and Y_proj(y) the result of the vertical projection at index y; nrow and ncol are the numbers of rows and columns of the single-channel red feature map R;
(3.2) Convolve X_proj(x) and Y_proj(y):
Xconv = X_proj(x) * h
Yconv = Y_proj(y) * h
where h is the convolution kernel, * is the convolution operator, and Xconv and Yconv are the convolution results;
(3.3) Traverse Xconv from head to tail and from tail to head using a threshold thro1: when traversing from head to tail, record the first position x_1 whose value exceeds thro1; when traversing from tail to head, record the first position y_1 whose value exceeds thro1;
Similarly, apply the same processing to Yconv with a threshold thro2 to obtain the two positions x_2 and y_2;
(3.4) From positions x_1, y_1 and x_2, y_2, obtain the red line region ROI, denoted R(x_2:y_2, x_1:y_1);
(4) Extract the center line of the red line region ROI
On every row of R(x_2:y_2, x_1:y_1), compute the weighted average of the column coordinates over all pixels whose value is not 0:
$$X(i)=\frac{\sum_{j=x_{1}}^{y_{1}} j\,\rho\!\left(P(i,j)\right)}{\sum_{j=x_{1}}^{y_{1}} \rho\!\left(P(i,j)\right)}$$
where i ranges over x_2 ≤ i ≤ y_2; ρ(P(i, j)) is the weighting function of the pixel value P(i, j); j denotes the column index;
Combine the weighted averages X(i) of all rows of R(x_2:y_2, x_1:y_1) into a vector X, which is the center line of the red line region ROI;
(5) Smooth the center line
(5.1) Normalize the vector X to obtain the vector X';
(5.2) Compute the element-wise product of X' with itself to obtain the vector X'': X'' = X' .* X';
(5.3) Convolve X' and X'' with a kernel whose elements are all 1/τ to obtain mean_o and mean_o2 respectively, where τ is the size of the kernel;
(5.4) Compute the smoothed center-line sequence q:
q = a' .* X' + b'
where a' and b' are vectors;
(6) Substitute the intrinsic parameter matrix M, the light-plane coefficient matrix W and the smoothed sequence q into the following equations:
$$\begin{cases}(q_{i},\,i)^{\prime}=M\,(x_{ci},\,y_{ci},\,z_{ci})^{\prime}\\ w_{1}x_{ci}+w_{2}y_{ci}+w_{3}z_{ci}+w_{4}=0\end{cases}$$
where q_i denotes the i-th element of the smoothed sequence q;
Solving the above equations gives the coordinate sequence (x_ci, y_ci, z_ci) of the object cross-section in the camera coordinate system, x_2 ≤ i ≤ y_2;
Finally, compare the coordinate sequence (x_ci, y_ci, z_ci) with the coordinate sequence computed from the standard cross-section profile; if the coordinate sequence has changed, the object cross-section profile has deformed.
2. The cross-section profile deformation detection method based on machine vision and a red line laser according to claim 1, characterised in that the expression of the function ρ(P(i, j)) is:
$$\rho\!\left(P(i,j)\right)=\frac{P(i,j)^{2}}{\sigma^{2}+P(i,j)^{2}}$$
where σ is a hyperparameter.
3. The cross-section profile deformation detection method based on machine vision and a red line laser according to claim 1, characterised in that a' and b' are obtained as follows:
1) Compute the vectors a and b:
a = (mean_o2 - mean_o .* mean_o) / (mean_o2 + ε)
b = mean_o - a .* mean_o
where ε is the smoothness parameter; .* denotes element-wise multiplication and / denotes element-wise division;
2) Convolve a and b with the same all-1/τ kernel to obtain the vectors a' and b' respectively, where τ is the size of the kernel.
CN201710851620.XA 2017-09-19 2017-09-19 A kind of cross section profile deformation detecting method based on machine vision and red line laser Expired - Fee Related CN107462182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710851620.XA CN107462182B (en) 2017-09-19 2017-09-19 A kind of cross section profile deformation detecting method based on machine vision and red line laser

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710851620.XA CN107462182B (en) 2017-09-19 2017-09-19 A kind of cross section profile deformation detecting method based on machine vision and red line laser

Publications (2)

Publication Number Publication Date
CN107462182A true CN107462182A (en) 2017-12-12
CN107462182B CN107462182B (en) 2019-05-28

Family

ID=60551652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710851620.XA Expired - Fee Related CN107462182B (en) 2017-09-19 2017-09-19 A kind of cross section profile deformation detecting method based on machine vision and red line laser

Country Status (1)

Country Link
CN (1) CN107462182B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2302499Y (en) * 1997-05-05 1998-12-30 南京理工大学 Portable laser pavement deflection detecting instrument
JP2009092535A (en) * 2007-10-10 2009-04-30 Ono Sokki Co Ltd Optical displacement gauge
CN101936732A (en) * 2009-07-03 2011-01-05 南京理工大学 Large-span high-straightness laser surface reticle instrument
CN102135414A (en) * 2010-12-29 2011-07-27 武汉大学 Method for calculating displacement of wall rock
CN103344190A (en) * 2013-06-26 2013-10-09 科瑞自动化技术(深圳)有限公司 Method and system for measuring postures of elastic arm based on line scanning
CN105651198A (en) * 2016-01-14 2016-06-08 清华大学 Stress monitoring method and stress monitoring device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王星: "隧道形变动态监测与分析系统研究", 《中国优秀硕士学位论文全文数据库工程科技Ⅱ辑》 *
蔡波: "桥梁形变的图像检测关键技术研究", 《中国优秀博士学位论文全文数据库信息科技辑》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108645862A (en) * 2018-04-26 2018-10-12 长春新产业光电技术有限公司 A kind of large format glass plate Local Convex concave defect detection method based on laser
CN109668524A (en) * 2019-01-30 2019-04-23 深圳科瑞技术股份有限公司 The method and system of detection battery main body appearance profile are illuminated based on linear type linear laser
CN110899147A (en) * 2019-11-28 2020-03-24 武汉工程大学 Laser scanning-based online stone sorting method for conveyor belt
CN110899147B (en) * 2019-11-28 2021-10-08 武汉工程大学 Laser scanning-based online stone sorting method for conveyor belt
CN113313750A (en) * 2020-12-01 2021-08-27 中冶长天国际工程有限责任公司 System and method for detecting material layer thickness of sintering machine
CN114324363A (en) * 2021-12-31 2022-04-12 苏州艾方芯动自动化设备有限公司 Product state detection method and system
CN114324363B (en) * 2021-12-31 2024-04-26 无锡艾方芯动自动化设备有限公司 Product state detection method and system
CN114578384A (en) * 2022-05-07 2022-06-03 成都凯天电子股份有限公司 Self-adaptive constant false alarm detection method for laser atmospheric system

Also Published As

Publication number Publication date
CN107462182B (en) 2019-05-28

Similar Documents

Publication Publication Date Title
CN107462182B (en) A kind of cross section profile deformation detecting method based on machine vision and red line laser
CN107451590B (en) Gas detection identification and concentration representation method based on hyperspectral infrared image
CN103927741B (en) SAR image synthesis method for enhancing target characteristics
CN107179322A (en) A kind of bridge bottom crack detection method based on binocular vision
CN109754377A (en) A kind of more exposure image fusion methods
CN110322522B (en) Vehicle color recognition method based on target recognition area interception
CN109598762A (en) A kind of high-precision binocular camera scaling method
CN108921819B (en) Cloth inspecting device and method based on machine vision
CN104732900B (en) Picture element flaw detection method and device
CN103440644B (en) A kind of multi-scale image weak edge detection method based on minimum description length
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN115100206B (en) Printing defect identification method for textile with periodic pattern
CN103234475B (en) Sub-pixel surface morphology detecting method based on laser triangular measuring method
CN105894520B (en) A kind of automatic cloud detection method of optic of satellite image based on gauss hybrid models
CN110189375A (en) A kind of images steganalysis method based on monocular vision measurement
CN110111711A (en) The detection method and device of screen, computer readable storage medium
CN111080574A (en) Fabric defect detection method based on information entropy and visual attention mechanism
CN110503623A (en) A method of Bird&#39;s Nest defect on the identification transmission line of electricity based on convolutional neural networks
CN108257125A (en) A kind of depth image quality based on natural scene statistics is without with reference to evaluation method
CN109507198A (en) Mask detection system and method based on Fast Fourier Transform (FFT) and linear Gauss
CN105225243B (en) One kind can antimierophonic method for detecting image edge
CN109741285A (en) A kind of construction method and system of underwater picture data set
CN108492306A (en) A kind of X-type Angular Point Extracting Method based on image outline
CN108830856A (en) A kind of GA automatic division method based on time series SD-OCT retinal images
CN104598906B (en) Vehicle outline detection method and its device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190528

Termination date: 20210919