CN107835931B - Method for monitoring linear dimension of three-dimensional entity - Google Patents

Method for monitoring linear dimension of three-dimensional entity

Info

Publication number
CN107835931B
CN107835931B
Authority
CN
China
Prior art keywords
lines
camera
projector
matrix
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201580080870.7A
Other languages
Chinese (zh)
Other versions
CN107835931A (en)
Inventor
Andrei Vladimirovich Klimov
Aleksandr Georgievich Lomakin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anna Stibwa
Lomakin Aleksandr Georgievich
Original Assignee
Lomakin Aleksandr Georgievich
Annena
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lomakin Aleksandr Georgievich, Annena filed Critical Lomakin Aleksandr Georgievich
Publication of CN107835931A publication Critical patent/CN107835931A/en
Application granted granted Critical
Publication of CN107835931B publication Critical patent/CN107835931B/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a method for performing 3D object measurements, comprising: projecting an image having a periodic structure composed of lines using a projector; recording the projector light reflected from the object using pixels of the camera's receiving matrix, a triangulation angle being formed between the central beam of the projector and the central beam of the camera; then identifying the lines formed by the reflected light on the recorded pixels of the camera matrix in order to determine coordinates and identify lines in the camera image. An image consisting of two sets of intersecting lines is projected by the projector, the lines in each set being parallel to each other and inclined at an angle to the vertical axis of the triangulation-angle plane; the intersections of each pair of lines with each other, and the pixel columns and rows of the camera matrix on which they are registered, are then determined and identified. The invention shortens the measurement time and reduces the probability of imaging and measurement errors for the measured object.

Description

Method for monitoring linear dimension of three-dimensional entity
Technical Field
The present invention relates to a measuring device and can be used for accurate three-dimensional (3D) measurement and three-dimensional object profile visualization by observing known projection patterns at different triangulation angles.
Background
A method for optically measuring a surface shape is known, which includes: arranging the surface in the illumination field of a projection optical system and simultaneously in the field of view of an image detector; projecting a set of images with a prescribed luminous-flux pattern onto the surface to be measured using the projection optical system; detecting a corresponding set of images of the surface observed at an angle different from the projection angle; and determining the shape of the surface under test from the recorded images. At least three periodic light-intensity distributions are projected alternately onto the surface; each distribution is a set of light bands whose intensity varies sinusoidally in the transverse direction, and the distributions differ from one another by a controlled shift of the band set in the direction perpendicular to the bands. The registered images are processed to obtain a preliminary phase distribution containing the phase corresponding to each surface point. In addition, an extra light-intensity distribution is projected once onto the surface, which makes it possible to determine, for each point of the surface, the number of the light band from the band set; additional images of the surface are registered, and the resulting phase distribution of each visible point of the surface is obtained from the image of the object illuminated by the preliminary distribution and the image of the object illuminated by the complementary distribution. The absolute coordinates of the surface points are then obtained from the resulting phase distribution using pre-calibration data. Measurements by this method assume that each surface point is imaged under illumination by the direct beam of the projector only, and that the illumination of the image of a target point in the image detector is proportional to the brightness of the beam incident on that point directly from the projector (RU No. 2148793).
The disadvantage of this method is the complexity of its implementation and the duration of the process: in the presence of mechanical fluctuations of the device position (projector and camera), a considerable amount of time is required to take measurements and to account for this source of error.
A method and a device for contactless control and recognition of the surface of a three-dimensional object by means of structured illumination are known. The device comprises, mounted in sequence along the radiation path: a light source and a transparency capable of forming an aperiodic line-structured light pattern; an afocal optical system for projecting the transparency image onto the controlled surface; a receiving lens that forms an image of the line-structure pattern appearing on the surface of the controlled object, distorted by the surface contour; a photographic recorder that converts the image formed by the receiving lens into digital form; and a computational digital electronics unit that recalculates the digital image produced by the photographic recorder into coordinates of the controlled surface. The device is additionally provided with: N−1 further radiation sources, each differing from the others in spectral range of radiation; N−1 transparencies, each differing from the others in at least one optical band; N−1 lenses installed behind the transparencies; N−1 mirrors installed in front of the second component of the afocal optical system at an angle of 45° to the optical axis of each of the N−1 lenses, and N−1 further mirrors installed behind the receiving lens at an angle of 45° to its optical axis; N−1 secondary receiving lenses, each installed behind one of the second set of N−1 mirrors, which together with the receiving lens form images of the line-structure pattern appearing on the surface of the controlled object, distorted by the surface contour; N−1 photographic recorders, each with a spectral sensitivity region coinciding with the spectral range of one of the N−1 radiation sources; N−1 computational digital electronics units; and an image-addition electronic unit having a number of inputs equal to the number of computational digital electronics units, each input being connected to the output of one such unit. The number N is determined by the formula N = log2(L), where L is the number of spatial-resolution element pairs of the photographic recorder (RU No. 2199718).
The disadvantage of this method is likewise the complexity of its implementation and the duration of the process: in the presence of mechanical fluctuations of the device position (projector and camera), a considerable amount of time is required to take measurements and to account for this source of error.
A method, and an apparatus implementing it, for monitoring the linear dimensions of a three-dimensional object along three Cartesian coordinates is known. Two cameras are located to the right and left of the projector, forming a stereo pair similar to human vision. The projector projects a band image onto the object. Images are obtained from the right and left cameras, and the two images are then compared by a correlation method: for each band in the right image, a matching band in the left image is sought by direct search over all bands of the left image (US 6377700).
The disadvantage of this method is that searching over all possible pairs of bands and running the correlation algorithm on a computer takes a long time.
A method for three-dimensional object measurement using structured illumination is known, in which: a predetermined image having at least two non-intersecting lines along one of the longitudinal axes is projected by the projector onto the object to be examined; the projector light reflected from the object is recorded using at least two cameras placed at different distances from the projector, so that different triangulation angles are formed between the central beam of the projector and the central beams of the cameras; each line formed by the reflected projector light and received by each camera is then identified by comparing the line coordinates obtained by the cameras. The triangulation angle between the central beam of the projector and the central beam of the first camera, located at the minimum distance from the projector, is selected as the arctangent of the ratio of the distance between the projected light bands to the depth of field of the camera lens. In the image of the first camera, the longitudinal and vertical coordinates of a line centre are determined, the vertical coordinate being the quotient of the longitudinal coordinate and the tangent of the triangulation angle of the first camera; to refine the vertical coordinate, its value is obtained using a second camera located at a third, larger triangulation angle, so that in the image of the second camera the position is found of the line identical to the line closest to the vertical coordinate calculated as the product of the vertical coordinate determined by the first camera and the tangent of the triangulation angle of the second camera; the refined values of the longitudinal and vertical coordinates are then determined for these lines (WO2014074003, prototype).
The disadvantages of this approach are as follows. In practice at least two cameras are required, and preferably three or more; if a single camera is used to determine the Z coordinate, significant errors can arise from errors in relating the points on the lines reflected from the object to the fields recorded by the cameras on the pixel rows and columns of the camera matrix, since the received fields associated with the periods of the reflected lines occupy the maximum number of columns and rows of pixels of the camera matrix. The first camera, at a small angle to the projector, obtains an image in which the field of a projected line never overlaps the field of another line wherever the object is located in the working area, but the accuracy of determining the 3D coordinates is not very high; the second camera is therefore needed for unambiguous determination.
The line field in the camera image is the area of camera-matrix pixels within which the centre of a projected line can fall; the size of this field depends on the period between the projected lines and on their thickness. Without a second camera, it is almost impossible to pinpoint the area of the matrix onto which a target point is projected.
Disclosure of Invention
The technical purpose of the invention is to develop an efficient method of 3D object measurement using structured illumination, and to extend the range of methods that perform 3D object measurements with structured illumination.
The technical effect that solves the formulated task is a shorter measurement time and a reduced probability of imaging and measurement errors for the measured object, i.e. errors in relating points on the lines reflected by the object to the fields recorded by the camera on the pixel columns and rows of the camera matrix. The search for, and formation of, each point in the camera image is performed along two lines, i.e. as the intersection of two mutually perpendicular lines, which practically excludes both a wrong search for points along a line and a wrong determination of the line number; the received fields of the reflected intersecting lines, rotated with respect to the matrix columns and rows, occupy the smallest possible number of pixel columns and rows of the single camera matrix required to implement the claimed method. In the present application, the projected horizontal lines perpendicular to the vertical lines play the role of the second camera: every intersection of a vertical line with a horizontal line is uniquely assigned the number of the horizontal line, and the 3D coordinates are determined from the intersections of the lines, or from the horizontal lines, using one camera and within the image of that one camera.
The blurred region, the region in which a desired point may be located, therefore contains the minimum number of pixels and is significantly smaller than in the known method.
The essence of the invention is that the method for performing 3D object measurements comprises: projecting an image having a periodic structure composed of lines using a projector; recording the projector light reflected from the object using pixels of the camera's receiving matrix, a triangulation angle being formed between the central beam of the projector and the central beam of the camera; then identifying the lines formed by the reflected light on the recorded pixels of the camera matrix in order to determine coordinates and identify lines in the camera image. An image consisting of two sets of intersecting lines is projected by the projector, the lines in each set being parallel to each other and inclined at an angle to the vertical axis of the triangulation-angle plane; the intersections of each pair of lines with each other, and the pixel columns and rows of the camera matrix on which they are registered, are then determined and identified.
Preferably, each intersection of a pair of projected lines with a vertical pixel column found on the matrix is determined as the coordinate Xn of a point N on the object, the intersection of a horizontal pixel row with the pair of projected lines is determined as the coordinate Yn on the object, and the coordinate Z is determined by the relation Z = M × Yn/sin(α), where M is the lens scale factor used to convert pixels into the spatial dimension and α is the triangulation angle.
Preferably, the sets of parallel projected lines are mutually perpendicular in pairs, the lines in each set are located at equal distances from each other, and the inclination angle of the projected lines is chosen to be acute.
Preferably, one of the mutually perpendicular lines is at an acute angle to the pixel columns, and the other at an acute angle to the pixel rows, of the camera matrix, the angle being selected from the relationship β = arcsin(Tv2 × M/((Z2−Z1) × sin α)), where β is the position angle of the projected lines, Tv2 is the distance between adjacent projected lines, M is the lens scale factor used to convert pixels into the spatial dimension, Z1 and Z2 are the boundaries of the joint working area of the projector 1 and camera 5, and α is the triangulation angle.
Preferably, the measurements and coordinate determinations are performed using a computer processor, and a 3D image of the measured object is formed on a computer monitor.
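As a hedged illustration of these relations, the following minimal Python sketch recovers the coordinates of one detected intersection; the function name, the zero-point convention and the scalar treatment of M are assumptions (the description below notes that M may also be a per-lens correction matrix):

    import math

    def point_coordinates(col_px, row_px, M, alpha_deg, zero_col, zero_row):
        """Recover the coordinates of point N from the pixel position
        (col_px, row_px) of a line intersection on the camera matrix.
        M is the lens scale factor (assumed scalar, mm per pixel),
        alpha_deg the triangulation angle, and (zero_col, zero_row) the
        calibrated zero point of the matrix (hypothetical convention)."""
        alpha = math.radians(alpha_deg)
        dy = row_px - zero_row                  # vertical pixel offset
        Xn = M * (col_px - zero_col)            # coordinate Xn on the object
        Yn = M * dy                             # coordinate Yn on the object
        Zn = M * dy / math.sin(alpha)           # Z = M * Yn / sin(alpha),
                                                # Yn read as the pixel offset
        return Xn, Yn, Zn

For example, with M = 0.1 mm per pixel, α = 30° and an intersection displaced 50 pixels vertically from the zero point, Zn = 0.1 × 50/sin(30°) = 10 mm (values assumed for illustration).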
Drawings
FIG. 1 shows a layout of a projector and a camera when projecting a single horizontal line onto an object;
FIG. 2 shows a layout of the projector and camera when projecting a line rotated by an angle β relative to the camera pixel columns and rows onto an object;
FIG. 3 shows a layout of the projector and camera when projecting two mutually perpendicular lines rotated with respect to the camera pixel columns and rows onto an object;
FIG. 4 shows the intersections of projected mutually perpendicular lines with columns of pixels on the camera matrix;
FIG. 5 shows the blurred region formed where a column of the camera matrix intersects mutually perpendicular projected lines.
In the drawings, the reference numerals denote: projector 1, which comprises a radiation source 2, a template pattern 3 for projecting an image, and a lens 4; and camera 5, which comprises a receiving matrix 6 and a lens 4 identical to the projector lens.
The template pattern 3 (equivalently: transparency, template, slide, etc.) is, for example, a thin plate having different absorption capacities or refractive indices at different points of the plane onto which the beam of the radiation source 2 impinges. The projector 1 and the camera 5 are positioned with a distance A between their lenses 4, while the central beam of the projector 1 and the central beam of the camera 5 form a triangulation angle α and define a triangulation plane. Z1 and Z2 in fig. 1 are the boundaries (depth) of the joint working area of the projector 1 and the camera 5. The working area of the scanner is, geometrically, the region of space where the projector beams that form the image on the object intersect the beams defining the coverage of the camera.
Detailed Description
In fig. 1, the horizontal line 8 projected by the projector 1 onto the measured object 7 is reflected from the latter and recorded by pixels of the matrix 6 of the camera 5 within a region Ly (bounded by horizontal dashed lines in the figure) spanning the entire width of the matrix 6. In fig. 2, the intersection of the horizontal line 8 with a line 9 projected at the angle β onto the measured object 7 is reflected from the object and recorded by pixels of the matrix 6 of the camera 5 within a region Ly × Tv2; this region, bounded by the oblique and horizontal dashed lines in the figure (the recorded images of the mutually perpendicular lines 8 and 9, rotated by the angle β in the plane of the template pattern 3 and hence making an angle β with the pixel columns and rows in the plane of the matrix 6 of the camera 5), contains a significantly smaller number of pixels. In fig. 3, mutually perpendicular lines 10 and 11, projected onto the measured object 7 at an acute angle β, are reflected from the object 7 and registered by pixels of the matrix 6 of the camera 5 in an intersection area on pixel column 13 containing an even smaller number of pixels, bounded by the oblique dashed lines (the registered images of lines 10 and 11).
The top part of fig. 4 shows the intersections of the mutually perpendicular lines 9 and 11, 12, projected at the angle β onto the measured object 7, reflected from it and recorded by two pixel columns 13, 14 of the matrix 6 of the camera 5, in the area containing the smallest number of pixel columns and rows; this area is marked in the figure by the two thick black dots, which are the recorded images of the intersections of line 9 with lines 11 and 12.
The bottom part of fig. 4 shows the intersections of the mutually perpendicular lines 9 and 11, 12, projected onto the measured object 7 at the angle β to the triangulation plane (i.e. to the pixel columns and rows of the camera matrix), reflected from the object 7 and recorded by two pixel columns 13, 14 of the matrix 6 of the camera 5, in the area containing the minimum number of pixel columns and rows; this area is marked in the figure by the four thick black dots, which are the recorded images of the intersections of the projected lines.
Fig. 5 shows the recorded field 15 of the reflected intersection (of line thickness b) of the lines 9, 11, rotated with respect to the triangulation plane and to the column 13 of the matrix, located on the smallest possible number of pixel columns and rows of the matrix 6 of the camera 5.
The method is carried out as follows.
The method comprises the following steps. An image of the periodic structure is projected onto the surface of the object 7 by the projector 1. The light of the projector 1 reflected from the object 7 is recorded by pixels of the receiving matrix 6 of the camera 5, which is displaced by the distance A relative to the projection system of the projector 1 and arranged so that a triangulation angle α is formed between the central beam of the projector 1 and the central beam of the camera 5.
Using the projector 1, an image of the periodic structure consisting of two sets of pairwise intersecting, for example mutually perpendicular, lines 9, 10, 11 is projected onto the investigated object 7 at an acute angle β with respect to the plane of the triangulation angle (the triangulation plane), i.e. in general with respect to the columns 13 and rows of pixels of the matrix 6 of the camera 5; the light of the projector 1 reflected from the object 7 is recorded by the pixels of the receiving matrix 6 of the camera 5. One set of lines provides an initial measurement of the shape of the object 7, while the second set (for example, perpendicular to the first) serves to refine it.
In fig. 1, as in the known analogue, the projector 1 projects an image of the template pattern 3 consisting of a horizontal line 8 passing through the centre of the projector image. The camera 5 observes the object 7 at the angle α, and depending on the position of the object 7 in the working area Z1-Z2, the line 8 reflected from the object 7 is projected onto the matrix 6 of the camera 5 at different positions within the region Ly, where Ly = ((Z1−Z2) × sin(α))/M and M is the scale factor of the lens 4 projecting the image onto the matrix 6 of the camera 5. Thus, depending on the position of the object 7 in the working area, the projected line 8 can take any position within the range Ly on the matrix 6 of the camera. Consequently, to identify the projected lines on the matrix 6 of the camera 5 uniquely and without ambiguity, the lines must be projected with a period greater than Ly, i.e. Tv1 > Ly = ((Z1−Z2) × sin(α))/M.
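As a purely illustrative worked example of this condition (all numbers are assumptions, not values from the patent): with a working depth Z1−Z2 = 200 mm, α = 30° and M = 0.5 mm per pixel, Ly = (200 × sin(30°))/0.5 = (200 × 0.5)/0.5 = 200 pixels, so an unrotated pattern would need a line period Tv1 greater than 200 pixels, and only a few unambiguous lines would fit across a typical camera matrix.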
In this case, for clarity, it is assumed that the same lens 4 with the same scale factor is used for both the projector 1 and the camera 5. If different lenses are used, the value M should take into account the ratio of scales between the projection lens of the projector 1 and the lens of the camera 5. M may be not only a number but also, for each lens, a matrix containing scale corrections for the horizontal and vertical directions of the projected image; these corrections compensate for lens distortion (spatial optical distortion).
If the image in the projector 1 is rotated, so that instead of a horizontal line a line 9 at an angle β to the triangulation plane is projected, as shown in fig. 2, more parallel lines, i.e. a shorter period, can be used. In this case the period between the lines depends on the rotation angle β of the projected image in the projector 1: the distance between the parallel lines must satisfy Tv2 > Ly × sin(β).
If the period Tv2 is less than Ly × sin(β), a line may fall within the Tv2 region of another line; the line number may then be detected erroneously, and consequently the position Z of the object 7 in the working area may be determined erroneously.
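Continuing the same illustrative numbers (again assumptions for illustration only): rotating the pattern by β = 10° reduces the required period to Tv2 > Ly × sin(β) = 200 × sin(10°) ≈ 35 pixels, so roughly six times as many uniquely identifiable lines fit on the same matrix as in the unrotated case.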
A composite periodic image of the lines 9, 11, 12 can also be designed using a larger number of projectors, for example two projectors whose central beams lie in one triangulation plane, but in this case the calculations become more complex.
All lines 9 projected at such a rotation angle β and period are unique: depending on the position of the object 7 in the working area Z2-Z1, each projected line is projected into its own specific area on the matrix 6 of the camera 5.
The mutually perpendicular lines 9 and 11, 12 in figs. 4 and 5 lie at an acute angle β to the vertical axis of the plane of the triangulation angle α and to the pixel columns of the camera 5. One of the mutually perpendicular lines, for example line 9, is located at an acute angle to the pixel columns of the matrix 6 of the camera 5, and the other lines, for example lines 11, 12, at an acute angle to its pixel rows. In this case, the acute angle β is preferably determined by the ratio of the distance between the projected lines to the working-area depth multiplied by the sine of the triangulation angle α, taking the scale factor into account, i.e. by the relation β = arcsin(Tv2 × M/((Z2−Z1) × sin α)), where β is the position angle of the projected lines, Tv2 is the distance between adjacent projected lines, M is the lens scale factor used to convert pixels into the spatial dimension, Z1 and Z2 are the boundaries of the joint working area of the projector 1 and the camera 5, and α is the triangulation angle.
Thus, when the image 3 is rotated in the projector 1, more lines 9, 11, 12 can be projected onto the matrix 6 of the camera 5 and more information about the object 7 obtained, thereby narrowing the blurred area for each point of the object 7 on the matrix 6 of the camera 5.
Since the camera 5 is positioned at the angle α to the projector 1 in the vertical plane, movement of the object 7 within the working area along the Z axis causes all lines, and all points on the lines, to move along the vertical pixel columns on the matrix 6 of the camera 5.
The subsequent determination (localization and study) on the matrix 6 of the camera 5 of the intersection areas of each pair of projected lines with each other and with the pixel columns and rows is based on the following.
If a line 9, projected at an angle β to the vertical in the image of the projector 1 (hereinafter a "vertical" line), intersects a line 11 perpendicular to it, at an angle β to the horizontal (hereinafter a "horizontal" line), then on the matrix 6 of the camera 5 the intersection 10 of these lines is always projected onto the same vertical column of the matrix 6.
If line 9 intersects lines 11 and 12, each intersection point is projected onto its own column of the matrix 6, as shown in fig. 4: the intersection of lines 9 and 12 is projected onto column 14, and the intersection of lines 9 and 11 onto column 13.
Each line formed by the reflected light on the recorded pixels of the matrix 6 of the camera 5 is identified in order to determine the coordinates and identify the lines in the image of the camera 5.
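By way of illustration only, a minimal Python sketch of one way such line identification could begin, by localizing line centres in each pixel column; the centre-of-mass scheme, threshold and names are assumptions, not the patented procedure:

    import numpy as np

    def line_centers_per_column(img, rel_threshold=0.5):
        """Estimate sub-pixel row positions of projected-line centres in
        every pixel column of the camera image `img` (2D float array).
        Bright pixels are grouped into contiguous runs, one per line
        crossing the column, and each run is reduced to its
        intensity-weighted centroid."""
        centers = []  # list of (column, sub-pixel row) pairs
        for col in range(img.shape[1]):
            column = img[:, col]
            if column.max() <= 0:
                continue  # nothing projected onto this column
            idx = np.flatnonzero(column > rel_threshold * column.max())
            if idx.size == 0:
                continue
            # split the bright indices into contiguous runs
            runs = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)
            for run in runs:
                w = column[run]
                centers.append((col, float((run * w).sum() / w.sum())))
        return centers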
If measurements need to be as accurate as possible, the zero positions of the pixel row and column numbers on the matrix 6 of the camera 5 can be preset for each particular template pattern 3 (before operating the system consisting of the camera 5 and the projector 1). This operation can be used for advance correction of lens distortion (optical spatial distortion) and for refining the scale factor M described above.
The zero position is set by a predetermined calibration procedure (before the object 7 is placed): an arbitrarily chosen calibration plane (e.g. in the form of a movable screen) is moved along the coordinate Z within the working area of the device, and all columns of the matrix 6 of the camera 5 along which the intersections of the projected lines move are recorded. The position of the calibration plane at the intersection of the central beams of the projector 1 and the camera 5 is chosen as the zero position. At the zero position, the line 9 passing through the centre of the image 3 of the projector 1 is projected onto the centre of the matrix 6 of the camera 5, and this position of the projected line 9 on the camera matrix is also referred to as the zero point. In fig. 1, the zero position is marked 0 on the Y axis. When the calibration plane moves closer to or further from the system consisting of camera 5 and projector 1, the deviation ΔYn of a line on the matrix 6 of the camera 5 is used to refine the located intersection points of that line on the matrix 6.
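A minimal sketch of this calibration loop follows; `stage.move_to`, `camera.capture` and `find_intersections` are hypothetical stand-ins for the screen drive, the camera readout and the intersection detector, not an API defined by the patent:

    def calibrate_zero_position(stage, camera, z_positions, find_intersections):
        """Move a calibration plane (movable screen) through the working
        area along Z and record, for every matrix column, the trajectory
        of the line intersections registered on it; the plane position
        where the central projected line falls on the centre of the
        camera matrix is then taken as the zero position."""
        trajectories = {}  # column -> list of (z, sub-pixel row) samples
        for z in z_positions:
            stage.move_to(z)          # position the movable screen
            img = camera.capture()    # record the reflected pattern
            for col, row in find_intersections(img):
                trajectories.setdefault(col, []).append((z, row))
        return trajectories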
The line 9 is localized on the matrix 6 of the camera 5 by searching for the centre of the line 9 in fig. 5, since in practice the projected line has a certain thickness b on the matrix 6, occupying several pixels. When the image 3 of the projector 1 is rotated, the thickness with which the line 9 crosses the pixel column 13 on the matrix 6 increases, and the localization of line 9 may become less accurate, leading to ambiguity in determining the intersection of a "vertical" line with a "horizontal" line. For this reason, the period between the "horizontal" lines 11, 12 is preferably chosen to be greater than the blurred region 15 shown in fig. 5 at the intersection of column 13 and line 9, i.e. Tgor > b/tg(β), where b is the thickness of the projected line 9 and Tgor is the period between the "horizontal" lines 11, 12.
To project more lines 11, 12 than in the horizontal arrangement of fig. 1, in which a single line and its location area Ly occupy almost the entire matrix 6 of the camera 5, the "vertical" line 9 must be rotated so that it is projected at the angle β to the vertical. Fig. 2 shows that the region Tv2 is much smaller than the region Ly. The "vertical" line 9 intersects the "horizontal" lines 11, 12, and the intersections of these lines provide unambiguous data on the numbers of the vertical and horizontal lines crossing at a given point.
To avoid ambiguity in determining the line intersections, it is reasonable to choose the period of the "horizontal" lines 11, 12 to be equal to or smaller than that of the vertical lines. At the same time, the period of the horizontal lines should be chosen larger than the blurred region that occurs where lines cross, which would otherwise introduce a degree of ambiguity when determining the intersection of a vertical and a horizontal line.
Therefore, the image 3 projected by the projector 1 can be realized with a vertical grid period Tv2 > ((Z1−Z2) × sin(α) × sin(β))/M and a horizontal grid period Tgor > b/tg(β), the image 3 being rotated by the angle β with respect to the vertical pixel columns 13 of the matrix 6.
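Collecting the stated relations, a short Python sketch derives illustrative minimum grid periods from the geometry; all numeric values in the usage note are assumptions:

    import math

    def grid_periods(z1, z2, alpha_deg, beta_deg, M, b_px):
        """Minimum grid periods from the relations in the text:
        vertical period   Tv2  > ((Z1 - Z2) * sin(alpha) * sin(beta)) / M,
        horizontal period Tgor > b / tg(beta),
        where b_px is the projected line thickness in pixels."""
        alpha = math.radians(alpha_deg)
        beta = math.radians(beta_deg)
        depth = abs(z1 - z2)                 # working-area depth Z1..Z2
        tv2_min = depth * math.sin(alpha) * math.sin(beta) / M
        tgor_min = b_px / math.tan(beta)
        return tv2_min, tgor_min

With the illustrative values used earlier (depth 200 mm, α = 30°, β = 10°, M = 0.5 mm per pixel, b = 3 pixels), this gives Tv2 ≈ 35 pixels and Tgor ≈ 17 pixels.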
An image projected in this way onto the object 7 makes it possible to determine exactly the numbers of the projected "horizontal" lines 11, 12, so that, given the geometry of the system formed by the camera 5 and the projector 1 (their relative position, i.e. the angle α), the shape of the object 7 in the working area can be determined as Z = M × Yn/sin(α). Here Yn is the offset of the horizontal line 11 on the matrix 6 of the camera 5 from its central position, i.e. from the position in which it passes through the centre of the matrix 6; when the object 7 is located in the middle of the working area, the line 11 crosses the centre of the matrix 6.
Therefore, the intersections of the projected lines with each other and with the pixel columns on the camera matrix can be determined quickly and accurately: the intersection of a pair of projected lines with the nearest vertical column on the camera matrix defines the coordinate Xn of a point N on the object, the intersection of the pair of lines with the nearest horizontal pixel row determines the coordinate Yn on the object, and the coordinate Z is determined by the relation Z = M × Yn/sin(α), where M is the lens scale factor used to convert pixels into the spatial dimension and α is the triangulation angle.
Fig. 5 shows that the blurred area 15, the location field of the desired point used for determining the line numbers, contains a minimum number of pixels and is therefore substantially smaller than in the known method. This eliminates the need for a second camera, thereby simplifying the design and technique of the equipment used and the processing of the measurement results. The measurements (calculation of the required characteristics) and coordinate determination are performed using a computer processor, and a 3D image of the measured object is formed on a computer monitor.
The duration of the measurement can therefore be shortened, and the probability of imaging and measurement errors for the measured object reduced.
Industrial Applicability
The present invention is implemented using general-purpose equipment widely used in the industry.

Claims (4)

1. A method for making 3D object measurements, comprising: projecting an image having a periodic structure composed of lines using a projector; recording the projector light reflected from the object using pixels of the camera's receiving matrix, wherein a triangulation angle is formed between the central beam of the projector and the central beam of the camera; then identifying the lines formed by the reflected light on the received pixels of the camera matrix to determine coordinates and identify lines in the camera image; projecting an image made up of two sets of intersecting lines using the projector, the lines in each set being parallel to each other and at an angle to the vertical axis of the triangulation-angle plane, and subsequently determining and identifying the intersections of each pair of lines with each other and the pixel columns and rows of the camera matrix on which they are registered; each intersection of a pair of projected lines with a vertical pixel column found on the matrix being determined as the coordinate Xn of a point N on the object, the intersection of a horizontal pixel row with the pair of projected lines being determined as the coordinate Yn on the object, and the coordinate Z being determined by the relation Z = M × Yn/sin(α), where M is the lens scale factor used to convert pixels into the spatial dimension and α is the triangulation angle.
2. A method as claimed in claim 1, characterized in that the groups of parallel projected lines are mutually perpendicular in pairs, the lines in each group are located at equal distances from each other, and the inclination angle of the projected lines is chosen to be acute.
3. A method according to claim 2, wherein one of the mutually perpendicular lines is at an acute angle to the pixel columns, and the other at an acute angle to the pixel rows, of the camera matrix, the angle being selected from the relationship β = arcsin(Tv2 × M/((Z2−Z1) × sin α)), where β is the position angle of the projected lines, Tv2 is the distance between adjacent projected lines, M is the lens scale factor used to convert pixels into the spatial dimension, Z1 and Z2 are the boundaries of the joint working area of the projector (1) and camera (5), and α is the triangulation angle.
4. A method according to claim 1, 2 or 3, characterized in that the measurement and coordinate determination are performed using a computer processor, and a 3D image of the measured object is formed on a computer monitor.
CN201580080870.7A 2015-12-04 2015-12-04 Method for monitoring linear dimension of three-dimensional entity Expired - Fee Related CN107835931B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/RU2015/000851 WO2017095259A1 (en) 2015-12-04 2015-12-04 Method for monitoring linear dimensions of three-dimensional entities

Publications (2)

Publication Number Publication Date
CN107835931A CN107835931A (en) 2018-03-23
CN107835931B true CN107835931B (en) 2020-11-10

Family

ID=58797378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580080870.7A Expired - Fee Related CN107835931B (en) 2015-12-04 2015-12-04 Method for monitoring linear dimension of three-dimensional entity

Country Status (2)

Country Link
CN (1) CN107835931B (en)
WO (1) WO2017095259A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10198808B2 (en) * 2016-01-15 2019-02-05 Instrumental, Inc. Methods for automatically generating a common measurement across multiple assembly units
CN112017238B (en) * 2019-05-30 2024-07-19 北京初速度科技有限公司 Method and device for determining spatial position information of linear object

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7065242B2 (en) * 2000-03-28 2006-06-20 Viewpoint Corporation System and method of three-dimensional image capture and modeling
CN103228228B (en) * 2010-07-12 2016-04-13 3形状股份有限公司 Use the 3D object modeling of textural characteristics
US8947677B2 (en) * 2010-08-06 2015-02-03 University Of Kentucky Research Foundation Dual-frequency phase multiplexing (DFPM) and period coded phase measuring (PCPM) pattern strategies in 3-D structured light systems, and lookup table (LUT) based data processing
JP5816773B2 (en) * 2012-06-07 2015-11-18 ファロ テクノロジーズ インコーポレーテッド Coordinate measuring machine with removable accessories
RU125335U1 (en) * 2012-11-07 2013-02-27 Общество с ограниченной ответственностью "Артек Венчурз" DEVICE FOR MONITORING LINEAR SIZES OF THREE-DIMENSIONAL OBJECTS
US20150015701A1 (en) * 2013-07-10 2015-01-15 Faro Technologies, Inc. Triangulation scanner having motorized elements
CN104006762B (en) * 2014-06-03 2017-01-04 大族激光科技产业集团股份有限公司 Obtain the methods, devices and systems of object dimensional information
CN104014905A (en) * 2014-06-06 2014-09-03 哈尔滨工业大学 Observation device and method of three-dimensional shape of molten pool in GTAW welding process

Also Published As

Publication number Publication date
WO2017095259A1 (en) 2017-06-08
CN107835931A (en) 2018-03-23

Similar Documents

Publication Publication Date Title
JP5334835B2 (en) Method and system for measuring shape of reflecting surface
EP2183544B1 (en) Non-contact measurement apparatus and method
KR102345886B1 (en) Method for the three-dimensional measuring of moving objects during known movement
CN108802043B (en) Tunnel detection device, tunnel detection system and tunnel defect information extraction method
EP2568253B1 (en) Structured-light measuring method and system
EP2918967B1 (en) Method for monitoring linear dimensions of three-dimensional objects
Liu et al. An improved online dimensional measurement method of large hot cylindrical forging
CN107121079B (en) A kind of curved surface elevation information measuring device and method based on monocular vision
CN110057552B (en) Virtual image distance measuring method, device, equipment, controller and medium
JP2012058076A (en) Three-dimensional measurement device and three-dimensional measurement method
CN112888913B (en) Three-dimensional sensor with column-to-column channels
CN113034612B (en) Calibration device, method and depth camera
CN110672037A (en) Linear light source grating projection three-dimensional measurement system and method based on phase shift method
JP3435019B2 (en) Lens characteristic measuring device and lens characteristic measuring method
CN107835931B (en) Method for monitoring linear dimension of three-dimensional entity
CN114170321A (en) Camera self-calibration method and system based on distance measurement
Li et al. Monocular underwater measurement of structured light by scanning with vibrating mirrors
CN107810384B (en) Stripe projection method, stripe projection apparatus, and computer program product
JP4382430B2 (en) Head three-dimensional shape measurement system
RU125335U1 (en) DEVICE FOR MONITORING LINEAR SIZES OF THREE-DIMENSIONAL OBJECTS
US20160349045A1 (en) A method of measurement of linear dimensions of three-dimensional objects
WO1992005403A1 (en) Moire distance measurements using a grating printed on or attached to a surface
Yang et al. Camera calibration with active standard Gaussian stripes for 3D measurement
RU164082U1 (en) DEVICE FOR MONITORING LINEAR SIZES OF THREE-DIMENSIONAL OBJECTS
CN105008903A (en) Method and device for analyzing the surface of a substrate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200927

Address after: 25-2-21 Spring Street, Moscow, Russian Federation

Applicant after: Anna Stibwa

Applicant after: LOMAKIN ALEKSANDR GEORGIEVICH

Address before: 40, building 1, 25 wernatsky Road, Moscow

Applicant before: Andrei Vladimirovich Klimov

Applicant before: LOMAKIN ALEKSANDR GEORGIEVICH

GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201110

Termination date: 20211204