WO2010071139A1 - Shape measurement device and program - Google Patents
Shape measurement device and program
- Publication number
- WO2010071139A1 (PCT/JP2009/070940)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- point
- measurement
- unit
- points
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/245—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0062—Arrangements for scanning
- A61B5/0064—Body surface scanning
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Definitions
- The present invention relates to a shape measurement technique for measuring the three-dimensional shape of a measurement object based on overlapping images obtained by photographing the measurement object from a plurality of photographing positions, and in particular to a technique for automatically acquiring measurement values, including the initial values necessary for measuring the three-dimensional shape.
- To measure the three-dimensional shape of the measurement object, stereo matching is performed on its pixels. For stereo matching, least-squares matching (LSM), which searches while deforming a template image, a normalized correlation method, or the like is used. This process requires many points and lines associated between the left and right images, but manually setting the initial values of these points and lines is cumbersome and requires skill.
- Patent Documents 1 and 2 disclose techniques for solving such problems. In the invention described in Patent Document 1, a feature pattern is extracted by taking the difference between a pair of first captured images, obtained by photographing a measurement object provided with a reference feature pattern from different directions, and a pair of second captured images, obtained by photographing the measurement object without the feature pattern from the same directions. Since an image containing only the feature pattern can thus be created, the position of the feature pattern can be detected automatically and accurately, and by increasing the number of feature pattern points, the corresponding surfaces in the left and right images can be detected automatically.
- In view of this background, the present invention aims to provide a technique for automatically acquiring measurement values, including the initial values necessary for three-dimensional shape measurement, by automatically determining erroneous corresponding points in overlapping images.
- The invention according to claim 1 is a shape measuring apparatus comprising: a photographing unit that photographs a measurement object in overlapping photographing regions from a plurality of photographing positions; a feature point associating unit that associates the positions of feature points of the measurement object in the overlapping images photographed by the photographing unit; a measurement model forming unit that forms a model of the measurement object based on the feature points on the overlapping images associated by the feature point associating unit; an erroneous corresponding point determination unit that determines erroneous corresponding points based on the measurement model formed by the measurement model forming unit and a reference model formed based on the form of the measurement object; and a three-dimensional shape measurement unit that obtains the three-dimensional coordinates of the feature points of the measurement object or the three-dimensional shape of the measurement object based on the positions of the feature points, excluding the points determined to be erroneous corresponding points, and the plurality of photographing positions.
- According to the first aspect of the present invention, measurement values, including the initial values necessary for three-dimensional shape measurement, can be acquired automatically by automatically determining erroneous corresponding points in overlapping images.
- The invention according to claim 2 is characterized in that, in the invention of claim 1, the reference model is at least one of data obtained by MRI, CT, or CAD, data from another shape measuring device, and shape data obtained in the past. According to this aspect, erroneous corresponding points can be determined automatically using a reference model similar in shape to the measurement model.
- The invention according to claim 3 is characterized in that, in the invention of claim 2, the reference model is actual-size data to which actual dimensions are given.
- The invention according to claim 4 is characterized in that the reference model is a pseudo model that is similar in shape to the measurement object but is not given actual dimensions, and that the erroneous corresponding point determination unit determines erroneous corresponding points after coordinate conversion is performed so that the volumes of the two models become the same. According to this aspect, erroneous corresponding points of a measurement model can be determined automatically using a reference model (pseudo model) of different dimensions.
- The invention according to claim 5 is characterized in that the reference model is a pseudo model that is similar in shape to the measurement object but is not given actual dimensions, and that the erroneous corresponding point determination unit determines erroneous corresponding points after coordinate conversion is performed so that the distances from the center of gravity become the same. According to this aspect as well, erroneous corresponding points of a measurement model can be determined automatically using a reference model (pseudo model) of different dimensions.
- The invention according to claim 6 is characterized in that the reference model is a pseudo model that is similar in shape to the measurement object but is not given actual dimensions, and that the erroneous corresponding point determination unit determines erroneous corresponding points after coordinate conversion is performed so as to match the positions of at least four feature points. According to this aspect as well, erroneous corresponding points of a measurement model can be determined automatically using a reference model (pseudo model) of different dimensions.
- The invention according to claim 7 is characterized in that, in the invention of claim 1, the reference model and the measurement model are aligned by minimizing the distance between each point of the reference model and the point of the measurement model closest to it. According to this aspect, the reference model and the measurement model can be aligned automatically.
- The invention according to claim 8 is characterized in that the reference model and the measurement model are aligned by designating four or more points of the measurement model corresponding to points of the reference model. According to this aspect, the alignment of the reference model and the measurement model can be performed manually, semi-automatically, or fully automatically.
- The invention according to claim 9 is characterized in that, when the point-to-point distance between a point of the reference model and the closest point of the measurement model exceeds a predetermined value, that point of the measurement model is determined to be an erroneous corresponding point. Since the determination is based only on the point-to-point distance between the reference model and the measurement model, the determination process can be simplified.
- A tenth aspect of the present invention is the invention according to any one of the first to ninth aspects, further comprising: a texture extraction unit that extracts the texture of the measurement object from the images of the measurement object photographed by the photographing unit; a texture synthesis unit that pastes the texture onto at least one of the measurement model formed by the measurement model forming unit and the reference model; and a display unit that displays a textured model image based on the textured model synthesized by the texture synthesis unit.
- The invention according to claim 11 is characterized in that, in the invention according to any one of claims 1 to 9, when the erroneous corresponding point determination unit determines that a point is an erroneous corresponding point, the designation of the corresponding feature point is canceled. According to this aspect, the feature point group excluding the erroneous corresponding points can be used as measurement values, including the initial values necessary for measuring the three-dimensional shape.
- The invention according to claim 12 is a program comprising: a feature point associating step of associating the positions of feature points of the measurement object in overlapping images photographed from a plurality of photographing positions; a measurement model forming step of forming a model of the measurement object based on the feature points on the overlapping images associated in the feature point associating step; an erroneous corresponding point determination step of determining erroneous corresponding points based on the measurement model formed in the measurement model forming step and a reference model formed based on the form of the measurement object; and a three-dimensional shape measuring step of obtaining the three-dimensional coordinates of the feature points of the measurement object or the three-dimensional shape of the measurement object based on the positions of the feature points, excluding the points determined to be erroneous corresponding points in the erroneous corresponding point determination step, and the plurality of photographing positions.
- According to the twelfth aspect, and according to the present invention in general, measurement values, including the initial values necessary for three-dimensional shape measurement, can be acquired automatically by automatically determining erroneous corresponding points in overlapping images.
- FIG. 13 consists of a drawing-substitute photograph (A) showing a reference model obtained by MRI, a drawing-substitute photograph (B) of the dense surface measurement result when erroneous corresponding points are not determined, and a drawing-substitute photograph (C) of the dense surface measurement result when erroneous corresponding points are determined.
- FIG. 16 is a block diagram of the shape measuring apparatus according to the second to fourth embodiments.
- FIG. 17 is a flowchart of the program of the shape measuring apparatus according to the second to fourth embodiments.
- FIG. 18 consists of a drawing-substitute photograph (A) showing the all-around model and a drawing-substitute photograph (B) showing the all-around model cut into round slices.
- FIG. 19 consists of a drawing-substitute photograph (A) showing the centroid position of the reference model and a drawing-substitute photograph (B) showing the centroid position of the measurement model.
- FIG. 20 consists of a drawing-substitute photograph (A) showing the feature point positions of the reference model and a drawing-substitute photograph (B) showing the feature point positions of the measurement model.
- Reference numerals: 1: shape measuring apparatus; 2 to 9: imaging units.
- FIG. 1 is a top view of the shape measuring apparatus.
- the shape measuring apparatus 1 includes photographing units 2 to 9, feature projecting units 10 to 13, a relay unit 14, a calculation processing unit 15, a display unit 17, and an operation unit 16.
- the shape measuring apparatus 1 measures the shape of the measuring object 18 arranged in the center of the photographing units 2 to 9.
- For the imaging units 2 to 9, a video camera is used, for example an industrial-measurement CCD (Charge-Coupled Device) camera or a CMOS (Complementary Metal-Oxide-Semiconductor) camera.
- data can be transferred to the calculation processing unit 15 using a compact flash (registered trademark) memory, a USB cable, or the like.
- the imaging units 2 to 9 are arranged around the measurement object 18. The imaging units 2 to 9 shoot the measurement object 18 in overlapping imaging areas from a plurality of imaging positions.
- the photographing units 2 to 9 are arranged in the horizontal direction or the vertical direction separated by a predetermined baseline length. Note that an imaging unit may be added and arranged in both the horizontal direction and the vertical direction.
- the shape measuring apparatus 1 measures the three-dimensional shape of the measuring object 18 based on at least a pair of overlapping images. Therefore, one or a plurality of photographing units 2 to 9 can be appropriately selected depending on the size and shape of the photographing subject.
- a projector or a laser device is used for the feature projection units 10 to 13.
- The feature projection units 10 to 13 project a pattern such as a random dot pattern, dot-shaped spotlights, or linear slit light onto the measurement object 18. This gives features to portions of the measurement object 18 that are otherwise poor in features.
- the feature projection units 10 to 13 are arranged between the photographing units 2 and 3, between the photographing units 4 and 5, between the photographing units 6 and 7, and between the photographing units 8 and 9. Note that the feature projection units 10 to 13 may be omitted when the measurement object 18 has a feature or when a pattern can be applied.
- The photographing units 2 to 9 are connected to the relay unit 14 via an interface such as Ethernet (registered trademark), Camera Link, or IEEE 1394 (Institute of Electrical and Electronics Engineers 1394).
- a switching hub or an image capture board is used as the relay unit 14. Images taken by the photographing units 2 to 9 are input to the calculation processing unit 15 via the relay unit 14.
- For the calculation processing unit 15, a personal computer (PC) or dedicated hardware such as a PLD (Programmable Logic Device), for example an FPGA (Field-Programmable Gate Array), or an ASIC (Application-Specific Integrated Circuit) is used.
- the calculation processing unit 15 is operated by the operation unit 16, and the processing contents and calculation results of the calculation processing unit 15 are displayed on the display unit 17.
- a keyboard and a mouse are used for the operation unit 16, and a liquid crystal monitor is used for the display unit 17.
- the operation unit 16 and the display unit 17 may be configured integrally with a touch panel type liquid crystal monitor.
- FIG. 2 is a block diagram of the shape measuring apparatus.
- the calculation processing unit 15 includes an imaging position / orientation measurement unit 20, a feature point association unit 21, a three-dimensional coordinate calculation unit 22, a triangle network formation unit 23, an incorrect corresponding point determination unit 24, and a three-dimensional shape measurement unit 25. These may be implemented as a module of a program that can be executed by a PC, or may be implemented as a PLD such as an FPGA.
- the imaging position / orientation measurement unit 20 measures external orientation elements (imaging positions and orientations) of the imaging units 2 to 9 based on an image obtained by imaging the calibration subject 19 shown in FIG. If the internal orientation elements (principal point, focal length, lens distortion) of the photographing units 2 to 9 are not known, the photographing position / orientation measuring unit 20 obtains them simultaneously.
- the calibration subject 19 is a cubic calibration box in which a plurality of reference points are arranged.
- a color code target is used as the reference point (see Japanese Patent Application Laid-Open No. 2007-64627).
- the color code target has three retro targets (retroreflective targets).
- the photographing position / orientation measurement unit 20 binarizes an image obtained by photographing the calibration subject 19, thereby detecting a retro target and obtaining its center-of-gravity position (reference point image coordinates).
- the photographing position / orientation measurement unit 20 labels each reference point based on the color code target color scheme (color code). Thereby, the position of the corresponding reference point in the overlapping image is known.
- the photographing position / orientation measuring unit 20 calculates the external orientation elements of the photographing units 2 to 9 by using the relative orientation method, the single photograph orientation method, the DLT method, or the bundle adjustment method. These may be used alone or in combination. Specific processing by the relative orientation method will be described later.
- This embodiment employs the first method, in which two or more of the photographing units 2 to 9 are fixed, the calibration subject 19 is photographed in advance, and the positions and orientations of the photographing units 2 to 9 are calculated.
- the advantage of this first method is that even when a moving object (for example, a living body) is measured, the measurement object 18 can be captured and measured in an instant.
- three-dimensional measurement can be performed at any time by placing the measurement object 18 in the space.
- the calibration subject may be photographed together with the measurement object 18 and the three-dimensional coordinates of the external orientation element and the feature point of the measurement object 18 may be obtained in parallel.
- the feature point association unit 21 extracts feature points of the measurement object 18 from at least a pair of stereo images, and associates the positions of the feature points in the stereo image.
- When the photographing units 2 to 9 are arranged in the horizontal direction, the feature point association unit 21 searches for the positions of feature points in the horizontal direction; when they are arranged in the vertical direction, it searches in the vertical direction; and when they are arranged in both the horizontal and vertical directions, it searches in both directions.
- the feature point association unit 21 includes a background removal unit 26, a feature point extraction unit 27, and a corresponding point search unit 28.
- the background removal unit 26 generates a background removal image in which only the measurement object 18 is copied by subtracting the background image from the processed image in which the measurement object 18 is copied.
- the feature point extraction unit 27 extracts feature points from the background removed image. At this time, feature points are extracted from the left and right stereo images in order to limit the search range of corresponding points.
- For the feature point extraction, a differential filter such as a Sobel, Laplacian, Prewitt, or Roberts filter is used.
- The corresponding point search unit 28 searches the other image for the corresponding point of each feature point extracted in one image. For this search, template matching such as the sequential similarity detection algorithm (SSDA), the normalized correlation method, or orientation code matching (OCM) is used.
- The three-dimensional coordinate calculation unit 22 calculates the three-dimensional coordinates of the feature points based on the external orientation elements measured by the shooting position/orientation measurement unit 20 and the image coordinates of the feature points associated by the feature point association unit 21.
- The triangular network forming unit 23 forms an irregular triangular network (TIN: Triangulated Irregular Network) in which the feature points associated by the feature point association unit 21 are connected by line segments. A Delaunay method is used to form the TIN, which is based on either the image coordinates of the associated feature points or the three-dimensional coordinates obtained by the three-dimensional coordinate calculation unit 22.
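As an illustration of this step, the following sketch forms a TIN from associated feature points with a Delaunay triangulation. It is one possible implementation, not the patent's own code; the function name and the use of SciPy are assumptions.

```python
# Minimal sketch: forming a TIN over feature points with a Delaunay method.
import numpy as np
from scipy.spatial import Delaunay

def form_tin(points_2d):
    """Return (m, 3) triangle vertex indices for associated feature points.

    points_2d: (n, 2) array of image coordinates (3D TIN formation in
    step S14-1 would triangulate the calculated 3D coordinates instead).
    """
    return Delaunay(points_2d).simplices

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
print(form_tin(pts))  # e.g. four triangles sharing the center point
```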
- the miscorresponding point determination unit 24 includes a shape matching unit 29 and a shape model comparison unit 30.
- the shape matching unit 29 aligns the measurement model formed by the triangular mesh forming unit 23 with the reference model of another measurement object.
- The reference model is a model of another measurement object similar to the measurement object 18; either an actual-size model to which actual dimensions are given or a pseudo model to which no actual dimensions are given can be used.
- the actual size model will be described in this embodiment, and the pseudo model will be described in detail in the second embodiment and thereafter.
- As the actual-size model, data measured by, for example, MRI (Magnetic Resonance Imaging) or CT (Computed Tomography), CAD (Computer-Aided Design) data, or data measured by another shape measuring device can be used. Further, shape data obtained in the past by the present apparatus may be used.
- For the alignment, the ICP (Iterative Closest Point) method, a method of minimizing the distances between the points of the measurement model and the reference model, or a method of designating four or more points of the reference model corresponding to points of the measurement model is used. For the ICP method, see Shunichi Kaneko et al., "Robust ICP positioning method introducing M-estimation", Journal of the Japan Society for Precision Engineering, Vol. 67, No. 8, or Besl, P.J. and McKay, N.D., "A Method for Registration of 3-D Shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2. Other alignment methods will be described later.
- the shape model comparison unit 30 compares the measurement model with the reference model to determine a miscorresponding point.
- the shape model comparison unit 30 determines a miscorresponding point based on a point-to-point distance between a point constituting the measurement model and a point constituting the reference model. When it is determined that the feature point is a miscorresponding point, the designation of the feature point is canceled.
- The three-dimensional shape measurement unit 25 performs stereo matching on the pixels in a predetermined region using the feature point group excluding the points determined to be erroneous corresponding points by the erroneous corresponding point determination unit 24, and obtains the three-dimensional shape of the measurement object. For this stereo matching, LSM, which searches while deforming a template image, a normalized correlation method, or the like is used.
- the three-dimensional shape is displayed on the display unit 17 as a point cloud or TIN.
- The process of removing erroneous corresponding points performed when the erroneous corresponding point determination unit 24 obtains the initial values can be performed in the same manner when the three-dimensional shape measuring unit 25 subsequently obtains the measurement values.
- the texture extraction unit 34 extracts the texture (image) of the measurement target 18 from the images photographed by the photographing units 2 to 9.
- the texture synthesis unit 35 pastes the texture on at least one of the measurement model from which the miscorresponding points are removed and the reference model.
- FIG. 3 is a flowchart of the program of the shape measuring apparatus. Note that the program for executing this flowchart and the programs according to the embodiments described later can be provided stored on a recording medium such as a CD-ROM.
- FIG. 4 is a drawing substitute photo (A) showing a processed image, a drawing substitute photo (B) showing a background image, and a drawing substitute photo (C) showing a background removed image.
- the background removed image is generated by subtracting the background image from the processed image. Background removal is performed on the left and right images. Note that this process may not be performed if a background image cannot be obtained.
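A minimal sketch of this background removal follows, assuming grayscale images held as NumPy arrays; the difference threshold is a hypothetical parameter.

```python
# Minimal sketch of step S11: keep only pixels that differ sufficiently
# from the background image; everything else is set to zero.
import numpy as np

def remove_background(processed, background, thresh=20):
    """processed, background: uint8 grayscale arrays of equal shape."""
    diff = np.abs(processed.astype(np.int16) - background.astype(np.int16))
    return np.where(diff > thresh, processed, 0).astype(np.uint8)
```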
- Next, feature points are extracted from the left and right images by the feature point extraction unit 27 (step S12). By extracting feature points from both images, the search range for corresponding points can be reduced.
- the feature point extraction unit 27 performs preprocessing such as reduction processing, brightness correction, and contrast correction as necessary (step S12-1).
- the edge strength is calculated from the preprocessed left and right images by the Sobel filter (step S12-2).
- FIG. 5 shows the Sobel filters in the x and y directions. Let the luminance values of the nine pixels covered by the Sobel filter matrix be I1 to I9, from the upper left to the lower right, and let the response in the x direction be dx and the response in the y direction be dy. The edge intensity Mag of the target (center) pixel is then calculated by the following equation (1): Mag = √(dx² + dy²).
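A sketch of this computation, under the assumption of the standard 3x3 Sobel kernels shown in FIG. 5:

```python
# Minimal sketch of equation (1): Sobel responses dx and dy, then the
# edge intensity Mag = sqrt(dx^2 + dy^2) for every pixel.
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # the y-direction kernel is the transpose

def edge_intensity(image):
    """Return the edge intensity image Mag for a grayscale array."""
    dx = convolve(image.astype(float), SOBEL_X)
    dy = convolve(image.astype(float), SOBEL_Y)
    return np.sqrt(dx * dx + dy * dy)
```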
- FIG. 6 is a drawing substitute photo (A) showing an input image compressed by 1/4 and a drawing substitute photo (B) showing an edge intensity image by a Sobel filter.
- the feature point extraction unit 27 performs post-processing such as thinning the edge intensity image in FIG. 6B (step S12-3). By thinning, the edge becomes one pixel width. As a result, since the positions of the feature points finally extracted are thinned out, the feature points are extracted without deviation in the image.
- Next, the feature point extraction unit 27 performs binarization processing (step S12-4). First, a histogram of edge strength is created.
- FIG. 7 is a histogram of edge strength.
- In the created histogram, the feature point extraction unit 27 uses as the binarization threshold the edge intensity at which the cumulative frequency, counted from the edges with stronger intensity, reaches 50% of the total number of edge pixels.
- FIG. 8 is a drawing-substitute photograph showing the result of binarization with the threshold value of 52.
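A sketch of this threshold selection, assuming the edge intensity image has already been thinned so that nonzero pixels are edge pixels (the bin count and names are illustrative):

```python
# Minimal sketch of step S12-4: pick the binarization threshold as the
# edge intensity at which the cumulative count, taken from the strongest
# edges downward, reaches 50% of all edge pixels.
import numpy as np

def cumulative_threshold(mag, ratio=0.5, bins=256):
    edge_vals = mag[mag > 0]                  # edge pixels after thinning
    hist, edges = np.histogram(edge_vals, bins=bins)
    cum_from_strong = np.cumsum(hist[::-1])   # strongest bins first
    k = np.searchsorted(cum_from_strong, ratio * edge_vals.size)
    return edges[bins - 1 - k]                # lower edge of the cut-off bin

# Feature point candidates are the pixels at or above the threshold:
# feature_mask = mag >= cumulative_threshold(mag)
```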
- FIG. 9 is a drawing substitute photograph showing the feature points extracted in the left and right images.
- the corresponding points in the left and right images are associated by the corresponding point search unit 28 (step S13).
- the corresponding point search unit 28 creates a template image centered on each feature point in the left image, and searches for a corresponding point having the strongest correlation with the template image in a predetermined region in the right image.
- FIG. 10 is an explanatory diagram (A) for explaining a template creation method, an explanatory diagram (B) for explaining a search line determination method, and an explanatory diagram (C) for explaining a search width determination method.
- The template image consists of 21 × 21 pixels centered on the feature point of interest. Since the vertical parallax has been removed from the stereo images, a search line is provided parallel to the x-axis, as shown in FIG. 10(B).
- As shown in FIG. 10(C), the corresponding point search unit 28 detects the leftmost and rightmost feature points on the search line of the right image, and searches for the corresponding point within the search range from the leftmost feature point to the rightmost feature point. The point having the strongest correlation with the template image becomes the corresponding point.
- When the photographing units are arranged in the vertical direction, the search line is parallel to the y-axis; when they are arranged in both the horizontal and vertical directions, search lines parallel to both the x-axis and the y-axis are used.
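A sketch of this corresponding point search follows, using normalized cross-correlation over a 21 x 21 template along a horizontal search line; the bounds x_min and x_max stand for the leftmost and rightmost feature points described above.

```python
# Minimal sketch of step S13: search along the same image row (the search
# line) in the right image for the best normalized-correlation match.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -1.0

def search_corresponding_point(left, right, fy, fx, x_min, x_max, half=10):
    """Return (x, score) of the best match for left-image point (fy, fx)."""
    template = left[fy - half:fy + half + 1, fx - half:fx + half + 1].astype(float)
    best_x, best_score = None, -1.0
    for x in range(x_min, x_max + 1):
        window = right[fy - half:fy + half + 1, x - half:x + half + 1]
        if window.shape != template.shape:    # skip windows at the border
            continue
        score = ncc(template, window.astype(float))
        if score > best_score:
            best_x, best_score = x, score
    return best_x, best_score
```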
- the miscorresponding point determination unit 24 determines a feature point (miscorresponding point) that is incorrectly associated (step S14).
- The erroneous corresponding point determination process consists of step S14-1, in which the three-dimensional coordinates of the feature points associated in the overlapping images are calculated and the measurement model is formed; step S14-2, in which the measurement model and the reference model are aligned (shape matching); and step S14-3, in which erroneous corresponding points are determined by comparing the measurement model with the reference model.
- In step S14-1, the three-dimensional coordinates of the feature points are first calculated based on the positions of the feature points associated by the feature point association unit 21. Then, based on these three-dimensional coordinates, the triangle network forming unit 23 forms a TIN in three dimensions, using a Delaunay method.
- Next, shape matching between the measurement model and the reference model is performed (step S14-2).
- Here, the method of minimizing the distances between the points of the measurement model and the reference model is described.
- First, for each point of the reference model, the closest point of the measurement model is searched for. Let the three-dimensional coordinates of a reference model point be (x1i, y1i, z1i) and those of the closest measurement model point be (x2i, y2i, z2i), where i is the point number and n is the number of points of the reference model. All points of the measurement model or of the reference model are then moved so as to minimize the total distance between all point pairs, D = Σᵢ₌₁ⁿ √((x1i − x2i)² + (y1i − y2i)² + (z1i − z2i)²).
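A sketch of this minimization as an iterative closest point loop follows; the closed-form rigid transform per iteration (via SVD) is one standard choice, assumed here rather than taken from the patent.

```python
# Minimal sketch of the alignment: repeatedly pair each reference point
# with the closest measurement point and solve for the rigid motion that
# minimizes the summed point-to-point distances (least squares).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def align_models(reference_pts, measure_pts, iters=30):
    """Move the reference model onto the measurement model (ICP-style)."""
    tree = cKDTree(measure_pts)
    pts = reference_pts.copy()
    for _ in range(iters):
        _, idx = tree.query(pts)    # closest measurement point per point
        R, t = best_rigid_transform(pts, measure_pts[idx])
        pts = pts @ R.T + t
    return pts
```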
- FIG. 11 is a drawing substitute photo (A) of the measurement model and a drawing substitute photo (B) of the reference model.
- FIG. 12 shows a drawing substitute photo (A) of the reference model, a drawing substitute photo (B) of the measurement model, and a drawing substitute photo (C) displaying miscorresponding points.
- The comparison between the reference model and the measurement model is performed by searching for the point of the measurement model closest to each reference model point and determining whether the distance between the points falls within about ±2σ to ±3σ of the prediction accuracy σ. Points outside this range are determined to be erroneous corresponding points. For example, if the prediction accuracy is ±0.5 mm, a point whose point-to-point distance is 1 mm or more is determined to be an erroneous corresponding point.
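A sketch of this determination rule follows; here each measurement model point is tested against its nearest reference model point, with sigma the prediction accuracy and k the 2-to-3 multiplier.

```python
# Minimal sketch of step S14-3: flag a measurement model point as an
# erroneous corresponding point when its nearest-neighbor distance to the
# reference model exceeds k * sigma (k = 2 to 3).
import numpy as np
from scipy.spatial import cKDTree

def find_erroneous_points(measure_pts, reference_pts, sigma=0.5, k=2.0):
    """Return a boolean mask over measure_pts; True marks a rejected point."""
    dist, _ = cKDTree(reference_pts).query(measure_pts)
    return dist > k * sigma   # e.g. sigma = 0.5 mm, k = 2 -> reject >= 1 mm
```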
- FIG. 13 consists of a drawing-substitute photograph (A) showing a reference model obtained by MRI, a drawing-substitute photograph (B) of the dense surface measurement result when erroneous corresponding points are not determined, and a drawing-substitute photograph (C) of the dense surface measurement result when erroneous corresponding points are determined.
- The three-dimensional shape shown in FIG. 13(C) is a dense surface measurement result in which erroneous corresponding points were determined based on the reference model shown in FIG. 13(A). As shown in FIG. 13(B), when erroneous corresponding points are not determined, unnecessary TINs are formed in the contour portion (background portion) of the face. In contrast, as shown in FIG. 13(C), when erroneous corresponding points are determined, no unnecessary TINs are formed in that portion.
- This process of removing erroneous corresponding points, performed when the initial values are obtained, can be performed in the same manner when the three-dimensional shape measuring unit 25 subsequently obtains the measurement values.
- texture can be pasted on the measurement model or reference model. Details of the texture mapping will be described below.
- the texture (image) of the measurement object 18 is extracted from the images photographed by the photographing units 2 to 9.
- the texture of the measurement object 18 is obtained by surrounding the feature point group excluding the miscorresponding points with a convex hull and extracting an image in the region of the convex hull.
- image coordinates (x, y) on the photograph are converted into spatial coordinates (X, Y, Z) in order to create a texture-mapped image.
- the spatial coordinates (X, Y, Z) are values calculated by the three-dimensional coordinate calculation unit 22.
- FIG. 14 is a drawing-substituting photograph (A) showing a wire frame, and a drawing-substituting photograph (B) showing an image obtained by mapping a texture.
- an external orientation element can be obtained based on six or more corresponding reference points copied in the duplicate image. If the three-dimensional position of the reference point is known, the absolute coordinates of the photographing units 2 to 9 can be obtained by absolute orientation. In this case, the finally obtained measurement model is an actual scale.
- FIG. 15 is an explanatory diagram for explaining relative orientation.
- an external orientation element is obtained from six or more corresponding points (pass points) in the two left and right images.
- a coplanar condition that two rays connecting the projection centers O 1 and O 2 and the reference point P must be in the same plane is used.
- the following formula 4 shows the coplanar conditional expression.
- As shown in FIG. 15, the origin of the model coordinate system is taken as the left projection center O1, and the line connecting it to the right projection center O2 is taken as the X axis. The baseline length is taken as the unit length. The parameters to be obtained are five rotation angles: the rotation angle κ1 of the left camera about the Z axis, its rotation angle φ1 about the Y axis, the rotation angle κ2 of the right camera about the Z axis, its rotation angle φ2 about the Y axis, and its rotation angle ω2 about the X axis. Under these conditions, the coplanar conditional expression of Expression 4 reduces to Expression 5, and each parameter is obtained by solving it.
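For reference, a commonly used form of these expressions under the stated conventions is the following; this standard photogrammetric formulation is an assumption of this text, not a verbatim reproduction of the patent's own Expressions 4 and 5.

```latex
% General coplanarity condition (standard form of Expression 4):
\begin{vmatrix}
X_{01} & Y_{01} & Z_{01} & 1\\
X_{02} & Y_{02} & Z_{02} & 1\\
X_{1}  & Y_{1}  & Z_{1}  & 1\\
X_{2}  & Y_{2}  & Z_{2}  & 1
\end{vmatrix} = 0
% (X_{0i}, Y_{0i}, Z_{0i}): projection centers; (X_i, Y_i, Z_i): model
% coordinates of the image points of the reference point P.
%
% With O_1 at the origin, the baseline as the X axis, and unit baseline
% length, this reduces to (standard form of Expression 5):
F(\kappa_1, \varphi_1, \kappa_2, \varphi_2, \omega_2)
  = \begin{vmatrix} Y_1 & Z_1 \\ Y_2 & Z_2 \end{vmatrix}
  = Y_1 Z_2 - Y_2 Z_1 = 0
```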
- The unknown parameters (external orientation elements) are obtained by the following procedure.
- (1) The initial approximate values of the unknown parameters (κ1, φ1, κ2, φ2, ω2) are normally set to 0.
- (2) The coplanar conditional expression of Expression 5 is Taylor-expanded around the approximate values, the values of the differential coefficients for linearization are obtained by Expression 6, and observation equations are established.
- (3) The least squares method is applied to obtain correction amounts for the approximate values.
- (4) The approximate values are corrected.
- (5) Using the corrected approximate values, operations (2) to (4) are repeated until convergence.
- connection orientation is a process of unifying the inclination and scale between a plurality of models to make the same coordinate system.
- In the connection orientation, the connection discrepancies ΔZj and ΔDj represented by Expression 7 are calculated. If ΔZj and ΔDj are equal to or less than a predetermined value (for example, 0.0005 (1/2000)), it is determined that the connection orientation has been performed normally.
- As described above, according to this embodiment, measurement values, including the initial values for the LSM or the normalized correlation method used in stereo matching, can be acquired automatically.
- In addition, by using a reference model to which actual dimensions are given, the scale adjustment between the measurement model and the reference model becomes unnecessary.
- the reference model and the measurement model can be automatically aligned.
- the alignment of the reference model and the measurement model can be performed manually, semi-automatically or fully automatically.
- the miscorresponding point is determined based only on the point-to-point distance between the reference model and the measurement model, the miscorresponding point determination process can be simplified.
- the photographing position and posture of the photographing units 2 to 9 can be measured by the calibration subject 19. After obtaining the photographing position and orientation, three-dimensional measurement can be performed at any time by placing the measuring object 18 in the space. Further, even a dynamic measurement object 18 having movement can be measured by simultaneously photographing with the photographing units 2 to 9.
- FIG. 16 is a block diagram of the shape measuring apparatus according to the second to fourth embodiments.
- The calculation processing unit 15 of this shape measuring apparatus further includes a feature amount calculation unit 31 that calculates a feature amount (volume), a scale rate calculation unit 32 that calculates a scale rate based on the volumes of the measurement model and the pseudo model, and a scale changing unit 33 that changes the scale based on the calculated scale rate.
- FIG. 17 is a flowchart of the program of the shape measuring apparatus according to the second to fourth embodiments.
- First, the triangular network forming unit 23 (measurement model forming unit) combines a plurality of measurement models to form a measurement model of the entire circumference of the measurement object 18 (the all-around model).
- FIG. 18 is a drawing substitute photo (A) showing the all-around model and a drawing substitute photo (B) showing a state in which the all-around model is cut into circles.
- Next, the scale rate is calculated by dividing the volume of the pseudo model by the volume of the all-around model (step S24-3).
- the volume data of the pseudo model is calculated or prepared in advance.
- the scale of the pseudo model is changed with the calculated scale factor (step S24-4).
- the scale is changed by multiplying the coordinate value of each point of the pseudo model by the scale factor.
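A sketch of the volume computation and rescaling follows. One interpretive assumption: multiplying every coordinate by a factor s scales a volume by s cubed, so the sketch converts the volume ratio into a linear coordinate scale by taking its cube root, with the ratio taken so that the rescaled pseudo model matches the all-around model's volume.

```python
# Minimal sketch of steps S24-3 and S24-4: compute the volumes of the two
# closed triangle meshes, derive a scale rate from their ratio, and
# multiply every coordinate of the pseudo model by the linear factor.
import numpy as np

def mesh_volume(vertices, faces):
    """Unsigned volume of a closed triangle mesh (divergence theorem)."""
    v = vertices[faces]                                   # (m, 3, 3)
    signed = np.einsum('ij,ij->i', v[:, 0], np.cross(v[:, 1], v[:, 2]))
    return abs(signed.sum()) / 6.0

def rescale_pseudo_model(pseudo_v, pseudo_f, whole_v, whole_f):
    ratio = mesh_volume(whole_v, whole_f) / mesh_volume(pseudo_v, pseudo_f)
    return pseudo_v * ratio ** (1.0 / 3.0)               # linear scale
```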
- the entire model and the pseudo model are aligned (shape matching) (step S24-5).
- For the shape matching, the above-described ICP method, a method of minimizing the distances between the points of the measurement model and the reference model, or a method of designating four or more points of the reference model corresponding to points of the measurement model is used.
- the mis-corresponding point is determined by comparing the aligned all-around model (measurement model) and the pseudo model (reference model) (step S24-6).
- The comparison between the measurement model and the reference model is performed by searching for the point of the measurement model closest to each reference model point and determining whether the distance between the points falls within about ±2σ to ±3σ of the prediction accuracy σ. Points outside this range are determined to be erroneous corresponding points.
- this miscorresponding point determination process can be similarly performed when the three-dimensional shape measurement unit 25 subsequently obtains a measurement value.
- In the third embodiment, the pseudo model is scaled so that the distance from its barycentric position (feature amount) to its surface becomes approximately the same as that of the measurement model.
- The block diagram of the shape measuring apparatus according to the third embodiment and the flowchart of its program are the same as those in FIGS. 16 and 17.
- First, the triangular network forming unit 23 (measurement model forming unit) combines a plurality of measurement models to form the all-around model (step S24-1), and the feature amount calculation unit 31 calculates the center-of-gravity position of the all-around model (step S24-2).
- the barycentric position (xm, ym, zm) is calculated by the following equation (8).
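Equation (8) is presumably the ordinary point average over the n model points (x_i, y_i, z_i):

```latex
x_m = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad
y_m = \frac{1}{n}\sum_{i=1}^{n} y_i, \qquad
z_m = \frac{1}{n}\sum_{i=1}^{n} z_i
```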
- The scale rate is obtained from the maximum and minimum values of the distance from the center of gravity to the surface, or from the distances from the center of gravity to the surface in the horizontal direction (X-axis or Y-axis direction) and the vertical direction (Z-axis direction).
- FIG. 19 is a drawing substitute photo (A) showing the center of gravity position of the reference model and a drawing substitute photo (B) showing the center of gravity position of the measurement model. In FIGS. 19A and 19B, the distance from the center of gravity position to the surface in the horizontal and vertical directions is depicted.
- the scale of the pseudo model is changed based on the scale ratio (step S24-4).
- the scale is changed by multiplying the coordinate value of each point of the pseudo model by the scale factor.
- shape matching between the all-around model and the pseudo model is performed (step S24-5).
- For the shape matching, the above-described ICP method, a method of minimizing the distances between the points of the measurement model and the reference model, or a method of designating four or more points of the reference model corresponding to points of the measurement model is used.
- the mis-corresponding point is determined by comparing the aligned all-around model (measurement model) and the pseudo model (reference model) (step S24-6).
- The comparison between the measurement model and the reference model is performed by searching for the point of the measurement model closest to each reference model point and determining whether the distance between the points falls within about ±2σ to ±3σ of the prediction accuracy σ. Points outside this range are determined to be erroneous corresponding points.
- this miscorresponding point determination process can be similarly performed when the three-dimensional shape measurement unit 25 subsequently obtains a measurement value.
- a miscorresponding point of a measurement model can be automatically determined using a reference model (pseudo model) having a different scale.
- the block diagram of the shape measuring apparatus and the flowchart of the program of the shape measuring apparatus according to the fourth embodiment are the same as those in FIGS. 16 and 17.
- the triangular network forming unit 23 (measurement model forming unit) forms one measurement model or an entire circumference model (step S24-1).
- the feature quantity calculation unit 31 extracts four or more feature points corresponding to the measurement model and the pseudo model (step S24-2).
- FIG. 20 is a drawing substitute photograph (A) showing the feature point positions of the reference model and a drawing substitute photograph (B) showing the feature point positions of the measurement model.
- For example, when the measurement object 18 is a human head, the positions of the eyes, the nostrils, the top of the head, the back of the head, and the temporal regions are extracted. The positions of the eyes and the nostrils are extracted by binarizing the edge intensity and the luminance values, and the positions of the top of the head and the temporal regions are extracted by obtaining the maximum values in the vertical and horizontal directions. Note that the feature points may also be designated manually on the screen.
- FIG. 21 is a flowchart of scale change and alignment.
- Let the coordinates of the measurement model be (XM, YM, ZM), the coordinates of the reference model be (X, Y, Z), the rotation angles around the three axes be (ω, φ, κ), and the translation amounts be (X0, Y0, Z0). Then the following Expression 9 holds between the coordinates of the measurement model and the reference model: (X, Y, Z)ᵀ = S · R(ω, φ, κ) · (XM, YM, ZM)ᵀ + (X0, Y0, Z0)ᵀ, where S is the scale and R is the rotation matrix composed of the rotation angles (ω, φ, κ).
- the coordinates of four or more feature points corresponding to the measurement model and the reference model are input, and four or more simultaneous equations are established.
- The unknown variables to be obtained are the scale S, the rotation angles (ω, φ, κ) around the three axes, and the translation amounts (X0, Y0, Z0); if the translation amounts (X0, Y0, Z0) are ignored, there are four unknown variables: the scale S and the rotation angles (ω, φ, κ) around the three axes.
- First, the scale S (scale rate) is calculated (step S31), and the rotation matrix R composed of the rotation angles (ω, φ, κ) around the three axes is calculated (step S32). Further, the translation amounts are calculated according to the number of feature points substituted (step S33). Then, based on the calculated scale S, rotation angles (ω, φ, κ), and translation amounts (X0, Y0, Z0), the coordinates of all points of the pseudo model are transformed by the above Expression 9 (step S34). Thereby, the scale change and the alignment of the pseudo model are performed simultaneously.
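A sketch of steps S31 to S34 follows; it solves Expression 9 for S, R, and the translation in closed form (an Umeyama-style least-squares fit, one possible implementation rather than the patent's own procedure).

```python
# Minimal sketch: estimate scale S, rotation R and translation t with
# ref ~= S * R @ model + t from four or more corresponding feature points,
# then transform all points of the pseudo model (step S34).
import numpy as np

def estimate_similarity(model_pts, ref_pts):
    """Closed-form least-squares similarity transform (Umeyama-style)."""
    cm, cr = model_pts.mean(axis=0), ref_pts.mean(axis=0)
    M, Rf = model_pts - cm, ref_pts - cr
    U, D, Vt = np.linalg.svd(Rf.T @ M)
    sign = np.sign(np.linalg.det(U @ Vt))    # guard against a reflection
    R = U @ np.diag([1.0, 1.0, sign]) @ Vt
    S = (D * np.array([1.0, 1.0, sign])).sum() / (M ** 2).sum()
    t = cr - S * R @ cm
    return S, R, t

def apply_similarity(points, S, R, t):
    """Apply Expression 9 to every point of the pseudo model."""
    return S * points @ R.T + t
```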
- the measurement model and the pseudo model are compared to determine a miscorresponding point (step S24-6).
- The measurement model is compared with the reference model by searching for the point of the measurement model closest to each reference model point and determining whether the distance between the points falls within about ±2σ to ±3σ of the prediction accuracy σ. A point outside this range is determined to be an erroneous corresponding point.
- this miscorresponding point determination process can be similarly performed when the three-dimensional shape measurement unit 25 subsequently obtains a measurement value.
- a miscorresponding point of a measurement model can be automatically determined using a reference model (pseudo model) having a different scale. Further, the scale adjustment and alignment of the reference model and the measurement model can be performed without creating an all-around model.
- The fifth embodiment is based on a second method, in which a reference scale (calibration subject) is photographed together with the measurement object and the position and orientation of the photographing unit are obtained in parallel. It is a modification of the processing method of the imaging position/orientation measurement unit 20 in the first embodiment.
- the second method is a method in which a measurement object to be measured and a calibration subject are simultaneously photographed, and the position and orientation of the photographing unit are obtained to perform three-dimensional measurement.
- the advantage of this method is that the position of the photographing unit is free and can be measured even from one unit, so that the configuration can be simplified.
- As the calibration subject to be photographed together, known lengths such as a reference scale, or known coordinates, are used.
- FIG. 22 is a top view of the shape measuring apparatus according to the fifth embodiment
- FIG. 23 is a top view of a modification of the shape measuring apparatus according to the fifth embodiment.
- FIG. 22 shows, for a mode in which photographing is performed while moving from place to place with one photographing unit, the relationship between the reference scale 19, the photographing unit 2, the measurement object 18, the calculation processing unit 15, the operation unit 16, and the display unit 17.
- FIG. 23 shows a modification in which two photographing units form a stereo camera configuration, or multiple photographing units form a multi-camera configuration.
- A PC can serve as the calculation processing unit 15, the operation unit 16, and the display unit 17, so the apparatus can consist of only the photographing unit(s) and a PC. If the object has no pattern, a pattern is projected by a projector or applied to the object.
- a method for obtaining the photographing position and orientation of the photographing unit a relative orientation method, a single photo orientation method, a DLT method, or a bundle adjustment method is used, and these may be used alone or in combination.
- The external orientation elements can be obtained based on the relative positional relationship of the reference points captured in one photograph.
- FIG. 24 is a drawing substitute photo (A) showing a left image obtained by photographing a reference scale and a measurement object, and a drawing substitute photo (B) showing a right image.
- the measurement object 18 is photographed with a reference scale 35 (calibration subject) in which the color code targets 36a to 36d are arranged in a relative positional relationship.
- First, the imaging position/orientation measurement unit 20 shown in FIG. 2 acquires the overlapping images shown in FIG. 24.
- the shooting position / orientation measurement unit 20 binarizes the overlapping images to obtain the barycentric positions (image coordinates of the reference points) of the color code targets 36a to 36d. Further, the photographing position / orientation measurement unit 20 reads a color code from the color scheme of the color code targets 36a to 36d and labels each color code target 36a to 36d.
- The photographing position/orientation measurement unit 20 obtains the photographing positions and orientations of the photographing units 2 to 9 by the relative orientation method, the single photo orientation or DLT method, or the bundle adjustment method, based on the image coordinates of the reference points and the three-dimensional relative positional relationship of the reference points. Combining these methods yields highly accurate positions and orientations.
- The second method, which is the processing method of the photographing position/orientation measurement unit 20 employed in this embodiment, is a modification of the first method employed in the first embodiment; for the other configurations and processes, the configurations and processes shown in the first to fourth embodiments can be adopted.
- Since the second method is adopted, the position and orientation of the photographing unit can be obtained and three-dimensional measurement can be performed by simultaneously photographing the measurement object 18 and the calibration subject.
- the imaging unit can be configured from one unit to any number of units, and since it is not necessary to fix the imaging unit, the configuration can be simplified.
- the present invention can be used for a shape measuring apparatus for measuring a three-dimensional shape of a measurement object and a program thereof.
Abstract
Measurement values including an initial value necessary to measure a three-dimensional shape are automatically acquired by automatically determining wrongly matched points in overlapping images. A shape measurement device (1) is provided with image capturing units (2-9) each for capturing an image of an object to be measured (18) in an overlapping image capturing region, a feature point matching unit (21) for matching the positions of feature points of the object to be measured (18) in the overlapping images captured by the image capturing units (2-9), a measurement model formation unit (23) for forming a model of the object to be measured (18) on the basis of the feature points matched by the feature point matching unit (21), a wrongly matched point determination unit (24) for determining wrongly matched points on the basis of the measurement model formed by the measurement model formation unit (23) and a reference model of another object to be measured, and a three-dimensional shape measurement unit (25) for obtaining the three-dimensional shape of the object to be measured (18) on the basis of the positions of feature points other than the points determined as the wrongly matched points by the wrongly matched point determination unit (24), and the like.
Description
The present invention relates to a shape measurement technique for measuring the three-dimensional shape of a measurement object based on overlapping images obtained by photographing the measurement object from a plurality of photographing positions, and in particular to a technique for automatically acquiring measurement values, including the initial values necessary for measuring the three-dimensional shape.
The theory of photogrammetry has long been studied. In recent years, techniques have been disclosed for measuring the three-dimensional shape of a measurement object based on overlapping images taken from a plurality of photographing positions, using the theory of photogrammetry. In order to measure the three-dimensional position of the measurement object, six or more points must be associated between the left and right images; this processing has had to be performed manually, or automatically by attaching marks to the measurement object.
Also, in order to measure the three-dimensional shape of the measurement object, stereo matching is performed on the pixels of the measurement object. For stereo matching, least-squares matching (LSM), which searches while deforming a template image, a normalized correlation method, or the like is used. This process requires many points and lines associated between the left and right images, but manually setting the initial values of these points and lines is cumbersome and requires skill.
Techniques for solving such problems are disclosed in, for example, Patent Documents 1 and 2. In the invention described in Patent Document 1, a feature pattern is extracted by taking the difference between a pair of first captured images, obtained by photographing a measurement object provided with a reference feature pattern from different directions, and a pair of second captured images, obtained by photographing the measurement object without the feature pattern from the same directions as the first captured images.
According to this aspect, since an image containing only the feature pattern can be created, the position of the feature pattern can be detected automatically and accurately. Also, by increasing the number of feature pattern points, the corresponding surfaces in the left and right images can be detected automatically.
In the invention described in Patent Document 2, the photographing position of the measurement target is corrected with respect to the reference position of the measurement target determined by design data, and miscorresponding points, that is, points associated in error, are deleted by comparing the three-dimensional shape of the measurement target with the design data. According to this aspect, the three-dimensional shape measurement process can be automated.
Patent Document 1: JP 10-318732 A
Patent Document 2: JP 2007-212430 A
In view of this background, an object of the present invention is to provide a technique for automatically acquiring measurement values, including the initial values necessary for measuring a three-dimensional shape, by automatically determining miscorresponding points in overlapping images.
The invention according to claim 1 is a shape measurement device comprising: an imaging unit that photographs a measurement object in overlapping imaging regions from a plurality of photographing positions; a feature point association unit that associates the positions of feature points of the measurement object in the overlapping images photographed by the imaging unit; a measurement model forming unit that forms a model of the measurement object based on the feature points on the overlapping images associated by the feature point association unit; a miscorresponding point determination unit that determines miscorresponding points based on the measurement model formed by the measurement model forming unit and a reference model formed based on the form of the measurement object; and a three-dimensional shape measurement unit that obtains the three-dimensional coordinates of the feature points of the measurement object or the three-dimensional shape of the measurement object based on the plurality of photographing positions and the positions of the feature points excluding the points determined to be miscorresponding points by the miscorresponding point determination unit.
According to the invention of claim 1, measurement values, including the initial values necessary for measuring a three-dimensional shape, can be acquired automatically by automatically determining miscorresponding points in overlapping images.
The invention according to claim 2 is characterized in that, in the invention of claim 1, the reference model is at least one of data obtained by MRI, CT, CAD, or another shape measurement device, and shape data obtained in the past.
According to the invention of claim 2, miscorresponding points can be determined automatically using a reference model similar in shape to the measurement model.
The invention according to claim 3 is characterized in that, in the invention of claim 2, the reference model is actual-size data to which actual dimensions are given.
According to the invention of claim 3, when the actual dimensions of the measurement model are given by absolute orientation, no scale adjustment is necessary when comparing the measurement model with the reference model.
The invention according to claim 4 is characterized in that, in the invention of claim 1, the reference model is a pseudo model that is similar in shape to the measurement object but is not given actual dimensions, and the miscorresponding point determination unit determines miscorresponding points after coordinate transformation is performed so that the volumes of the two models become comparable.
According to the invention of claim 4, miscorresponding points of the measurement model can be determined automatically using a reference model (pseudo model) of different dimensions.
The invention according to claim 5 is characterized in that, in the invention of claim 1, the reference model is a pseudo model that is similar in shape to the measurement object but is not given actual dimensions, and the miscorresponding point determination unit determines miscorresponding points after coordinate transformation is performed so that the distances from the center of gravity become comparable.
According to the invention of claim 5, miscorresponding points of the measurement model can be determined automatically using a reference model (pseudo model) of different dimensions.
The invention according to claim 6 is characterized in that, in the invention of claim 1, the reference model is a pseudo model that is similar in shape to the measurement object but is not given actual dimensions, and the miscorresponding point determination unit determines miscorresponding points after coordinate transformation is performed so as to align the positions of at least four feature points.
According to the invention of claim 6, miscorresponding points of the measurement model can be determined automatically using a reference model (pseudo model) of different dimensions.
The invention according to claim 7 is characterized in that, in the invention of claim 1, the reference model and the measurement model are aligned by minimizing the distance between each point of the reference model and the point of the measurement model closest to it.
According to the invention of claim 7, the reference model and the measurement model can be aligned automatically.
The invention according to claim 8 is characterized in that, in the invention of claim 1, the reference model and the measurement model are aligned by designating four or more points of the measurement model corresponding to points of the reference model.
According to the invention of claim 8, the alignment of the reference model and the measurement model can be performed manually, semi-automatically, or fully automatically.
The invention according to claim 9 is characterized in that, in the invention of claim 1, a point of the measurement model is determined to be a miscorresponding point when the distance between a point of the reference model and the point of the measurement model closest to it is outside a predetermined range.
According to the invention of claim 9, since miscorresponding points are determined based only on the point-to-point distances between the reference model and the measurement model, the miscorresponding point determination process can be simplified.
The invention according to claim 10 is characterized in that the invention according to any one of claims 1 to 9 further comprises: a texture extraction unit that extracts the texture of the measurement object from an image of the measurement object photographed by the imaging unit; a texture synthesis unit that pastes the texture onto at least one of the measurement model formed by the measurement model forming unit and the reference model; and a display unit that displays a textured model image based on the textured model synthesized by the texture synthesis unit.
According to the invention of claim 10, pasting the texture of the measurement object onto the reference model makes it easy to compare the shape of one measurement object with that of another. Furthermore, by comparing the texture pasted on the reference model with the texture pasted on the measurement model, the design data (reference model) can be verified against the actual product (measurement model).
The invention according to claim 11 is characterized in that, in the invention according to any one of claims 1 to 9, when the miscorresponding point determination unit determines that a point is a miscorresponding point, the designation of the feature point corresponding to that miscorresponding point is canceled.
According to the invention of claim 11, the feature point group excluding the miscorresponding points can be used as the measurement values, including the initial values necessary for measuring the three-dimensional shape.
The invention according to claim 12 is a program for executing: a feature point association step of associating the positions of feature points of a measurement object in overlapping images photographed from a plurality of photographing positions; a measurement model forming step of forming a model of the measurement object based on the feature points on the overlapping images associated in the feature point association step; a miscorresponding point determination step of determining miscorresponding points based on the measurement model formed in the measurement model forming step and a reference model formed based on the form of the measurement object; and a three-dimensional shape measurement step of obtaining the three-dimensional coordinates of the feature points of the measurement object or the three-dimensional shape of the measurement object based on the plurality of photographing positions and the positions of the feature points excluding the points determined to be miscorresponding points in the miscorresponding point determination step.
According to the invention of claim 12, measurement values, including the initial values necessary for measuring a three-dimensional shape, can be acquired automatically by automatically determining miscorresponding points in overlapping images.
According to the present invention, measurement values, including the initial values necessary for measuring a three-dimensional shape, can be acquired automatically by automatically determining miscorresponding points in overlapping images.
DESCRIPTION OF SYMBOLS: 1: shape measuring apparatus; 2 to 9: imaging units; 10 to 13: feature projection units; 14: relay unit; 15: calculation processing unit; 16: operation unit; 17: display unit; 18: measurement object; 19: calibration subject; 20: photographing position/orientation measurement unit; 21: feature point association unit; 22: three-dimensional coordinate calculation unit; 23: triangular network forming unit (measurement model forming unit); 24: miscorresponding point determination unit; 25: three-dimensional shape measurement unit; 26: background removal unit; 27: feature point extraction unit; 28: corresponding point search unit; 29: shape matching unit; 30: shape model comparison unit; 31: feature amount calculation unit; 32: scale ratio calculation unit; 33: scale changing unit; 34: texture extraction unit; 35: texture synthesis unit.
1. First Embodiment
Hereinafter, an example of the shape measuring apparatus and the program will be described with reference to the drawings.
(Configuration of the Shape Measuring Device)
FIG. 1 is a top view of the shape measuring apparatus. The shape measuring apparatus 1 includes imaging units 2 to 9, feature projection units 10 to 13, a relay unit 14, a calculation processing unit 15, a display unit 17, and an operation unit 16. The shape measuring apparatus 1 measures the shape of a measurement object 18 placed at the center of the imaging units 2 to 9.
For the imaging units 2 to 9, for example, video cameras, industrial-measurement CCD (Charge Coupled Device) cameras, or CMOS (Complementary Metal Oxide Semiconductor) cameras are used. A commercially available digital camera may also be used, with the image data transferred to the calculation processing unit 15 via a CompactFlash (registered trademark) memory card, a USB cable, or the like. The imaging units 2 to 9 are arranged around the measurement object 18 and photograph the measurement object 18 in overlapping imaging regions from a plurality of photographing positions.
The imaging units 2 to 9 are arranged in the horizontal or vertical direction, separated by a predetermined baseline length. Additional imaging units may be provided so that units are arranged in both the horizontal and vertical directions. The shape measuring apparatus 1 measures the three-dimensional shape of the measurement object 18 based on at least one pair of overlapping images; accordingly, one or more imaging units can be used as appropriate depending on the size and shape of the photographed subject.
For the feature projection units 10 to 13, for example, projectors or laser devices are used. The feature projection units 10 to 13 project patterns such as a random dot pattern, point-like spot light, or linear slit light onto the measurement object 18, thereby adding features to portions of the measurement object 18 that are poor in features. The feature projection units 10 to 13 are arranged between imaging units 2 and 3, between imaging units 4 and 5, between imaging units 6 and 7, and between imaging units 8 and 9. When the measurement object 18 already has sufficient features, or when a pattern can be painted onto it, the feature projection units 10 to 13 may be omitted.
The imaging units 2 to 9 are connected to the relay unit 14 via an interface such as Ethernet (registered trademark), Camera Link, or IEEE 1394 (Institute of Electrical and Electronics Engineers 1394). A switching hub, an image capture board, or the like is used as the relay unit 14. The images photographed by the imaging units 2 to 9 are input to the calculation processing unit 15 via the relay unit 14.
For the calculation processing unit 15, a personal computer (PC) or hardware configured with a PLD (Programmable Logic Device), such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit), is used. The calculation processing unit 15 is operated via the operation unit 16, and its processing contents and calculation results are displayed on the display unit 17. A keyboard and a mouse are used as the operation unit 16, and a liquid crystal monitor is used as the display unit 17. The operation unit 16 and the display unit 17 may also be integrated as a touch-panel liquid crystal monitor.
FIG. 2 is a block diagram of the shape measuring apparatus. The calculation processing unit 15 includes a photographing position/orientation measurement unit 20, a feature point association unit 21, a three-dimensional coordinate calculation unit 22, a triangular network forming unit 23, a miscorresponding point determination unit 24, and a three-dimensional shape measurement unit 25. These may be implemented as modules of a program executable on a PC, or as a PLD such as an FPGA.
The photographing position/orientation measurement unit 20 measures the exterior orientation parameters (photographing positions and orientations) of the imaging units 2 to 9 based on images of the calibration subject 19 shown in FIG. 1. If the interior orientation parameters (principal point, focal length, and lens distortion) of the imaging units 2 to 9 are not known, the photographing position/orientation measurement unit 20 determines them at the same time. The calibration subject 19 is a cubic calibration box on which a plurality of reference points are arranged.
Color code targets are used as the reference points (see JP 2007-64627 A). A color code target has three retro targets (retroreflective targets). First, the photographing position/orientation measurement unit 20 binarizes an image of the calibration subject 19 to detect the retro targets and obtains their centroid positions (the image coordinates of the reference points). The photographing position/orientation measurement unit 20 also labels each reference point based on the color scheme (color code) of the color code target. In this way, the positions of corresponding reference points in the overlapping images are identified.
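As a rough sketch of this retro-target detection step (binarize, then take blob centroids), assuming a grayscale image; the threshold value and the use of scipy.ndimage for connected-component labeling are illustrative choices, not the patent's implementation:

```python
import numpy as np
from scipy import ndimage

def retro_target_centroids(gray: np.ndarray, threshold: float = 200.0) -> np.ndarray:
    """Detect bright retroreflective targets and return their centroids.

    Binarizes the image, labels connected bright regions, and returns the
    intensity-weighted centroid (x, y) of each region, i.e. the image
    coordinates of the reference points.
    """
    mask = gray > threshold                            # binarization
    labels, n = ndimage.label(mask)                    # connected components
    centers = ndimage.center_of_mass(gray, labels, range(1, n + 1))
    return np.array([(x, y) for y, x in centers])      # (row, col) -> (x, y)
```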
The photographing position/orientation measurement unit 20 then calculates the exterior orientation parameters of the imaging units 2 to 9 using the relative orientation method, the single photo orientation method, the DLT method, or the bundle adjustment method. These may be used alone or in combination. Specific processing by the relative orientation method will be described later.
This embodiment adopts a first method in which two or more imaging units 2 to 9 are fixed, the calibration subject 19 is photographed in advance, and the positions and orientations of the imaging units 2 to 9 are calculated beforehand. The advantage of this first method is that even a moving subject (for example, a living body) can be captured and measured in an instant. Moreover, once the positions and orientations of the imaging units 2 to 9 have been obtained with the calibration subject 19, three-dimensional measurement is possible at any time simply by placing the measurement object 18 within that space. Alternatively, the calibration subject may be photographed together with the measurement object 18, and the exterior orientation parameters and the three-dimensional coordinates of the feature points of the measurement object 18 may be obtained in parallel.
The feature point association unit 21 extracts feature points of the measurement object 18 from at least one pair of stereo images and associates the positions of the feature points in the stereo images. When the imaging units 2 to 9 are arranged horizontally, the feature point association unit 21 searches for the feature point positions in the horizontal direction; when they are arranged vertically, it searches in the vertical direction; and when they are arranged both horizontally and vertically, it searches in both the horizontal and vertical directions.
The feature point association unit 21 includes a background removal unit 26, a feature point extraction unit 27, and a corresponding point search unit 28. The background removal unit 26 generates a background-removed image, in which only the measurement object 18 appears, by subtracting a background image from the processed image in which the measurement object 18 appears.
The feature point extraction unit 27 extracts feature points from the background-removed image. Feature points are extracted from both the left and right stereo images in order to limit the search range for corresponding points. For feature point extraction, differential filters such as Sobel, Laplacian, Prewitt, or Roberts are used.
The corresponding point search unit 28 searches one image for the point corresponding to each feature point extracted in the other image. For the corresponding point search, template matching such as the sequential similarity detection algorithm (SSDA), the normalized correlation method, or orientation code matching (OCM) is used.
The three-dimensional coordinate calculation unit 22 calculates the three-dimensional coordinates of the feature points of the measurement object 18 based on the exterior orientation parameters measured by the photographing position/orientation measurement unit 20 and the image coordinates of the feature points associated by the feature point association unit 21.
The triangular network forming unit 23 (measurement model forming unit) forms a triangulated irregular network (TIN), in which the feature points associated by the feature point association unit 21 are connected by line segments. The Delaunay method is used to form the TIN. The TIN is formed based on the image coordinates of the feature points associated by the feature point association unit 21 or on the three-dimensional coordinates obtained by the three-dimensional coordinate calculation unit 22. For details on TINs, see "Masao Iri and Takeshi Koshizuka: Computational Geometry and Geographic Information Processing, p. 127" and "Franz Aurenhammer (trans. Kokichi Sugihara): Voronoi Diagrams - A Survey of a Fundamental Geometric Data Structure, ACM Computing Surveys, Vol. 23, pp. 345-405."
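As a minimal sketch of this TIN formation step, the following triangulates the 2D image coordinates of the matched feature points with a Delaunay triangulation; each triangle then inherits the 3D coordinates of its vertices. The use of scipy here is an illustrative assumption, not the patent's implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def form_tin(image_xy: np.ndarray, xyz: np.ndarray) -> np.ndarray:
    """Form a TIN over matched feature points.

    image_xy: (N, 2) image coordinates of the feature points
    xyz:      (N, 3) corresponding 3D coordinates
    Returns (M, 3) vertex-index triangles; indexing xyz with them
    yields the 3D triangle mesh (the measurement model).
    """
    tri = Delaunay(image_xy)       # 2D Delaunay triangulation
    return tri.simplices           # triangles as index triplets into xyz
```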
The miscorresponding point determination unit 24 includes a shape matching unit 29 and a shape model comparison unit 30. The shape matching unit 29 aligns the measurement model formed by the triangular network forming unit 23 with the reference model of another measurement object. The reference model is a model of another measurement object similar to the measurement object 18; either an actual-size model, to which actual dimensions are given, or a pseudo model, to which no actual dimensions are given, can be used. The actual-size model is described in detail in this embodiment, and the pseudo model in the second and subsequent embodiments.
For the actual-size model, for example, data measured by MRI (Magnetic Resonance Imaging), CT (Computed Tomography), CAD (Computer Aided Design), or another shape measurement device can be used. Shape data obtained in the past by the present apparatus may also be used.
For the alignment by the shape matching unit 29, the ICP (Iterative Closest Point) method, a method of minimizing the point-to-point distances between the measurement model and the reference model, or a method of designating four or more points of the reference model corresponding to points of the measurement model is used. For the ICP method, see "Shunichi Kaneko et al.: Robust ICP Registration Incorporating M-Estimation, Journal of the Japan Society for Precision Engineering, Vol. 67, No. 8" or "Besl, P.J. and McKay, N.D.: A Method for Registration of 3-D Shapes, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 14, NO. 2." Other alignment methods will be described later.
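For orientation, a minimal point-to-point ICP iteration might look like the following sketch (nearest neighbors via a k-d tree, rigid transform via SVD). This is a textbook version under simplifying assumptions, not the robust M-estimation variant cited above:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src: np.ndarray, dst: np.ndarray, iters: int = 30):
    """Rigidly align point cloud src (N, 3) to dst (M, 3) with basic ICP."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)            # nearest dst point for each src point
        matched = dst[idx]
        # Best rigid transform between matched pairs (Kabsch / SVD)
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T             # rotation, reflections excluded
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step       # apply incremental transform
        R, t = R_step @ R, R_step @ t + t_step
    return R, t                             # total rotation and translation
```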
When the alignment of the measurement model and the reference model is completed, the shape model comparison unit 30 compares the two models to determine miscorresponding points. The shape model comparison unit 30 determines miscorresponding points based on the point-to-point distances between the points constituting the measurement model and the points constituting the reference model. When a feature point is determined to be a miscorresponding point, the designation of that feature point is canceled.
The three-dimensional shape measurement unit 25 performs stereo matching on the pixels in a predetermined region, using as initial values the feature point group excluding the points determined to be miscorresponding points by the miscorresponding point determination unit 24, and obtains the three-dimensional shape of the measurement object 18. For stereo matching, LSM, which searches while deforming a template image, the normalized correlation method, or the like is used. The three-dimensional shape is displayed on the display unit 17 as a point cloud or a TIN.
Furthermore, the miscorresponding point removal that the miscorresponding point determination unit 24 performs when obtaining the initial values can likewise be performed by the three-dimensional shape measurement unit 25 when it subsequently obtains the measurement values.
After the three-dimensional shape is measured, a texture can be pasted onto at least one of the measurement model from which the miscorresponding points have been removed and the reference model. The texture extraction unit 34 extracts the texture (image) of the measurement object 18 from the images photographed by the imaging units 2 to 9, and the texture synthesis unit 35 pastes the texture onto at least one of the measurement model from which the miscorresponding points have been removed and the reference model.
(Processing of the Shape Measuring Device)
Hereinafter, the detailed processing of the shape measuring apparatus will be described with reference to FIG. 3, which is a flowchart of the program of the shape measuring apparatus. The program that executes this flowchart, as well as the programs according to the embodiments described later, can be provided stored on a recording medium such as a CD-ROM.
First, a processed image and a background image are input (step S10). Next, the background removal unit 26 removes the background of the measurement object 18 (step S11). FIG. 4 shows a processed image (A), a background image (B), and a background-removed image (C). The background-removed image is generated by subtracting the background image from the processed image. Background removal is performed on both the left and right images. If no background image is available, this processing may be omitted.
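A minimal sketch of this background subtraction step, assuming grayscale images and an illustrative threshold:

```python
import numpy as np

def remove_background(processed: np.ndarray, background: np.ndarray,
                      threshold: float = 10.0) -> np.ndarray:
    """Return the processed image with the background suppressed.

    Pixels whose absolute difference from the background image falls below
    the threshold are treated as background and zeroed out.
    """
    diff = np.abs(processed.astype(np.float64) - background.astype(np.float64))
    mask = diff > threshold            # foreground = sufficiently different
    return np.where(mask, processed, 0)
```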
Next, the feature point extraction unit 27 extracts feature points from the left and right images (step S12). Extracting feature points from both images allows the search range for corresponding points to be reduced.
The feature point extraction unit 27 performs preprocessing such as reduction, brightness correction, and contrast correction as necessary (step S12-1). Next, the edge strength is calculated from the preprocessed left and right images using the Sobel filter (step S12-2). FIG. 5 shows the Sobel filters for the x and y directions. Let the luminance values of the nine pixels corresponding to the Sobel filter matrix be I1 to I9 from the upper left to the lower right, the strength in the x direction be dx, and the strength in the y direction be dy; then the edge strength Mag of the pixel of interest (the center pixel) is calculated by Equation 1 below.
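Equation 1 itself does not survive in this text; with the standard 3x3 Sobel kernels that the surrounding description implies, it would read:

$$dx = (I_3 + 2I_6 + I_9) - (I_1 + 2I_4 + I_7), \qquad dy = (I_7 + 2I_8 + I_9) - (I_1 + 2I_2 + I_3),$$

$$\mathrm{Mag} = \sqrt{dx^2 + dy^2}.$$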
FIG. 6 shows an input image compressed to 1/4 (A) and the edge strength image obtained with the Sobel filter (B). The feature point extraction unit 27 performs post-processing, such as thinning, on the edge strength image of FIG. 6(B) (step S12-3). Thinning reduces the edges to a width of one pixel. As a result, the positions of the finally extracted feature points are thinned out, so that feature points are extracted without bias across the image.
Next, the feature point extraction unit 27 performs binarization (step S12-4). To determine the binarization threshold automatically, a histogram of edge strengths is created. FIG. 7 shows such a histogram. In the created histogram, the feature point extraction unit 27 takes as the binarization threshold the edge strength at which the cumulative frequency, counted from the strongest edges, corresponds to 50% of the total number of edge pixels.
In the case of FIG. 7, the number of edge pixels is 56,986, and the edge strength at the 28,493rd pixel, where the cumulative frequency counted from the strongest side reaches 50% of the total number of edge pixels, is 52. The binarization threshold is therefore 52. FIG. 8 shows the result of binarization with the threshold of 52, and FIG. 9 shows the feature points extracted in the left and right images.
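A minimal sketch of this 50%-cumulative-frequency threshold selection, assuming integer edge strengths:

```python
import numpy as np

def auto_threshold(mag: np.ndarray) -> int:
    """Pick the edge strength at which the cumulative count, taken from the
    strongest edges downward, reaches 50% of all edge pixels."""
    edges = mag[mag > 0].astype(int)               # edge pixels only
    hist = np.bincount(edges)                      # histogram of strengths
    cum_from_strong = np.cumsum(hist[::-1])        # accumulate from strong side
    half = edges.size // 2
    # first position (from the strong side) whose cumulative count reaches half
    k = int(np.searchsorted(cum_from_strong, half))
    return len(hist) - 1 - k                       # convert back to a strength
```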
Next, the corresponding point search unit 28 associates the feature points in the left and right images (step S13). The corresponding point search unit 28 creates a template image centered on each feature point in the left image and searches a predetermined region of the right image for the corresponding point with the strongest correlation to the template image.
FIG. 10 illustrates the template creation method (A), the method of determining the search line (B), and the method of determining the search width (C). As shown in FIG. 10(A), the template image consists of 21 x 21 pixels centered on the feature point of interest. Since vertical parallax has been removed from the stereo images, the search line is set parallel to the x-axis, as shown in FIG. 10(B). As shown in FIG. 10(C), the corresponding point search unit 28 detects the leftmost and rightmost feature points on the search line in the right image and searches for the corresponding point within the search width spanning from the leftmost to the rightmost feature point.
As a result, the point with the strongest correlation to the template image becomes the corresponding point. When the imaging units 2 to 9 are arranged vertically, the search line is parallel to the y-axis; when they are arranged both vertically and horizontally, search lines run along both the x-axis and the y-axis.
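A minimal sketch of the search along one line with normalized correlation (the 21 x 21 window follows the text; window positions are assumed to stay within the image borders for brevity):

```python
import numpy as np

def search_on_line(left, right, fx, fy, x_min, x_max, half=10):
    """Find the x in [x_min, x_max] on row fy of `right` whose 21x21 window
    best correlates with the 21x21 template around (fx, fy) in `left`."""
    tpl = left[fy - half:fy + half + 1, fx - half:fx + half + 1].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-9)      # normalize template
    best_x, best_score = None, -np.inf
    for x in range(x_min, x_max + 1):
        win = right[fy - half:fy + half + 1, x - half:x + half + 1].astype(float)
        win = (win - win.mean()) / (win.std() + 1e-9)  # normalize window
        score = float((tpl * win).mean())              # normalized correlation
        if score > best_score:
            best_x, best_score = x, score
    return best_x, best_score
```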
Next, the miscorresponding point determination unit 24 determines feature points that have been associated in error (miscorresponding points) (step S14). The miscorresponding point determination process consists of step S14-1, in which the three-dimensional coordinates of the feature points associated in the overlapping images are calculated and a measurement model is formed; step S14-2, in which this measurement model is aligned with the reference model (shape matching); and step S14-3, in which miscorresponding points are determined by comparing the measurement model with the reference model.
In step S14-1, the three-dimensional coordinates of the feature points are first calculated based on the positions of the feature points associated by the feature point association unit 21. Then, based on these three-dimensional coordinates, the triangular network forming unit 23 forms a TIN in three dimensions. The Delaunay method is used to form the TIN.
Next, shape matching between the measurement model and the reference model is performed (step S14-2). A method of minimizing the point-to-point distances between the two models is described below. First, the centroids of the measurement model and the reference model are brought into coincidence, and then the point of the measurement model closest to each point of the reference model is searched for. Let the three-dimensional coordinates of a point of the reference model be (x1i, y1i, z1i) and those of the closest point of the measurement model be (x2i, y2i, z2i); then, as shown in Equation 2 below, all points of the measurement model or of the reference model are moved so as to minimize the sum of the distances over all point pairs, where i is the point number and n is the number of points of the reference model.
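Equation 2 does not survive in this text; from the definitions just given it would be the minimization of

$$\sum_{i=1}^{n}\sqrt{(x_{1i}-x_{2i})^2 + (y_{1i}-y_{2i})^2 + (z_{1i}-z_{2i})^2} \;\rightarrow\; \min.$$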
Next, the method of designating four or more points of the reference model corresponding to points of the measurement model is described. FIG. 11 shows the measurement model (A) and the reference model (B). As shown in FIG. 11, four or more pairs of corresponding points between the measurement model and the reference model displayed on the screen are designated manually. The measurement model or the reference model is then coordinate-transformed so that the coordinates of these corresponding points coincide, thereby aligning the two models.
When the shape matching is finished, the reference model is compared with the measurement model (step S14-3). FIG. 12 shows the reference model (A), the measurement model (B), and a display of the miscorresponding points (C). The comparison is performed by searching for the point of the measurement model closest to each point of the reference model and checking whether the point-to-point distance lies within ±2σ to ±3σ of the prediction accuracy ±σ. Points outside this range are determined to be miscorresponding points. For example, if the prediction accuracy is ±0.5 mm, a point whose point-to-point distance is 1 mm or more is determined to be a miscorresponding point.
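A minimal sketch of this comparison step, with the multiplier k = 2 as one choice within the stated 2σ to 3σ range:

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_miscorresponding(measured: np.ndarray, reference: np.ndarray,
                          sigma: float, k: float = 2.0) -> np.ndarray:
    """Flag measurement-model points as miscorresponding (True in the mask).

    For each reference point, the nearest measured point is found; that
    measured point is flagged when the point-to-point distance exceeds
    k*sigma, with k chosen in the 2-3 range stated in the text.
    """
    dist, idx = cKDTree(measured).query(reference)   # nearest measured point
    bad = np.zeros(len(measured), dtype=bool)
    bad[idx[dist > k * sigma]] = True                # outside the tolerance
    return bad
```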
When the determination of miscorresponding points is finished, the three-dimensional shape of the measurement object 18 is measured (step S15). The three-dimensional shape measurement unit 25 performs stereo matching using the feature point group, excluding the points determined to be miscorresponding points, as the initial values for LSM, and measures the three-dimensional shape (dense surface). FIG. 13 shows a reference model obtained by MRI (A), the dense-surface measurement result when miscorresponding points are not determined (B), and the dense-surface measurement result when miscorresponding points are determined (C).
The three-dimensional shape shown in FIG. 13(C) is the dense-surface measurement result with miscorresponding points determined based on the reference model shown in FIG. 13(A). As shown in FIG. 13(B), when miscorresponding points are not determined, unnecessary TIN is formed around the contour of the face (the background portion). In contrast, as shown in FIG. 13(C), when miscorresponding points are determined, no unnecessary TIN is formed in that portion.
The miscorresponding point removal performed when obtaining the initial values can likewise be performed by the three-dimensional shape measurement unit 25 when it subsequently obtains the measurement values.
A texture can also be pasted onto the measurement model or the reference model. The details of the texture mapping are described below. First, the texture (image) of the measurement object 18 is extracted from the images photographed by the imaging units 2 to 9. The texture of the measurement object 18 is obtained by enclosing the feature point group, excluding the miscorresponding points, with a convex hull and extracting the image within the region of the convex hull.
Next, the spatial coordinates of each pixel in the extracted image are calculated. In this processing, the image coordinates (x, y) on the photograph are converted into spatial coordinates (X, Y, Z) in order to create the texture-mapped image. The spatial coordinates (X, Y, Z) are the values calculated by the three-dimensional coordinate calculation unit 22.
The spatial coordinates (X, Y, Z) corresponding to the image coordinates (x, y) of the photograph are given by Equation 3 below, where the coefficients (a, b, c, d) are the coefficients of the plane equation used in the triangle interpolation processing at the time of TIN formation. In this way, the position at which to sample the density of each pixel of the photograph is obtained, and the image is pasted into the three-dimensional space. FIG. 14 shows a wire frame (A) and the texture-mapped image (B).
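Equation 3 does not survive in this text. One formulation consistent with the description (an assumption, not the patent's verbatim equation) intersects the projection ray through the image point (x, y) with the triangle's plane aX + bY + cZ + d = 0: with projection center (X0, Y0, Z0) and ray direction u(x, y) from the collinearity model,

$$(X, Y, Z) = (X_0, Y_0, Z_0) + t\,\mathbf{u}(x, y), \qquad t = -\frac{aX_0 + bY_0 + cZ_0 + d}{a\,u_X + b\,u_Y + c\,u_Z}.$$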
Hereinafter, as an example of specific processing in the method of photographing the calibration subject before photographing the measurement object and obtaining the positions of the imaging units in advance, the case where the photographing position/orientation measurement unit 20 adopts the relative orientation method will be described.
According to the relative orientation method, the exterior orientation parameters can be obtained based on six or more corresponding reference points captured in the overlapping images. If the three-dimensional positions of the reference points are known, the absolute coordinates of the imaging units 2 to 9 can be obtained by absolute orientation. In that case, the finally obtained measurement model has the actual scale.
FIG. 15 illustrates relative orientation. In relative orientation, the exterior orientation parameters are obtained from six or more corresponding points (pass points) in the two left and right images. Relative orientation uses the coplanarity condition that the two rays connecting the projection centers O1 and O2 with a reference point P must lie in the same plane. The coplanarity condition is expressed by Equation 4 below.
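Equation 4 does not survive in this text; in the standard photogrammetric form implied by the description, with the projection centers (X01, Y01, Z01), (X02, Y02, Z02) and the model-space coordinates (X1, Y1, Z1), (X2, Y2, Z2) of the two image rays of point P, the coplanarity condition is the determinant

$$\begin{vmatrix} X_{01} & Y_{01} & Z_{01} & 1 \\ X_{02} & Y_{02} & Z_{02} & 1 \\ X_{1} & Y_{1} & Z_{1} & 1 \\ X_{2} & Y_{2} & Z_{2} & 1 \end{vmatrix} = 0.$$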
As shown in FIG. 15, the origin of the model coordinate system is taken at the left projection center O1, and the line connecting it to the right projection center O2 is taken as the X-axis. For the scale, the baseline length is taken as the unit length. The parameters to be obtained are then the five rotation angles: the Z-axis rotation angle κ1 and the Y-axis rotation angle φ1 of the left camera, and the Z-axis rotation angle κ2, the Y-axis rotation angle φ2, and the X-axis rotation angle ω2 of the right camera. Since the X-axis rotation angle ω1 of the left camera is zero, it need not be considered. Under these conditions, the coplanarity condition of Equation 4 reduces to Equation 5, and each parameter is obtained by solving it.
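Equation 5 likewise does not survive here; under the coordinate conventions just stated (origin at O1, baseline of unit length along the X-axis), the condition is commonly written as

$$F(\kappa_1, \varphi_1, \kappa_2, \varphi_2, \omega_2) = \begin{vmatrix} Y_1 & Z_1 \\ Y_2 & Z_2 \end{vmatrix} = Y_1 Z_2 - Y_2 Z_1 = 0.$$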
Here, the following coordinate transformation relations hold between the model coordinate system XYZ and the camera coordinate system xyz.
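The transformation is not reproduced in this text; in the standard form matching the setup above (principal distance c, unit baseline along X), it is

$$\begin{pmatrix} X_1 \\ Y_1 \\ Z_1 \end{pmatrix} = R(\kappa_1, \varphi_1)\begin{pmatrix} x_1 \\ y_1 \\ -c \end{pmatrix}, \qquad \begin{pmatrix} X_2 \\ Y_2 \\ Z_2 \end{pmatrix} = R(\kappa_2, \varphi_2, \omega_2)\begin{pmatrix} x_2 \\ y_2 \\ -c \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},$$

where R(·) denotes the rotation matrix composed from the listed rotation angles.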
Using these equations, the unknown parameters (exterior orientation parameters) are obtained by the following procedure:
(1) The initial approximate values of the unknown parameters (κ1, φ1, κ2, φ2, ω2) are normally set to 0.
(2) The coplanarity condition of Equation 5 is Taylor-expanded around the approximate values, the values of the differential coefficients at linearization are obtained by Equation 6, and the observation equations are formed.
(3) The least-squares method is applied to obtain correction amounts for the approximate values.
(4) The approximate values are corrected.
(5) Using the corrected approximate values, operations (1) to (4) are repeated until convergence.
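The iteration above is a Gauss-Newton loop. A minimal generic sketch, with a numerically differentiated Jacobian standing in for the analytic derivatives of Equation 6, under the assumption that a function coplanarity(params, pts) returns the Equation 5 residual for each pass point:

```python
import numpy as np

def relative_orientation(coplanarity, pts, iters=20, eps=1e-6, tol=1e-10):
    """Solve for (kappa1, phi1, kappa2, phi2, omega2) by Gauss-Newton.

    coplanarity(params, pts) -> (N,) residuals of the coplanarity condition
    for N pass points; forward differences replace the analytic
    differential coefficients for brevity.
    """
    params = np.zeros(5)                       # step (1): initial values 0
    for _ in range(iters):
        r = coplanarity(params, pts)           # residuals at current estimate
        J = np.empty((len(r), 5))              # step (2): linearization
        for j in range(5):
            d = np.zeros(5); d[j] = eps
            J[:, j] = (coplanarity(params + d, pts) - r) / eps
        # step (3): least-squares correction; step (4): apply it
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        params += delta
        if np.linalg.norm(delta) < tol:        # step (5): iterate to convergence
            break
    return params
```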
When the relative orientation has converged, connection orientation is further performed. Connection orientation is processing that unifies the tilt and scale among a plurality of models so that they share the same coordinate system. When this processing is performed, the connection discrepancies expressed by Equation 7 below are calculated. If the calculated ΔZj and ΔDj are equal to or less than predetermined values (for example, 0.0005 (1/2000)), the connection orientation is determined to have been performed normally.
(Advantages of the First Embodiment)
According to the first embodiment, by automatically determining the miscorresponding points in the left and right overlapping images, the measurement values, including the initial values for the LSM or normalized correlation method used in stereo matching, can be acquired automatically.
Moreover, when the measurement model has been given actual dimensions by absolute orientation, using a reference model that has also been given actual dimensions makes scale adjustment between the measurement model and the reference model unnecessary.
Furthermore, by minimizing the distance between each point of the reference model and the point of the measurement model closest to it, the reference model and the measurement model can be aligned automatically. When four or more points of the measurement model corresponding to points of the reference model are designated, the alignment of the two models can also be performed manually, semi-automatically, or fully automatically.
In addition, since miscorresponding points are determined based only on the point-to-point distances between the reference model and the measurement model, the miscorresponding point determination process can be simplified.
Also, pasting the texture of the measurement object 18 onto the reference model makes it easy to compare the shape of one measurement object with that of another, and comparing the texture pasted on the reference model with the texture pasted on the measurement model enables verification of the design data (reference model) against the actual product (measurement model).
Furthermore, the photographing positions and orientations of the imaging units 2 to 9 can be measured using the calibration subject 19. Once the photographing positions and orientations have been obtained, three-dimensional measurement can be performed at any time by placing the measurement object 18 within the space, and even a moving, dynamic measurement object 18 can be measured if photographed simultaneously by the imaging units 2 to 9.
2. Second Embodiment
Hereinafter, a modification of the first embodiment will be described. In the second embodiment, a pseudo model that has not been given actual dimensions (scale) is used as the reference model. The pseudo model is scale-adjusted so that its feature amount (volume) becomes comparable to that of the measurement model.
FIG. 16 is a block diagram of the shape measuring apparatus according to the second to fourth embodiments. The calculation processing unit 15 of the shape measuring apparatus further includes a feature quantity calculation unit 31 that calculates the feature quantity (volume), a scale ratio calculation unit 32 that calculates a scale ratio based on the volumes of the measurement model and the pseudo model, and a scale changing unit 33 that changes the scale based on the calculated scale ratio.
FIG. 17 is a flowchart of the program of the shape measuring apparatus according to the second to fourth embodiments. In step S24-1, the triangular network forming unit 23 (measurement model forming unit) combines a plurality of measurement models to form a measurement model covering the entire circumference of the measurement object 18 (an all-around model).
After the all-around model has been formed, its feature quantity (volume) is calculated (step S24-2). The volume of the all-around model is obtained by slicing the model into cross sections, calculating the area of each cross section, and summing the areas. FIG. 18 shows a drawing-substitute photograph (A) of the all-around model and a drawing-substitute photograph (B) of the model sliced into cross sections.
Next, the scale ratio is calculated by dividing the volume of the pseudo model by the volume of the all-around model (step S24-3); the volume data of the pseudo model is calculated or prepared in advance. The scale of the pseudo model is then changed by the calculated scale ratio (step S24-4), which is done by multiplying the coordinate values of each point of the pseudo model by the scale ratio.
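A minimal sketch of steps S24-2 to S24-4 in Python. The helper names are illustrative, not from the patent; the convex-hull slice area is a simplifying assumption, and the sketch uses the cube root of the volume ratio as the per-coordinate factor (the text divides the volumes directly, but a length scale must vary with the cube root of a volume ratio).

```python
import numpy as np
from scipy.spatial import ConvexHull

def sliced_volume(points, n_slices=100):
    """Approximate the volume of a closed point-cloud model by slicing it
    along the Z axis and summing cross-sectional area * slice thickness."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    thickness = edges[1] - edges[0]
    volume = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(z >= lo) & (z < hi)][:, :2]   # XY footprint of the slice
        if len(sl) >= 3:
            volume += ConvexHull(sl).volume * thickness  # 2-D hull "volume" is its area
    return volume

def scale_pseudo_model(pseudo_pts, measured_pts):
    """Scale the pseudo model so its volume matches the all-around model."""
    ratio = sliced_volume(measured_pts) / sliced_volume(pseudo_pts)
    return pseudo_pts * np.cbrt(ratio)   # cube root: per-axis scale factor
```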
After the scale of the pseudo model has been changed, the all-around model and the pseudo model are aligned (shape matching) (step S24-5). For the shape matching, the ICP method described above, the method of minimizing the point-to-point distance between the measurement model and the reference model, or the method of designating four or more points of the reference model corresponding to points of the measurement model is used.
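A minimal point-to-point ICP sketch of the kind referenced for step S24-5, assuming unorganized (n, 3) point arrays; the function and parameter names are illustrative, not the patent's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_align(src, dst, n_iter=50, tol=1e-6):
    """Iteratively pair each source point with its nearest destination
    point and solve for the rigid motion (R, t) that minimizes the summed
    squared distances (Kabsch/SVD step)."""
    src = src.copy()
    tree = cKDTree(dst)
    prev_err = np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(src)            # nearest-neighbour pairing
        matched = dst[idx]
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t                    # apply the update
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src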
The aligned all-around model (measurement model) is then compared with the pseudo model (reference model) to determine mis-corresponding points (step S24-6). The comparison searches for the point of the measurement model closest to each point of the reference model and checks whether the distance between the two points falls within ±2σ to ±3σ of the prediction accuracy ±σ. Points outside this range are determined to be mis-corresponding points.
This mis-corresponding point determination process can likewise be performed when the three-dimensional shape measurement unit 25 subsequently obtains measurement values.
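The distance-threshold test of step S24-6 might look like the sketch below; `sigma` is the prediction accuracy and `k` a multiplier chosen in the 2 to 3 range named in the text. `find_miscorresponding` is a hypothetical helper.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_miscorresponding(reference_pts, measured_pts, sigma, k=2.5):
    """For each reference-model point, find the nearest measurement-model
    point; measurement points farther away than k*sigma are flagged as
    mis-corresponding points."""
    dist, idx = cKDTree(measured_pts).query(reference_pts)
    bad = idx[dist > k * sigma]          # indices into measured_pts
    return np.unique(bad)
```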
(Advantages of the second embodiment)
According to the second embodiment, mis-corresponding points of the measurement model can be determined automatically using a reference model (pseudo model) whose dimensions differ from those of the measurement model.
3. Third Embodiment
Another modification of the first embodiment is described below. In the third embodiment, the pseudo model is scaled so that the distance from its center of gravity (feature quantity) to its surface becomes comparable to that of the measurement model.
The block diagram of the shape measuring apparatus according to the third embodiment and the flowchart of its program are the same as FIGS. 16 and 17. The triangular network forming unit 23 (measurement model forming unit) combines a plurality of measurement models to form an all-around model (step S24-1), and the feature quantity calculation unit 31 calculates the position of the center of gravity of the all-around model (step S24-2). The center of gravity (xm, ym, zm) is calculated by Equation 8 below.
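Equation 8 appears only as an image in the source. From the context, the center of gravity of the n model points (x_i, y_i, z_i) is presumably the ordinary point average:

$$x_m = \frac{1}{n}\sum_{i=1}^{n}x_i,\qquad y_m = \frac{1}{n}\sum_{i=1}^{n}y_i,\qquad z_m = \frac{1}{n}\sum_{i=1}^{n}z_i$$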
Next, the scale ratio is calculated (step S24-3). The scale ratio is obtained by comparing, between the two models, the maximum and minimum distances from the center of gravity to the surface, or the distances from the center of gravity to the surface in the horizontal direction (X-axis or Y-axis direction) and the vertical direction (Z-axis direction). FIG. 19 shows a drawing-substitute photograph (A) indicating the center of gravity of the reference model and a drawing-substitute photograph (B) indicating the center of gravity of the measurement model; in FIGS. 19(A) and (B), the distances from the center of gravity to the surface in the horizontal and vertical directions are drawn.
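A sketch of this step under the simplifying assumption that every model point lies on the surface; averaging the ratios of the two distance extremes is an illustrative choice, not specified by the text.

```python
import numpy as np

def centroid_distance_scale(pseudo_pts, measured_pts):
    """Scale ratio from the maximum and minimum centroid-to-surface
    distances of the two models (third embodiment, step S24-3)."""
    def max_min(pts):
        d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
        return d.max(), d.min()
    mx_m, mn_m = max_min(measured_pts)
    mx_p, mn_p = max_min(pseudo_pts)
    return 0.5 * (mx_m / mx_p + mn_m / mn_p)   # averaged ratio of extremes
```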
After the scale ratio has been calculated, the scale of the pseudo model is changed accordingly (step S24-4) by multiplying the coordinate values of each point of the pseudo model by the scale ratio.
After the scale of the pseudo model has been changed, shape matching between the all-around model and the pseudo model is performed (step S24-5). As in the second embodiment, the ICP method described above, the method of minimizing the point-to-point distance between the measurement model and the reference model, or the method of designating four or more corresponding points is used.
The aligned all-around model (measurement model) is then compared with the pseudo model (reference model) to determine mis-corresponding points (step S24-6): the point of the measurement model closest to each reference model point is searched for, and points whose distance falls outside ±2σ to ±3σ of the prediction accuracy ±σ are determined to be mis-corresponding points.
This mis-corresponding point determination process can likewise be performed when the three-dimensional shape measurement unit 25 subsequently obtains measurement values.
(Advantages of the third embodiment)
According to the third embodiment, mis-corresponding points of the measurement model can be determined automatically using a reference model (pseudo model) of a different scale.
4. Fourth Embodiment
A modification of the second embodiment is described below. In the fourth embodiment, four or more corresponding feature points (feature quantities) are extracted or designated in the measurement model and the pseudo model, and the steps from scale change to alignment (shape matching) are performed simultaneously.
The block diagram of the shape measuring apparatus according to the fourth embodiment and the flowchart of its program are the same as FIGS. 16 and 17. The triangular network forming unit 23 (measurement model forming unit) forms a single measurement model or an all-around model (step S24-1).
The feature quantity calculation unit 31 extracts four or more corresponding feature points in the measurement model and the pseudo model (step S24-2). FIG. 20 shows a drawing-substitute photograph (A) indicating the feature point positions of the reference model and a drawing-substitute photograph (B) indicating the feature point positions of the measurement model. As shown in FIG. 20, when the measurement object 18 is a human head, for example, the positions of the eyes, the nostrils, the top of the head, the back of the head, and the sides of the head are extracted. The positions of the eyes and nostrils are extracted by binarizing the edge intensity and luminance values; the positions of the top and sides of the head are extracted by finding the maxima in the vertical and horizontal directions. These feature points may also be designated manually on the screen.
Next, based on the positions of the four or more feature points extracted or designated, the steps from calculating the scale ratio (step S24-3) through shape matching (step S24-5) are performed simultaneously. FIG. 21 is a flowchart of this simultaneous scale change and alignment. Let the coordinates of the measurement model be (XM, YM, ZM), the coordinates of the reference model be (X, Y, Z), the rotation angles about the three axes be (ω, φ, κ), and the translation be (X0, Y0, Z0); then Equation 9 below holds between the coordinates of the measurement model and the reference model.
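Equation 9 likewise appears only as an image in the source; given the variables just defined, it is presumably the standard three-dimensional similarity (Helmert) transformation with scale S and rotation matrix R(ω, φ, κ):

$$\begin{pmatrix}X\\ Y\\ Z\end{pmatrix} = S\,R(\omega,\phi,\kappa)\begin{pmatrix}X_M\\ Y_M\\ Z_M\end{pmatrix} + \begin{pmatrix}X_0\\ Y_0\\ Z_0\end{pmatrix}$$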
Into Equation 9, the coordinates of four or more feature points corresponding between the measurement model and the reference model are substituted, yielding four or more simultaneous equations. The unknowns to be solved for are the scale S, the rotation angles (ω, φ, κ) about the three axes, and the translation (X0, Y0, Z0); if the translation (X0, Y0, Z0) is ignored, the unknowns reduce to four: the scale S and the rotation angles (ω, φ, κ).
From these simultaneous equations, the scale S (scale factor) is calculated (step S31), and the rotation matrix R composed of the rotation angles (ω, φ, κ) about the three axes is calculated (step S32). The translation is calculated according to the number of feature points substituted (step S33). Based on the obtained scale S, rotation angles (ω, φ, κ), and translation (X0, Y0, Z0), the coordinates of all points of the pseudo model are transformed by Equation 9 (step S34). The scale change and alignment of the pseudo model are thereby performed simultaneously.
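Steps S31 to S33 can also be solved in closed form. The sketch below uses Umeyama's SVD-based method rather than explicit simultaneous equations; this is one standard way to obtain S, R, and the translation from four or more correspondences, offered here as an assumption rather than the patent's own procedure.

```python
import numpy as np

def similarity_from_points(model_pts, reference_pts):
    """Estimate scale S, rotation R, and translation t mapping model_pts
    onto reference_pts from (n, 3) arrays of corresponding feature points."""
    mu_m, mu_r = model_pts.mean(0), reference_pts.mean(0)
    A, B = model_pts - mu_m, reference_pts - mu_r
    U, sing, Vt = np.linalg.svd(B.T @ A)       # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    R = U @ D @ Vt                             # step S32: rotation matrix
    S = np.trace(np.diag(sing) @ D) / (A ** 2).sum()   # step S31: scale
    t = mu_r - S * R @ mu_m                    # step S33: translation
    return S, R, t

# Step S34: transform every point of the pseudo model at once.
# S, R, t = similarity_from_points(pseudo_feature_pts, measured_feature_pts)
# pseudo_all_aligned = S * pseudo_all @ R.T + t
```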
Thereafter, the measurement model and the pseudo model (reference model) are compared to determine mis-corresponding points (step S24-6): the point of the measurement model closest to each reference model point is searched for, and points whose distance falls outside ±2σ to ±3σ of the prediction accuracy ±σ are determined to be mis-corresponding points.
This mis-corresponding point determination process can likewise be performed when the three-dimensional shape measurement unit 25 subsequently obtains measurement values.
(Advantages of the fourth embodiment)
According to the fourth embodiment, mis-corresponding points of the measurement model can be determined automatically using a reference model (pseudo model) of a different scale. Moreover, the scale adjustment and alignment of the reference model and the measurement model can be performed without creating an all-around model.
5. Fifth Embodiment
The fifth embodiment is based on the second scheme, in which the positions of the photographing units are obtained in parallel with the measurement by photographing a reference scale (calibration subject) together with the measurement object; it is a modification of the processing method of the photographing position and orientation measurement unit 20 of the first embodiment.
In the second scheme, the measurement object to be measured and the calibration subject are photographed together, and the position and orientation of the photographing unit are obtained to perform three-dimensional measurement. In this case the photographing unit need not be fixed; anywhere from one to several photographing units may be used, and measurement is possible as long as two or more images are captured. The advantage of this scheme is that the configuration can be kept simple, because the photographing position is unconstrained and measurement is possible even with a single unit. Since the position and orientation of the photographing unit are obtained at the same time that the measurement object is photographed, they need not be determined in advance. The calibration subject photographed together with the object is something of known length, such as a reference scale, or something with known coordinates.
The photographing unit basically need not be fixed and may be placed anywhere. FIG. 22 is a top view of the shape measuring apparatus according to the fifth embodiment, and FIG. 23 is a top view of a modification of it. FIG. 22 shows the relationship among the reference scale 19, the photographing unit 2, the measurement object 18, the calculation processing unit 15, the operation unit 16, and the display unit 17 in a mode in which a single photographing unit is moved from place to place while photographing. FIG. 23 shows a modification in which two units form a stereo camera configuration or several units form a multi-camera configuration.
In this case, if a PC is used for the calculation processing unit 15, the operation unit 16, and the display unit 17, the system can consist of only the photographing unit and the PC. When the object has no pattern, a pattern is projected onto it with a projector or applied to it. The photographing position and orientation of the photographing unit are obtained by the relative orientation method, the single-photo orientation method or the DLT method, or the bundle adjustment method, used alone or in combination.
If the photographing position and orientation (exterior orientation parameters) of the photographing unit are obtained by single-photo orientation or the DLT method, the exterior orientation parameters can be determined from the relative positional relationship of the reference points captured in a single photograph.
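A minimal DLT sketch, assuming at least six reference points with known object coordinates and measured image coordinates; `dlt` is a hypothetical helper and lens distortion is ignored.

```python
import numpy as np

def dlt(image_xy, object_xyz):
    """Solve the 11 DLT parameters L1..L11 by linear least squares.
    Each point contributes two equations derived from
    x = (L1*X + L2*Y + L3*Z + L4) / (L9*X + L10*Y + L11*Z + 1) and the
    analogous expression for y; the camera's exterior (and interior)
    orientation is encoded in L1..L11."""
    A, b = [], []
    for (x, y), (X, Y, Z) in zip(image_xy, object_xyz):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        b += [x, y]
    L, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return L
```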
FIG. 24 shows a drawing-substitute photograph (A) of the left image and a drawing-substitute photograph (B) of the right image, each capturing the reference scale and the measurement object. As shown in FIG. 24, the measurement object 18 is photographed together with a reference scale 35 (calibration subject) on which the color code targets 36a to 36d are arranged in a known relative positional relationship.
The photographing position and orientation measurement unit 20 shown in FIG. 2 acquires the overlapping images shown in FIG. 24. It binarizes the overlapping images to obtain the centers of gravity of the color code targets 36a to 36d (the image coordinates of the reference points). It also reads the color codes from the color schemes of the color code targets 36a to 36d and labels each of the targets 36a to 36d.
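The binarization-and-centroid step might be sketched as follows; the fixed threshold and the omission of the actual color-code decoding are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def target_centroids(gray, threshold=128):
    """Threshold a grayscale image, label the connected bright regions,
    and return the centroid (center of gravity) of each region as the
    image coordinates of a candidate reference point."""
    binary = gray > threshold
    labels, n = ndimage.label(binary)
    # center_of_mass returns (row, col); swap to (x, y) image coordinates
    return [(c, r) for r, c in
            ndimage.center_of_mass(binary, labels, range(1, n + 1))]
```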
These labels establish the correspondence of the reference points between the overlapping images. Based on the image coordinates of the reference points and their three-dimensional relative positional relationship, the photographing position and orientation measurement unit 20 obtains the photographing positions and orientations of the photographing units 2 to 9 by the relative orientation method, single-photo orientation or the DLT method, or the bundle adjustment method. Combining these methods also yields highly accurate positions and orientations.
The second scheme adopted in this embodiment as the processing method of the photographing position and orientation measurement unit 20 is a modification of the first scheme adopted in the first embodiment; otherwise, the configurations and processes shown in the first to fourth embodiments can be adopted as they are.
(Advantages of the fifth embodiment)
According to the fifth embodiment, the second scheme is adopted: by photographing the measurement object 18 and the calibration subject at the same time, the position and orientation of the photographing unit can be obtained and three-dimensional measurement performed. Any number of photographing units, from one upward, can therefore be used, and since the photographing units need not be fixed, the configuration is simple.
The present invention is applicable to a shape measuring apparatus that measures the three-dimensional shape of a measurement object, and to a program therefor.
Claims (12)
- 1. A shape measuring apparatus comprising: a photographing unit that photographs a measurement object in overlapping photographing areas from a plurality of photographing positions; a feature point association unit that associates the positions of feature points of the measurement object between the overlapping images photographed by the photographing unit; a measurement model forming unit that forms a model of the measurement object based on the feature points associated on the overlapping images by the feature point association unit; a mis-corresponding point determination unit that determines mis-corresponding points based on the measurement model formed by the measurement model forming unit and a reference model formed based on the form of the measurement object; and a three-dimensional shape measurement unit that obtains the three-dimensional coordinates of the feature points of the measurement object or the three-dimensional shape of the measurement object, based on the plurality of photographing positions and the positions of the feature points excluding the points determined to be mis-corresponding points by the mis-corresponding point determination unit.
- 2. The shape measuring apparatus according to claim 1, wherein the reference model is at least one of data obtained by MRI, CT, CAD, or another shape measuring apparatus, and shape data obtained in the past.
- 3. The shape measuring apparatus according to claim 2, wherein the reference model is actual-size data to which actual dimensions have been given.
- 4. The shape measuring apparatus according to claim 1, wherein the reference model is a pseudo model that resembles the shape of the measurement object but has not been given actual dimensions, and the mis-corresponding point determination unit determines mis-corresponding points after coordinate transformation has been performed so that the volumes of the two models become comparable.
- 5. The shape measuring apparatus according to claim 1, wherein the reference model is a pseudo model that resembles the shape of the measurement object but has not been given actual dimensions, and the mis-corresponding point determination unit determines mis-corresponding points after coordinate transformation has been performed so that the distances from the centers of gravity become comparable.
- 6. The shape measuring apparatus according to claim 1, wherein the reference model is a pseudo model that resembles the shape of the measurement object but has not been given actual dimensions, and the mis-corresponding point determination unit determines mis-corresponding points after coordinate transformation has been performed so that the positions of at least four feature points are aligned.
- 7. The shape measuring apparatus according to claim 1, wherein the reference model and the measurement model are aligned by minimizing the distance between each point of the reference model and the point of the measurement model closest to it.
- 8. The shape measuring apparatus according to claim 1, wherein the reference model and the measurement model are aligned by designating four or more points of the measurement model corresponding to points of the reference model.
- 9. The shape measuring apparatus according to claim 1, wherein a point of the measurement model is determined to be a mis-corresponding point when the distance between a point of the reference model and the point of the measurement model closest to it is outside a predetermined range.
- 10. The shape measuring apparatus according to any one of claims 1 to 9, further comprising: a texture extraction unit that extracts the texture of the measurement object from an image of the measurement object photographed by the photographing unit; a texture synthesis unit that pastes the texture onto at least one of the measurement model formed by the measurement model forming unit and the reference model; and a display unit that displays a textured model image based on the textured model synthesized by the texture synthesis unit.
- 11. The shape measuring apparatus according to any one of claims 1 to 9, wherein, when the mis-corresponding point determination unit determines a point to be a mis-corresponding point, the designation of the feature point corresponding to that mis-corresponding point is canceled.
- 12. A program for causing a computer to execute: a feature point association step of associating the positions of feature points of a measurement object between overlapping images photographed from a plurality of photographing positions; a measurement model forming step of forming a model of the measurement object based on the feature points associated on the overlapping images in the feature point association step; a mis-corresponding point determination step of determining mis-corresponding points based on the measurement model formed in the measurement model forming step and a reference model formed based on the form of the measurement object; and a three-dimensional shape measurement step of obtaining the three-dimensional coordinates of the feature points of the measurement object or the three-dimensional shape of the measurement object, based on the plurality of photographing positions and the positions of the feature points excluding the points determined to be mis-corresponding points in the mis-corresponding point determination step.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008-321505 | 2008-12-17 | ||
JP2008321505A JP5430138B2 (en) | 2008-12-17 | 2008-12-17 | Shape measuring apparatus and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010071139A1 (en) | 2010-06-24 |
Family
ID=42268810
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/070940 WO2010071139A1 (en) | 2008-12-17 | 2009-12-09 | Shape measurement device and program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP5430138B2 (en) |
WO (1) | WO2010071139A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104380036A (en) * | 2012-06-13 | 2015-02-25 | 株式会社岛精机制作所 | Synthesis-parameter generation device for three-dimensional measurement apparatus |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5620200B2 (en) * | 2010-09-06 | 2014-11-05 | 株式会社トプコン | Point cloud position data processing device, point cloud position data processing method, point cloud position data processing system, and point cloud position data processing program |
US9070042B2 (en) * | 2011-01-13 | 2015-06-30 | Panasonic Intellectual Property Management Co., Ltd. | Image processing apparatus, image processing method, and program thereof |
JP2012150636A (en) * | 2011-01-19 | 2012-08-09 | Seiko Epson Corp | Projection type display device and information processing system |
JP5771423B2 (en) * | 2011-03-17 | 2015-08-26 | 株式会社トプコン | Image color correction apparatus and image color correction method |
JP5781353B2 (en) | 2011-03-31 | 2015-09-24 | 株式会社ソニー・コンピュータエンタテインメント | Information processing apparatus, information processing method, and data structure of position information |
JP6102088B2 (en) * | 2011-09-01 | 2017-03-29 | 株式会社リコー | Image projection device, image processing device, image projection method, program for image projection method, and recording medium recording the program |
JP2013079854A (en) * | 2011-10-03 | 2013-05-02 | Topcon Corp | System and method for three-dimentional measurement |
US20140307055A1 (en) | 2013-04-15 | 2014-10-16 | Microsoft Corporation | Intensity-modulated light pattern for active stereo |
ITUB20155646A1 (en) * | 2015-11-18 | 2017-05-18 | Gd Spa | Method of inspection of an elongated element. |
DE102017000908A1 (en) * | 2017-02-01 | 2018-09-13 | Carl Zeiss Industrielle Messtechnik Gmbh | Method for determining the exposure time for a 3D image |
- 2008-12-17: JP — application JP2008321505A, patent JP5430138B2 (active)
- 2009-12-09: WO — application PCT/JP2009/070940, publication WO2010071139A1 (application filing)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002024807A (en) * | 2000-07-07 | 2002-01-25 | National Institute Of Advanced Industrial & Technology | Object movement tracking technique and recording medium |
JP2005140623A (en) * | 2003-11-06 | 2005-06-02 | Nippon Telegr & Teleph Corp <Ntt> | Image measuring method and instrument, program, and recording medium |
JP2005189203A (en) * | 2003-12-26 | 2005-07-14 | Fuji Xerox Co Ltd | Creation method for entire 3d circumference model and its apparatus |
JP2007064627A (en) * | 2005-08-01 | 2007-03-15 | Topcon Corp | System for three-dimensional measurement and its method |
JP2007212187A (en) * | 2006-02-07 | 2007-08-23 | Mitsubishi Electric Corp | Stereo photogrammetry system, stereo photogrammetry method, and stereo photogrammetry program |
JP2007327938A (en) * | 2006-05-10 | 2007-12-20 | Topcon Corp | Image processing device and method therefor |
JP2007212430A (en) * | 2006-08-07 | 2007-08-23 | Kurabo Ind Ltd | Photogrammetry device and system |
Also Published As
Publication number | Publication date |
---|---|
JP2010145186A (en) | 2010-07-01 |
JP5430138B2 (en) | 2014-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5430138B2 (en) | Shape measuring apparatus and program | |
JP5156601B2 (en) | Shape measuring apparatus and program | |
JP5297779B2 (en) | Shape measuring apparatus and program | |
US8600192B2 (en) | System and method for finding correspondence between cameras in a three-dimensional vision system | |
WO2014024579A1 (en) | Optical data processing device, optical data processing system, optical data processing method, and optical data processing-use program | |
JP7037876B2 (en) | Use of 3D vision in automated industrial inspection | |
US9454822B2 (en) | Stereoscopic measurement system and method | |
US20130113893A1 (en) | Stereoscopic measurement system and method | |
CN110926330B (en) | Image processing apparatus, image processing method, and program | |
US8179448B2 (en) | Auto depth field capturing system and method thereof | |
US20120147149A1 (en) | System and method for training a model in a plurality of non-perspective cameras and determining 3d pose of an object at runtime with the same | |
WO2010107004A1 (en) | Image capturing device and method for three-dimensional measurement | |
US9286506B2 (en) | Stereoscopic measurement system and method | |
CA3233222A1 (en) | Method, apparatus and device for photogrammetry, and storage medium | |
CA2757313C (en) | Stereoscopic measurement system and method | |
WO2021163406A1 (en) | Methods and systems for determining calibration quality metrics for a multicamera imaging system | |
Siddique et al. | 3d object localization using 2d estimates for computer vision applications | |
CN109902695B (en) | Line feature correction and purification method for image pair linear feature matching | |
CN109410272B (en) | Transformer nut recognition and positioning device and method | |
EP2283314B1 (en) | Stereoscopic measurement system and method | |
CN112066876B (en) | Method for rapidly measuring object size by using mobile phone | |
Su et al. | An automatic calibration system for binocular stereo imaging | |
EP2286297B1 (en) | Stereoscopic measurement system and method | |
Jin et al. | Automatic Registration of Mobile LiDAR Data and Multi-lens Combined Images using Image Initial Poses | |
CN116597000A (en) | Multi-camera fusion method based on CNN model |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 09833442; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 09833442; Country of ref document: EP; Kind code of ref document: A1