WO2017122640A1 - Measurement device and measurement method - Google Patents

Measurement device and measurement method

Info

Publication number
WO2017122640A1
Authority
WO
WIPO (PCT)
Prior art keywords
geometric
images
image
measurement
geometric region
Prior art date
Application number
PCT/JP2017/000500
Other languages
French (fr)
Japanese (ja)
Inventor
Shunichiro Nonaka
Masashi Kuranoshita
Original Assignee
FUJIFILM Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIFILM Corporation
Publication of WO2017122640A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/30: Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • The present invention relates to an apparatus and method for measuring an object, and more particularly to an apparatus and method for measurement using images.
  • Patent Document 1 describes a robot system in which a plane such as a floor is extracted from points sampled from three-dimensional data obtained from a stereo image, and an obstacle is detected based on the extracted plane. Patent Document 2 describes a vehicle environment recognition device that recognizes the environment outside the host vehicle, in which a specific object is identified based on parallax obtained by pattern matching of a stereo image and a tracking target is selected based on the identification result.
  • In Patent Document 1, since the three-dimensional data is obtained by matching the left and right stereo images, acquiring three-dimensional data for the entire image takes time. Also, since each sample consists of one reference point and two other points within a predetermined distance of it, points belonging to different planes may be sampled together when the image contains a plurality of planes. In Patent Document 2, since blocks are grouped by relative distance when identifying a specific object, grouping can be performed only in units of "specific objects", and it is difficult to identify the "surface" to which a measurement target belongs. Thus, the conventional technology cannot calculate the two-dimensional or three-dimensional information of a measurement target both quickly and accurately.
  • The present invention has been made in view of these circumstances, and its object is to provide a measurement apparatus and a measurement method capable of calculating the two-dimensional or three-dimensional information of a measurement target at high speed and with high accuracy.
  • According to a first aspect, the measurement apparatus includes: an image acquisition unit that acquires a plurality of images obtained by photographing the same subject from a plurality of viewpoints; a reduced image generation unit that generates a plurality of reduced images by reducing each of those images; a geometric region extraction unit that extracts a first geometric region (a geometric region in the reduced images) based on the parallax obtained by block matching the reduced images and on the pixel positions of the reduced images, and that extracts a second geometric region corresponding to the first geometric region in the original images based on the extracted first geometric region; a geometric equation determination unit that determines a geometric equation representing the second geometric region; and a measurement unit that calculates two-dimensional or three-dimensional information of a measurement target included in the subject using the determined geometric equation.
  • In the first aspect, the first geometric region is extracted by block matching the reduced images, a geometric equation representing the second geometric region in the unreduced images is determined based on the extracted first geometric region, and measurement is performed using this equation. Because the block matching range in the unreduced images can be narrowed, processing is fast, and different geometric regions can be separated and identified so that measurement is accurate. The measurement apparatus according to the first aspect can therefore calculate the two-dimensional or three-dimensional information of a measurement target at high speed and with high accuracy.
  • The degree of reduction of the reduced images can be of a single type. In that case it is unnecessary to use multiple reduced images with different degrees of reduction, and processing can be performed at high speed.
  • Examples of two-dimensional and three-dimensional information include the position, length, width, and area of the measurement target, but the information is not limited to these examples.
  • A geometric region is a region of the subject that belongs to the same plane or curved surface and is represented by the same geometric equation. One or more geometric regions may exist for the same subject.
  • In a second aspect, the geometric region extraction unit determines a geometric equation representing the first geometric region and extracts the first geometric region based on that equation.
  • The second aspect defines one example of a method for extracting the first geometric region from the reduced images.
  • In a third aspect, the geometric region extraction unit extracts a pixel whose distance from the first geometric region is equal to or less than a threshold as a pixel belonging to the first geometric region.
  • The third aspect specifies a criterion for extracting pixels belonging to the first geometric region.
  • In a fourth aspect, the measurement device calculates the parallax for a plurality of representative points extracted from the second geometric region, and determines a geometric equation representing the second geometric region based on the pixel positions of the representative points and the parallax calculated for them.
  • Because the representative points are extracted and their parallax is calculated on the unreduced images, the geometric equation determined from them allows the two-dimensional or three-dimensional information of the measurement target to be calculated with high accuracy.
  • The number of representative points to be extracted in the fourth aspect may differ depending on the type of geometric region (flat or curved and, if curved, the kind of curved surface).
  • In a fifth aspect, the geometric equation determination unit sets a block matching range for the plurality of (unreduced) images based on the result of block matching the reduced images, and obtains the parallax of the representative points by block matching the images within that range.
  • Because the block matching range of the unreduced images is narrowed, matching can be performed at high speed while the parallax of the representative points is still obtained accurately from the unreduced images, so the two-dimensional or three-dimensional information of the measurement target can be calculated at high speed and with high accuracy.
  • In a sixth aspect, the measurement device determines to which geometric region in the second geometric region the pixels indicating the measurement target belong, and calculates the two-dimensional or three-dimensional information based on the geometric equation representing the determined region and the pixel positions of those pixels. Since the measurement apparatus of the present invention obtains the geometric equation for the second geometric region, in the sixth aspect the region to which the measurement target belongs is determined first, and the two-dimensional or three-dimensional information is then calculated.
  • In a seventh aspect (based on the sixth), the measurement unit calculates the parallax of the pixels indicating the measurement target based on the geometric equation of the determined geometric region and the pixel positions of those pixels, and calculates the two-dimensional or three-dimensional information based on the calculated parallax.
  • The seventh aspect thus specifies the method by which the parallax underlying the two-dimensional or three-dimensional information is calculated.
  • In an eighth aspect (any one of the first to seventh aspects), the geometric region extraction unit performs at least one of two kinds of image processing on the plurality of images, conversion to grayscale images and parallelization, and the plurality of reduced images is generated from the processed images.
  • The eighth aspect prescribes the content of the so-called "preprocessing". By generating reduced images from images that have undergone such processing, the two-dimensional or three-dimensional information of the measurement target can be calculated at high speed and with high accuracy.
  • In a ninth aspect, the geometric equation determination unit determines a geometric equation representing one of a plane, a cylindrical surface, and a spherical surface.
  • The ninth aspect prescribes that a geometric equation representing one of the surfaces typical of a subject, namely a plane, a cylindrical surface, or a spherical surface, is determined.
  • In a tenth aspect, the subject is a concrete structure and the measurement target is damage to the concrete structure.
  • Concrete structures suffer damage, and the shape and size of that damage change over time.
  • By applying the measuring device according to the tenth aspect to the measurement of such damage, the two-dimensional or three-dimensional information of the measurement target (that is, the damage) can be calculated at high speed and with high accuracy.
  • Examples of concrete structures include bridges, tunnels, roads, and buildings.
  • Examples of damage include cracks and free lime. The concrete structures and kinds of damage to which the tenth aspect can be applied are not limited to these examples.
  • In an eleventh aspect, the measurement apparatus includes an optical system that acquires the plurality of images.
  • The optical system of the eleventh aspect can be a stereo optical system comprising a plurality of optical systems, each with a photographic lens and an imaging element, corresponding to each of the plurality of viewpoints.
  • According to a twelfth aspect, a measurement method includes: an image acquisition step of acquiring a plurality of images obtained by photographing the same subject from a plurality of viewpoints; a reduced image generation step of generating a plurality of reduced images by reducing each of those images; a geometric region extraction step of extracting a first geometric region (a geometric region in the reduced images) based on the parallax obtained by block matching the reduced images and on the pixel positions of the reduced images, and of extracting a second geometric region corresponding to the first geometric region in the original images based on the extracted first geometric region; a geometric equation determination step of determining a geometric equation representing the second geometric region; and a measurement step of calculating two-dimensional or three-dimensional information of a measurement target included in the subject using the determined equation.
  • As in the first aspect, the two-dimensional or three-dimensional information of the measurement target can thus be calculated at high speed and with high accuracy.
  • According to the measurement apparatus and the measurement method of the present invention, it is possible to calculate the two-dimensional or three-dimensional information of a measurement target at high speed and with high accuracy.
  • FIG. 1 is a diagram showing a bridge which is an example of an application target of the measurement apparatus and measurement method of the present invention.
  • FIG. 2 is a block diagram showing the configuration of the measuring apparatus according to one embodiment of the present invention.
  • FIG. 3 is a flowchart showing processing of the measurement method according to the embodiment of the present invention.
  • FIG. 4 is a conceptual diagram illustrating a state in which a vertical shift exists between the left and right images.
  • FIG. 5 is a conceptual diagram showing how the left and right images are parallelized.
  • FIG. 6 is a diagram showing left and right reduced images.
  • FIG. 7 is a diagram illustrating a state in which the reduced image is subjected to block matching.
  • FIG. 8 is a diagram illustrating a geometric region extracted by block matching.
  • FIG. 9 is a diagram illustrating an example of representative points extracted in the geometric region of the reduced image.
  • FIG. 10 is a diagram illustrating an example of representative points extracted from an image before reduction.
  • FIG. 11 is a diagram
  • FIG. 1 is a perspective view showing a structure of a bridge 1 (concrete structure) which is an example of an application target of a measuring apparatus and a measuring method according to the present invention.
  • The bridge 1 shown in FIG. 1 has main girders 3, which are joined by joints 3A.
  • A main girder 3 is a member that spans between the abutments and piers and supports the load of vehicles on the floor slab 2.
  • The floor slab 2, on which vehicles and the like travel, is placed on the main girders 3.
  • The floor slab 2 is generally made of reinforced concrete.
  • In addition to the floor slab 2 and the main girders 3, the bridge 1 has members such as cross girders, sway bracing, and lateral bracing (not shown).
  • An inspector uses the digital camera 104 (see FIG. 2) to photograph the bridge 1 from below (direction C in FIG. 1) and obtain images of the inspection range.
  • The photographing is performed while moving appropriately in the extending direction of the bridge 1 (direction A in FIG. 1) and the direction orthogonal to it (direction B in FIG. 1).
  • The digital camera 104 may instead be installed on a movable body that can move along the bridge 1 to perform imaging.
  • Such a moving body may be provided with a lifting mechanism and/or a pan/tilt mechanism for the digital camera 104. Examples of moving bodies include vehicles, robots, and flying bodies, but the moving body is not limited to these.
  • FIG. 2 is a block diagram illustrating a schematic configuration of the measurement apparatus 100 according to the present embodiment.
  • The measurement apparatus 100 includes an image acquisition unit 102, an image processing unit 108 (serving as reduced image generation unit, geometric region extraction unit, representative point extraction unit, parallax calculation unit, geometric equation determination unit, determination unit, and measurement unit), a recording unit 110, a display unit 112, and an operation unit 114, connected to each other so that necessary information can be exchanged.
  • Each unit can be realized by a control device such as a CPU (Central Processing Unit) executing a program stored in memory.
  • The image input unit 106 includes a wireless communication antenna and an input/output interface circuit, and the recording unit 110 includes a non-transitory recording medium such as an HDD (Hard Disk Drive).
  • The display unit 112 includes a display device such as a liquid crystal display, and the operation unit 114 includes an input device such as a keyboard.
  • The digital camera 104 and the image input unit 106 constitute the image acquisition unit 102.
  • The digital camera 104 includes a left-image optical system 104L for acquiring a left-eye image and a right-image optical system 104R for acquiring a right-eye image, so the same subject can be photographed from two viewpoints.
  • The left-image optical system 104L and the right-image optical system 104R each include a photographic lens and an image sensor (not shown). Examples of the image sensor include CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor) image sensors.
  • R (red), G (green), and B (blue) color filters are provided on the light receiving surface of the image sensor, so a color image of the subject can be acquired from the signals of each color.
  • FIG. 3 is a flowchart showing the procedure of the measurement process according to this embodiment.
  • This embodiment describes the case where a crack that has occurred in the floor slab 2 of the bridge 1, a concrete structure, is measured.
  • The stereo images of the bridge 1 photographed by the digital camera 104 as described above are input to the image input unit 106 by wireless communication (step S100; image acquisition step).
  • A plurality of images of the bridge 1 is input according to the inspection range, and shooting date and time information is added to each input image by the digital camera 104.
  • The shooting date and time of the input images need not be the same for all images and may span multiple days.
  • The images may be input all at once or one at a time.
  • The images of the bridge 1 may also be input via a non-transitory recording medium such as a memory card instead of by wireless communication, or image data already captured and recorded may be input via a network.
  • FIG. 4 shows examples of the left image iL0 and the right image iR0 input in this way.
  • FIG. 4 shows an example of images of a portion (a corner) of the bridge 1 where three planes intersect. The three planes intersect at boundary lines E1, E2, and E3, which meet at a point E0.
  • In FIG. 4, the horizontal direction of the image is denoted u and the vertical direction v.
  • The number of channels and the image size of the left image iL0 and the right image iR0 are not particularly limited.
  • In this example, color images (3 channels) of 4,800 pixels (horizontal direction u) by 3,200 pixels (vertical direction v) are used.
  • The left image iL0 and the right image iR0 may also be images obtained by editing and/or combining a plurality of images (for example, an image of the entire measurement range generated by combining images that each capture part of the range).
  • Prior to the generation of the reduced images described later, the image processing unit 108 converts the left image iL0 and the right image iR0 into grayscale images in step S110.
  • The image processing unit 108 also shifts the left image iL0 and/or the right image iR0 in the vertical direction v to correct (parallelize) the vertical shift of the distance Δ described above.
  • FIG. 5 shows examples of the images (left image iL1 and right image iR1) obtained by this preprocessing. In the measurement apparatus 100 and the measurement method of this embodiment, such preprocessing enables highly accurate measurement.
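As a rough illustration, the preprocessing of step S110 might look like the following sketch in Python with NumPy. The function names and the assumption that the vertical shift is a known integer number of rows are illustrative; the embodiment does not specify how the shift Δ is obtained in practice (it would typically come from calibration or rectification).

```python
import numpy as np

def to_grayscale(img):
    """Convert an H x W x 3 RGB image to grayscale (ITU-R BT.601 weights)."""
    return img @ np.array([0.299, 0.587, 0.114])

def parallelize(left, right, delta):
    """Remove a known vertical shift of `delta` rows by shifting the right
    image so that corresponding rows of the stereo pair align."""
    return left, np.roll(right, -delta, axis=0)

# Tiny synthetic example: the right image is the left one shifted down 2 rows.
left = to_grayscale(np.random.rand(16, 16, 3))
right = np.roll(left, 2, axis=0)
left_p, right_p = parallelize(left, right, 2)
assert np.allclose(left_p, right_p)  # rows now correspond
```

After this step, block matching only needs to search along the horizontal direction u, as the embodiment notes for step S130.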
  • The image processing unit 108 then generates reduced images from the preprocessed left image iL1 and right image iR1 (step S120; reduced image generation step).
  • The degree of reduction is not particularly limited.
  • For example, reducing the left image iL1 and the right image iR1 to 1/16 in both the horizontal direction u and the vertical direction v yields reduced images of 300 pixels (horizontal direction u) by 200 pixels (vertical direction v).
  • Let these reduced images be the left image iL2 and the right image iR2 (see FIG. 6).
  • Only one type of reduced image needs to be generated in step S120; that is, it is unnecessary to generate images with multiple degrees of reduction (for example, 1/8, 1/4, 1/2, and so on).
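The 1/16 reduction of step S120 can be sketched as follows. Block averaging is used here as one plausible reduction method; the embodiment does not specify the interpolation, so this is an assumption.

```python
import numpy as np

def reduce_image(img, factor=16):
    """Reduce a grayscale image by `factor` in both directions via block
    averaging. Height and width are assumed to be multiples of `factor`."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A 3,200 x 4,800 grayscale image -> 200 x 300 reduced image, as in the text.
iL1 = np.zeros((3200, 4800))
iL2 = reduce_image(iL1, 16)
assert iL2.shape == (200, 300)
```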
  • In step S130, the image processing unit 108 extracts a geometric region (first geometric region) of the bridge 1 from the left image iL2 and the right image iR2 (geometric region extraction step).
  • The processing in step S130 consists of calculating parallax by block matching, determining a geometric equation from pixel positions and parallax, and extracting the geometric region based on the determined equation.
  • In block matching, a block including a plurality of pixels is set in one image (the reference image) of the left image iL2 and the right image iR2, a block of the same shape and size is set in the other image (the comparison image), the block in the comparison image is moved in the horizontal direction u pixel by pixel, and a correlation value between the two blocks is calculated at each position.
  • Here, the reference image is the left image iL2 and the comparison image is the right image iR2; the block AR, having the same shape and size as the block AL set in the left image iL2, is moved in the horizontal direction u pixel by pixel.
  • Since the left and right images were parallelized in the preprocessing of step S110, once the position of the block AL is determined, the block AR need only be moved in the horizontal direction u during block matching. If the left and right images had not been parallelized, the movement in the horizontal direction u would have to be repeated while shifting the position in the vertical direction v.
  • The correlation value is calculated while moving the block AR; when the position of the block AR with the lowest correlation value (that is, the highest correlation) relative to the block AL is identified, the distance between the target pixel of the block AL (for example, its center pixel) and the corresponding pixel of the block AR at the identified position (for example, its center pixel) is calculated as the parallax.
  • This processing is executed for all pixels of the left image iL2 (the reference image), and the parallax obtained at each pixel position is used to generate a parallax image.
  • Examples of correlation value calculation methods include SAD (Sum of Absolute Differences), SSD (Sum of Squared intensity Differences), and NCC (Normalized Cross-Correlation).
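The matching procedure above can be sketched for a single pixel as follows, using SAD as the correlation value. The function name, block size, and disparity range are illustrative assumptions; a real implementation would run this over all pixels to build the parallax image.

```python
import numpy as np

def disparity_at(ref, cmp_, v, u, block=5, max_disp=16):
    """Parallax of pixel (v, u) in the reference image by SAD block matching
    along the horizontal direction u (images assumed already parallelized)."""
    h = block // 2
    ref_blk = ref[v - h:v + h + 1, u - h:u + h + 1]
    best_d, best_sad = 0, np.inf
    for d in range(max_disp + 1):           # candidate disparities
        if u - h - d < 0:
            break
        cand = cmp_[v - h:v + h + 1, u - h - d:u + h + 1 - d]
        sad = np.abs(ref_blk - cand).sum()  # lower SAD = higher correlation
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# Synthetic pair: the right image is the left scene shifted 3 px to the left.
rng = np.random.default_rng(0)
left = rng.random((40, 40))
right = np.roll(left, -3, axis=1)
assert disparity_at(left, right, 20, 20, block=5, max_disp=8) == 3
```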
  • Next, the image processing unit 108 performs plane extraction (extraction of the first geometric region) using the RANSAC (RANdom SAmple Consensus) algorithm.
  • The RANSAC algorithm repeatedly calculates model parameters (here, parameters representing a plane) from randomly sampled data (three points in the case of a plane) and evaluates the correctness of the calculated parameters until an optimal evaluation value is obtained. The specific procedure is described below.
  • Step S1: Three points are randomly extracted from the parallax image created by the block matching described above. For example, it is assumed that the points f1(u1, v1, w1), f2(u2, v2, w2), and f3(u3, v3, w3) are extracted in FIG.
  • The points extracted here determine the geometric equation of each geometric region, and the number of points to extract may be changed according to the assumed type of geometric region (plane, cylindrical surface, spherical surface, and so on). For a plane, for example, three or more representative points (not all on the same straight line) are extracted.
  • Here, ui denotes the horizontal coordinate of the image, vi the vertical coordinate, and wi the parallax (distance direction), where i is an integer of 1 or more identifying the point.
  • Step S2: A plane equation (geometric equation) is determined from the extracted points f1, f2, and f3.
  • The plane equation F in the three-dimensional space (u, v, w) is generally expressed by the following (Expression 1), where a, b, c, and d are constants:
    a·u + b·v + c·w + d = 0 … (Expression 1)
  • Step S3: For every pixel (ui, vi, wi) of the parallax image, the distance to the plane represented by the plane equation F of (Expression 1) is calculated. A pixel whose distance is less than or equal to a threshold is determined to lie on that plane.
  • Step S4: If the number of pixels lying on the plane represented by the plane equation F is larger than that of the current optimal solution, F is adopted as the new optimal solution.
  • Step S5: Steps S1 to S4 are repeated a predetermined number of times.
  • Step S6: The plane equation obtained as the optimal solution determines one plane.
  • Step S7: The pixels on the plane determined through step S6 are excluded from further processing (plane extraction).
  • Step S8: Steps S1 to S7 are repeated, and the process ends when the number of extracted planes exceeds a certain number or the number of remaining pixels falls below a specified number.
  • By the above procedure, the geometric region (first geometric region) can be extracted from the reduced images.
  • Different geometric regions can thus be separated and identified, and measurement can be performed with high accuracy.
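Steps S1 to S5 can be sketched as follows for a single plane; the threshold and iteration count are illustrative assumptions, and steps S6 to S8 would remove the inliers of the best plane and repeat the process on the remaining pixels.

```python
import numpy as np

def fit_plane(p1, p2, p3):
    """Plane a*u + b*v + c*w + d = 0 through three points (Expression 1)."""
    n = np.cross(p2 - p1, p3 - p1)           # normal vector (a, b, c)
    return np.append(n, -n @ p1)             # coefficients (a, b, c, d)

def ransac_plane(points, thresh=1.0, iters=200, rng=None):
    """Steps S1-S5: repeatedly sample 3 points, fit a plane, count inliers."""
    rng = rng or np.random.default_rng(0)
    best_eq, best_inliers = None, -1
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        eq = fit_plane(*sample)
        norm = np.linalg.norm(eq[:3])
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        dist = np.abs(points @ eq[:3] + eq[3]) / norm  # point-plane distance
        n_in = int((dist <= thresh).sum())
        if n_in > best_inliers:
            best_eq, best_inliers = eq, n_in
    return best_eq, best_inliers

# Synthetic parallax samples: 200 points on the plane w = 2u + 3v + 5.
rng = np.random.default_rng(1)
uv = rng.random((200, 2)) * 100
pts = np.column_stack([uv, 2 * uv[:, 0] + 3 * uv[:, 1] + 5])
eq, n_in = ransac_plane(pts, thresh=0.5, rng=rng)
assert n_in == 200  # every point lies on the recovered plane
```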
  • The case where the geometric region is a plane has been described, but a geometric equation representing another type of geometric region, such as a cylindrical surface or a spherical surface, may be determined according to the shape of the subject, since the shape of a structure such as a bridge pier or a tunnel is often represented not by a plane but by a cylindrical or spherical surface.
  • For example, a cylinder whose central axis is the z-axis and whose radius is r is expressed by the following (Equation 2), where z takes an arbitrary value, and a sphere whose center is the origin of the coordinate system and whose radius is r by the following (Equation 3):
    x² + y² = r² … (Equation 2)
    x² + y² + z² = r² … (Equation 3)
  • Next, the image processing unit 108 extracts a plurality of points (representative points) belonging to the same geometric region from the unreduced image (step S140).
  • Here, representative points f4, f5, and f6 are extracted from the geometric region G1A (second geometric region) of the left image iL1, the unreduced image, corresponding to the geometric region G1 described above.
  • The geometric regions G1A, G2A, and G3A correspond to the geometric regions G1, G2, and G3 (first geometric regions), respectively.
  • The parallax is then calculated for the extracted representative points f4, f5, and f6 (step S150). Since parallax has already been obtained by block matching the reduced images, the block matching range between the unreduced left image iL1 and right image iR1 is set based on that result, and block matching of iL1 and iR1 is performed within the set range. The block matching in step S150 needs to be performed only for the extracted representative points, not for all pixels. As a result, by narrowing the target and range of block matching, the parallax can be calculated with high accuracy on the unreduced images while keeping processing fast.
  • Next, the image processing unit 108 estimates a plane equation from the pixel positions of the representative points and the parallax calculated in step S150 (step S160; geometric equation determination step). This can be done in the same manner as the plane equation estimation for the reduced images.
  • The processing of steps S140 to S160 is repeated for all geometric regions (planes) to obtain their plane equations.
  • The extraction of representative points and the estimation of the plane equation may be iterated with the RANSAC algorithm, as described for the reduced image processing.
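Step S160 (estimating a plane equation from representative points and their parallax) can be sketched as a least-squares fit. The rearranged form w = α·u + β·v + γ assumes c ≠ 0 in Expression 1, and the helper name is illustrative, not from the embodiment.

```python
import numpy as np

def plane_from_representatives(u, v, w):
    """Estimate the plane w = alpha*u + beta*v + gamma (a rearrangement of
    Expression 1 with c != 0) from representative points by least squares."""
    A = np.column_stack([u, v, np.ones_like(u)])
    coef, *_ = np.linalg.lstsq(A, w, rcond=None)
    return coef  # (alpha, beta, gamma)

# Three representative points f4, f5, f6 on the plane w = 0.5u - 0.2v + 10.
u = np.array([10.0, 80.0, 40.0])
v = np.array([20.0, 30.0, 90.0])
w = 0.5 * u - 0.2 * v + 10
alpha, beta, gamma = plane_from_representatives(u, v, w)
assert np.allclose([alpha, beta, gamma], [0.5, -0.2, 10])
```

With more than three representative points, the same call gives the least-squares plane, which damps the effect of noisy parallax values.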
  • Next, cracks are extracted from an image (for example, the left image iL1 or the right image iR1) (step S170).
  • Crack extraction can be performed by various methods. For example, the crack detection method described in Japanese Patent No. 4006007 can be used. In this method, wavelet coefficients corresponding to two concentrations to be compared are calculated, the wavelet coefficients for each combination of the two concentrations are calculated to create a wavelet coefficient table, and a wavelet image is created by wavelet transforming an input image of the concrete surface to be inspected for cracks.
  • Crack regions and non-crack regions are then determined by comparing the wavelet coefficient of each target pixel with a threshold value.
  • Crack extraction can also be performed using the method described in Non-Patent Document 1 below.
  • In that method, a region composed of pixels whose luminance values are less than a threshold is treated as a percolated region, and the threshold is sequentially updated according to the shape of the percolated region.
  • Cracks are thereby detected from the surface image.
  • The percolation method grows the region sequentially, imitating the percolation of water in nature.
  • Non-Patent Document 1: Tomoyuki Yamaguchi, "A Study on Image Processing Method for Crack Inspection of Real Concrete Surfaces", Major in Pure and Applied Physics, Graduate School of Science and Engineering, Waseda University, February 2008.
  • The above describes the mode in which cracks are extracted after the geometric equation is determined. However, the extraction of cracks may instead be performed in parallel with, or before, the processing from the preprocessing (step S110) to the determination of the geometric equation (step S160).
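The percolation idea can be illustrated loosely as follows. This is not the exact method of Non-Patent Document 1: the seed choice, the threshold update rule, and all constants here are assumptions made purely for the sketch.

```python
import numpy as np
from heapq import heappush, heappop

def percolate(img, seed, init_thresh, max_pixels=500):
    """Simplified percolation: starting from a seed pixel, repeatedly absorb
    the darkest neighboring pixel while it stays below the current threshold,
    which is sequentially raised as the region grows (illustrative rule)."""
    h, w = img.shape
    region = {seed}
    frontier = []
    thresh = init_thresh

    def push_neighbors(p):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + dr, p[1] + dc)
            if 0 <= q[0] < h and 0 <= q[1] < w and q not in region:
                heappush(frontier, (img[q], q))

    push_neighbors(seed)
    while frontier and len(region) < max_pixels:
        val, q = heappop(frontier)
        if q in region:
            continue
        if val >= thresh:
            break                          # nothing dark enough left to absorb
        region.add(q)
        thresh = max(thresh, val + 0.05)   # sequential threshold update
        push_neighbors(q)
    return region

# Synthetic surface: bright background with a dark vertical "crack" line.
img = np.ones((20, 20))
img[:, 10] = 0.1
region = percolate(img, (0, 10), init_thresh=0.5)
assert all(c == 10 for _, c in region)  # growth stays on the dark line
assert len(region) == 20
```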
  • An example in which a crack Cr has been extracted is shown in FIG.
  • The crack Cr runs from the end point P1 to the end point P2.
  • Next, the geometric region to which the crack Cr belongs is determined (step S180; measurement step).
  • Since the relationship between pixel positions and geometric regions was specified when the geometric regions were extracted, the image processing unit 108 can determine the geometric region to which the crack Cr belongs from this relationship.
  • Here, the crack Cr is determined to belong to the geometric region G1A.
  • Alternatively, an image showing the crack Cr (for example, the left image iL1 or the right image iR1) may be displayed on the display unit 112, and the geometric region may be determined based on input from the user via the operation unit 114.
  • For example, the user designates the end points P1 and P2 of the crack Cr to be measured on the display screen of the display unit 112 with a pointing device such as a mouse, and the image processing unit 108 determines that the end points P1 and P2 belong to the geometric region G1A.
  • the two-dimensional information or three-dimensional information of the crack Cr is calculated using the geometric equation (plane equation) determined in the processing up to step S160 (step S190; measurement step).
  • examples of the two-dimensional information or three-dimensional information include position, length, width, and area, but the items to be calculated are not limited to these examples; depending on the nature of the subject and the measurement target, other items such as volume and cross-sectional area may be calculated.
  • the parallax at the end points P1 and P2 is calculated based on the pixel positions of the end points P1 and P2 and the plane equation of the geometric region G1A. Specifically, the parallax (corresponding to the w coordinate) of the end points P1 and P2 can be obtained from the (u, v) coordinates of the end points P1 and P2 and the plane equation of the geometric region G1A without performing block matching.
  • the (u, v, w) coordinates of the end points P1 and P2 are converted into (x, y, z) coordinates in real space based on the position and shooting direction of the digital camera 104, and the length L (the distance between the end points P1 and P2) is obtained by the following (Formula 4).
  • when the crack Cr is a curved crack, the length of the crack Cr can be obtained by dividing the crack Cr into a plurality of sections such that each section can be regarded as a straight line and calculating the length of each section.
  • the first geometric region is extracted based on block matching of the reduced images, so the processing can be performed at high speed, and different geometric regions can be separated and identified so that measurement can be performed with high accuracy.
  • the two-dimensional information or the three-dimensional information of the measurement target can be calculated at high speed and with high accuracy.
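The crack-measurement arithmetic summarized in the bullets above (obtaining the parallax w of the end points from the plane equation of the geometric region, converting to real-space coordinates, and applying Formula 4 section by section) can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the plane coefficients, the focal length f, the baseline, and the (u, v, w) triangulation convention are all hypothetical.

```python
import math

def disparity_from_plane(u, v, plane):
    """Solve the plane equation a*u + b*v + c*w + d = 0 of a
    geometric region for the disparity w at pixel (u, v),
    with no block matching at the measured points."""
    a, b, c, d = plane
    if c == 0:
        raise ValueError("plane equation does not determine w")
    return -(a * u + b * v + d) / c

def to_real_space(u, v, w, f, baseline):
    """Triangulate pixel (u, v) with disparity w into real-space
    (x, y, z) coordinates for a rectified stereo pair; f is the
    focal length in pixels, baseline the camera separation
    (an assumed, conventional pinhole-stereo model)."""
    z = f * baseline / w
    return (u * z / f, v * z / f, z)

def crack_length(points_3d):
    """Length of a crack given real-space sample points: the sum
    of straight segments between consecutive points (Formula 4
    applied per section), which also handles curved cracks."""
    return sum(math.dist(p, q) for p, q in zip(points_3d, points_3d[1:]))

# Hypothetical example: a fronto-parallel plane with constant
# disparity 8 (plane 0*u + 0*v - w + 8 = 0), f = 1000 px, 0.5 m baseline.
w1 = disparity_from_plane(100, 50, (0.0, 0.0, -1.0, 8.0))
p1 = to_real_space(100, 50, w1, 1000.0, 0.5)
p2 = to_real_space(400, 50, w1, 1000.0, 0.5)
length = crack_length([p1, p2])
```

For a curved crack, `crack_length` would simply be called with the full list of section end points instead of two end points.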

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The purpose of the present invention is to provide a measurement device and a measurement method with which two-dimensional information or three-dimensional information about a measured object can be calculated quickly and with high precision. A measurement device pertaining to an embodiment of the present invention is equipped with: an image acquisition unit for acquiring multiple images obtained by photographing the same photographic subject from multiple viewpoints; a reduced image generation part for generating multiple reduced images by reducing each of the multiple images; a geometric area extraction part for extracting a first geometric area, that is, a geometric area in the multiple reduced images, on the basis of parallax values obtained by means of block matching of the multiple reduced images and the positions of pixels in the multiple reduced images; a geometric equation determination part for determining, on the basis of the extracted first geometric area, a geometric equation representing a second geometric area, that is, a geometric area in the multiple images which corresponds to the first geometric area; and a measurement part for calculating two-dimensional information or three-dimensional information about the measured object included in the photographic subject by using the determined geometric equation.

Description

Measurement device and measurement method
The present invention relates to an apparatus and method for measuring an object, and more particularly to an apparatus and method for performing measurement using images.
In recent years, image measurement, which acquires two-dimensional or three-dimensional information of an object based on images acquired by an imaging apparatus, has come into use. For example, Patent Document 1 describes that, in a robot system, a plane such as a floor is extracted based on points obtained by sampling three-dimensional data obtained from a stereo image, and an obstacle is detected based on the extracted plane. Further, for example, Patent Document 2 describes that, in a vehicle exterior environment recognition device that recognizes the environment outside the host vehicle, a specific object is identified based on parallax obtained by pattern matching of stereo images, and a tracking target is selected based on the identification result.
JP 2008-9999 A, JP 2013-178664 A
However, in Patent Document 1, since three-dimensional data is obtained by matching the left and right stereo images, it takes time to acquire three-dimensional data for the entire image. Also, since the sampled data consists of one reference point and two other points existing within a predetermined distance from the reference point, when the image includes a plurality of planes, points belonging to different planes may be sampled. Further, in Patent Document 2, since blocks are grouped by relative distance when identifying a specific object, grouping can be performed only in units of "specific objects", and it is difficult to identify the "surface" to which a measurement target belongs. Thus, the conventional techniques cannot calculate two-dimensional or three-dimensional information of a measurement target at high speed and with high accuracy.
The present invention has been made in view of such circumstances, and an object thereof is to provide a measurement apparatus and a measurement method capable of calculating two-dimensional or three-dimensional information of a measurement target at high speed and with high accuracy.
In order to achieve the above object, a measurement apparatus according to a first aspect of the present invention includes: an image acquisition unit that acquires a plurality of images obtained by photographing the same subject from a plurality of viewpoints; a reduced image generation unit that generates a plurality of reduced images by reducing each of the plurality of images; a geometric region extraction unit that extracts a first geometric region, which is a geometric region in the plurality of reduced images, based on parallax obtained by block matching of the plurality of reduced images and pixel positions in the plurality of reduced images; a geometric equation determination unit that determines, based on the extracted first geometric region, a geometric equation representing a second geometric region, which is a geometric region in the plurality of images corresponding to the first geometric region; and a measurement unit that calculates two-dimensional or three-dimensional information of a measurement target included in the subject using the determined geometric equation.
In the measurement apparatus according to the first aspect, the first geometric region is extracted based on block matching of the reduced images, a geometric equation representing the second geometric region in the images before reduction is determined based on the extracted first geometric region, and measurement is performed using this geometric equation. Therefore, the block matching range in the images before reduction can be narrowed so that processing is performed at high speed, and different geometric regions can be separated and identified so that measurement is performed with high accuracy. In this way, the measurement apparatus according to the first aspect can calculate two-dimensional or three-dimensional information of a measurement target at high speed and with high accuracy.
In the first aspect, a single degree of reduction can be used for the reduced images. In this case, it is not necessary to use a plurality of types of reduced images with different degrees of reduction, and processing can be performed at high speed. In the first aspect, examples of the two-dimensional and three-dimensional information include the position, length, width, and area of the measurement target, but the information is not limited to these examples. In the first aspect, a geometric region means a region of the subject that belongs to the same plane or curved surface and is represented by the same geometric equation. One or more geometric regions, in any number, may exist for the same subject.
In the measurement apparatus according to a second aspect, in the first aspect, the geometric region extraction unit determines a geometric equation representing the first geometric region and extracts the first geometric region based on the determined geometric equation. The second aspect defines an example of a method for extracting the first geometric region from the reduced images.
In the measurement apparatus according to a third aspect, in the first or second aspect, the geometric region extraction unit extracts, as pixels belonging to the first geometric region, pixels whose distance from the first geometric region is equal to or less than a threshold value. The third aspect gives a criterion for extracting pixels belonging to the first geometric region.
In the measurement apparatus according to a fourth aspect, in any one of the first to third aspects, the geometric equation determination unit calculates parallax for a plurality of representative points extracted from the second geometric region, and determines the geometric equation representing the second geometric region based on the pixel positions of the representative points and the parallax calculated for them. According to the fourth aspect, representative points are extracted and parallax is calculated on the images before reduction, and the geometric equation is determined based on these, so the two-dimensional or three-dimensional information of the measurement target can be calculated with high accuracy. The number of representative points extracted in the fourth aspect may differ depending on the type of geometric region (a plane or a curved surface, and if a curved surface, what kind of curved surface).
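As an illustration of how a geometric equation can be determined from representative points in the planar case: three non-collinear representative points (u, v, w), each a pixel position plus its parallax, determine the plane exactly. This is a sketch under an assumed coordinate convention, not the patent's implementation; curved surfaces would need more points and a different model, and a practical implementation would likely fit more than three points by least squares.

```python
def plane_through(p1, p2, p3):
    """Plane a*u + b*v + c*w + d = 0 through three representative
    points given as (u, v, w) tuples, where w is the parallax.
    The normal (a, b, c) is the cross product of two in-plane
    vectors; d is fixed so that p1 lies on the plane."""
    v1 = tuple(b - a for a, b in zip(p1, p2))
    v2 = tuple(b - a for a, b in zip(p1, p3))
    a = v1[1] * v2[2] - v1[2] * v2[1]
    b = v1[2] * v2[0] - v1[0] * v2[2]
    c = v1[0] * v2[1] - v1[1] * v2[0]
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return (a, b, c, d)

# Three hypothetical representative points all at parallax 0:
plane = plane_through((0, 0, 0), (1, 0, 0), (0, 1, 0))
```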
In the measurement apparatus according to a fifth aspect, in the fourth aspect, the geometric equation determination unit sets a block matching range for the plurality of images based on the result of block matching of the plurality of reduced images, and obtains the parallax of the plurality of representative points by performing block matching of the plurality of images within the set range. According to the fifth aspect, since the block matching range for the plurality of images (the images before reduction) is set based on the block matching result of the reduced images, the parallax of the representative points can be obtained with high accuracy using the images before reduction while matching is performed at high speed, and the two-dimensional or three-dimensional information of the measurement target can be calculated at high speed and with high accuracy.
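The coarse-to-fine idea of this aspect can be illustrated with a small helper: a disparity found in the reduced image is scaled up and used to bound the full-resolution search. The scale factor and margin below are assumed values for illustration, not values from the patent.

```python
def full_res_search_range(coarse_disp, scale=16, margin=8):
    """Restrict the full-resolution disparity search to a window
    around the disparity found in the 1/scale reduced image.
    The margin (in full-resolution pixels) is an assumed tolerance
    covering the quantization error of the coarse result."""
    center = coarse_disp * scale
    return (max(0, center - margin), center + margin)

# A disparity of 3 px in the 1/16 reduced image restricts the
# full-resolution search to a narrow window instead of the whole row.
lo, hi = full_res_search_range(3)
```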
In the measurement apparatus according to a sixth aspect, in any one of the first to fifth aspects, the measurement unit determines to which geometric region within the second geometric region a pixel indicating the measurement target belongs, and calculates the two-dimensional or three-dimensional information based on the geometric equation representing the determined geometric region and the pixel position of the pixel indicating the measurement target. As described above, the measurement apparatus according to the present invention obtains a geometric equation for the second geometric region, so in the sixth aspect the two-dimensional or three-dimensional information is calculated after determining to which geometric region within the second geometric region the measurement target belongs.
In the measurement apparatus according to a seventh aspect, in the sixth aspect, the measurement unit calculates the parallax of the pixel indicating the measurement target based on the geometric equation representing the determined geometric region and the pixel position of that pixel, and calculates the two-dimensional or three-dimensional information based on the calculated parallax. The seventh aspect specifies that the two-dimensional or three-dimensional information is calculated based on the parallax of the pixel indicating the measurement target, and specifies such a parallax calculation method.
In the measurement apparatus according to an eighth aspect, in any one of the first to seventh aspects, the geometric region extraction unit performs at least one of image processing for converting the plurality of images into grayscale images and image processing for rectifying (parallelizing) the plurality of images, and the plurality of reduced images are generated from the images subjected to the at least one image processing. The eighth aspect specifies the content of so-called "preprocessing". By generating reduced images from images that have undergone such processing, the two-dimensional or three-dimensional information of the measurement target can be calculated at high speed and with high accuracy.
In the measurement apparatus according to a ninth aspect, in any one of the first to eighth aspects, the geometric equation determination unit determines a geometric equation representing one of a plane, a cylindrical surface, and a spherical surface. The ninth aspect specifies determining a geometric equation representing a plane, a cylindrical surface, or a spherical surface, which are typical surfaces constituting a subject.
In the measurement apparatus according to a tenth aspect, in any one of the first to ninth aspects, the subject is a concrete structure and the measurement target is damage to the concrete structure. Damage occurs in concrete structures, and the shape and size of the damage change over time; by applying the measurement apparatus according to the tenth aspect to the measurement of such damage, the two-dimensional or three-dimensional information of the measurement target (that is, the damage) can be calculated at high speed and with high accuracy. Examples of concrete structures include bridges, tunnels, roads, and buildings, and examples of damage include cracks and free lime, but the concrete structures and damage to which the tenth aspect is applicable are not limited to these examples.
The measurement apparatus according to an eleventh aspect, in any one of the first to tenth aspects, includes an optical system that acquires the plurality of images. The optical system of the eleventh aspect can be a stereo optical system that includes, for each of the plurality of viewpoints, an optical system including a photographic lens and an imaging element.
In order to achieve the above object, a measurement method according to a twelfth aspect of the present invention includes: an image acquisition step of acquiring a plurality of images obtained by photographing the same subject from a plurality of viewpoints; a reduced image generation step of generating a plurality of reduced images by reducing each of the plurality of images; a geometric region extraction step of extracting a first geometric region, which is a geometric region in the plurality of reduced images, based on parallax obtained by block matching of the plurality of reduced images and pixel positions in the plurality of reduced images; a geometric equation determination step of determining, based on the extracted first geometric region, a geometric equation representing a second geometric region, which is a geometric region in the plurality of images corresponding to the first geometric region; and a measurement step of calculating two-dimensional or three-dimensional information of a measurement target included in the subject using the determined geometric equation. In the measurement method according to the twelfth aspect, as in the first aspect, the two-dimensional or three-dimensional information of the measurement target can be calculated at high speed and with high accuracy.
As described above, according to the measurement apparatus and measurement method of the present invention, two-dimensional or three-dimensional information of a measurement target can be calculated at high speed and with high accuracy.
FIG. 1 is a diagram showing a bridge, an example of an application target of the measurement apparatus and measurement method of the present invention.
FIG. 2 is a block diagram showing the configuration of a measurement apparatus according to an embodiment of the present invention.
FIG. 3 is a flowchart showing the processing of a measurement method according to an embodiment of the present invention.
FIG. 4 is a conceptual diagram illustrating a vertical shift existing between the left and right images.
FIG. 5 is a conceptual diagram showing the rectification of the left and right images.
FIG. 6 is a diagram showing the left and right reduced images.
FIG. 7 is a diagram illustrating block matching of the reduced images.
FIG. 8 is a diagram showing geometric regions extracted by block matching.
FIG. 9 is a diagram showing an example of representative points extracted in a geometric region of the reduced image.
FIG. 10 is a diagram showing an example of representative points extracted in the image before reduction.
FIG. 11 is a diagram illustrating measurement of a crack.
Hereinafter, embodiments of a measurement apparatus and a measurement method according to the present invention will be described with reference to the accompanying drawings.
FIG. 1 is a perspective view showing the structure of a bridge 1 (concrete structure), an example of an application target of the measurement apparatus and measurement method according to the present invention. The bridge 1 shown in FIG. 1 has main girders 3, which are joined at joints 3A. The main girders 3 are members that span between abutments or piers and support loads, such as vehicles, on the floor slab 2. A floor slab 2 on which vehicles and the like travel is placed on top of the main girders 3. The floor slab 2 is generally made of reinforced concrete. In addition to the floor slab 2 and the main girders 3, the bridge 1 has members such as cross girders, sway bracing, and lateral bracing (not shown).
<Image acquisition>
When measuring damage to the bridge 1, an inspector photographs the bridge 1 from below (direction C in FIG. 1) using the digital camera 104 (see FIG. 2) and acquires images of the inspection range. The photographing is performed while moving as appropriate in the extending direction of the bridge 1 (direction A in FIG. 1) and the direction orthogonal to it (direction B in FIG. 1). If it is difficult for the inspector to move because of the surroundings of the bridge 1, the digital camera 104 may be installed on a moving body that can move along the bridge 1 to perform the photographing. Such a moving body may be provided with a lifting mechanism and/or a pan/tilt mechanism for the digital camera 104. Examples of the moving body include vehicles, robots, and flying objects, but the moving body is not limited to these.
<Configuration of the measurement apparatus>
FIG. 2 is a block diagram showing the schematic configuration of the measurement apparatus 100 according to the present embodiment. The measurement apparatus 100 includes an image acquisition unit 102, an image processing unit 108 (reduced image generation unit, geometric region extraction unit, representative point extraction unit, parallax calculation unit, geometric equation determination unit, determination unit, measurement unit), a recording unit 110, a display unit 112, and an operation unit 114, which are connected to each other so that necessary information can be transmitted and received.
The functions of each unit can be realized by a control device such as a CPU (Central Processing Unit) executing a program stored in memory. The image input unit 106 includes a wireless communication antenna and an input/output interface circuit, and the recording unit 110 includes a non-transitory recording medium such as an HDD (Hard Disk Drive). The display unit 112 includes a display device such as a liquid crystal display, and the operation unit 114 includes an input device such as a keyboard. These are one example of the configuration of the measurement apparatus according to the present invention, and other configurations may be adopted as appropriate.
Images photographed using the digital camera 104 as described above are input to the image input unit 106 by wireless communication, and measurement processing (described later) is performed by the image processing unit 108. The digital camera 104 and the image input unit 106 constitute the image acquisition unit 102. The digital camera 104 includes a left image optical system 104L for acquiring a left-eye image and a right image optical system 104R for acquiring a right-eye image, and with these optical systems the same subject (the bridge 1 in this embodiment) can be photographed from a plurality of viewpoints. The left image optical system 104L and the right image optical system 104R each include a photographic lens and an imaging element (not shown). Examples of the imaging element include CCD (Charge Coupled Device) and CMOS (Complementary Metal-Oxide Semiconductor) imaging elements. R (red), G (green), or B (blue) color filters are provided on the light receiving surface of the imaging element, and a color image of the subject can be acquired based on the signals of each color.
<Measurement procedure>
Next, measurement of a subject using the measurement apparatus 100 configured as described above will be described. FIG. 3 is a flowchart showing the procedure of the measurement processing according to this embodiment. In this embodiment, the case of measuring a crack that has occurred in the floor slab 2 of the bridge 1, which is a concrete structure, will be described.
<Image acquisition>
First, stereo images of the bridge 1 photographed with the digital camera 104 as described above are input to the image input unit 106 by wireless communication (step S100; image acquisition step). A plurality of images of the bridge 1 are input according to the inspection range, and information on the photographing date and time is added to the input images by the digital camera 104. The photographing date and time of the input images does not necessarily have to be the same for all images and may span a plurality of days. A plurality of images may be input at once, or one image may be input at a time. The images of the bridge 1 may also be input not by wireless communication but via a non-transitory recording medium such as a memory card, or data of images already photographed and recorded may be input via a network.
FIG. 4 shows examples of the left image iL0 and the right image iR0 input in this way. FIG. 4 shows an example of images obtained by photographing a portion of the bridge 1 where three planes intersect (a corner portion). The three planes intersect at boundary lines E1, E2, and E3, and these boundary lines meet at a point E0. In the left image iL0 and the right image iR0, the left-right direction in FIG. 4 is the horizontal direction u, and the up-down direction is the vertical direction v. The number of channels and the image size of the left image iL0 and the right image iR0 are not particularly limited; for example, they can be color images (3 channels) of 4,800 pixels (horizontal direction u) × 3,200 pixels (vertical direction v). The left image iL0 and the right image iR0 may also be images obtained by editing and/or combining a plurality of images (for example, an image showing the entire measurement range generated by combining images each capturing a part of the measurement range).
<Preprocessing>
In this embodiment, the left image iL0 and the right image iR0 are color images and are shifted by a distance δ in the vertical direction v, as shown in FIG. 4. Therefore, prior to the generation of reduced images described later, the image processing unit 108 converts the left image iL0 and the right image iR0 into grayscale images in step S110. Also in step S110, the image processing unit 108 shifts the left image iL0 and/or the right image iR0 in the vertical direction v to correct (rectify) the shift of distance δ described above. FIG. 5 shows examples of the images obtained by this preprocessing (the left image iL1 and the right image iR1). In the measurement apparatus 100 and measurement method according to this embodiment, performing such preprocessing enables highly accurate measurement.
 <Reduced image generation>
 Next, the image processing unit 108 generates reduced images based on the preprocessed left image iL1 and right image iR1 (step S120; reduced image generation step). The degree of reduction is not particularly limited; for example, the left image iL1 and the right image iR1 can each be reduced to 1/16 in the horizontal direction u and the vertical direction v to generate reduced images of 300 pixels (horizontal direction u) × 200 pixels (vertical direction v). These reduced images are referred to as the left image iL2 and the right image iR2 (see FIG. 6). Note that only one kind of reduced image needs to be generated in step S120; that is, it is not necessary to generate a plurality of images with different reduction ratios (for example, 1/8, 1/4, and 1/2).
 <Geometric region extraction>
 Next, the image processing unit 108 extracts geometric regions (first geometric regions) of the bridge 1 based on the left image iL2 and the right image iR2 (step S130; geometric region extraction step). The processing in step S130 includes calculating parallax by block matching, determining geometric equations based on pixel positions and parallax, and extracting geometric regions based on the determined geometric equations. These processes are described below.
 <Block matching>
 In block matching, a block containing a plurality of pixels is set in one of the left image iL2 and the right image iR2 (the reference image), a block of the same shape and size is set in the other image (the comparison image), and the block in the comparison image is moved pixel by pixel in the horizontal direction u while a correlation value between the two blocks is calculated at each position. In the example of FIG. 7, the reference image is the left image iL2, the comparison image is the right image iR2, and a block AR of the same shape and size as the block AL set in the left image iL2 is moved pixel by pixel in the horizontal direction u. Since the left and right images have been rectified in the preprocessing of step S110, once the position of the block AL is determined, it is sufficient to move the block AR only in the horizontal direction u during block matching. If the left and right images have not been rectified in the preprocessing, the movement in the horizontal direction u is repeated while shifting the position in the vertical direction v.
 The correlation value is calculated while the block AR is moved in this way, and when the position of the block AR at which the correlation value is lowest (that is, the correlation is highest) with respect to the position of the block AL has been identified, the distance between a pixel of interest in the block AL (for example, its center pixel) and the corresponding pixel of the block AR at the identified position (for example, its center pixel) is calculated as the parallax. This processing is executed for all pixels of the left image iL2, which is the reference image, to obtain the parallax at each pixel position and generate a parallax image. As methods of calculating the correlation value, techniques such as SAD (Sum of Absolute Differences), SSD (Sum of Squared intensity Differences), and NCC (Normalized Cross-Correlation) can be used.
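The block matching described above can be sketched as follows using SAD as the correlation value. This is an illustrative sketch only, assuming grayscale inputs and a small fixed search range; the block size, search range, and tie-breaking are assumptions, not the embodiment's actual parameters.

```python
import numpy as np

def sad_disparity(ref, cmp_, block=3, max_disp=8):
    """Per-pixel disparity of `ref` (reference image) against `cmp_`
    (comparison image) by SAD block matching along the horizontal direction u."""
    h, w = ref.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for v in range(r, h - r):
        for u in range(r, w - r):
            best_cost, best_d = np.inf, 0
            for d in range(0, max_disp + 1):
                if u - d - r < 0:
                    break
                # Correlation value: sum of absolute differences
                # (lower cost = higher correlation).
                cost = np.abs(ref[v-r:v+r+1, u-r:u+r+1]
                              - cmp_[v-r:v+r+1, u-d-r:u-d+r+1]).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[v, u] = best_d
    return disp

# A vertical edge shifted by 3 pixels between the two rectified images:
left = np.zeros((9, 16)); left[:, 8:] = 1.0
right = np.zeros((9, 16)); right[:, 5:] = 1.0   # same edge, 3 px to the left
d = sad_disparity(left, right)
print(d[4, 8])  # disparity near the edge -> 3
```

Running this for every pixel of the reference image yields the parallax image used in the geometric region extraction below.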
 <Determination of geometric equations and extraction of geometric regions>
 In the present embodiment, the image processing unit 108 performs plane extraction (extraction of the first geometric regions) using the RANSAC (RANdom SAmple Consensus) algorithm. The RANSAC algorithm repeats the calculation of model parameters (here, parameters representing a plane) from randomly sampled data (three points in the case of a plane) and the evaluation of the correctness of the calculated parameters until an optimum evaluation value is obtained. A specific procedure is described below.
 (Step S1)
 Three points are randomly extracted from the parallax image created by the block matching described above. For example, assume that points f1(u1, v1, w1), f2(u2, v2, w2), and f3(u3, v3, w3) are extracted in FIG. 9. The points extracted here are used to determine the geometric equation of each geometric region, and the number of points to be extracted may be varied according to the type of geometric region assumed (plane, cylindrical surface, spherical surface, and the like). For a plane, for example, three or more representative points (not lying on the same straight line) are extracted. The horizontal coordinate of the image is denoted by ui, the vertical coordinate by vi, and the parallax (distance direction) by wi (i is an integer of 1 or more representing a point number).
 (Step S2)
 Next, a plane equation (geometric equation) is determined from the extracted points f1, f2, and f3. A plane equation F in the three-dimensional space (u, v, w) is generally expressed by the following (Formula 1), where a, b, c, and d are constants.
    F = a × u + b × v + c × w + d     …(Formula 1)
 (Step S3)
 For every pixel (ui, vi, wi) of the parallax image, the distance to the plane represented by the plane equation F of (Formula 1) is calculated. If the distance is equal to or less than a threshold, the pixel is determined to lie on the plane represented by the plane equation F.
 (Step S4)
 If the number of pixels lying on the plane represented by the plane equation F is larger than the number of pixels for the current optimum solution, the plane equation F is adopted as the optimum solution.
 (Step S5)
 Steps S1 to S4 are repeated a predetermined number of times.
 (Step S6)
 One plane is determined by taking the obtained plane equation as the solution.
 (Step S7)
 The pixels on the plane determined through step S6 are excluded from the processing targets (targets of plane extraction).
 (Step S8)
 Steps S1 to S7 are repeated, and the processing ends when the number of extracted planes exceeds a certain number or the number of remaining pixels falls below a specified number.
 By the above procedure, geometric regions (first geometric regions) can be extracted from the reduced images. In the example of FIG. 8, three geometric regions (planes) G1, G2, and G3 are extracted. In this way, the measurement device 100 and the measurement method according to the present embodiment can separate and identify different geometric regions and perform measurement with high accuracy.
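Steps S1 to S6 above can be sketched as follows for a single plane. This is a simplified illustration, not the embodiment's implementation: it fits the plane as w = a·u + b·v + d (a rearrangement of Formula 1), and the iteration count, threshold, and synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_plane(points, iters=200, thresh=0.05):
    """Fit a plane w = a*u + b*v + d to (u, v, w) points, keeping the candidate
    supported by the most inlier pixels (steps S1-S6, sketched)."""
    best_inliers, best_model = None, None
    for _ in range(iters):
        # S1: randomly sample 3 points; S2: solve for the plane through them.
        sample = points[rng.choice(len(points), 3, replace=False)]
        A = np.c_[sample[:, :2], np.ones(3)]
        try:
            a, b, d = np.linalg.solve(A, sample[:, 2])
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        # S3: pixels whose distance to the candidate plane is below the
        # threshold count as lying on it.
        resid = np.abs(points[:, 2] - (a*points[:, 0] + b*points[:, 1] + d))
        inliers = resid < thresh
        # S4: keep the plane supported by the most pixels.
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b, d)
    return best_model, best_inliers  # S5/S6: best model after all iterations

# Synthetic parallax data on the plane w = 2u + 3v + 1, plus a few outliers:
u, v = np.meshgrid(np.arange(10.0), np.arange(10.0))
pts = np.c_[u.ravel(), v.ravel(), (2*u + 3*v + 1).ravel()]
pts[:5, 2] += 50  # 5 outlier pixels
(a, b, d), inl = ransac_plane(pts)
print(round(a), round(b), round(d), inl.sum())
```

Repeating this fit after removing each plane's inliers (steps S7 and S8) peels off the planes one by one, which is how the three regions G1 to G3 would be separated.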
 Although the geometric equation (plane equation) for the case where the geometric region is a plane has been described here, geometric equations representing other types of geometric regions, such as a cylindrical surface or a spherical surface, may be determined according to the shape of the subject. This is because the shapes of structures such as bridge piers and tunnels are often represented not only by planes but also by cylindrical or spherical surfaces. Specifically, a cylinder whose central axis is the z-axis and whose radius is r is expressed by the following (Formula 2) (z is an arbitrary value), and a sphere whose center is the origin of the coordinate system and whose radius is r is expressed by the following (Formula 3).
    x² + y²      = r²     …(Formula 2)
    x² + y² + z² = r²     …(Formula 3)
 <Extraction of representative points and their parallax>
 Through the processing up to step S130, the reduced images have been grouped by geometric region. To perform measurement with high accuracy, plane equations are next determined for the images before reduction by the following procedure.
 First, the image processing unit 108 extracts a plurality of points (representative points) belonging to the same geometric region from an image before reduction (step S140). In the example of FIG. 10, representative points f4, f5, and f6 are extracted from the geometric region G1A (second geometric region) corresponding to the above-described geometric region G1 in the left image iL1, which is an image before reduction. In FIG. 10, the geometric regions G1A, G2A, and G3A (second geometric regions) correspond to the above-described geometric regions G1, G2, and G3 (first geometric regions), respectively.
 Next, parallax is calculated for the extracted representative points f4, f5, and f6 (step S150). Since parallax has already been calculated by block matching of the reduced images as described above, a block matching range between the left image iL1 and the right image iR1, which are the images before reduction, is set based on the result of the block matching of the reduced images, and block matching of the left image iL1 and the right image iR1 is performed within the set range. The block matching in step S150 need only be performed for the extracted representative points; it is not necessary to target all pixels. This makes it possible to calculate parallax with high accuracy for the images before reduction while narrowing down the targets and range of block matching for high-speed processing.
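The coarse-to-fine range setting of step S150 can be sketched as follows: a disparity found on the 1/16 reduced image bounds the search interval on the image before reduction. The scale factor matches the 1/16 reduction described earlier, but the margin value is an assumption for illustration.

```python
def full_res_search_range(coarse_disp, scale=16, margin=8):
    """Map a disparity measured on the reduced image to a search interval on
    the image before reduction. `margin` is an assumed tolerance around the
    scaled-up estimate."""
    center = coarse_disp * scale
    lo = max(0, center - margin)
    hi = center + margin
    return lo, hi  # search only disparities in [lo, hi], not the full width

# A representative point whose coarse disparity was 5 px on the 1/16 image:
lo, hi = full_res_search_range(5)
print(lo, hi)  # 72 88 -- instead of scanning, e.g., all 4,800 columns
```

Block matching for each representative point then runs only over this narrow interval, which is why the full-resolution parallax can be obtained quickly.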
 <Determination of geometric equations>
 The image processing unit 108 then estimates a plane equation based on the pixel positions of the representative points and the parallax calculated in step S150 (step S160; geometric equation determination step). This processing can be performed in the same manner as the plane equation estimation for the reduced images. Once a plane equation has been determined for one geometric region in this way, the processing of steps S140 to S160 is repeated for all geometric regions (planes) to obtain their plane equations. In steps S140 to S160, the extraction of representative points and the estimation of a plane equation may be repeated by the RANSAC algorithm, as described for the processing of the reduced images.
 <Extraction of damage>
 Since the present embodiment assumes the case of measuring damage (a crack) in the bridge 1, a crack is extracted from an image (for example, the left image iL1 or the right image iR1) (step S170). Cracks can be extracted by various techniques; for example, the crack detection method described in Japanese Patent No. 4006007 can be used. This method comprises a step of calculating wavelet coefficients corresponding to two compared densities, calculating the wavelet coefficients obtained when each of the two densities is varied to create a wavelet coefficient table, and creating a wavelet image by applying a wavelet transform to an input image of the concrete surface to be inspected; and a step of determining crack regions and non-crack regions by taking as a threshold the wavelet coefficient in the wavelet coefficient table corresponding to the average density of neighboring pixels in a local region and the density of a pixel of interest, and comparing the wavelet coefficient of the pixel of interest with that threshold.
 Cracks can also be extracted using the method described in Non-Patent Document 1 below. In this method, a region composed of pixels having luminance values below a threshold is treated as a percolated region (percolation region), the threshold is sequentially updated according to the shape of the percolation region, and cracks are detected from the surface image. The percolation method is, in general, a method of sequentially expanding a region in imitation of the percolation of water in nature.
 [Non-Patent Document 1] Tomoyuki Yamaguchi, "A Study on Image Processing Method for Crack Inspection of Real Concrete Surfaces", MAJOR IN PURE AND APPLIED PHYSICS, GRADUATE SCHOOL OF SCIENCE AND ENGINEERING, WASEDA UNIVERSITY, February 2008.
 Although the present embodiment describes a mode in which cracks are extracted after the determination of the geometric equations, crack extraction may be performed in parallel with, or prior to, the processing from the preprocessing (step S110) to the determination of the geometric equations (step S160). FIG. 10 shows an example in which a crack Cr has been extracted. The crack Cr runs from an end point P1 to an end point P2.
 <Determination of the geometric region>
 To measure the crack Cr, the geometric region to which the crack Cr belongs is first determined (step S180; measurement step). In the present embodiment, since the geometric regions have been extracted by the processing up to step S160, the relationship between pixel positions and geometric regions is known, and the image processing unit 108 can determine the geometric region to which the crack Cr belongs based on this relationship. In the example of FIG. 11, the crack Cr is determined to belong to the geometric region G1A. Alternatively, an image showing the crack Cr (for example, the left image iL1 or the right image iR1) may be displayed on the display unit 112, and the geometric region may be determined based on input entered by the user via the operation unit 114. Specifically, in this case, the user designates the end points P1 and P2 of the crack Cr to be measured on the display screen of the display unit 112 with a pointing device such as a mouse, and the image processing unit 108 determines that the end points P1 and P2 belong to the geometric region G1A.
 <Measurement of damage>
 When the geometric region to which the crack Cr belongs has been determined in step S180, two-dimensional or three-dimensional information on the crack Cr is calculated using the geometric equation (plane equation) determined by the processing up to step S160 (step S190; measurement step). Examples of two-dimensional or three-dimensional information include position, length, width, and area, but the items to be calculated are not limited to these examples; other items such as volume and cross-sectional area may be calculated according to the nature of the subject and the measurement target.
 When calculating the length of the crack Cr, the parallax at the end points P1 and P2 is calculated based on the pixel positions of the end points P1 and P2 and the plane equation of the geometric region G1A. Specifically, the parallax of the end points P1 and P2 (corresponding to the w coordinate) can be obtained from the (u, v) coordinates of the end points P1 and P2 and the plane equation of the geometric region G1A, without performing block matching. Then, as necessary, the (u, v, w) coordinates of the end points P1 and P2 are converted into (x, y, z) coordinates in real space based on the position and shooting direction of the digital camera 104, and the length L (the distance between the end points P1 and P2) is obtained by the following (Formula 4). When the crack Cr is curved, it is divided into a plurality of sections so that each section can be regarded as a straight line, and the length of the crack Cr is obtained by summing the lengths of the sections.
    L = {(x1 − x2)² + (y1 − y2)² + (z1 − z2)²}^(1/2)     …(Formula 4)
 Here, the coordinates of the end points P1 and P2 in real space are (x1, y1, z1) and (x2, y2, z2), respectively.
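Formula 4 and the section-summing treatment of a curved crack can be sketched as follows; the coordinate values are illustrative only.

```python
import math

def segment_length(p1, p2):
    # Formula 4: Euclidean distance between end points in real space.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def crack_length(points):
    # A curved crack is split into straight sections whose lengths are summed.
    return sum(segment_length(p, q) for p, q in zip(points, points[1:]))

# End points P1, P2 in (x, y, z), units of metres (illustrative values):
print(segment_length((0.0, 0.0, 2.0), (0.3, 0.4, 2.0)))   # 0.5
print(crack_length([(0, 0, 0), (3, 4, 0), (3, 4, 12)]))   # 5 + 12 = 17.0
```

The same segment-summing approach extends to any polyline approximation of the crack: the finer the subdivision, the closer the sum is to the true curved length.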
 As described above, in the measurement device 100 and the measurement method according to the present embodiment, the same geometric region is extracted based on block matching of the reduced images, so the processing can be performed at high speed, and different geometric regions can thereby be separated, identified, and measured with high accuracy. In addition, parallax is calculated for representative points extracted from the images before reduction by performing block matching within a range set based on the block matching of the reduced images, and geometric equations are determined based on the calculated parallax, allowing the processing to be performed both quickly and accurately. Thus, according to the measurement device 100 and the measurement method of the present embodiment, two-dimensional or three-dimensional information on the measurement target can be calculated at high speed and with high accuracy.
 Although the embodiments of the present invention have been described above, the present invention is not limited to the above-described embodiments, and various modifications are possible without departing from the spirit of the present invention.
1 Bridge
2 Floor slab
3 Main girder
3A Joint
100 Measurement device
102 Image acquisition unit
104 Digital camera
104L Optical system for left images
104R Optical system for right images
106 Image input unit
108 Image processing unit
110 Recording unit
112 Display unit
114 Operation unit
AL Block
AR Block
E1 Boundary line
E2 Boundary line
F Plane equation
G1 Geometric region
G1A Geometric region
G2 Geometric region
G2A Geometric region
G3 Geometric region
G3A Geometric region
u Horizontal direction
P1 End point
P2 End point
S1 to S8 Steps of geometric equation determination and geometric region extraction
S100 to S190 Steps constituting the measurement method
v Vertical direction
f4 Representative point
f5 Representative point
f6 Representative point
iL0 Left image
iL1 Left image
iL2 Left image
iR0 Right image
iR1 Right image
iR2 Right image
δ Distance

Claims (12)

  1.  A measurement device comprising:
     an image acquisition unit that acquires a plurality of images obtained by photographing the same subject from a plurality of viewpoints;
     a reduced image generation unit that reduces each of the plurality of images to generate a plurality of reduced images;
     a geometric region extraction unit that extracts a first geometric region, which is a geometric region in the plurality of reduced images, based on parallax obtained by block matching of the plurality of reduced images and on pixel positions in the plurality of reduced images;
     a geometric equation determination unit that determines, based on the extracted first geometric region, a geometric equation representing a second geometric region, which is a geometric region in the plurality of images corresponding to the first geometric region; and
     a measurement unit that calculates two-dimensional information or three-dimensional information on a measurement target included in the subject using the determined geometric equation.
  2.  The measurement device according to claim 1, wherein the geometric region extraction unit determines a geometric equation representing the first geometric region, and extracts the first geometric region based on the determined geometric equation representing the first geometric region.
  3.  The measurement device according to claim 1 or 2, wherein the geometric region extraction unit extracts, as pixels belonging to the first geometric region, pixels whose distance from the first geometric region is equal to or less than a threshold.
  4.  The measurement device according to any one of claims 1 to 3, wherein the geometric equation determination unit calculates parallax for a plurality of representative points extracted from the second geometric region, and determines a geometric equation representing the second geometric region based on the pixel positions of the plurality of representative points and the parallax calculated for the plurality of representative points.
  5.  The measurement device according to claim 4, wherein the geometric equation determination unit sets a block matching range for the plurality of images based on a result of the block matching of the plurality of reduced images, and obtains the parallax of the plurality of representative points by performing block matching of the plurality of images within the set range.
  6.  The measurement device according to any one of claims 1 to 5, wherein the measurement unit determines to which geometric region among the second geometric regions a pixel indicating the measurement target belongs, and calculates the two-dimensional information or the three-dimensional information based on a geometric equation representing the determined geometric region and the pixel position of the pixel indicating the measurement target.
  7.  The measurement device according to claim 6, wherein the measurement unit calculates the parallax of the pixel indicating the measurement target based on the geometric equation representing the determined geometric region and the pixel position of the pixel indicating the measurement target, and calculates the two-dimensional information or the three-dimensional information based on the calculated parallax.
  8.  The measurement device according to any one of claims 1 to 7, wherein the geometric region extraction unit performs at least one of image processing for converting the plurality of images into grayscale images and image processing for rectifying the plurality of images, and the plurality of reduced images are generated from the images subjected to the at least one image processing.
  9.  The measurement device according to any one of claims 1 to 8, wherein the geometric equation determination unit determines a geometric equation representing any one of a plane, a cylindrical surface, and a spherical surface.
  10.  The measurement device according to any one of claims 1 to 9, wherein the subject is a concrete structure, and the measurement target is damage to the concrete structure.
  11.  The measurement device according to any one of claims 1 to 10, further comprising an optical system that acquires the plurality of images.
  12.  A measurement method comprising:
     an image acquisition step of acquiring a plurality of images obtained by photographing the same subject from a plurality of viewpoints;
     a reduced image generation step of reducing each of the plurality of images to generate a plurality of reduced images;
     a geometric region extraction step of extracting a first geometric region, which is a geometric region in the plurality of reduced images, based on parallax obtained by block matching of the plurality of reduced images and on pixel positions in the plurality of reduced images;
     a geometric equation determination step of determining, based on the extracted first geometric region, a geometric equation representing a second geometric region, which is a geometric region in the plurality of images corresponding to the first geometric region; and
     a measurement step of calculating two-dimensional information or three-dimensional information on a measurement target included in the subject using the determined geometric equation.
PCT/JP2017/000500 2016-01-15 2017-01-10 Measurement device and measurement method WO2017122640A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-005940 2016-01-15
JP2016005940 2016-01-15

Publications (1)

Publication Number Publication Date
WO2017122640A1 true WO2017122640A1 (en) 2017-07-20

Family

ID=59311310

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/000500 WO2017122640A1 (en) 2016-01-15 2017-01-10 Measurement device and measurement method

Country Status (1)

Country Link
WO (1) WO2017122640A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001319229A (en) * 2000-05-10 2001-11-16 Toyota Central Res & Dev Lab Inc Correlation calculation method for image
JP2002257744A (en) * 2001-03-02 2002-09-11 Takenaka Komuten Co Ltd Method and device for inspecting defect of concrete
JP2003166804A (en) * 2001-12-03 2003-06-13 Fujimori Gijutsu Kenkyusho:Kk Space coordinate value measuring method, its measuring device and defect inspection device
JP2015125685A (en) * 2013-12-27 2015-07-06 Kddi株式会社 Space structure estimation device, space structure estimation method, and space structure estimation program
JP2015190978A (en) * 2014-03-28 2015-11-02 株式会社Cubic Image measuring device for structure with square or truncated cone shape, and method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FUJIWARA, MASANOBU ET AL.: "Direct Estimation of Multiple Planar Regions and Plane Parameters via Graph-Cuts using Stereo Images", IPSJ SIG Technical Report: Computer Vision and Image Media (CVIM), no. 176, 15 April 2011 (2011-04-15), pages 1-8 *

Similar Documents

Publication Publication Date Title
US9972067B2 (en) System and method for upsampling of sparse point cloud for 3D registration
US6671399B1 (en) Fast epipolar line adjustment of stereo pairs
JP5297403B2 (en) Position / orientation measuring apparatus, position / orientation measuring method, program, and storage medium
KR101694292B1 (en) Apparatus for matching stereo image and method thereof
CN108074267B (en) Intersection point detection device and method, camera correction system and method, and recording medium
JP2008082870A (en) Image processing program, and road surface state measuring system using this
JP2005037378A (en) Depth measurement method and depth measurement device
JP2007333679A (en) Three-dimensional position correcting apparatus
CN112381847B (en) Pipeline end space pose measurement method and system
Bethmann et al. Semi-global matching in object space
KR102481896B1 (en) System and method for establishing structural exterior map using image stitching
EP4066162A1 (en) System and method for correspondence map determination
KR20110089299A (en) Stereo matching process system, stereo matching process method, and recording medium
JP6285686B2 (en) Parallax image generation device
JP6411188B2 (en) Stereo matching device, stereo matching program, and stereo matching method
JP4394487B2 (en) Stereo image processing device
JP4935769B2 (en) Plane region estimation apparatus and program
KR100792172B1 (en) Apparatus and method for estimating fundamental matrix using robust correspondence point
JP6456084B2 (en) Image processing apparatus, image processing method, and program
WO2017122640A1 (en) Measurement device and measurement method
JP7067479B2 (en) Displacement measuring device, displacement measuring system, displacement measuring method and program
KR101923581B1 (en) Normal vector extraction apparatus and method thereof based on stereo vision for hull underwater inspection using underwater robot
JP6852406B2 (en) Distance measuring device, distance measuring method and distance measuring program
WO2017187950A1 (en) Image processing device and image processing method
Xu et al. Monocular Video Frame Optimization Through Feature-Based Parallax Analysis for 3D Pipe Reconstruction

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 17738394; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: DE
122  Ep: pct application non-entry in european phase
     Ref document number: 17738394; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: JP