WO2005040721A1 - 3d automatic measuring apparatus - Google Patents

3d automatic measuring apparatus

Info

Publication number
WO2005040721A1
WO2005040721A1 (PCT/JP2004/015766)
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
point
measurement
dimensional
Prior art date
Application number
PCT/JP2004/015766
Other languages
French (fr)
Japanese (ja)
Inventor
Waro Iwane
Original Assignee
Waro Iwane
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Waro Iwane filed Critical Waro Iwane
Priority to JP2005514989A priority Critical patent/JP4545093B2/en
Publication of WO2005040721A1 publication Critical patent/WO2005040721A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying

Definitions

  • The present invention relates to a surveying device that measures the size of a desired object, the distance between objects, and the like, based on image data captured by a camera.
  • Specifically, the present invention analyzes a moving image captured by a 360-degree omnidirectional camera. This enables highly accurate three-dimensional measurement of any object in the image: by arbitrarily designating two points, a start point and an end point, freely across the multiple frame images taken by the camera, the three-dimensional distance between the two specified points is measured.
  • The present invention thus relates to a 3D automatic surveying device that, by designating more than two points, can also three-dimensionally measure the area and volume of a desired object or the like.
  • This type of image surveying is conventionally based on, for example, a stereo method that measures distance from the parallax of images obtained by two cameras installed in parallel, and is used as a simple surveying technique (Patent Documents 1 and 2).
  • Patent Document 1 JP 08-278126 A
  • Patent Document 2 JP-A-2000-283753
  • The present inventor has found that, by extracting a sufficient number of feature points from a plurality of frame images of a moving image obtained from a 360-degree omnidirectional camera,
  • the three-dimensional coordinates indicating the relative positions of desired feature points, together with the camera position and rotation angle, can be determined with high precision.
  • Even though a single moving 360-degree omnidirectional camera is subject to movement and shake,
  • high-precision surveying that is not affected by such factors can be realized.
  • The present invention has been proposed to solve the problems of the above-described conventional technology. By analyzing a moving image acquired from a 360-degree omnidirectional camera, it eliminates the need for a plurality of cameras.
  • Furthermore, by specifying two or more points, any area or volume in the image can be measured in three dimensions. The purpose of the present invention is to provide a 3D automatic surveying device capable of such measurement.
  • To this end, the 3D automatic surveying device of the present invention includes an omnidirectional image capturing unit that uses a moving 360-degree omnidirectional camera to capture a moving image or continuous still images including a desired measurement point and a predetermined reference point whose three-dimensional absolute coordinates are known;
  • an image recording unit that records the images captured by the omnidirectional image capturing unit;
  • a feature point extraction unit that extracts, as feature points, portions of the images recorded in the image recording unit that have visual features other than the measurement points;
  • a measurement point identification unit that automatically extracts measurement points in the images recorded in the image recording unit;
  • a reference point identification unit that automatically extracts reference points in the images recorded in the image recording unit, and a corresponding point tracking unit that tracks the measurement points, reference points, and feature points in each frame image and associates them across frames;
  • a vector calculation unit that calculates the three-dimensional relative coordinates of the measurement points, reference points, and feature points associated by the corresponding point tracking unit and, if necessary, the camera vector indicating the position and rotation of the camera; an error minimization processing unit that repeats the calculation in the vector calculation unit as overlapping computations so as to minimize the error of the obtained three-dimensional relative coordinates, and performs statistical processing;
  • an absolute coordinate acquisition unit that, from the known three-dimensional absolute coordinates of the reference points, converts the three-dimensional relative coordinates obtained by the vector calculation unit into the absolute coordinate system and assigns three-dimensional absolute coordinates to the measurement points, reference points, and feature points; a measurement data recording unit that records the final absolute coordinates thus assigned; and a display unit that displays the measurement data recorded in the measurement data recording unit.
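The vector calculation described above amounts to recovering the 3-D relative coordinates of a tracked point from two or more camera positions. The patent does not disclose a concrete algorithm, so the following is only an illustrative sketch (function name and the assumption of known camera centres and viewing directions are hypothetical) of midpoint triangulation from two frames:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Recover a 3-D point from two frames: take the midpoint of the
    closest approach of the two viewing rays, where c is a camera
    centre and d the direction toward the tracked feature."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # c1 + t1*d1 ~= c2 + t2*d2  ->  solve [d1, -d2] @ [t1, t2] = c2 - c1
    A = np.stack([d1, -d2], axis=1)
    (t1, t2), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2
    return (p1 + p2) / 2.0
```

Repeating such a unit calculation over many frame pairs and averaging the results corresponds to the overlapping computation performed by the error minimization processing unit.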
  • Alternatively, the 3D automatic surveying apparatus of the present invention includes an omnidirectional image capturing unit with a moving 360-degree omnidirectional camera that captures a moving image or continuous still images including a desired measurement point and a predetermined reference point whose three-dimensional absolute coordinates are known;
  • an image recording unit that records the images captured by the omnidirectional image capturing unit, and a feature point extraction unit that extracts, as feature points, portions having visual features from the images recorded in the image recording unit;
  • a reference point identification unit that automatically extracts reference points in the images recorded in the image recording unit, and a corresponding point tracking unit that tracks the reference points and feature points in each frame image and associates them;
  • a vector calculation unit that calculates, from the reference points and feature points associated by the corresponding point tracking unit, the three-dimensional relative coordinates of the camera vector indicating the position and rotation of the camera;
  • an error minimization processing unit that repeats the calculation in the vector calculation unit as overlapping computations to minimize the error of the three-dimensional relative coordinates, and performs statistical processing; an absolute coordinate acquisition unit that, from the known three-dimensional absolute coordinates of the reference points,
  • converts the three-dimensional relative coordinates of the camera calculated by the vector calculation unit into the absolute coordinate system and assigns three-dimensional absolute coordinates;
  • a measurement point identification unit that automatically extracts measurement points in the images recorded in the image recording unit;
  • a measurement point tracking unit that tracks the measurement points extracted by the measurement point identification unit in each frame image and associates them; a measurement point measurement calculation unit that calculates the measured values of the measurement points from the camera vector obtained by the vector calculation unit; a measurement data recording unit that records the absolute coordinates of the measurement points; and a display unit that displays the measurement data recorded in the measurement data recording unit.
  • In the 3D automatic surveying device of the present invention, the reference points may include, together with or in place of reference points whose three-dimensional absolute coordinates are known, length reference points whose mutual length is known; the vector calculation unit then calculates the distance between two length reference points by computation,
  • and the error minimization processing unit repeats the overlapping computation and performs statistical processing so that the distance between the two length reference points obtained by the vector calculation unit matches the known length of the length reference points.
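A minimal sketch of the scale calibration against length reference points might look as follows (the function and data layout are assumptions for illustration; averaging over several references stands in for the patent's repeated overlapping computation and statistical processing):

```python
import numpy as np

def calibrate_scale(points, length_refs):
    """Rescale relative 3-D coordinates using length reference points.

    points      -- dict name -> 3-vector in the arbitrary reconstruction scale
    length_refs -- list of (name_a, name_b, known_length_metres)
    Returns (scaled_points, scale), where scale is averaged over all
    length references to reduce per-reference error.
    """
    scales = []
    for a, b, known in length_refs:
        measured = np.linalg.norm(points[a] - points[b])
        scales.append(known / measured)
    scale = float(np.mean(scales))
    return {k: v * scale for k, v in points.items()}, scale
```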
  • The 3D automatic surveying device of the present invention may include a measurement point designation working unit for designating an arbitrary measurement point in an image recorded in the image recording unit,
  • and a reference point designation working unit for designating an arbitrary reference point in the recorded image; together these form an in-image preparation working unit by which the measurement points and reference points are designated and extracted.
  • The vector calculation unit may take any two frame images Fn and Fn+m used for the three-dimensional relative coordinate calculation of a measurement point, a reference point, a feature point, or the camera vector as one unit calculation,
  • and the configuration is such that the scales of the results obtained by calculating the same point a plurality of times are adjusted and integrated so that the error of each three-dimensional relative coordinate is minimized, thereby determining the final three-dimensional relative coordinates.
  • The vector calculation unit performs the unit calculation with the frame interval m set larger as the distance from the camera to the measurement point, reference point, or feature point increases.
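The patent only states that the frame interval m grows with distance; one plausible (entirely hypothetical) heuristic is to keep the camera baseline between the two frames above a fixed fraction of the feature depth:

```python
import math

def choose_frame_interval(depth, speed, fps, min_ratio=0.05, m_max=100):
    """Pick the frame interval m of a unit calculation so that the
    camera baseline between frames Fn and Fn+m is at least `min_ratio`
    of the feature depth.  Heuristic and parameters are assumptions,
    not taken from the patent.

    depth -- estimated camera-to-feature distance [m]
    speed -- camera speed [m/s], fps -- frame rate [frames/s]
    """
    baseline_per_frame = speed / fps          # camera travel per frame
    m = math.ceil(min_ratio * depth / baseline_per_frame)
    return max(1, min(m, m_max))              # clamp to a usable range
```

Nearby features thus get small intervals (large apparent motion already), while distant features get wider baselines for a stable triangulation angle.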
  • The vector calculation unit deletes feature points whose obtained three-dimensional relative coordinates have a large error distribution and, if necessary, recalculates based on the other feature points, thereby improving the accuracy of the measurement point calculation.
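The deletion of feature points with a large error distribution could be sketched as a simple statistical outlier rejection before recalculation (the threshold rule and names are illustrative assumptions, not the patent's method):

```python
import numpy as np

def prune_outliers(coords, errors, k=2.0):
    """Drop feature points whose error estimate exceeds the mean by
    more than k standard deviations, returning the surviving 3-D
    coordinates and a boolean keep-mask.

    coords -- (N, 3) array of 3-D relative coordinates
    errors -- (N,) per-point error estimates (e.g. residuals)
    """
    errors = np.asarray(errors, dtype=float)
    threshold = errors.mean() + k * errors.std()
    keep = errors <= threshold
    return np.asarray(coords)[keep], keep
```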
  • The 3D automatic surveying device of the present invention may take an arbitrary measurement point or an arbitrary feature point in any image recorded in the image recording unit as a start point, designate an end point in the same image or in another, different image, and determine by a distance calculation unit the three-dimensional distance between the designated start point and end point.
  • The 3D automatic surveying apparatus of the present invention may designate a plurality of points within an arbitrary single image, or across different images, recorded in the image recording unit,
  • combine a plurality of the three-dimensional distance measurements between start and end points obtained by the distance calculation unit, and provide an area/volume calculation unit that calculates by computation the area or volume of a desired object within the same image or across different images.
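Once three-dimensional coordinates are available, the distance, area, and (by extension) volume calculations are elementary vector operations. A small sketch, assuming designated points are already expressed as 3-D coordinates:

```python
import numpy as np

def distance_3d(p, q):
    """Three-dimensional distance between a start point and an end point."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def polygon_area_3d(pts):
    """Area of a planar polygon given its 3-D vertices in order: half the
    magnitude of the summed pairwise cross products (the sum is invariant
    to translation, so any origin may be used)."""
    pts = np.asarray(pts, dtype=float)
    total = np.zeros(3)
    for i in range(len(pts)):
        total += np.cross(pts[i], pts[(i + 1) % len(pts)])
    return 0.5 * float(np.linalg.norm(total))
```

For example, three designated points bound a triangle whose area follows directly; volumes can be assembled from such faces by tetrahedral decomposition.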
  • The 3D automatic surveying device of the present invention may further include a traveling direction control unit that fixes or controls, in the traveling direction, the image obtained by the omnidirectional image capturing unit by using the camera vector obtained by the vector calculation unit;
  • an image vertical plane development unit that develops the image stabilized in the traveling direction by the traveling direction control unit onto a vertical plane, and a road surface basic model generation unit that generates a basic shape model of the road surface with the parameters of the road surface shape left undefined;
  • a road surface three-dimensional measurement unit that measures the three-dimensional coordinates of the road surface from the image of the road surface developed onto the vertical plane by the image vertical plane development unit;
  • a road surface transparent CG generation unit that obtains the parameters of the road surface shape from the measurement data of the road surface three-dimensional measurement unit and generates a transparent CG of the road surface;
  • a texture averaging unit that synthesizes the transparent CG generated by the road surface transparent CG generation unit with the road surface image stabilized in the traveling direction by the traveling direction control unit, and adds and averages the road surface textures
  • to reduce noise in the image, dividing the road surface into blocks if necessary and flexibly joining the characteristic parts of the road surface without changing the texture order before sending the texture output to the averaging step;
  • an object area cutout unit that roughly cuts out the areas of road surface figures such as road markings and obstacles from the noise-reduced image of the texture averaging unit;
  • a road marking recognition and coordinate acquisition unit that recognizes a target object from the object area cut out by the object area cutout unit and obtains its coordinates; and, for an object whose coordinates have been obtained,
  • a three-dimensional map generation unit that generates a three-dimensional map of the road surface by inputting each vertex of the polygon constituting the object and reconstructing the output of the measurement data recording unit in which the absolute coordinates were obtained; the device can thereby constitute a three-dimensional map generation device.
  • As described above, according to the present invention, by using 360-degree omnidirectional images, a sufficient number of feature points can be extracted from a plurality of frame images of a moving image, and the three-dimensional relative coordinates indicating the relative positions of many feature points, including a desired measurement point, can be obtained with high accuracy. Note that a normal image can be treated as a part of a 360-degree omnidirectional image, in the same way as a 360-degree omnidirectional image; however, since its accuracy is lower, it is preferable to use a 360-degree omnidirectional image whenever possible.
  • The obtained three-dimensional relative coordinates can be converted into an absolute coordinate system based on the known three-dimensional absolute coordinates of reference points obtained in advance by surveying or the like, so that absolute coordinates such as latitude and longitude can be assigned.
  • Alternatively, the scale may be determined in advance from a known length obtained by surveying, or by placing an object of known length around the measurement point; even when absolute coordinate values cannot be obtained, the scale can thus be corrected and measurement results obtained. Accordingly, in the present invention, in principle, a single 360-degree omnidirectional camera captures images while moving arbitrarily in free space, and a desired survey point is designated in the image; alternatively, by photographing survey points marked in advance and analyzing the images, extremely accurate 3D surveying can be performed.
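The conversion from relative to absolute coordinates can be realised, for example, as a similarity-transform fit (Umeyama/Kabsch-style alignment) against the surveyed reference points. The patent does not specify a method, so this is an illustrative sketch under that assumption:

```python
import numpy as np

def relative_to_absolute(ref_rel, ref_abs, pts_rel):
    """Fit a similarity transform (scale s, rotation R, translation t)
    that maps reference points from the relative reconstruction frame
    onto their known absolute coordinates, then apply it to the
    remaining measurement points."""
    A = np.asarray(ref_rel, float)   # (N, 3) relative coordinates
    B = np.asarray(ref_abs, float)   # (N, 3) surveyed absolute coordinates
    ca, cb = A.mean(0), B.mean(0)
    A0, B0 = A - ca, B - cb
    U, S, Vt = np.linalg.svd(A0.T @ B0)
    d = np.sign(np.linalg.det(U @ Vt))            # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = (U @ D @ Vt).T                            # best-fit rotation
    s = (S * [1.0, 1.0, d]).sum() / (A0 ** 2).sum()  # least-squares scale
    t = cb - s * R @ ca
    return s * (R @ np.asarray(pts_rel, float).T).T + t
```

At least three non-collinear coordinate reference points are needed for a unique fit; using more of them lets the least-squares fit absorb per-point error, in the spirit of the patent's statistical processing.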
  • In this way, the three-dimensional absolute coordinates of a desired object and of a large number of feature points can be obtained,
  • and errors can be minimized as much as possible,
  • so that high-precision three-dimensional measurement can be performed for any given object. That is, in the present invention, measurement is performed not by parallax between two cameras but by moving a single camera and analyzing a moving image composed of a large number of frame images including the desired measurement point; since many frame images containing the same measurement point can be used, the calculation can be performed with high accuracy and sufficient information.
  • Furthermore, the three-dimensional distance between an arbitrarily specified start point and end point can be measured.
  • By using multiple frame images, high-precision three-dimensional distance measurement is possible between any two specified points, without being restricted by the distance between them.
  • By specifying three or more points, it becomes possible to three-dimensionally measure the area and volume of an arbitrary object or region in an image or over a plurality of images.
  • FIG. 1 is a block diagram showing a schematic configuration of a 3D automatic surveying device according to a first embodiment of the present invention.
  • FIG. 2 is a block diagram showing a schematic configuration of a 3D automatic surveying device according to another embodiment of the first embodiment of the present invention.
  • FIG. 3 is an explanatory view showing a specific camera vector detection method in the 3D automatic surveying device according to one embodiment of the present invention.
  • FIG. 4 is an explanatory view showing a specific camera vector detection method in the 3D automatic surveying device according to one embodiment of the present invention.
  • FIG. 5 is an explanatory diagram showing a specific method for detecting a camera vector in the 3D automatic surveying device according to one embodiment of the present invention.
  • FIG. 6 is an explanatory diagram showing a desirable feature point designation mode in a camera vector detection method by the 3D automatic surveying device according to one embodiment of the present invention.
  • FIG. 7 is a graph showing an example of three-dimensional coordinates of a feature point and a camera vector obtained by the 3D automatic surveying device according to one embodiment of the present invention.
  • FIG. 8 is a graph showing an example of three-dimensional coordinates of a feature point and a camera vector obtained by the 3D automatic surveying device according to one embodiment of the present invention.
  • FIG. 9 is a graph showing an example of three-dimensional coordinates of a feature point and a camera vector obtained by the 3D automatic surveying device according to one embodiment of the present invention.
  • FIG. 10 is an explanatory diagram showing a case where a plurality of feature points are set according to the distance between the camera and the feature points and a plurality of calculations are repeatedly performed in the 3D automatic surveying device according to one embodiment of the present invention.
  • FIG. 11 is a block diagram showing a schematic configuration in a case where the 3D automatic surveying device according to the first embodiment of the present invention shown in FIG. 1 or FIG. 2 measures an arbitrary measurement point based on a camera vector already obtained.
  • FIG. 12 is a block diagram showing a schematic configuration of a 3D automatic surveying device according to a second embodiment of the present invention.
  • FIG. 13 is a flowchart showing a procedure of a surveying process in the 3D automatic surveying device shown in FIG. 12.
  • FIG. 14 is an explanatory diagram showing a procedure for measuring a three-dimensional distance between arbitrary two points in the 3D automatic surveying device shown in FIG. 12.
  • FIG. 15 is an explanatory diagram showing a procedure for calculating the area or volume of an arbitrary region in the 3D automatic surveying device shown in FIG. 12.
  • FIG. 16 is an explanatory diagram showing an example of an image in which an arbitrary point for obtaining a three-dimensional distance is designated in the same image by the 3D automatic surveying device shown in FIG. 12.
  • FIG. 17 is a block diagram showing a schematic configuration of a 3D automatic surveying device according to a third embodiment of the present invention.
  • FIG. 18 is a block diagram showing a schematic configuration of a 3D automatic surveying device according to a fourth embodiment of the present invention.
  • FIG. 19 is a diagram showing an example of a three-dimensional map generated by the 3D automatic surveying device according to the fourth embodiment of the present invention, where (a) is a sectional view of a road represented by the three-dimensional map, (b) is a three-dimensional map of the road shown in (a), taken as an example, and (c) is a view showing an operator part used for the three-dimensional map shown in (b).
  • FIG. 20 is a three-dimensional view of the road shown in FIG. 19, in which an operator part (CG part) of a road sign is combined.
  • The 3D automatic surveying device of the present invention described below is realized by processing, means, and functions executed by a computer according to the instructions of a program (software).
  • The program sends commands to each component of the computer and performs the predetermined processing and functions described below: for example, automatic extraction of feature points, automatic tracking of the extracted feature points, calculation of the three-dimensional coordinates of the feature points, and calculation of the camera vector.
  • Each process and means in the 3D automatic surveying device and the image stabilizing device is realized by specific means in which the program and the computer cooperate.
  • The program is provided, for example, on a magnetic disk, an optical disk, a semiconductor memory, or any other computer-readable recording medium; the read program is installed in the computer and executed. The program may also be loaded directly into the computer via a communication line and executed without using a recording medium.
  • FIGS. 1 and 2 are block diagrams each showing a schematic configuration of a 3D automatic surveying device according to an embodiment of the present invention.
  • The 3D automatic surveying system includes an on-site preparation work unit 10 for performing preparation work, such as designation of survey points, in advance, and a 3D automatic surveying device 100 for performing surveying processing within the captured camera images.
  • The 3D automatic surveying apparatus of the embodiment shown in FIG. 2 has, instead of the on-site preparation work section 10 shown in FIG. 1, an in-image preparation work unit 20 as the means for performing preparation work such as designation of survey points in advance.
  • The on-site preparation work section 10 is a means for performing on-site preparation work prior to the measurement work and, as shown in FIG. 1, includes an on-site measurement point designation work unit 11 and an on-site reference point designation work unit 12.
  • The on-site measurement point designation work unit 11 designates all desired measurement points.
  • the designation of the measurement point can be performed, for example, by adding a mark indicating the measurement point at the site, placing an object indicating the measurement point, or the like. With this designation, the 3D automatic surveying device 100 described later can extract and specify measurement points in the captured camera image.
  • The on-site reference point designation work unit 12 designates, prior to the measurement work, the points to serve as predetermined reference points.
  • A reference point is, as described later, a point of reference for converting the three-dimensional relative coordinates into absolute coordinates, that is, a point whose three-dimensional absolute coordinates are known in advance by some method (a coordinate reference point).
  • The reference points can include reference points of known length (length reference points), together with or in place of reference points whose three-dimensional absolute coordinates are known.
  • A length reference point consists of two or more points, the distance between which is treated as known. For example, length reference points with an interval of 1 meter can be obtained by setting up a large number of 1-meter sticks or the like. Shooting is then performed so that at least one length reference point appears in each image. By providing such length reference points, a scale calibration can be performed for each image based on their known lengths, as described later, and the accuracy can be greatly improved.
  • Setting a length reference point can be considered equivalent to setting a plurality of coordinate reference points.
  • However, setting a large number of length reference points, which are "lengths", is more effective than setting many coordinate reference points, which are "points". Coordinate reference points allow conversion to absolute coordinates if only two points are set in the entire measurement range, but they are not necessarily observed from all images; compared with setting multiple coordinate reference points, providing a larger number of length reference points is more advantageous in terms of cost and labor.
  • The measurement of the three-dimensional coordinates and lengths of the reference points may be performed by any method; for example, absolute coordinates and lengths can be obtained by a conventionally known measurement method such as triangulation.
  • The on-site reference point designation work unit 12 designates reference points by attaching marks that can be clearly distinguished from the measurement points, or by placing objects or bars indicating the reference points. With this designation, the 3D automatic surveying device 100 described later can extract and identify the predetermined reference points in the captured camera images.
  • The preparation work in the on-site preparation work unit 10 marks all the target measurement points to be measured at the surveying site so that they can be recognized.
  • These marks and the like are to be automatically extracted by image recognition on the images in the 3D automatic surveying device 100. Therefore, to enable automatic extraction by image recognition, simple figures and the like are suitable, given features that cannot be confused with other measurement points, reference points, or other figures.
  • For example, a mark such as an X may be used, or a colored stake may be driven in at the measurement point.
  • a mark or the like is attached in the case of a coordinate reference point, and a bar or the like is provided in the case of a length reference point.
  • It is preferable that the reference points and the measurement points be given different marks, and that a plurality of reference points each be marked differently.
  • In this way, the measurement points and reference points can be clearly distinguished, and a plurality of reference points can also be distinguished from each other, so that the start point and end point can be easily specified.
  • the work of designating the measurement point and the reference point can be automated by a machine or the like, or can be performed manually by an operator.
  • Alternatively, the on-site work may be omitted, and the reference points detected directly from the image and marked on the image.
  • FIG. 2 shows a 3D automatic surveying apparatus including an in-image preparation work unit 20 for performing the preparation operation of designating measurement points and reference points within the image.
  • The in-image preparation work unit 20 is a means for designating a desired measurement point and a reference point within the image, omitting the on-site preparation work performed at the site by the on-site preparation work unit 10 shown in FIG. 1, and includes an in-image measurement point designation work unit 21 and an in-image reference point work unit 22.
  • the in-image measurement point designation work unit 21 designates a desired measurement point in the video imaged by the surrounding image imaging unit 101 of the 3D automatic surveying device 100 described later.
  • the in-image reference point designating unit 22 designates a predetermined reference point whose absolute coordinates are known in advance in the video captured by the surrounding image capturing unit 101.
  • The measurement points designated by the in-image preparation work unit 20 are extracted and identified in the image by the measurement point identification unit 104 of the 3D automatic surveying device 100 described later, and similarly, the target reference points are extracted and identified in the image by the reference point identification unit 105.
  • an operation is performed in which an operator marks desired measurement points and reference points in the image.
  • the on-site preparation work can be omitted as much as possible.
  • In that sense, the 3D automatic surveying device of the present embodiment can be positioned as an in-image 3D measuring device: all the on-site preparation work for designating the measurement targets is omitted, and the only outdoor work is the image capture by the omnidirectional image capturing unit 101.
  • video feature points including measurement points and reference points are automatically extracted from images taken by a 360-degree omnidirectional camera, and the feature points are automatically tracked between frame images.
  • the camera vector can be obtained first.
  • the camera vector is obtained by automatic extraction and tracking of feature points
  • calibration can be performed based on, for example, an object having a known length in the image, and the absolute length can be obtained. Since the camera height at the time of shooting can also be a reference for the absolute length, it is desirable to keep the camera height at the time of shooting constant.
  • the three-dimensional coordinates of an arbitrary point can be obtained from the camera coordinates. Further, if the three-dimensional coordinates of an arbitrary point are obtained, the three-dimensional distance between two points, or the area or volume can be easily obtained.
  • the 3D automatic surveying apparatus 100 includes an omnidirectional image capturing unit 101, an image recording unit 102, a feature point extracting unit 103, a measurement point specifying unit 104, A reference point specifying unit 105, a corresponding point tracking unit 106, a vector calculation unit 107, an error minimization processing unit 108, an absolute coordinate acquisition unit 109, a measurement data recording unit 110, and a measurement data display unit 111 are provided.
  • The omnidirectional image capturing unit 101 is a 360-degree omnidirectional camera, such as a vehicle-mounted camera, which captures all measurement points and reference points as moving images or continuous still images.
  • In the omnidirectional image capturing unit 101, for example, one camera is mounted on a vehicle or the like, and a desired survey area is photographed using the movement of the vehicle.
  • The in-image survey at a desired measurement point is then performed by subjecting the images captured by the omnidirectional image capturing unit 101 to the image analysis according to the present invention.
  • The image recording unit 102 records the images captured by the omnidirectional image capturing unit 101.
  • The feature point extraction unit 103 extracts, as feature points, portions having visual features other than the designated measurement points and reference points in the images recorded in the image recording unit 102.
  • The feature point extraction by the feature point extraction unit 103 automatically extracts the required number of feature points from an image by image processing techniques. For example, if "corner" portions in an image are designated as features and only the "corner" portions are selectively extracted by image recognition, they become feature points.
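The selective extraction of "corner" portions could be realised, for instance, with a Harris-style corner response; the patent does not fix a particular detector, so this minimal NumPy sketch (finite-difference gradients, 3x3 box-summed structure tensor) is purely illustrative:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    3x3 box-summed structure tensor of the image gradients.  Corners
    give large positive R, edges negative R, flat regions ~0."""
    img = np.asarray(img, dtype=float)
    Iy, Ix = np.gradient(img)                 # finite-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def smooth(a):                            # 3x3 box filter, zero-padded
        out = np.zeros_like(a)
        p = np.pad(a, 1)
        for dy in range(3):
            for dx in range(3):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = smooth(Ixx), smooth(Iyy), smooth(Ixy)
    det = Sxx * Syy - Sxy ** 2
    tr = Sxx + Syy
    return det - k * tr ** 2
```

Thresholding the response and keeping local maxima would then yield the automatically extracted feature points.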
  • the measurement point specifying unit 104 automatically extracts measurement points from the image recorded in the image recording unit 102.
• the reference point specifying unit 105 automatically extracts reference points (coordinate reference points and/or length reference points) from the image recorded in the image recording unit 102.
• the extraction of the measurement points and the reference points in the measurement point specifying unit 104 and the reference point specifying unit 105 is performed automatically by image recognition, using marks attached to the actual measurement points and reference points, or marks and the like added to the image by the preparation work unit 20.
  • Corresponding point tracking section 106 tracks measurement points, reference points, and feature points in each frame image and associates them.
  • the vector calculation unit 107 calculates the three-dimensional coordinates of the measurement point, the reference point, the feature point, and, if necessary, the camera coordinates and the rotation (camera vector).
• the error minimization processing unit 108 repeats the calculation in the vector calculation unit 107 as an overlap calculation so as to minimize the error of the obtained three-dimensional relative coordinates, and performs statistical processing to increase the accuracy of the calculation.
• the absolute coordinate acquisition unit 109 converts the obtained three-dimensional relative coordinates into the absolute coordinate system using the known coordinates of the reference points, and gives absolute coordinates to all of the measurement points, reference points, and feature points, or to necessary predetermined points.
• in addition, the length can be calibrated for each image using the length reference point indicating the length standard, so that a correctly scaled result can be obtained.
• the vector calculation unit 107 obtains the three-dimensional coordinates of both ends of the length reference point and calculates the distance between those two ends from the obtained three-dimensional coordinates. The error minimization processing unit 108 then repeats the overlap calculation and performs statistical processing so that the distance obtained by the vector calculation unit 107 matches the known length of the length reference point.
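• as a numerical illustration of the scale calibration described above, the sketch below (with purely illustrative coordinates, not values from the specification) rescales a set of relative coordinates so that the computed distance between the two ends of the length reference point matches its known length:

```python
import numpy as np

# Scale-calibration sketch: compare the computed distance between the two ends
# of the length reference point with its known surveyed length, and use the
# ratio to rescale the whole relative coordinate set (values are illustrative).
known_length = 2.000                         # known length of the reference bar
end_a = np.array([0.10, 0.20, 0.30])         # computed relative coordinates
end_b = np.array([0.90, 0.80, 0.30])
computed = np.linalg.norm(end_b - end_a)
scale = known_length / computed

points = np.array([[1.0, 1.0, 0.5], [2.0, 0.0, 0.1]])
points_calibrated = points * scale           # every relative coordinate rescaled
print(scale)
```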
  • the coordinate reference point and the length reference point can be used simultaneously, in which case the accuracy can be further improved.
  • the measurement data recording unit 110 calculates and records the final coordinates of the measurement point.
  • the measurement data display section 111 displays the measurement data.
  • the measurement data recorded in the measurement data recording unit 110 and displayed on the measurement data display unit 111 is three-dimensional coordinate information of a measurement point, a reference point, and a feature point.
  • it may be a “table” of numerical values indicating three-dimensional coordinates, or a “point” indicating the position of a measurement point on a map.
  • Numerical values indicating three-dimensional coordinates can be indicated by, for example, values of XYZ coordinates or values of latitude, longitude, and altitude.
• the 3D automatic surveying apparatus 100 having the above-described configuration reads marks and the like indicating measurement points and feature points in a captured image, and calculates their three-dimensional positions by epipolar geometry together with the other visual feature points.
• the accuracy is further improved by using feature points in the image other than the measurement points; these feature points are automatically extracted from the image.
  • the 3D automatic surveying apparatus 100 employs several methods.
• the three-dimensional relative coordinates of the feature points and the camera positions, together with the camera's three-axis rotations, are obtained by epipolar geometry.
• camera vector information is obtained redundantly, and the error can be minimized from this redundant information, so a more accurate camera vector can be obtained.
  • the camera vector refers to a vector of the degree of freedom of the camera.
• a stationary three-dimensional object has six degrees of freedom: position coordinates (X, Y, Z) and rotation angles (Φx, Φy, Φz) about each coordinate axis. Therefore, the camera vector refers to a vector of the six degrees of freedom of the camera: the position coordinates (X, Y, Z) and the rotation angles (Φx, Φy, Φz) about the respective coordinate axes.
• the moving direction could also be included in the degrees of freedom, but it can be derived by differentiating the above six degrees of freedom.
• the camera takes values for the six degrees of freedom in each frame, and a different set of six degree-of-freedom values is determined for each frame.
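• the six degrees of freedom determined per frame can be sketched as a simple data structure (a hypothetical representation for illustration, not code from the specification):

```python
from dataclasses import dataclass

@dataclass
class CameraVector:
    """Hypothetical per-frame camera vector: six degrees of freedom."""
    x: float; y: float; z: float              # position coordinates (X, Y, Z)
    phi_x: float; phi_y: float; phi_z: float  # rotation angles about each axis

# One camera vector is determined for every frame of the moving image;
# here three frames of a camera moving along X are simulated.
frames = [CameraVector(0.0 + 0.1 * i, 0.0, 0.0, 0.0, 0.0, 0.0) for i in range(3)]
print(frames[1])
```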
  • the feature point extraction unit 103 automatically extracts a point or a small area image to be a feature point from a properly sampled frame image.
• the measurement points and the reference points indicated by the marks or the like designated in the image are automatically extracted by the measurement point specifying unit 104 and the reference point specifying unit 105.
  • the corresponding points of the feature points, the measurement points, and the reference points are automatically determined by a corresponding point tracking unit 106 between a plurality of frame images.
• a sufficient number of feature points, which serve as references for detecting the camera vector, are thereby obtained.
  • a desired number of measurement points are specified, and at least two reference points whose absolute coordinates are known are specified.
  • Figures 3 to 5 show examples of correspondences between feature points (or measurement points and reference points) for which correspondence is required between images.
• “+” marks the automatically extracted feature points, and their correspondence is automatically tracked between a plurality of frame images (see corresponding points 1 to 4 shown in FIG. 5).
• as shown in FIG. 6, it is desirable to extract a sufficiently large number of feature points in each image (see the “+” marks in FIG. 6); here, about 100 feature points are extracted.
• in the vector calculation unit 107, the three-dimensional relative coordinates of the extracted feature points, measurement points, and reference points are calculated, and the camera vector is obtained from those three-dimensional relative coordinates. Specifically, the vector calculation unit 107 continuously calculates the relative values of various three-dimensional vectors, such as the positions of a sufficient number of feature points existing between consecutive frames, the position vectors between moving cameras, the three-axis rotation vectors of the cameras, and the vectors connecting the camera positions and the feature points (measurement points, reference points).
• a 360-degree omnidirectional image is used in principle as the camera image, and the camera motion (camera position and camera rotation) is obtained by solving an epipolar equation derived from the epipolar geometry of the 360-degree omnidirectional image.
• the 360-degree omnidirectional image is, for example, a panoramic image, an omnidirectional image, or a 360-degree omnidirectional image captured by a camera with a wide-angle lens or a fisheye lens, a plurality of cameras, a rotating camera, or the like. Since it shows a wider range than video taken by an ordinary camera, it is preferable because a high-precision camera vector can be calculated easily and quickly. It should be noted that the 360-degree omnidirectional image need not be an image covering the entire 4π space; an image covering only part of the full circumference can also be treated as a video for camera vector calculation.
• the translation t and rotation R can be calculated as least-squares solutions by linear algebra. This calculation is applied over a plurality of corresponding frames.
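• a minimal sketch of this least-squares solution of the epipolar equation, using synthetic unit-sphere bearing vectors such as a 360-degree omnidirectional image would provide (the points, rotation, and translation below are assumed test values, not the patent's implementation):

```python
import numpy as np

# Solve the epipolar equation by linear least squares (8-point style).
# Bearings are unit vectors on the viewing sphere, as a 360-degree
# omnidirectional image naturally provides.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 4.0])   # scene points

t = np.array([1.0, 0.2, 0.1])                              # camera-2 position
a = 0.1                                                    # small rotation about Z
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a),  np.cos(a), 0],
              [0, 0, 1]])

p1 = X / np.linalg.norm(X, axis=1, keepdims=True)          # bearings, camera 1
X2 = (X - t) @ R                                           # points in camera-2 frame
p2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)        # bearings, camera 2

# Each correspondence gives one linear equation p2^T E p1 = 0 in the 9
# entries of the essential matrix E; the smallest singular vector solves it.
A = np.array([np.outer(q, p).ravel() for p, q in zip(p1, p2)])
_, _, Vt = np.linalg.svd(A)
E = Vt[-1].reshape(3, 3)                                   # least-squares solution

residual = max(abs(q @ E @ p) for p, q in zip(p1, p2))
print(residual)
```

Decomposing E into t and R (e.g. via its SVD) then recovers the camera motion; the sketch stops at verifying that the fitted E satisfies the epipolar constraint for all correspondences.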
• any image can be used in principle for the camera vector calculation, but a wide-angle image such as the 360-degree omnidirectional image shown in Fig. 5 makes it easier to select many feature points and enables long tracking. Therefore, in the present embodiment, the 360-degree omnidirectional image is used for the camera vector calculation: the tracking distance of the feature points can be lengthened, a sufficient number of feature points can be selected, and feature points convenient for long, medium, and short distances can be chosen. In addition, when correcting the rotation vector, the calculation can be simplified by adding a polar rotation conversion process. For these reasons, more accurate calculation results can be obtained.
• Fig. 5 shows the spherical image of the entire 360-degree surroundings, composed of images taken by one camera (or multiple cameras), in a map format in order to make the processing in the 3D automatic surveying device 100 easier to understand.
• the error minimization processing unit 108 uses the plurality of camera positions corresponding to each frame and the plurality of feature points to generate a plurality of vectors based on each feature point through a plurality of calculation equations; statistical processing is then performed so that the scatter of each feature point (measurement point, reference point) and camera position is minimized, and the final vectors are obtained. For example, for the camera positions, camera rotations, and multiple feature points over multiple frames, the optimal least-squares solution is estimated by the Levenberg-Marquardt method, and the errors are converged to obtain the three-dimensional relative coordinates of the camera positions, the camera rotation matrices, the feature points, the measurement points, and the reference points.
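• the Levenberg-Marquardt idea can be illustrated with a hand-rolled sketch on a toy problem: refining the coordinates of one feature point so that bearing rays from several known camera positions pass through it (the geometry, damping constants, and residual definition are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

# Minimal Levenberg-Marquardt loop: damped least squares with a numeric
# Jacobian, accepting steps that reduce the residual and raising the
# damping factor when a step fails.
cams = np.array([[0, 0, 0], [1, 0, 0], [2, 0.5, 0], [0, 1, 0]], float)
P_true = np.array([1.0, 2.0, 3.0])
rays = [(P_true - c) / np.linalg.norm(P_true - c) for c in cams]

def residuals(P):
    # perpendicular deviation of P from each observed bearing ray
    return np.concatenate([np.cross(rays[i], P - cams[i]) for i in range(len(cams))])

P = np.array([0.0, 0.0, 1.0])            # rough initial estimate
lam = 1e-3                               # damping factor
for _ in range(50):
    r = residuals(P)
    J = np.array([(residuals(P + h) - r) / 1e-6
                  for h in np.eye(3) * 1e-6]).T        # numeric Jacobian
    step = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
    if np.linalg.norm(residuals(P + step)) < np.linalg.norm(r):
        P, lam = P + step, lam * 0.5     # accept step, trust the model more
    else:
        lam *= 10.0                      # reject step, damp harder

print(P)
```

A full implementation would stack camera positions, rotations, and all feature points into one parameter vector and use analytic Jacobians; the accept/reject damping logic is the same.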
  • FIGS. 7 to 9 show examples of three-dimensional coordinates of feature points (measurement points and reference points) obtained by the 3D automatic surveying device 100 and camera vectors.
  • 7 to 9 are explanatory diagrams showing the vector detection method of the present embodiment, and show the relative positional relationship between the camera and the object obtained from a plurality of frame images obtained by the moving camera.
• in FIG. 7, the three-dimensional coordinates of the feature points 1 to 4 shown in images 1 and 2 of FIG. 5, and the camera vector moving between image 1 and image 2, are shown.
• the calculation in the 3D automatic surveying apparatus 100 is performed as shown in FIG. 10 in order to obtain the three-dimensional information of feature points, measurement points, reference points, and camera positions with higher accuracy: a plurality of frame intervals are set according to the distance from the camera to the feature points (measurement points, reference points), and the calculation is repeated multiple times.
• the 3D automatic surveying apparatus 100 automatically detects feature points having image characteristics in an image and, focusing on the n-th and n+m-th frame images Fn and Fn+m used for the feature point, measurement point, reference point, or camera vector calculation, performs a unit calculation; this unit calculation, with n and m appropriately set, is repeated.
• here, m is the frame interval. The feature points are classified into multiple stages according to the distance from the camera to the feature points (measurement points, reference points) in the image: m is set larger as the distance from the camera to the feature point becomes longer, and smaller as the distance becomes shorter. The reason is that the farther a feature point is from the camera, the smaller its change in position between images.
• m is set in a plurality of stages, and the calculation proceeds continuously as n advances with the image. In the progression of n and at each stage of m, the overlap calculation is thus performed a plurality of times for the same feature point.
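• the distance-dependent choice of the frame interval m might be sketched as follows (the stage thresholds and m values are assumptions for illustration only; the specification does not give concrete numbers):

```python
# Illustrative rule for choosing the frame interval m: distant feature points
# move little between frames, so a larger m (wider baseline) is used; nearby
# points leave the view quickly, so a smaller m is used.
def frame_interval(distance_m: float) -> int:
    if distance_m > 50.0:
        return 30   # long range
    if distance_m > 10.0:
        return 10   # medium range
    return 3        # short range

print(frame_interval(100.0), frame_interval(20.0), frame_interval(5.0))
```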
• in this way, the m minimum-unit frames between Fn and Fn+m are obtained by simple calculation, while both ends of the camera vectors of those m frames coincide with the Fn and Fn+m camera vectors obtained by high-precision calculation. The scale of the m continuous camera vectors can therefore be adjusted to match the camera vectors of Fn and Fn+m.
• in this way, the calculation processing can be sped up by combining simple calculations while still obtaining high-precision three-dimensional relative coordinates without accumulated error.
  • the scale is adjusted so that the error between each feature point and camera position is minimized.
• the absolute coordinate acquisition unit 109 converts the three-dimensional relative coordinates using known reference points whose absolute coordinates have been measured in advance.
  • the three-dimensional relative coordinates are converted into an absolute coordinate system, and absolute coordinates are given to all points (or required predetermined points) of the measurement points, the reference points, and the feature points.
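• a minimal sketch of this conversion, fitting scale and translation from reference points with known absolute coordinates and applying the result to every point (rotation is omitted for brevity; a Kabsch/Umeyama fit would add it, and all coordinate values below are synthetic assumptions):

```python
import numpy as np

# Convert relative coordinates to the absolute system: fit a similarity
# transform from reference points with known absolute coordinates, then
# apply it to every measured point.
rel = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], float)  # relative
s_true, t_true = 2.0, np.array([100.0, 200.0, 30.0])
abs_known = s_true * rel[:3] + t_true        # three surveyed reference points

mu_r, mu_a = rel[:3].mean(0), abs_known.mean(0)
s = np.linalg.norm(abs_known - mu_a) / np.linalg.norm(rel[:3] - mu_r)  # scale
t = mu_a - s * mu_r                                                    # offset

all_abs = s * rel + t                        # absolute coords of every point
print(all_abs[3])                            # the fourth (non-reference) point
```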
• the measurement data is recorded in the measurement data recording unit 110 and displayed and output via the measurement data display unit 111.
  • the measurement point, the reference point, the feature point, the camera coordinates and the rotation are simultaneously determined by the vector calculation unit 107.
• once the camera vector is determined, a new measurement point, feature point, or any designated point can be easily calculated from the already obtained camera vectors, without recalculation of the camera vectors, as the single apex of a triangle whose base is formed by the two camera positions in two images.
• specifically, in the 3D automatic surveying apparatus 100, the measurement point specifying unit 104 designates a desired measurement point in a 360-degree omnidirectional image for which the camera vector has been obtained with respect to predetermined reference points. The measurement point, designated and extracted automatically or manually, is then tracked and associated between the frame images by the measurement point tracking unit 104a. The tracking of the measurement points in the measurement point tracking unit 104a is performed in the same manner as the corresponding point tracking in the corresponding point tracking unit 106 described above.
• then, in the measurement point measurement calculation unit 104b, based on the camera vectors already obtained, the three-dimensional absolute coordinates can be easily and quickly obtained by calculating the single apex of a triangle whose base joins the two camera positions in the two images. Even in this case, since the accuracy of the camera vectors does not change, the accuracy of a new measurement point, feature point, or any designated point does not change. However, if the camera vectors are calculated again together with the new point, the accuracy generally improves.
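• the "single apex over a base formed by two camera positions" computation can be sketched as a closest-approach triangulation of two bearing rays (a standard construction; the coordinates below are illustrative, not from the specification):

```python
import numpy as np

# Triangulation: with the camera vectors known, a newly designated point is
# the apex of a triangle whose base joins the two camera positions. The
# midpoint of the closest approach of the two bearing rays gives its coordinates.
def triangulate(c1, d1, c2, d2):
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Minimize |(c1 + a*d1) - (c2 + b*d2)| over the ray parameters a, b
    A = np.array([[d1 @ d1, -d1 @ d2], [d1 @ d2, -d2 @ d2]])
    rhs = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    a, b = np.linalg.solve(A, rhs)
    return 0.5 * ((c1 + a * d1) + (c2 + b * d2))

c1, c2 = np.array([0.0, 0, 0]), np.array([2.0, 0, 0])   # two camera positions
P_true = np.array([1.0, 3.0, 1.0])
P = triangulate(c1, P_true - c1, c2, P_true - c2)
print(P)
```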
• the measurement point, the reference point, and the feature point are names distinguished only for the work process; in the coordinate calculation they are essentially equivalent points. Therefore, in the present invention, not only points, places, and regions designated as measurement points in advance, but also any points (designated points) designated afterwards, can have their three-dimensional positions measured, as well as the three-dimensional distance between any two points, and areas and volumes (see the second embodiment described later).
• a sufficient number of feature points are extracted from a plurality of frame images of a moving image obtained by a 360-degree omnidirectional camera.
  • three-dimensional relative coordinates indicating the relative positions of a large number of feature points including a desired measurement point can be obtained with high accuracy.
  • the obtained three-dimensional relative coordinates can be converted into an absolute coordinate system based on the known three-dimensional absolute coordinates with respect to a reference point previously obtained by surveying or the like.
• the absolute coordinates, such as latitude, longitude, and altitude, may be determined in advance by surveying, or the scale may be fixed by a known length in the survey or by placing an object of known length around the measurement point; even if absolute coordinate values cannot be obtained, a correctly scaled measurement result can still be obtained. Accordingly, in the present embodiment, in principle, one camera captures images while moving arbitrarily in free space, and by designating a desired survey point in the image, or by capturing and analyzing images of survey points provided with landmarks, an extremely accurate 3D survey can be performed.
• moreover, rather than relying on the parallax between two cameras, the same measurement point is analyzed across a moving image composed of a large number of frame images including the desired measurement point, obtained by moving one camera; many frame images can therefore be used, and highly accurate calculations can be performed with sufficient information.
  • FIG. 12 is a block diagram showing a schematic configuration of the 3D automatic surveying device according to the second embodiment of the present invention.
• the 3D automatic surveying device shown in the same drawing is a modified embodiment of the above-described first embodiment: to the same configuration as the 3D automatic surveying device shown in the first embodiment (see FIGS. 1 and 2), a distance calculation unit 112 and an area/volume calculation unit 113 are further added.
• the 3D automatic surveying apparatus 100 of the present embodiment shown in Fig. 12 includes a distance calculation unit 112 for obtaining the three-dimensional distance between two desired points based on the absolute coordinate data of the measurement points, reference points, feature points, or camera vectors, and an area/volume calculation unit 113 for obtaining the area or volume of a desired region based on a plurality of two-point distances obtained by the distance calculation unit 112.
• the distance calculation unit 112 designates an arbitrary measurement point or an arbitrary feature point in an arbitrary image recorded in the image recording unit 102 as a start point and an arbitrary measurement point or an arbitrary feature point in the same image or another different image as an end point, and calculates the three-dimensional distance between the designated start point and end point, whether in the same image or in different images, based on the absolute coordinates recorded in the measurement data recording unit 110.
• the area/volume calculation unit 113 designates a plurality of points within the same image or between different images recorded in the image recording unit 102, combines a number of the start-point-to-end-point three-dimensional distance measurements obtained by the distance calculation unit 112, and obtains by calculation the area or volume of a desired object within the same image or between different images.
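• a minimal sketch of the distance and area calculations built on the absolute coordinates (the coordinates are illustrative; the fan-split triangle sum is one simple way, assumed here, to turn a ring of designated points into an area):

```python
import numpy as np

# Distance between two designated points, and area of a region given by a
# ring of designated points, both computed from absolute 3D coordinates.
start = np.array([0.0, 0.0, 0.0])
end   = np.array([3.0, 4.0, 0.0])
dist = np.linalg.norm(end - start)          # 3D distance between the two points

# Fan split: sum triangle areas anchored at the first designated point
pts = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0], [0, 3, 0]], float)
area = sum(0.5 * np.linalg.norm(np.cross(pts[i] - pts[0], pts[i + 1] - pts[0]))
           for i in range(1, len(pts) - 1))
print(dist, area)
```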
• the camera positions and rotations (camera vectors) of all the captured frame images are first determined.
  • an arbitrary measurement point or an arbitrary point is designated as a start point (S003).
  • an arbitrary measurement point or an arbitrary point in the image or another different image is designated as an end point.
  • the designation of the start point and the end point can be performed, for example, with a mouse or the like.
  • the points are regarded as measurement points, and automatic tracking is performed between frame images (S005 to S006).
• the automatic tracking of the corresponding points is performed by the corresponding point tracking unit 106 as in the first embodiment.
  • the coordinate calculation processing of the corresponding point is performed by the vector calculation unit 107, the error minimization processing unit 108, and the absolute coordinate acquisition unit 109, and the data is stored in the measurement data recording unit 110.
• the distance between the two points is then calculated, the absolute coordinates of the start point and the end point now being known.
• this calculation, that is, the three-dimensional measurement, can be performed not only when the start point and the end point are both in the same image but also when they are in different images (see FIG. 14).
  • the obtained three-dimensional distance between the two points can be displayed and output as needed, for example, via the measurement data display unit 111 (S008).
• the area or volume of a desired region or the like is calculated by combining the obtained three-dimensional distances (S010).
  • the area or volume in the three-dimensional coordinate system where the object in the image exists is three-dimensionally measured by calculation, and the result can be displayed and output as necessary (S011).
  • FIG. 16 shows an example of an image in which an arbitrary point for obtaining a three-dimensional distance is designated in the same image by the 3D automatic surveying device of the present embodiment.
  • FIG. 16 (a) is an arbitrary image (camera vector image) for which a camera vector has been obtained, and an arbitrary point can be designated in such an arbitrary image using a mouse or the like. Specifically, as shown in FIG. 16 (b), any two points for obtaining the three-dimensional distance can be specified, and the specified two points are connected by a straight line.
  • the three-dimensional distance between the two designated points is obtained by the above-described calculation, and the result is output and displayed in a predetermined format.
  • the 3D automatic surveying apparatus 100 of the present embodiment utilizes the fact that high-precision absolute coordinates can be obtained for an arbitrary measurement point, and obtains a three-dimensional distance specifying a desired start point and end point. Measurement can be performed.
  • FIG. 17 is a block diagram showing a schematic configuration of the 3D automatic surveying device according to the third embodiment of the present invention.
• the 3D automatic surveying device shown in the figure is a modified embodiment of the above-described first embodiment, in which a plane unevenness surveying device 200 is added to the 3D automatic surveying device shown in the first embodiment (see FIGS. 1, 2 and 11).
• the plane unevenness surveying apparatus 200 of the present embodiment includes a plane detailed image acquisition unit 201, a parallel image recording unit 202, a plane unevenness three-dimensional measurement unit 203, a coordinate integration unit 204, an integrated measurement coordinate recording unit 205, and an integrated measurement data display unit 206.
  • the plane detailed image acquisition unit 201 mounts a plurality of cameras on a vehicle or the like, and captures an uneven plane portion such as a road existing along a traveling path or the like.
  • the parallel image recording unit 202 records a plurality of images having parallax captured by the plane detailed image acquisition unit 201.
  • the plane unevenness three-dimensional measuring unit 203 three-dimensionally measures the unevenness of the plane from the image having the parallax recorded in the parallel image recording unit 202.
• the coordinate integration unit 204 reads out, from the measurement data recording unit 110 of the 3D automatic surveying device 100 (see the first embodiment), the absolute three-dimensional data of the plane portion measured by the plane unevenness three-dimensional measurement unit 203, and integrates it with the coordinates of the plane unevenness data. The coordinate-integrated data is recorded in the integrated measurement coordinate recording unit 205 and, if necessary, displayed and output via the integrated measurement data display unit 206.
• in this manner, basically using the 3D automatic surveying apparatus 100 shown in the above-described first embodiment, a 3D automatic surveying device capable of performing three-dimensional measurement of the road and the unevenness around the road can be realized.
• as in the first embodiment, it is necessary to acquire the three-dimensional coordinates of the entire surroundings, measure the road surface as a surface, and measure its unevenness.
• for this purpose, marks such as markers can be placed densely on the road surface, and the three-dimensional coordinates can be obtained from the images of the road surface in the same manner as for the measurement points in the first embodiment described above.
• that is, as with the measurement points in the first embodiment, three-dimensional relative coordinates are obtained from the images and given known absolute coordinates, whereby the three-dimensional position coordinates are obtained.
• further, the plane detailed image acquisition unit 201 acquires images without time lag by synchronously photographing the desired road surface with a plurality of cameras installed in parallel, and the parallel image recording unit 202 records them. Three-dimensional coordinates are then acquired from the parallax of the recorded video, and the three-dimensional measurement of the road surface is performed in the plane unevenness three-dimensional measurement unit 203.
  • the irregularities obtained here are only relative values and do not have absolute coordinates, so they are still incomplete as distortions in the entire scale. Therefore, the entire scale is detected by the 3D automatic surveying apparatus 100 of the first embodiment, and only the unevenness of the road surface at a short distance is measured by the parallax in the present embodiment. Then, these are coordinate-integrated in the coordinate integrating unit 204. As a result, the unevenness of the road surface can be accurately described.
• note that placing markers on the road surface as described above is preferable as a method of measuring three-dimensional data with high accuracy.
• adding markers enables accurate three-dimensional measurement even from the continuous images of one camera (see the first embodiment).
  • the measurement of unevenness by parallax alone can achieve near-range accuracy but not long-range accuracy due to the limitation of the baseline between cameras.
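• the baseline limitation can be made concrete with the standard disparity-to-depth relation (the baseline and focal length below are assumed numbers for illustration, not values from the specification):

```python
# Disparity-to-depth relation: for a fixed baseline, depth error grows roughly
# quadratically with distance, which is why two-camera parallax resolves only
# short-range unevenness.
def depth(baseline_m, focal_px, disparity_px):
    return baseline_m * focal_px / disparity_px

b, f = 0.3, 1000.0                  # 30 cm baseline, focal length in pixels
near = depth(b, f, 60.0)            # 5 m: large disparity, fine resolution
far  = depth(b, f, 3.0)             # 100 m: tiny disparity
# A one-pixel disparity error shifts depth far more at long range
err_near = depth(b, f, 59.0) - near
err_far  = depth(b, f, 2.0) - far
print(err_near, err_far)
```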
• therefore, in the present embodiment, the plane unevenness surveying device 200 using two-camera parallax image processing is used only for the ultra-short-distance measurement of road surface unevenness, while short-, medium-, and long-distance absolute coordinate measurement is performed by the marker method using one camera with the 3D automatic surveying device 100 shown in the first embodiment; by integrating these coordinates, three-dimensional measurement becomes possible for road surface irregularities of all sizes, such as unevenness, deformation, deflection, and distortion.
• the deflection of the road surface under heavy objects can be measured by performing the measurement twice, with and without load, and comparing the results.
  • FIG. 18 is a block diagram showing a schematic configuration of the 3D automatic surveying device according to the fourth embodiment of the present invention.
• the 3D automatic surveying device shown in the figure is a modified embodiment of the first embodiment described above, in which a road surface three-dimensional map creating device 300 is added to the 3D automatic surveying device shown in the first embodiment (see FIGS. 1, 2 and 11).
• the road surface three-dimensional map creating device 300 of the present embodiment automatically extracts road marking portions from the 360-degree omnidirectional image obtained by the 3D automatic surveying device 100 and performs a 3D survey of the road surface, making it possible to create a three-dimensional map of a desired road surface.
• the road surface three-dimensional map creating device 300 includes an image stabilizing unit 301, a traveling direction control unit 302, an image vertical plane developing unit 303, a basic road surface shape model generation unit 304, a road surface three-dimensional measurement unit 305, a road surface parameter determination unit 306, a road transparent CG generation unit 307, a synthetic road surface plane development unit 308, a road surface vector extraction unit 309, a road surface texture flexible connection unit 310, a texture averaging unit 311, a target area cutout unit 312, a road marking recognition and coordinate acquisition unit 313, and a three-dimensional map generation unit 314.
• the image stabilizing unit 301 corrects the rotation of the omnidirectional image captured by the omnidirectional image capturing unit 101, based on the error-minimized camera vector obtained by the error minimization processing unit 108 of the 3D automatic surveying apparatus 100, and corrects shaking to stabilize the image.
  • the traveling direction control unit 302 fixes the traveling direction of the image subjected to the stabilization processing by the image stabilizing unit 301 to a target direction, or controls movement of the image in the target direction.
• the image vertical plane development unit 303 develops the image whose traveling direction is controlled by the traveling direction control unit 302 onto a vertical plane (vertical plane development). That is, in order to generate a three-dimensional road surface model, processing is performed using an image developed onto a vertical plane.
• in a 360-degree omnidirectional image, all directions are equal and there is no single optical axis; equivalently, every direction can serve as the optical axis. Therefore, in order to display a 360-degree omnidirectional image as a plane-developed image with perspective, like one taken with a normal lens, it is necessary to set a virtual optical axis, determine the plane onto which to develop, and convert the image into an image with perspective on that plane.
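• the virtual-optical-axis development might be sketched as a standard pinhole projection of unit-sphere bearings onto a chosen plane (the axis, up vector, and bearings below are illustrative assumptions):

```python
import numpy as np

# Develop a 360-degree image onto a plane: choose a virtual optical axis and
# project each viewing-sphere bearing onto the plane perpendicular to that
# axis at unit distance (a standard pinhole development).
def develop(bearing, axis, up=np.array([0.0, 0.0, 1.0])):
    axis = axis / np.linalg.norm(axis)
    right = np.cross(up, axis); right /= np.linalg.norm(right)
    v = np.cross(axis, right)
    depth = bearing @ axis                 # component along the virtual axis
    return np.array([bearing @ right, bearing @ v]) / depth

# A bearing straight down the virtual axis lands at the plane's origin
p = develop(np.array([1.0, 0.0, 0.0]), axis=np.array([1.0, 0.0, 0.0]))
print(p)
```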
• for the developing plane, rather than simply a plane perpendicular to a coordinate axis, it is advantageous for simplifying the work to develop the image onto a plane that has a linear scale on the road surface, including the slope of the road surface, so that lengths in the image correspond directly to lengths on the road.
  • vertical plane development is advantageous for obtaining the three-dimensional coordinates of the road;
  • road plane development is advantageous for processing road surfaces and road markings;
  • accordingly, the image is first developed onto the vertical plane and processed in the image vertical plane development unit 303.
  • this vertical plane development processing is not strictly required for generating a three-dimensional map, but since it simplifies the operation, the present embodiment provides the image vertical plane development unit 303.
  • the basic road shape model generation unit 304 generates a basic road surface model in which the parameters of the road surface shape are left undetermined.
  • the road surface three-dimensional measurement unit 305 measures the three-dimensional coordinates of the road surface from the image of the road surface developed onto the vertical plane by the image vertical plane development unit 303. Specifically, several places in the road portion of each frame image are divided into large blocks, and three-dimensional measurement is performed by correlation. Since the correlation is taken over a large area, the accuracy can be improved.
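The large-block correlation described above can be sketched as follows (normalized cross-correlation over a block, searched along one axis; the block size and search range are illustrative, and converting the resulting disparity to depth would use the camera vectors, which is outside this fragment):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_block(ref, search, top, left, size, max_shift):
    """Find the horizontal shift of a large block between two frames by
    maximizing NCC; the shift (disparity) then yields the 3-D position
    of that road patch by triangulation against the camera motion."""
    block = ref[top:top + size, left:left + size]
    best, best_dx = -2.0, 0
    for dx in range(-max_shift, max_shift + 1):
        col = left + dx
        if col < 0 or col + size > search.shape[1]:
            continue
        s = ncc(block, search[top:top + size, col:col + size])
        if s > best:
            best, best_dx = s, dx
    return best_dx, best
```

Taking the correlation over a large block, as the text notes, averages out pixel noise and improves the match accuracy.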
  • the road surface parameter determination unit 306 determines each parameter of the road surface model and thereby automatically determines the three-dimensional shape of the road surface.
  • the road transparent CG generation unit 307 acquires each parameter of the road surface shape from the road surface measurement data obtained by the road surface three-dimensional measurement unit 305, and generates a transparent CG of the road surface. In other words, the road transparent CG generation unit 307 generates a transparent CG of the road surface using the road surface model whose shape has been fixed by determining its parameters.
  • the synthetic road surface plane development unit 308 combines the transparent CG generated by the road transparent CG generation unit 307 with the road surface image stabilized in the traveling direction by the traveling direction control unit 302, and develops the image parallel to the road surface. That is, the synthetic road surface plane development unit 308 slightly modifies the developed surface of the image obtained in the preceding steps and develops the image onto a plane parallel to the road. Since the road is not always horizontal, a plane close to a linear plane (one in which equal lengths in the image correspond to equal lengths on the road) is selected for the correlation processing in the next step.
  • the road surface vector extraction unit 309 selects vectors in the plane-developed image, extracting only the road surface vectors and deleting the others.
  • the road surface can be extracted by selecting the stationary vectors (the smallest movement vectors); moving objects, which carry movement vectors, are thereby deleted as well.
  • the road surface texture flexible connection unit 310 acquires the road surface texture and flexibly connects road surface sections (the rubber-strap approach) for later processing. That is, the road surface texture flexible connection unit 310 divides the road surface into blocks as necessary, flexibly joins characteristic portions of the road surface without changing the texture order, and sends its output to the texture averaging unit 311 in the next stage.
  • the texture averaging unit 311 superimposes the road surface texture on the transparent CG, performs averaging on the transparent CG, and reduces noise. Since the distance between the camera and the road surface is fixed, superposition in a stationary coordinate system is possible, and hence averaging is possible.
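Once the textures are co-registered in the stationary coordinate system, the averaging step itself is simple; a minimal sketch:

```python
import numpy as np

def average_textures(frames):
    """Average co-registered road-surface textures to suppress noise.
    Assumes the frames are already aligned in a stationary coordinate
    system (camera-to-road distance fixed), as the text describes."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    return stack.mean(axis=0)
```

Averaging N aligned frames reduces zero-mean pixel noise by a factor of roughly the square root of N, which is why the noise-reduced texture is used for the cutout and recognition stages that follow.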
  • the target area cutout unit 312 roughly cuts out the outline of areas such as road surface figures (for example, road markings) or obstacles from the image whose noise has been reduced by the texture averaging unit 311. For example, only an arbitrary road marking is extracted from the road surface histogram on the transparent CG. Since the purpose here is only to cut out a large area, the cutout may be incomplete.
  • the road marking recognition and coordinate acquisition unit 313 recognizes a target object in the region cut out by the target area cutout unit 312 and obtains its coordinates. For example, the extracted road marking is recognized by PRM and its coordinates are determined. That is, PRM processing is performed on the region cut out by the target area cutout unit 312. Since the three-dimensional shape of the road surface is already fixed, determining the coordinates two-dimensionally improves the precision.
  • PRM technology prepares in advance, as parts (operator parts), all the shapes and attributes of the objects expected to appear, compares those parts with the actual captured images, selects the matching part, and thereby recognizes the target.
  • the "parts" of the objects required for automatic guidance and automatic driving of a vehicle include lanes, white lines, yellow lines, pedestrian crossings, and road signs such as speed signs and guidance signs. Since these are standardized, they can easily be recognized using PRM technology. Also, when an object is searched for in a video (CV video) for which the camera vectors have been obtained, the predicted three-dimensional space in which the object exists can be limited to a narrow range, so recognition efficiency can be improved.
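The part-comparison idea behind PRM can be sketched as follows (the part names and the normalized-correlation matching criterion here are illustrative assumptions; the PRM operators described in the patent are three-dimensional parts matched within a predicted 3-D space):

```python
import numpy as np

def prm_recognize(region, parts):
    """Compare a cut-out image region against a dictionary of
    pre-registered part templates ("operator parts") and return the
    best-matching part name with its correlation score."""
    def score(a, b):
        a = a - a.mean()
        b = b - b.mean()
        d = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / d) if d else 0.0
    name = max(parts, key=lambda k: score(region, parts[k]))
    return name, score(region, parts[name])

# Hypothetical parts: vertical stripes for a crosswalk pattern,
# horizontal stripes for a centerline pattern.
crosswalk = np.tile(np.array([[1.0, 0.0]]), (4, 2))
centerline = np.tile(np.array([[1.0], [0.0]]), (2, 4))
label, s = prm_recognize(crosswalk.copy(),
                         {"crosswalk": crosswalk, "centerline": centerline})
```

The point of the CV video is that the search can be restricted to the expected 3-D space and the expected apparent size, so far fewer candidate comparisons are needed than in a blind 2-D search.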
  • the output of the figure whose coordinates have been determined by the road marking recognition and coordinate acquisition unit 313 is treated as measurement points and sent to the measurement point identification unit 104 of the 3D automatic surveying apparatus 100 (see FIG. 18).
  • the 3D automatic surveying apparatus 100 acquires, by the above-described processing, the absolute coordinates of road surface figures such as road markings and of obstacles on the road surface, and outputs the acquired absolute coordinates from the measurement data recording unit 110. As a result, all the figures are reconstructed in absolute coordinates and sent to the three-dimensional map generation unit 314 in the next step.
  • the three-dimensional map generation unit 314 reconstructs the output (absolute coordinates) from the measurement data recording unit 110, retrieves and rearranges it as three-dimensional figures conforming to a predetermined specification, and generates a three-dimensional map of the desired road surface.
  • FIG. 19 shows an example in which a three-dimensional map is generated based on an image converted so as to be equivalent to an image of the road taken from above.
  • the road image shown in the figure is derived from a 360-degree omnidirectional image (CV video) whose camera vectors have been calculated by the 3D automatic surveying apparatus 100; it is the road surface observed from several meters above the ground and is not a complete plan view.
  • on this CV video (360-degree omnidirectional video), the recognized pattern is displayed by the PRM operator.
  • the generated three-dimensional map of the road shown in FIG. 19 is displayed in a stereoscopic view.
  • the PRM operator is even more effective in recognizing three-dimensional road signs than in recognizing road surface markings such as the center line shown in the figure.
  • for the recognition of road signs as shown in Fig. 20(a), an expected road sign space is assumed on the CV video, and the type, position, shape, and coordinates of the target road sign can be recognized within that limited space.
  • the expected road sign space can be combined and arranged on the actual image as CG, and the target road sign searched for only within that limited range.
  • the three-dimensional operator of each road sign prepared in advance is used as a part (see Fig. 20(b)), and a sign of the expected size is searched for three-dimensionally and found. The type, position, coordinates, and shape of the found sign are then recognized.
  • in a CV video, the image can be treated in the same way as an object having three-dimensional coordinates, which is extremely advantageous for searching.
  • for an object such as a road sign, whose shape is known in advance, the apparent size at its three-dimensional position can be calculated, so using the PRM operator is advantageous.
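Under a simple pinhole model, this apparent-size prediction is one line; a minimal sketch (the sign size, distance, and focal length in the example are illustrative):

```python
def apparent_size_px(real_size_m, distance_m, focal_px):
    """Predicted on-image size of an object of known real size at a
    known 3-D distance, under a pinhole camera model.  This is what
    lets a PRM operator search only at the expected scale."""
    return real_size_m * focal_px / distance_m

# Hypothetical example: a 0.6 m sign, 30 m away, focal length 1000 px.
size_px = apparent_size_px(0.6, 30.0, 1000.0)
```

Because the CV video supplies the 3-D position of each candidate region, the matcher only needs to test part templates near this predicted size rather than across all scales.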
  • by providing the 3D automatic surveying apparatus 100 of the first embodiment described above with the road surface three-dimensional map creation device 300, a high-precision three-dimensional map of an arbitrary road surface can be generated.
  • the 3D automatic surveying apparatus 100 of the above embodiments, the preparation work units 10 and 20, the distance calculation unit 112, the area/volume calculation unit 113, the planar unevenness measuring device 200, and the three-dimensional map creation device 300 can be implemented in any combination; the present invention is not limited to the combinations shown in the above-described embodiments. Some of them may be omitted as appropriate, or all the devices may be provided at the same time.
  • the present invention can be used, for example, as an image surveying apparatus for obtaining the position, distance, and area of desired measurement points based on moving images captured by a vehicle-mounted camera.

Abstract

A three-dimensional measurement is performed, with a high degree of precision, for any object in a moving image by analyzing the moving image captured by a 360-degree all-round camera. The apparatus includes: an all-round image capturing part (101) for capturing a moving image including desired points to be measured and reference points whose coordinates are already known; an image record part (102) for recording the captured images; a characteristic point extraction part (103) for extracting image characteristic points in the images; a measured point determination part (104) for automatically extracting measured points in the images; a reference point determination part (105) for automatically extracting reference points in the images; an associated point trace part (106) for associating the measured points, reference points, and characteristic points within each frame image; a vector calculation part (107) for calculating three-dimensional relative coordinates of the associated measured points, reference points, and characteristic points; an error minimization part (108) for repeating the foregoing calculation and statistically processing the three-dimensional relative coordinates; an absolute coordinate acquisition part (109) for converting, by use of the known coordinates of the reference points, the three-dimensional relative coordinates into an absolute coordinate system; a measured data record part (110) for recording the final coordinates; and a display part (111) for displaying the recorded measured data.

Description

Specification
3D Automatic Surveying Apparatus
Technical Field
[0001] The present invention relates to a surveying apparatus that measures the size of a desired object, the distance between objects, and the like, based on image data captured by a camera.
In particular, the present invention relates to a 3D automatic surveying apparatus that analyzes moving images acquired from a 360-degree omnidirectional camera, thereby enabling highly accurate three-dimensional measurement of any object in the image; that measures the three-dimensional distance between two arbitrarily designated points, a start point and an end point, freely spanning multiple frame images taken by the moving 360-degree omnidirectional camera; and that, by designating two or more points, can three-dimensionally measure the area or volume of a desired object.
Background Art
[0002] In general, image surveying techniques are known in which images captured by a plurality of cameras are analyzed to measure the size of an object, the distance between objects, and the like.
This type of image surveying is, for example, a stereo-method technique that measures distance from parallax images obtained by two cameras installed in parallel, and is used as a simple surveying technique (see Patent Documents 1 and 2).
In this type of image surveying, attempts have also been made to generate maps from the obtained measurement data, and its fields of application are expanding.
[0003] Patent Document 1: JP-A-08-278126
Patent Document 2: JP-A-2000-283753
Disclosure of the Invention
Problems to Be Solved by the Invention
[0004] However, in the stereo method using two cameras, the distance between the installed cameras is constrained, so long-distance measurement, such as measuring the distance between distant objects, has been impossible.
Moreover, because such surveying relies on the parallax of two cameras, its measurement accuracy is poor, and it has not been put to practical use as a precise surveying technique except for short-range measurement.
To perform high-precision surveying with the conventional stereo method, the two cameras must be matched with extremely high precision, and the inter-camera distance and angles must be adjusted with high accuracy; errors can easily arise from vibration and the like, so practical implementation has been difficult.
[0005] As a result of intensive research, the inventor of the present application found that, by extracting a sufficient number of feature points from a plurality of frame images of moving images obtained from a 360-degree omnidirectional camera, three-dimensional coordinates indicating the relative positions of desired feature points and the camera position and rotation angle can be obtained with high precision, and that, by converting these three-dimensional relative coordinates into absolute coordinates, high-precision surveying unaffected by movement, vibration, and the like can be realized with a single 360-degree omnidirectional camera.
[0006] That is, the present invention has been proposed to solve the problems of the above-described conventional techniques, and aims to provide a 3D automatic surveying apparatus that, by analyzing moving images acquired from a 360-degree omnidirectional camera, enables high-precision three-dimensional measurement of any object in the image without requiring multiple cameras and without being affected by camera vibration or shake; that can measure the three-dimensional distance between any two points designated across multiple frame images, free of distance constraints; and that, by designating two or more points, can three-dimensionally measure any area or volume in the video.
Means for Solving the Problems
[0007] To achieve the above object, the 3D automatic surveying apparatus of the present invention comprises: an omnidirectional image capturing unit that captures, with a moving 360-degree omnidirectional camera, moving images or continuous still images including desired measurement points and predetermined reference points whose three-dimensional absolute coordinates are known; an image recording unit that records the images captured by the omnidirectional image capturing unit; a feature point extraction unit that extracts, as feature points, visually distinctive portions other than the measurement points in the images recorded in the image recording unit; a measurement point identification unit that automatically extracts the measurement points in the recorded images; a reference point identification unit that automatically extracts the reference points in the recorded images; a corresponding point tracking unit that tracks the measurement points, reference points, and feature points through each frame image and associates them; a vector calculation unit that computes three-dimensional relative coordinates for the measurement points, reference points, and feature points associated by the corresponding point tracking unit and, as necessary, for the camera vectors indicating the position and rotation of the camera; an error minimization processing unit that repeats the calculation of the vector calculation unit, performing overlapping calculations so as to minimize the errors of the obtained three-dimensional relative coordinates, and applies statistical processing; an absolute coordinate acquisition unit that converts the three-dimensional relative coordinates obtained by the vector calculation unit into an absolute coordinate system from the known three-dimensional absolute coordinates of the reference points, and assigns three-dimensional absolute coordinates to the measurement points, reference points, and feature points; a measurement data recording unit that records the final absolute coordinates assigned to the measurement points, reference points, and feature points; and a display unit that displays the measurement data recorded in the measurement data recording unit.
[0008] The 3D automatic surveying apparatus of the present invention may also comprise: an omnidirectional image capturing unit that captures, with a moving 360-degree omnidirectional camera, moving images or continuous still images including desired measurement points and predetermined reference points whose three-dimensional absolute coordinates are known; an image recording unit that records the captured images; a feature point extraction unit that extracts visually distinctive portions as feature points in the recorded images; a reference point identification unit that automatically extracts the reference points in the recorded images; a corresponding point tracking unit that tracks and associates the reference points and feature points within each frame image; a vector calculation unit that computes, from the reference points and feature points associated by the corresponding point tracking unit, three-dimensional relative coordinates for the camera vectors indicating the position and rotation of the camera; an error minimization processing unit that repeats the calculation of the vector calculation unit, performing overlapping calculations so as to minimize the errors of the obtained three-dimensional relative coordinates, and applies statistical processing; an absolute coordinate acquisition unit that converts the three-dimensional relative coordinates of the camera obtained by the vector calculation unit into an absolute coordinate system from the known three-dimensional absolute coordinates of the reference points, and assigns three-dimensional absolute coordinates; a measurement point identification unit that automatically extracts measurement points in the recorded images; a measurement point tracking unit that tracks and associates the extracted measurement points within each frame image; a measurement point measurement calculation unit that computes the measurement values of the measurement points from the camera vectors obtained by the vector calculation unit; a measurement data recording unit that records the absolute coordinates of the measurement points; and a display unit that displays the recorded measurement data.
[0009] In the 3D automatic surveying apparatus of the present invention, the reference points may include, together with or instead of reference points whose three-dimensional absolute coordinates are known, length reference points whose length is known; the vector calculation unit computes the distance between the two length reference points; and the error minimization processing unit repeats overlapping calculations and applies statistical processing so that the distance between the two length reference points obtained by the vector calculation unit matches the known length of the length reference points.
[0010] The 3D automatic surveying apparatus of the present invention may further comprise an in-image preparation work unit having an in-image measurement point designation work unit for designating arbitrary measurement points in the images recorded in the image recording unit, and an in-image reference point designation work unit for designating arbitrary reference points in the recorded images; through this in-image preparation work unit, arbitrary measurement points and reference points are designated and extracted in the measurement point identification unit and the reference point identification unit.
[0011] In the 3D automatic surveying apparatus of the present invention, the vector calculation unit repeats a unit calculation that obtains the desired three-dimensional relative coordinates, using as unit images any two frame images Fn and Fn+m (m = frame interval) employed for the three-dimensional relative coordinate calculation of the measurement points, reference points, feature points, or camera vectors; and the error minimization processing unit, as n advances continuously with the progress of the images, adjusts the scale of and integrates the three-dimensional relative coordinates obtained by calculating the same feature point multiple times so that their errors are minimized, thereby determining the final three-dimensional relative coordinates.
[0012] In the 3D automatic surveying apparatus of the present invention, the vector calculation unit performs the unit calculation with the frame interval m set according to the distance from the camera to the measurement points, reference points, and feature points, such that m becomes larger as the distance from the camera increases.
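A minimal sketch of such distance-dependent interval selection (the baseline-to-distance ratio of 0.1 and the vehicle speed/frame-rate parameters are illustrative assumptions, not values from the patent): farther points need a longer camera baseline, hence a larger m, to subtend usable parallax.

```python
def frame_interval(distance_m, speed_mps, fps, baseline_ratio=0.1):
    """Pick the frame interval m between Fn and Fn+m so that the camera
    baseline is a fixed fraction of the distance to the point: points
    farther from the camera get a larger m, as the text requires."""
    baseline_needed = baseline_ratio * distance_m   # metres of travel
    metres_per_frame = speed_mps / fps
    return max(1, round(baseline_needed / metres_per_frame))
```

For example, at 10 m/s and 30 fps, a point 30 m away would pair frames 9 apart, while a point 100 m away would pair frames 30 apart.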
[0013] In the 3D automatic surveying apparatus of the present invention, the vector calculation unit deletes feature points whose obtained three-dimensional relative coordinates have a large error distribution and, as necessary, performs recalculation based on other feature points, thereby improving the accuracy of the measurement point calculation.
[0014] The 3D automatic surveying apparatus of the present invention may further comprise a distance calculation unit that, designating any measurement point or feature point in any image recorded in the image recording unit as a start point and any measurement point or feature point in the same image or in a different image as an end point, computes the three-dimensional distance between the designated start and end points, within the same image or across different images, based on the absolute coordinates recorded in the measurement data recording unit.
[0015] The 3D automatic surveying apparatus of the present invention may further comprise an area/volume calculation unit that, by designating a plurality of points within the same image or across different images recorded in the image recording unit, combines a plurality of the start-to-end three-dimensional distance measurements obtained by the distance calculation unit and computes the area or volume of a desired object within the same image or across different images.
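Once absolute coordinates have been recorded for the designated points, the distance and area computations reduce to vector arithmetic; a minimal sketch (the planar-polygon cross-product formula is one possible realization of the area calculation):

```python
import numpy as np

def distance_3d(p, q):
    """Three-dimensional distance between two designated points, given
    their recorded absolute coordinates."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def polygon_area_3d(points):
    """Area of a planar polygon defined by three or more designated 3-D
    points: half the norm of the summed cross products of consecutive
    edges fanned out from the first vertex."""
    pts = np.asarray(points, float)
    total = np.zeros(3)
    for i in range(1, len(pts) - 1):
        total += np.cross(pts[i] - pts[0], pts[i + 1] - pts[0])
    return float(np.linalg.norm(total) / 2)
```

Because both points of a distance query carry absolute coordinates, they may come from different frame images without any special handling, which is exactly the cross-frame measurement the text describes.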
[0016] Furthermore, the 3D automatic surveying apparatus of the present invention may comprise a three-dimensional map generation device having: a traveling direction control unit that fixes or controls, in the traveling direction, the images obtained by the omnidirectional image capturing unit, using the camera vectors obtained by the vector calculation unit; an image vertical plane development unit that develops the images stabilized in the traveling direction onto a vertical plane; a road surface basic shape model generation unit that generates a basic shape model of the road surface with the parameters of the road surface shape left undetermined; a road surface three-dimensional measurement unit that measures the three-dimensional coordinates of the road surface from the road surface images developed onto the vertical plane by the image vertical plane development unit; a road transparent CG generation unit that acquires the parameters of the road surface shape from the road surface measurement data obtained by the road surface three-dimensional measurement unit and generates a transparent CG of the road surface; a synthetic road surface plane development unit that combines the transparent CG generated by the road transparent CG generation unit with the road surface images stabilized in the traveling direction by the traveling direction control unit and develops the images parallel to the road surface; a texture averaging unit that adds and averages the road surface textures on the images developed by the synthetic road surface plane development unit, thereby reducing image noise; a road surface texture flexible connection unit that, as necessary, divides the road surface into blocks, flexibly connects characteristic portions of the road surface without changing the texture order, and sends its output to the texture averaging unit; a target area cutout unit that roughly cuts out the outlines of areas such as road surface figures (road markings and the like) and obstacles from the noise-reduced images; a road marking recognition and coordinate acquisition unit that recognizes a target object in the target area cut out by the target area cutout unit and acquires its coordinates; and a three-dimensional map generation unit that inputs each point of the polygon constituting the target object whose coordinates have been acquired, as measurement points, to the measurement point identification unit for obtaining absolute coordinates, and reconstructs the output of the measurement data recording unit that has acquired the absolute coordinates to generate a three-dimensional map of the road surface.
Effects of the Invention
[0017] According to the present invention as described above, by using 360-degree omnidirectional images, a sufficient number of feature points can be extracted from a plurality of frame images of a moving image, so that three-dimensional relative coordinates indicating the relative positions of many feature points, including the desired measurement points, can be obtained with high accuracy. Note that a normal image can be handled in the same way as a 360-degree omnidirectional image by treating it as part of one; however, since the accuracy is then lower than with a true 360-degree omnidirectional image, it is preferable to use 360-degree omnidirectional images whenever possible.
The obtained three-dimensional relative coordinates can then be converted into an absolute coordinate system based on the known three-dimensional absolute coordinates of reference points obtained in advance by surveying or the like.
When absolute coordinates are not needed, correctly scaled measurement results can still be obtained, even without absolute coordinate values such as latitude and longitude, by using a length known in advance from surveying or the like as a reference, or by placing objects of known length around the measurement points. Accordingly, in the present invention, as a rule a single 360-degree omnidirectional camera moving arbitrarily through free space captures video; by designating desired survey points within that video, or by capturing and analyzing video of survey points marked in advance, extremely accurate 3D surveying can be performed.
[0018] In this way, by analyzing the moving images obtained with a single 360-degree omnidirectional camera, three-dimensional absolute coordinates can be obtained for a desired object or the like, and by extracting a large number of feature points and generating three-dimensional information, errors can be minimized as far as possible. High-precision three-dimensional measurement of any object in the image thus becomes possible without requiring multiple cameras and without being affected by camera shake, vibration, and the like. That is, the present invention analyzes a moving image consisting of many frame images containing the desired measurement point, obtained by the movement of a single camera rather than by the parallax of two cameras; a large number of frame images containing the same measurement point can therefore be used, and the abundant, redundant information allows the computation to be performed with increased accuracy.
[0019] Furthermore, in the present invention, which can obtain high-precision absolute coordinates for desired measurement points in this way, three-dimensional distances can be measured between an arbitrarily designated start point and end point; for example, even between any two points designated across multiple frame images, high-precision three-dimensional distance measurement is possible without distance-related constraints. Moreover, by designating three or more points, the area or volume of any object or region within an image, or spanning multiple images, can also be measured three-dimensionally.
Brief Description of the Drawings
[FIG. 1] is a block diagram showing the schematic configuration of a 3D automatic surveying device according to a first embodiment of the present invention.
[FIG. 2] is a block diagram showing the schematic configuration of a 3D automatic surveying device according to another variation of the first embodiment of the present invention.
[FIG. 3] is an explanatory diagram showing a specific camera vector detection method in a 3D automatic surveying device according to one embodiment of the present invention.
[FIG. 4] is an explanatory diagram showing a specific camera vector detection method in a 3D automatic surveying device according to one embodiment of the present invention.
[FIG. 5] is an explanatory diagram showing a specific camera vector detection method in a 3D automatic surveying device according to one embodiment of the present invention.
[FIG. 6] is an explanatory diagram showing a desirable mode of designating feature points in the camera vector detection method of a 3D automatic surveying device according to one embodiment of the present invention.
[FIG. 7] is a graph showing an example of the three-dimensional coordinates of feature points and camera vectors obtained by a 3D automatic surveying device according to one embodiment of the present invention.
[FIG. 8] is a graph showing an example of the three-dimensional coordinates of feature points and camera vectors obtained by a 3D automatic surveying device according to one embodiment of the present invention.
[FIG. 9] is a graph showing an example of the three-dimensional coordinates of feature points and camera vectors obtained by a 3D automatic surveying device according to one embodiment of the present invention.
[FIG. 10] is an explanatory diagram showing a case in which, in a 3D automatic surveying device according to one embodiment of the present invention, multiple feature points are set according to their distance from the camera and multiple calculations are performed repeatedly.
[FIG. 11] is a block diagram showing the schematic configuration of the 3D automatic surveying device according to the first embodiment of the present invention shown in FIG. 1 or FIG. 2 when an arbitrary measurement point is measured based on camera vectors that have already been obtained.
[FIG. 12] is a block diagram showing the schematic configuration of a 3D automatic surveying device according to a second embodiment of the present invention.
[FIG. 13] is a flowchart showing the procedure of the surveying processing in the 3D automatic surveying device shown in FIG. 12.
[FIG. 14] is an explanatory diagram showing the procedure for measuring the three-dimensional distance between any two points in the 3D automatic surveying device shown in FIG. 12.
[FIG. 15] is an explanatory diagram showing the procedure for calculating the area or volume of an arbitrary region in the 3D automatic surveying device shown in FIG. 12.
[FIG. 16] is an explanatory diagram showing an example image in which arbitrary points for obtaining a three-dimensional distance within the same image are designated by the 3D automatic surveying device shown in FIG. 12.
[FIG. 17] is a block diagram showing the schematic configuration of a 3D automatic surveying device according to a third embodiment of the present invention.
[FIG. 18] is a block diagram showing the schematic configuration of a 3D automatic surveying device according to a fourth embodiment of the present invention.
[FIG. 19] shows an example of a three-dimensional map generated by the 3D automatic surveying device according to the fourth embodiment of the present invention: (a) is a cross-sectional view of a road represented by the three-dimensional map; (b) is an example of the three-dimensional map of the road shown in (a), projected as if photographed from above the road; and (c) shows operator parts used to acquire three-dimensional coordinates in the three-dimensional map shown in (b).
[FIG. 20] is a three-dimensional view of the road shown in FIG. 19, in which an operator part (CG part) representing a road sign has been composited.
BEST MODE FOR CARRYING OUT THE INVENTION
Preferred embodiments of the 3D automatic surveying device according to the present invention will now be described with reference to the drawings.
The 3D automatic surveying device of the present invention described below is realized by processing, means, and functions executed by a computer according to the instructions of a program (software). The program sends commands to each component of the computer to carry out predetermined processing and functions such as those described below, for example automatic extraction of feature points, automatic tracking of the extracted feature points, calculation of the three-dimensional coordinates of the feature points, and computation of camera vectors. In this way, each process and means in the 3D automatic surveying device and image stabilization device of the present invention is realized by concrete means in which the program and the computer cooperate.
All or part of the program may be provided on, for example, a magnetic disk, optical disk, semiconductor memory, or any other computer-readable recording medium, and the program read from the recording medium is installed in the computer and executed. The program may also be loaded into the computer directly through a communication line, without a recording medium, and executed.
[0022] First, an embodiment of the 3D automatic surveying device according to the present invention will be described with reference to FIG. 1 and FIG. 2.
FIG. 1 and FIG. 2 are block diagrams each showing the schematic configuration of a 3D automatic surveying device according to an embodiment of the present invention.
The 3D automatic surveying device of the embodiment shown in FIG. 1 includes an on-site preparation work unit 10 that performs preparatory work in advance, such as designating survey points, and a 3D automatic surveying device 100 that performs surveying processing within the captured camera images.
The 3D automatic surveying device of the embodiment shown in FIG. 2 includes an in-image preparation work unit 20 in place of the on-site preparation work unit 10 shown in FIG. 1 as its means for performing preparatory work such as designating survey points in advance.
[0023] The on-site preparation work unit 10 is a means for performing on-site preparatory work prior to the measurement work and, as shown in FIG. 1, includes an on-site measurement point designation work unit 11 and an on-site reference point designation work unit 12. Prior to the measurement work, the on-site measurement point designation work unit 11 designates all of the desired measurement points. The measurement points can be designated, for example, by attaching marks indicating the measurement locations at the site, or by placing objects indicating the measurement locations. This designation enables the 3D automatic surveying device 100, described later, to extract and identify the measurement points within the captured camera images.
The on-site reference point designation work unit 12 designates, prior to the measurement work, the locations that will serve as predetermined reference points.
[0024] Here, a reference point is, as described later, a point that serves as the basis for converting three-dimensional relative coordinates into absolute coordinates, that is, a point whose known reference coordinates (three-dimensional absolute coordinates) have been measured in advance by an arbitrary method (a coordinate reference point).
The reference points may also include reference points of known length (length reference points), either together with, or in place of, reference points whose three-dimensional absolute coordinates are known. A length reference point consists of two or more points and is a reference point for which the distance between two points is treated as known; for example, the spacing of a length reference point may be set to one meter, and such points can be obtained by installing many one-meter rods or the like so that they appear in the images. Shooting is then performed so that at least one length reference point appears in each image. By providing such length reference points, the scale of each image can be calibrated against the known length of a length reference point, as described later, and accuracy can be greatly improved.
[0025] Setting a length reference point can be regarded as equivalent to setting multiple coordinate reference points, but setting many length reference points ("lengths") is more effective than setting many coordinate reference points ("points"). That is, if only two coordinate reference points are set over the entire measurement range, conversion to absolute coordinates is possible; moreover, coordinate reference points are not necessarily visible in every image; and providing multiple length reference points is more advantageous in terms of cost and labor than setting multiple coordinate reference points.
Therefore, for example, with only two coordinate reference points over the entire measurement range, simply placing many rods of a predetermined length (for example, one meter) at random throughout the measurement range as length references makes it possible to carry out the automatic surveying according to the present invention, and both the labor and the cost of the measurement work can be greatly reduced.
[0026] Any method may be used to survey the three-dimensional coordinates or lengths of the reference points (coordinate reference points or length reference points); for example, absolute coordinates and lengths can be acquired by a conventionally known surveying method such as triangulation.
The on-site reference point designation work unit 12 designates the reference points by marking them in a way that clearly distinguishes them from the measurement points, or by placing objects, rods, or the like indicating the reference locations. This designation enables the 3D automatic surveying device 100, described later, to extract and identify the predetermined reference points within the captured camera images.
[0027] Specifically, the preparatory work in the on-site preparation work unit 10 consists of attaching recognizable marks or the like to all of the target measurement points at the survey site. These marks are intended to be automatically extracted from the images by image recognition in the processing of the 3D automatic surveying device 100. Accordingly, to allow automatic extraction by image recognition, simple figures and the like are preferable, and the marks should be given features that will not be confused with other measurement points, reference points, or other figures; for example, circles or Xs may be used, or colored stakes may be driven at the measurement locations.
[0028] Similarly, for the predetermined reference points, marks or the like are attached in the case of coordinate reference points, and rods or the like are installed in the case of length reference points.
The reference points and measurement points should use different marks, and it is desirable to give each of the multiple reference points its own distinct mark. This allows the measurement points and reference points to be clearly distinguished, and the multiple reference points to be distinguished from one another, so that start points, end points, and so on can be designated easily.
By designating two or more reference points whose absolute coordinates are known, the three-dimensional relative coordinates can be converted into absolute coordinates, as described later.
[0029] The work of designating the measurement points and reference points can be automated by machines or the like, or performed manually by an operator. When the number of reference points and the like is small, for example, this on-site work can also be omitted, and the reference points can instead be detected directly from the images and marked on the images.
FIG. 2 shows a 3D automatic surveying device provided with an in-image preparation work unit 20 that performs the preparatory work of designating measurement points and reference points within the images.
As shown in the figure, the in-image preparation work unit 20 is a means for designating the desired measurement points and reference points within the camera images captured and acquired by the 3D automatic surveying device 100, omitting the on-site preparatory work in the field performed by the on-site preparation work unit 10 shown in FIG. 1; it includes an in-image measurement point designation work unit 21 and an in-image reference point designation work unit 22.
[0030] The in-image measurement point designation work unit 21 designates desired measurement points within the video captured by the omnidirectional image capturing unit 101 of the 3D automatic surveying device 100, described later.
Similarly, the in-image reference point designation work unit 22 designates predetermined reference points whose absolute coordinates are known in advance within the video captured by the omnidirectional image capturing unit 101. Through the designations made in this in-image preparation work unit 20, the target measurement points are identified and designated within the images by the measurement point identification unit 104 of the 3D automatic surveying device 100, described later, and likewise the target reference points are identified and designated within the images by the reference point identification unit 105.
[0031] Specifically, in the in-image preparation work unit 20, work is performed in which, for example, an operator marks the desired measurement points and reference points in the images.
Providing the in-image preparation work unit 20 in this way allows the on-site preparatory work to be omitted as far as possible. The 3D automatic surveying device of this embodiment can thus also be positioned as an in-image 3D measuring device: of the on-site measurement preparations, all of the fieldwork for designating measurement targets can be omitted, and the only work performed outdoors is the shooting by the omnidirectional image capturing unit 101.
[0032] Then, after the preparatory work of designating the measurement points and reference points in the on-site preparation work unit 10 or the in-image preparation work unit 20 described above, the actual surveying processing is performed in the 3D automatic surveying device 100.
In the 3D automatic surveying device 100, visual feature points, including the measurement points and reference points, are automatically extracted from the images captured by the 360-degree omnidirectional camera, and the feature points are automatically tracked between frame images. This first allows the camera vectors to be obtained.
Once the camera vectors have been obtained through automatic extraction and automatic tracking of the feature points, calibration can be performed against, for example, an object of known length in the image, and absolute lengths can be acquired. Since the camera height at the time of shooting can also serve as a reference for absolute length, it is desirable to keep the camera height constant during shooting.
Once the camera vectors have been obtained, the three-dimensional coordinates of any point can be obtained from the camera coordinates. And once the three-dimensional coordinates of arbitrary points have been obtained, the three-dimensional distance between two points, or an area or a volume, can also easily be obtained.
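Recovering a point's 3D coordinates from known camera vectors amounts to intersecting the viewing rays from several camera positions. The following is a minimal illustrative sketch in Python with NumPy, not the patent's actual implementation: it computes the least-squares intersection of a set of rays and then measures a two-point distance from the recovered coordinates. The camera positions and target point are hypothetical.

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection point of viewing rays.

    Each ray is (origin o_i, direction d_i); the returned point X
    minimizes the summed squared distance to all rays, solving
    (sum_i (I - d_i d_i^T)) X = sum_i (I - d_i d_i^T) o_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)           # unit ray direction
        M = np.eye(3) - np.outer(d, d)      # projector onto the ray's normal plane
        A += M
        b += M @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# Hypothetical example: one point observed from three camera positions.
target = np.array([1.0, 2.0, 5.0])
cams = [np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
rays = [target - c for c in cams]           # exact viewing rays (no noise)
point = triangulate(cams, rays)

# Three-dimensional distance between two points with known coordinates.
other = np.array([4.0, 6.0, 5.0])
distance = np.linalg.norm(point - other)
```

With noisy rays the same formula returns the point closest to all rays in the least-squares sense, so observing the same measurement point in many frames reduces the error, as the text describes.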
[0033] Specifically, as shown in FIG. 1 or FIG. 2, the 3D automatic surveying device 100 includes an omnidirectional image capturing unit 101, an image recording unit 102, a feature point extraction unit 103, a measurement point identification unit 104, a reference point identification unit 105, a corresponding point tracking unit 106, a vector calculation unit 107, an error minimization processing unit 108, an absolute coordinate acquisition unit 109, a measurement data recording unit 110, and a measurement data display unit 111.
[0034] The omnidirectional image capturing unit 101 captures all of the measurement locations and reference locations as moving images or continuous still images from a moving 360-degree omnidirectional camera such as a vehicle-mounted camera.
In shooting by the omnidirectional image capturing unit 101, for example, a single camera is mounted on a vehicle or the like, and the desired survey area is photographed using the movement of the vehicle. The images captured by the omnidirectional image capturing unit 101 are subjected to the image analysis according to the present invention, whereby in-image surveying is performed for the desired measurement points.
When shooting with the omnidirectional image capturing unit 101, a long baseline can be secured, if necessary, by enlarging the movement range of the vehicle.
Measurements can also be made using different inter-frame distances for long, medium, and short ranges.
[0035] The image recording unit 102 records the images captured by the omnidirectional image capturing unit 101.
The feature point extraction unit 103 extracts, as feature points, visually distinctive portions other than the designated measurement points and reference points from the images recorded in the image recording unit 102.
The visual feature point extraction performed by this feature point extraction unit 103 uses image processing techniques so that the required number of feature points can be automatically extracted from the images. For example, if "corner" portions in the image are designated as feature points and only the corner portions are selectively extracted by image recognition, those become the feature points.
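One conventional image-processing technique for this kind of corner extraction is a Harris-style corner response computed from image gradients. The sketch below (Python/NumPy) is an assumed illustration and not the patent's implementation: the constant k = 0.05, the 3x3 box window, and the synthetic test image are all choices made for demonstration.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response: det(M) - k * trace(M)^2 of the
    local structure tensor M built from image gradients."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):
        # 3x3 box sum via zero padding (a simple stand-in for Gaussian weighting)
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic test image: a bright 8x8 square on a dark background.
img = np.zeros((20, 20))
img[6:14, 6:14] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)   # strongest corner location
```

In the device, every sufficiently strong local maximum of such a response would be kept, yielding the large number of automatically extracted feature points the text calls for; edges and flat regions score zero or negative and are rejected.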
[0036] The measurement point identification unit 104 automatically extracts the measurement points from the images recorded in the image recording unit 102.
The reference point identification unit 105 automatically extracts the reference points (coordinate reference points and/or length reference points) from the images recorded in the image recording unit 102.
As described above, the extraction of the measurement points and reference points in the measurement point identification unit 104 and the reference point identification unit 105 is performed automatically by image recognition of the marks attached to the actual measurement points and reference points by the on-site preparation work unit 10, or of the marks and the like added to the images by the in-image preparation work unit 20.
[0037] The corresponding point tracking unit 106 tracks the measurement points, reference points, and feature points in each frame image and establishes their correspondences.
The vector calculation unit 107 computes the three-dimensional coordinates of each of the measurement points, reference points, and feature points and, as needed, the camera coordinates and rotations (camera vectors). The error minimization processing unit 108 repeats the calculations of the vector calculation unit 107, performing redundant calculations and statistical processing so as to minimize the error of the obtained three-dimensional relative coordinates, thereby increasing the accuracy of the computation.
The absolute coordinate acquisition unit 109 converts the obtained three-dimensional relative coordinates into an absolute coordinate system based on the known coordinates of the reference points, and assigns absolute coordinates to all of the measurement points, reference points, and feature points, or to the necessary predetermined points.
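The conversion from relative to absolute coordinates is a similarity transform (scale, rotation, translation) fitted to the reference points. The patent does not specify the fitting algorithm; one standard choice, shown here as an assumed illustration in Python/NumPy, is the Umeyama/Kabsch alignment, which requires at least three non-collinear point correspondences (the two-point case described in the text would have to constrain orientation by other means, such as a known vertical).

```python
import numpy as np

def similarity_transform(rel, absolute):
    """Estimate s, R, t with absolute_i ~= s * R @ rel_i + t
    (Umeyama alignment from >= 3 non-collinear correspondences)."""
    rel, absolute = np.asarray(rel, float), np.asarray(absolute, float)
    mu_r, mu_a = rel.mean(0), absolute.mean(0)
    Xr, Xa = rel - mu_r, absolute - mu_a
    U, S, Vt = np.linalg.svd(Xa.T @ Xr)      # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:            # keep a proper rotation
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (Xr ** 2).sum()
    t = mu_a - s * R @ mu_r
    return s, R, t

# Hypothetical check: recover a known similarity transform from exact data.
a, b = 0.3, -0.5
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
Rx = np.array([[1, 0, 0], [0, np.cos(b), -np.sin(b)], [0, np.sin(b), np.cos(b)]])
R_true, s_true, t_true = Rz @ Rx, 2.5, np.array([1.0, -2.0, 3.0])
rel_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 2, 3]], float)
abs_pts = s_true * rel_pts @ R_true.T + t_true
s, R, t = similarity_transform(rel_pts, abs_pts)
```

With more reference points than the minimum, the fit is least-squares, which matches the redundancy-and-averaging approach of the error minimization processing unit 108.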
[0038] When absolute coordinates such as latitude and longitude are not required, the length can be calibrated in each image using the length reference points that indicate the length standard, the scale can be matched, and correctly scaled coordinates can be acquired.
In this case, the vector calculation unit 107 obtains the three-dimensional coordinates of both ends of a length reference point and computes the distance between the two points of the length reference point from the obtained three-dimensional coordinates. The error minimization processing unit 108 then repeats the redundant calculations and performs statistical processing so that the distance between the two points of the length reference point computed by the vector calculation unit 107 matches the known length of the length reference point.
Of course, coordinate reference points and length reference points can also be used at the same time, in which case accuracy can be improved even further.
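Scale calibration from a length reference point reduces to multiplying the reconstructed relative coordinates by the ratio of the known physical length to the reconstructed length. A minimal sketch (plain Python; the point names and values are hypothetical, and this is an illustration rather than the patent's procedure):

```python
import math

def calibrate_scale(points, ref_a, ref_b, known_length):
    """Rescale relative coordinates so that the reconstructed
    length reference ref_a..ref_b matches its known length."""
    reconstructed = math.dist(points[ref_a], points[ref_b])
    s = known_length / reconstructed
    return {name: tuple(s * c for c in xyz) for name, xyz in points.items()}

# Hypothetical reconstruction where the 1 m reference rod came out as 0.5 units.
relative = {"rod_start":  (0.0, 0.0, 0.0),
            "rod_end":    (0.5, 0.0, 0.0),
            "measure_pt": (1.0, 1.0, 0.0)}
scaled = calibrate_scale(relative, "rod_start", "rod_end", known_length=1.0)
```

With several length references visible across the images, one scale factor can be computed per reference and the results averaged, matching the statistical processing described for the error minimization processing unit 108.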
[0039] The measurement data recording unit 110 computes and records the final coordinates of the measurement points.
The measurement data display unit 111 then displays the measurement data.
Here, the measurement data recorded in the measurement data recording unit 110 and displayed by the measurement data display unit 111 is the three-dimensional coordinate information of the measurement points, reference points, and feature points. The displayed form may be, for example, a "table" of numerical values indicating the three-dimensional coordinates, or "points" indicating the positions of the measurement points on a map. The numerical values indicating the three-dimensional coordinates can be expressed, for example, as XYZ coordinate values or as latitude, longitude, and altitude values.
[0040] In the 3D automatic surveying device 100 configured as described above, the marks and the like indicating the measurement points and reference points are read within the captured images, and their three-dimensional positions can be computed, together with the other visual feature points, by means of epipolar geometry.
Results can be obtained even if the computation targets only the measurement points and reference points, but accuracy is further improved by also using feature points in the images other than the measurement points. The feature points are automatically extracted from the images.
It is also not strictly necessary to determine the camera position; however, determining the camera position first simplifies the calculation and makes it easier to handle an increased number of measurement points and reference points.
In the following, the feature point extraction process in the 3D automatic surveying apparatus 100 and the process of calculating the three-dimensional relative coordinates of the feature points and the camera position from the extracted feature points are described in more detail with reference to FIG. 3 and subsequent figures.
[0041] There are several methods for detecting the three-dimensional relative coordinates of feature points and the camera vector from a plurality of images (a moving image or continuous still images). In the 3D automatic surveying apparatus 100 of this embodiment, a sufficiently large number of feature points are extracted automatically from the images and tracked automatically, and the three-dimensional relative coordinates of the feature points, the camera position, and the three-axis rotation of the camera are obtained by epipolar geometry.
By taking a sufficiently large number of feature points, the camera vector information becomes redundant, and the error can be minimized from this redundant information to obtain a more accurate camera vector.
[0042] Here, the camera vector means the vector of the degrees of freedom that the camera possesses.
In general, a stationary three-dimensional object has six degrees of freedom: the position coordinates (X, Y, Z) and the rotation angles (Φx, Φy, Φz) about the respective coordinate axes. Accordingly, the camera vector is a vector of six degrees of freedom consisting of the camera position coordinates (X, Y, Z) and the rotation angles (Φx, Φy, Φz) about the respective coordinate axes. When the camera moves, the direction of motion also enters the degrees of freedom, but it can be derived by differentiating the six degrees of freedom above.
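As a minimal illustration (the names below are ours, not from the patent), the per-frame six-degree-of-freedom camera vector can be represented as a simple record, with the direction of motion obtained by differencing the position part of two consecutive frames, the discrete analogue of the differentiation mentioned above:

```python
from dataclasses import dataclass

@dataclass
class CameraVector:
    """Six degrees of freedom of the camera for one frame:
    position (x, y, z) and rotation angles (phi_x, phi_y, phi_z)."""
    x: float
    y: float
    z: float
    phi_x: float
    phi_y: float
    phi_z: float

def motion_direction(prev: CameraVector, cur: CameraVector):
    """Direction of motion, derived by differencing (discretely
    differentiating) the position components of the 6-DOF vector."""
    return (cur.x - prev.x, cur.y - prev.y, cur.z - prev.z)

f0 = CameraVector(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
f1 = CameraVector(1.0, 0.5, 0.0, 0.0, 0.01, 0.0)
assert motion_direction(f0, f1) == (1.0, 0.5, 0.0)
```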
Thus, detecting the camera vector with the 3D automatic surveying apparatus 100 of this embodiment means determining, for each frame, the six degree-of-freedom values that the camera takes, which differ from frame to frame.
[0043] A specific camera vector detection method in the 3D automatic surveying apparatus 100 is described below with reference to FIG. 3 and subsequent figures.
First, the feature point extraction unit 103 automatically extracts points or small-area images that should serve as feature points from appropriately sampled frame images. The measurement points and feature points indicated by marks or the like designated in the images are automatically extracted by the measurement point specifying unit 104 and the reference point specifying unit 105. For the extracted feature points, measurement points, and reference points, the corresponding point tracking unit 106 automatically determines the correspondences between a plurality of frame images.
Specifically, a sufficient number of feature points, serving as the basis for camera vector detection, are obtained. A desired number of measurement points are designated, and at least two reference points whose absolute coordinates are known are designated. FIGS. 3 to 5 show an example of the correspondences of feature points (or measurement points and reference points) between images. In the figures, the "+" marks are the automatically extracted feature points, and their correspondences are tracked automatically between a plurality of frame images (see corresponding points 1 to 4 shown in FIG. 5). For the feature point extraction on which the calculation is based, it is desirable to designate and extract a sufficiently large number of feature points in each image, as shown in FIG. 6 (see the circles in FIG. 6); for example, about 100 feature points are extracted.
[0044] Subsequently, the vector calculation unit 107 calculates the three-dimensional relative coordinates of the extracted feature points, measurement points, and reference points, and the camera vector is calculated based on those three-dimensional relative coordinates. Specifically, the vector calculation unit 107 continuously calculates the relative values of various three-dimensional vectors, such as the positions of a sufficient number of features present between successive frames, the position vectors between the moving camera positions, the three-axis rotation vector of the camera, and the vectors connecting each camera position to the feature points (measurement points and reference points).
In this embodiment, a 360-degree omnidirectional image is in principle used as the camera image, and the camera motion (camera position and camera rotation) is calculated by solving the epipolar equation derived from the epipolar geometry of the 360-degree omnidirectional image.
[0045] A 360-degree omnidirectional image is, for example, a panoramic image, an omnidirectional image, or a full 360-degree image captured with a wide-angle or fisheye-lens camera, a plurality of cameras, or a rotating camera. Because it covers a wider range than an image captured by an ordinary camera, highly accurate camera vector calculation can be performed more simply and quickly, which is preferable. Note that a 360-degree omnidirectional image need not cover the entire 4π space; a portion of the full 360-degree surroundings can also be treated as the image for camera vector calculation. In that sense, an image captured by an ordinary camera can also be regarded as part of a 360-degree omnidirectional image; although the excellent effects of this embodiment are then reduced, there is essentially no difference, and such an image can be handled in the same way as the 360-degree omnidirectional (4π) image of the present invention.

[0046] Images 1 and 2 shown in FIG. 5 are Mercator developments of 360-degree omnidirectional images. With latitude φ and longitude θ, a point on image 1 is (θ1, φ1) and the corresponding point on image 2 is (θ2, φ2). The spatial coordinates in each camera are z1 = (cos φ1 cos θ1, cos φ1 sin θ1, sin φ1) and z2 = (cos φ2 cos θ2, cos φ2 sin θ2, sin φ2). With the camera translation vector t and the camera rotation matrix R, the epipolar equation is z1ᵀ[t]×R z2 = 0.
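As a sanity check of the epipolar relation above, the following sketch (ours, not part of the patent) builds the two unit rays from longitude and latitude (θ, φ), forms the cross-product matrix [t]×, and verifies that z1ᵀ[t]×R z2 vanishes for a synthetic camera pair observing the same world point:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def bearing(theta, phi):
    """Unit ray (cos phi cos theta, cos phi sin theta, sin phi) from
    longitude theta and latitude phi, as in the text above."""
    return np.array([np.cos(phi) * np.cos(theta),
                     np.cos(phi) * np.sin(theta),
                     np.sin(phi)])

# Synthetic setup: camera 1 at the origin with identity rotation,
# camera 2 at centre t with camera-to-world rotation R (about the z axis).
P = np.array([4.0, 2.0, 3.0])                 # a world point
t = np.array([1.0, 0.0, 0.0])                 # camera 2 position
a = 0.2
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])

r1 = P / np.linalg.norm(P)                    # ray seen by camera 1
z1 = bearing(np.arctan2(r1[1], r1[0]), np.arcsin(r1[2]))
r2 = R.T @ (P - t)
r2 /= np.linalg.norm(r2)                      # ray seen by camera 2
z2 = bearing(np.arctan2(r2[1], r2[0]), np.arcsin(r2[2]))

residual = z1 @ skew(t) @ R @ z2              # z1^T [t]x R z2
assert abs(residual) < 1e-12                  # coplanarity holds
```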
By providing a sufficient number of feature points, t and R can be calculated as a least-squares solution by linear algebra. This calculation is applied to the corresponding plurality of frames.
[0047] Here, as described above, the images used for the camera vector calculation are, in principle, 360-degree omnidirectional images.
In principle, any image may be used for the camera vector calculation, but a wide-angle image such as the 360-degree omnidirectional image shown in FIG. 5 makes it easier to select many feature points and allows them to be tracked for longer. Therefore, in this embodiment, 360-degree omnidirectional images are used for the camera vector calculation. This lengthens the tracking distance of the feature points, allows a sufficiently large number of feature points to be selected, and makes it possible to select feature points convenient for long, medium, and short distances, respectively. When correcting the rotation vector, adding a polar rotation conversion process also makes the calculation easy to perform. For these reasons, more accurate calculation results can be obtained.
Note that FIG. 5 shows, for ease of understanding the processing in the 3D automatic surveying apparatus 100, a spherical 360-degree omnidirectional image, synthesized from images captured by one camera (or several cameras), developed by the Mercator projection, which is a map projection; in the actual 3D automatic surveying apparatus 100, the image need not necessarily be a Mercator development.
[0048] Next, the error minimization processing unit 108 calculates the vectors based on each feature point in the plurality of ways that arise from the calculation equations given by the plurality of camera positions corresponding to the frames and the number of feature points, and performs statistical processing so that the distributions of the positions of the feature points (measurement points and reference points) and of the camera positions are minimized, thereby obtaining the final vectors. For example, for the camera positions, camera rotations, and feature points over a plurality of frames, the optimal least-squares solution is estimated by the Levenberg-Marquardt method, and the errors are converged to obtain the three-dimensional relative coordinates of the camera positions, camera rotation matrices, feature points, measurement points, and reference points.
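The Levenberg-Marquardt refinement mentioned here can be sketched as follows. This is a toy, numpy-only version with a numerical Jacobian that estimates only a camera position from bearing observations of known points; the apparatus described in the patent jointly refines camera positions, rotations, and all point coordinates, but the damped least-squares update is the same idea:

```python
import numpy as np

def numeric_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of a residual function f at x."""
    f0 = f(x)
    J = np.empty((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (f(xp) - f0) / eps
    return J

def levenberg_marquardt(f, x0, iters=100, lam=1e-3):
    """Minimise sum(f(x)**2) with the damped Gauss-Newton (LM) update."""
    x = x0.astype(float)
    for _ in range(iters):
        r = f(x)
        J = numeric_jacobian(f, x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(f(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5   # accept step, trust model more
        else:
            lam *= 10.0                    # reject step, damp harder
    return x

# Toy problem: recover a camera position from unit bearings to known points.
rng = np.random.default_rng(0)
points = rng.uniform(-10, 10, (20, 3)) + np.array([0.0, 0.0, 30.0])
cam_true = np.array([1.0, -2.0, 0.5])
obs = points - cam_true
obs /= np.linalg.norm(obs, axis=1, keepdims=True)

def residuals(cam):
    d = points - cam
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return (d - obs).ravel()

cam = levenberg_marquardt(residuals, np.zeros(3))
assert np.allclose(cam, cam_true, atol=1e-4)
```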
Furthermore, feature points whose coordinate error distribution is large are deleted, and the calculation is repeated based on the other feature points, which raises the accuracy of the calculation at each feature point and camera position. In this way, the three-dimensional relative coordinates indicating the positions of the feature points, measurement points, and reference points, and the camera vectors, can be obtained with high accuracy.
[0049] FIGS. 7 to 9 show examples of the three-dimensional coordinates of the feature points (measurement points and reference points) and the camera vectors obtained by the 3D automatic surveying apparatus 100. FIGS. 7 to 9 are explanatory diagrams of the vector detection method of this embodiment, illustrating the relative positional relationship between the camera and objects obtained from a plurality of frame images acquired by a moving camera.
FIG. 7 shows the three-dimensional coordinates of the feature points 1 to 4 shown in images 1 and 2 of FIG. 5, and the camera vector moving between image 1 and image 2.
FIGS. 8 and 9 show the positions of the feature points obtained from a sufficiently large number of feature points and frame images, together with the positions of the moving camera. In these figures, the circles running in a straight line through the center of the graph are the camera positions, and the circles located around them indicate the positions and heights of the feature points.
[0050] Here, in order to obtain highly accurate three-dimensional information on the feature points, measurement points, reference points, and camera positions at high speed, the calculation in the 3D automatic surveying apparatus 100 sets a plurality of feature points according to the distance from the camera to the feature points (measurement points and reference points) and repeats a plurality of calculations, as shown in FIG. 10.
Specifically, the 3D automatic surveying apparatus 100 automatically detects visually distinctive feature points in the images, and when finding the corresponding points of the feature points in each frame image, it takes as a unit calculation the two frame images Fn and Fn+m (the n-th and (n+m)-th frames) used for the feature point, measurement point, reference point, or camera vector calculation, and repeats this unit calculation with n and m set appropriately. Here m is the frame interval; the feature points are classified into a plurality of stages according to the distance from the camera to the feature points (measurement points and reference points) in the image, with m set larger as the distance from the camera to the feature point increases and smaller as it decreases. This is because the farther a feature point is from the camera, the smaller its change of position between images.
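The distance-dependent choice of frame interval m can be sketched as below. The staging thresholds are illustrative assumptions of ours, not values stated in the patent:

```python
def frame_interval(distance, stages=((10.0, 2), (50.0, 8), (200.0, 32))):
    """Choose the frame interval m for the unit calculation on (Fn, Fn+m).
    Distant feature points shift little between frames, so a larger m is
    used for them to obtain usable parallax; near points get a small m.
    The (distance limit, m) stages here are illustrative only."""
    for limit, m in stages:
        if distance < limit:
            return m
    return 64  # very distant points: widest baseline

assert frame_interval(3.0) == 2      # near feature point, small m
assert frame_interval(120.0) == 32   # far feature point, large m
assert frame_interval(1000.0) == 64  # beyond all stages
```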
[0051] The classification of the feature points by their m values is made to overlap sufficiently, m is set in a plurality of stages, and the calculations proceed continuously as n advances continuously with the progress of the image. At each advance of n and at each stage of m, the calculation is repeated a plurality of times for the same feature point.
In this way, by performing the unit calculation focused on the frame images Fn and Fn+m, a precise camera vector can be calculated, taking more time, between the frames sampled every m frames (the frames in between are dropped), while for the m frames (minimum unit frames) between the frame images Fn and Fn+m, a simplified calculation that can be performed in a short time can be used.
[0052] If the precise camera vector calculation performed every m frames contains no error, both ends of the camera vectors of the m frames coincide with the camera vectors of Fn and Fn+m obtained by the high-precision calculation. Accordingly, the m minimum unit frames between Fn and Fn+m are obtained by the simplified calculation, and the scale of the m consecutive camera vectors is adjusted so that both ends of the camera vectors of the m minimum unit frames obtained by the simplified calculation coincide with the camera vectors of Fn and Fn+m obtained by the high-precision calculation.
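A minimal sketch of this endpoint-matching scale adjustment follows (our own function; it applies translation and uniform scale only, and assumes the simply-computed run is already rotationally aligned with the precise solution, which a full implementation would also have to handle):

```python
import numpy as np

def scale_adjust(simple_path, precise_start, precise_end):
    """Rescale and shift a simply-computed run of camera positions so
    that its two endpoints land on the high-precision camera positions
    for Fn and Fn+m (translation + uniform scale only)."""
    p = np.asarray(simple_path, dtype=float)
    span_simple = np.linalg.norm(p[-1] - p[0])
    span_precise = np.linalg.norm(precise_end - precise_start)
    s = span_precise / span_simple           # scale drift correction
    return precise_start + s * (p - p[0])

# A straight-line run whose scale drifted by a factor of 2:
run = np.array([[0.0, 0, 0], [0.5, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
adjusted = scale_adjust(run, np.array([10.0, 0, 0]), np.array([14.0, 0, 0]))
assert np.allclose(adjusted[0], [10, 0, 0])    # matches precise Fn
assert np.allclose(adjusted[-1], [14, 0, 0])   # matches precise Fn+m
```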
In this way, as n advances continuously with the progress of the image, the results of calculating the same feature point a plurality of times are scale-adjusted and integrated so that the errors of the three-dimensional relative coordinates of the feature points, measurement points, reference points, and camera vectors are minimized, and the final three-dimensional relative coordinates can be determined.
This makes it possible to speed up the calculation processing by combining the simplified calculations while still obtaining high-precision, error-free three-dimensional relative coordinates.
[0053] There are various simplified calculation methods depending on the required accuracy. For example: (1) whereas the high-precision calculation uses 100 or more feature points, the simplified calculation may use a minimum of about 10 feature points; or (2) even with the same number of feature points, if the feature points and the camera positions are treated equally, innumerable triangles can be formed among them and a corresponding number of equations holds, so the calculation can be simplified by reducing the number of those equations.
In this way, the results are integrated by adjusting the scale so that the errors of the feature points and camera positions are minimized, the distance calculations are performed, and feature points with a large error distribution are deleted and the other feature points are recalculated as necessary, so that the accuracy of the calculation at each feature point and camera position can be improved.
[0054] When the three-dimensional relative position coordinates of each point have been obtained as described above, the absolute coordinate acquisition unit 109 applies to the three-dimensional relative coordinates the known coordinates of the reference points whose absolute coordinates have been measured in advance, converts the three-dimensional relative coordinates into the absolute coordinate system, and assigns absolute coordinates to all the measurement points, reference points, and feature points (or to the required predetermined points). The final absolute coordinates of the desired measurement points, and of arbitrarily designated points among the feature points, are thereby obtained; the measurement data is recorded in the measurement data recording unit 110 and displayed and output via the measurement data display unit 111.
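The conversion from relative to absolute coordinates amounts to estimating a similarity transform (scale, rotation, translation) from the reference points whose absolute coordinates are known. A sketch using the closed-form Umeyama-style estimate follows; note that it needs three or more non-collinear reference points for a unique rotation, whereas the patent's minimum of two reference points fixes scale and translation but leaves a rotational ambiguity about the line joining them (the function name is ours):

```python
import numpy as np

def similarity_from_references(rel, abs_):
    """Estimate s, R, t such that abs ≈ s * R @ rel + t
    (Umeyama-style closed form via SVD of the cross-covariance)."""
    rel, abs_ = np.asarray(rel, float), np.asarray(abs_, float)
    mu_r, mu_a = rel.mean(0), abs_.mean(0)
    X, Y = rel - mu_r, abs_ - mu_a
    U, S, Vt = np.linalg.svd(Y.T @ X / len(rel))
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                      # keep R a proper rotation
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / X.var(0).sum()
    t = mu_a - s * R @ mu_r
    return s, R, t

# Recover a known transform from four reference points.
rng = np.random.default_rng(1)
rel = rng.normal(size=(4, 3))
a = 0.7
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
abs_ = 2.5 * rel @ R_true.T + np.array([100.0, 200.0, 5.0])
s, R, t = similarity_from_references(rel, abs_)
assert np.isclose(s, 2.5)
assert np.allclose(s * rel @ R.T + t, abs_)   # relative -> absolute
```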
[0055] In the description above, the measurement points, reference points, feature points, and camera coordinates and rotation (the camera vector) are obtained simultaneously by the vector calculation unit 107. However, once the camera vector has been obtained, a new measurement point, feature point, or arbitrarily designated point among the feature points can be calculated simply, without being recalculated together with the camera vector: using the camera vectors already obtained, it is the single apex of a triangle whose base joins the two camera positions of two images.
That is, for an arbitrary measurement point, its three-dimensional absolute coordinates can be obtained by calculation based on the camera vectors already obtained.
In this case, as shown in FIG. 11, in the 3D automatic surveying apparatus 100 the measurement point specifying unit 104 specifies and automatically extracts (or manually specifies and extracts) a desired measurement point within a 360-degree omnidirectional image for which camera vectors have been obtained with respect to predetermined reference points, and the extracted measurement point is then tracked and associated in each frame image by the measurement point tracking unit 104a. This tracking of measurement points in the measurement point tracking unit 104a is performed in the same manner as the corresponding point tracking in the corresponding point tracking unit 106 described above.
Then, for each measurement point tracked, associated, and thus specified in the frame images, the measurement point measurement calculation unit 104b obtains its three-dimensional absolute coordinates simply and quickly from the camera vectors already obtained, by a calculation that treats the point as the single apex of a triangle whose base joins the two camera positions of two images. [0056] Even in this case, the accuracy of the camera vectors is unchanged, so the accuracy of the new measurement points, feature points, and arbitrarily designated points is unchanged as well. If, however, the camera vectors are obtained anew and the calculation is repeated, the accuracy generally improves.
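The "apex of a triangle whose base joins two camera positions" calculation is ordinary two-view triangulation. A minimal midpoint-method sketch under known camera vectors (the function name is ours):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Given two camera centres c1, c2 and unit bearing rays d1, d2
    toward the same measurement point, return the midpoint of the
    shortest segment between the two rays (the triangle's apex)."""
    A = np.stack([d1, -d2], axis=1)          # 3x2 system for ray params
    (a, b), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    return 0.5 * ((c1 + a * d1) + (c2 + b * d2))

P = np.array([3.0, 4.0, 10.0])               # ground-truth point
c1, c2 = np.zeros(3), np.array([2.0, 0.0, 0.0])
d1 = P / np.linalg.norm(P)                   # bearing from camera 1
d2 = (P - c2) / np.linalg.norm(P - c2)       # bearing from camera 2
assert np.allclose(triangulate_midpoint(c1, d1, c2, d2), P)
```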
Also, the terms measurement point, reference point, and feature point are names distinguished for work-processing purposes; in terms of coordinate calculation they are essentially equivalent points, and there is in particular no computational difference among them. Accordingly, in the present invention, not only points, places, and regions designated in advance as measurement points, but also arbitrary points designated afterwards (designated points), can be measured: their three-dimensional positions, the three-dimensional distance between any two points, and areas and volumes (see the second embodiment described later).
That is, by setting measurement points at the site from the beginning, the measurement points can later be extracted from the video and used in the measurement calculation; but even when no measurement points have been set at the site, as long as a measurement point can be specified manually or automatically in the video after capture, the measurement calculation can be performed with that point as the measurement point.
[0057] As described above, according to the 3D automatic surveying apparatus of this embodiment, extracting a sufficient number of feature points from the many frame images of a moving image obtained with a 360-degree omnidirectional camera makes it possible to obtain, with high accuracy, three-dimensional relative coordinates indicating the relative positions of a large number of feature points including the desired measurement points. The obtained three-dimensional relative coordinates can then be converted into the absolute coordinate system based on the known three-dimensional absolute coordinates of reference points obtained in advance by surveying or the like.
When there is no need to obtain absolute coordinates, correctly scaled measurement results can still be obtained, even without absolute coordinate values such as latitude and longitude, by using a length already known from surveying or the like as a reference, or by placing an object of known length around the measurement points. Accordingly, in this embodiment, in principle a single camera, moving arbitrarily through free space, captures the video; desired survey points are then designated within the video, or video of survey points marked in advance is captured and analyzed, whereby extremely accurate 3D surveying can be performed.
[0058] In this way, by analyzing a moving image obtained with a single camera, three-dimensional absolute coordinates of a desired object or the like can be obtained. Moreover, by extracting many feature points and generating three-dimensional information, the error can be minimized as far as possible, and highly accurate three-dimensional measurement of any object in the image can be performed without requiring a plurality of cameras and without being affected by camera vibration, shaking, or the like.
That is, in this embodiment, by analyzing a moving image consisting of many frame images that include the desired measurement points, obtained through the movement of a single camera rather than through the parallax of two cameras, a large number of frame images containing the same measurement point can be used, and this abundant surplus of information allows calculations of increased accuracy.
[0059] [Second Embodiment]
Next, a second embodiment of the 3D automatic surveying apparatus of the present invention is described with reference to FIGS. 12 to 16.
FIG. 12 is a block diagram showing the schematic configuration of the 3D automatic surveying apparatus according to the second embodiment of the present invention.
The 3D automatic surveying apparatus shown in the figure is a modification of the first embodiment described above, in which a distance calculation unit 112 and an area/volume calculation unit 113 are added to the configuration of the 3D automatic surveying apparatus of the first embodiment (see FIGS. 1 and 2).
Accordingly, the other components are the same as in the first embodiment; the same components are given the same reference numerals, and detailed description of them is omitted.
[0060] Specifically, the 3D automatic surveying apparatus 100 of this embodiment shown in FIG. 12 includes a distance calculation unit 112, which obtains the three-dimensional distance between two desired points based on the absolute coordinate data of the measurement points, reference points, feature points, or camera vectors, and an area/volume calculation unit 113, which obtains the area or volume of a desired region based on a plurality of point-to-point distances obtained by the distance calculation unit 112. The distance calculation unit 112 designates an arbitrary measurement point or feature point in an arbitrary image recorded in the image recording unit 101 as a start point and an arbitrary measurement point or feature point in the same image or in a different image as an end point, and calculates the three-dimensional distance between the arbitrarily designated start and end points, within the same image or across different images, based on the absolute coordinates recorded in the measurement data recording unit 110.
The area/volume calculation unit 113 designates a plurality of points within an arbitrary single image or across different images recorded in the image recording unit 101, combines a plurality of the start-to-end three-dimensional distance measurements obtained by the distance calculation unit 112, and calculates the area or volume of a desired object within the same image or across different images.
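Once absolute coordinates are available, the point-to-point distance and the area of a region outlined by designated points reduce to simple vector arithmetic, as the following sketch shows (our own functions; volume would additionally require a tetrahedral decomposition of the region):

```python
import numpy as np

def distance_3d(start, end):
    """Three-dimensional distance between designated start and end points."""
    return float(np.linalg.norm(np.asarray(end, float) - np.asarray(start, float)))

def polygon_area_3d(vertices):
    """Area of a planar polygon given by 3D vertices in order, computed
    as half the norm of the cross products fanned from the first vertex."""
    v = np.asarray(vertices, dtype=float)
    total = np.zeros(3)
    for i in range(1, len(v) - 1):
        total += np.cross(v[i] - v[0], v[i + 1] - v[0])
    return 0.5 * float(np.linalg.norm(total))

assert distance_3d([0, 0, 0], [3, 4, 0]) == 5.0
square = [[0, 0, 1], [2, 0, 1], [2, 2, 1], [0, 2, 1]]   # 2x2 square at z=1
assert polygon_area_3d(square) == 4.0
```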
[0061] A specific method of three-dimensional distance measurement and area/volume calculation is described with reference to FIGS. 13 to 16.
As shown in FIG. 13, in this embodiment, as preprocessing and as in the first embodiment described above, the camera positions and rotations (camera vectors) of all captured frame images are obtained (S001), and the camera vectors, once obtained, are stored as data and tabulated (S002). In this way, the three-dimensional coordinate calculations can be simplified and sped up by using the camera vectors already obtained. As in the first embodiment, this processing is performed by the vector calculation unit 107 and the error minimization processing unit 108.
Next, to measure the three-dimensional distance between two points, an image recorded in the image recording unit 102 is first displayed on a display or the like, and within the displayed image (an arbitrary frame Fn shown in Fig. 14) an arbitrary measurement point or arbitrary single point is designated as the start point (S003). Then, an arbitrary measurement point or arbitrary single point within that image, or within a different image (an arbitrary frame Fn+m shown in Fig. 14), is designated as the end point.
The start and end points can be designated with a mouse or the like, for example.
[0062] When an arbitrary start point and end point have been designated, those points are treated as measurement points and are automatically tracked between frame images (S005-S006). As in the first embodiment, this automatic tracking of corresponding points is performed by the correspondence tracking unit 106.
For the start and end points whose correspondence has been established between frame images, three-dimensional coordinate calculation is performed in each image using the camera vectors already obtained in S001-S002, and the final absolute coordinates are acquired. As in the first embodiment, this coordinate calculation for the corresponding points is performed by the vector calculation unit 107, the error minimization processing unit 108, and the absolute coordinate acquisition unit 109, and the resulting data are stored in the measurement data recording unit 110.
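The three-dimensional coordinate calculation from known camera vectors can be sketched as ray triangulation: each frame's camera position and the viewing direction toward the tracked point define a ray, and the point is recovered as the least-squares midpoint of the rays. The NumPy sketch below is an illustrative two-frame version; the function name, the two-ray restriction, and the example values are assumptions, not the patent's actual implementation.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Least-squares midpoint of two viewing rays.

    c1, c2 : camera centers (known camera-vector positions)
    d1, d2 : direction vectors toward the tracked point
    Solves for ray parameters t1, t2 minimizing the gap between
    the closest points on the two rays, then returns their midpoint.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Normal equations for [t1, t2] from d/dt |c1 + t1*d1 - c2 - t2*d2|^2 = 0
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    p1 = c1 + t1 * d1
    p2 = c2 + t2 * d2
    return (p1 + p2) / 2.0

# Two camera positions one metre apart, both seeing a point at (0, 0, 10)
point = triangulate_midpoint(np.array([0.0, 0.0, 0.0]),
                             np.array([0.0, 0.0, 1.0]),
                             np.array([1.0, 0.0, 0.0]),
                             np.array([-0.1, 0.0, 1.0]))
```

With more than two frames available, the same normal-equations approach extends to all rays at once, which is what makes error minimization over many frames effective.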
[0063] Then, by reading out the now-known absolute coordinate data of the start point and the end point, the three-dimensional distance between those two points is obtained by calculation (S007). This three-dimensional distance calculation is performed in the distance calculation unit 112.
Here, since this two-point distance calculation is performed with the absolute coordinates of the start and end points known, the calculation, that is, three-dimensional measurement, is possible not only when the start and end points lie in the same image but also when they lie in different images (see Fig. 14). The obtained three-dimensional distance between the two points can be displayed and output as needed, for example via the measurement data display unit 111 (S008).
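Because both endpoints carry absolute coordinates, the cross-frame case reduces to the same computation as the single-image case. A minimal sketch (the coordinate values are illustrative, not from the patent):

```python
import math

def distance_3d(p_start, p_end):
    """Three-dimensional distance between two points given in the
    same absolute coordinate system, regardless of which frame
    each point was designated in."""
    return math.dist(p_start, p_end)

# Start point designated in frame Fn, end point in frame Fn+m,
# both already resolved to absolute coordinates (metres)
start = (10.0, 2.0, 0.5)
end = (13.0, 6.0, 0.5)
span = distance_3d(start, end)   # 5.0 m
```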
[0064] Further, by repeating the three-dimensional distance measurement between start and end points described above (S009), the plurality of three-dimensional distances thus obtained can be combined to calculate the area or volume of a desired region or the like (S010).
That is, as shown in Fig. 15, by designating a plurality of measurement points or arbitrary designated points, the three-dimensional coordinates of, and three-dimensional distances between, those points are obtained, and by operating on them the area or volume can be calculated. This area or volume calculation is performed in the area/volume calculation unit 113.
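One concrete way to carry out such an operation on the designated points' absolute coordinates, sketched here with generic computational-geometry formulas rather than the patent's specific procedure, is to sum cross products for the area of a planar polygon and signed tetrahedron volumes for a closed surface:

```python
import numpy as np

def polygon_area_3d(pts):
    """Area of a planar polygon given its 3D vertices in order,
    via a triangle fan anchored at the first vertex: half the norm
    of the summed cross products."""
    pts = np.asarray(pts, dtype=float)
    total = np.zeros(3)
    for i in range(1, len(pts) - 1):
        total += np.cross(pts[i] - pts[0], pts[i + 1] - pts[0])
    return 0.5 * np.linalg.norm(total)

def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed triangle mesh, as the sum of
    signed tetrahedron volumes taken against the origin."""
    v = np.asarray(vertices, dtype=float)
    vol = 0.0
    for a, b, c in triangles:
        vol += np.dot(v[a], np.cross(v[b], v[c])) / 6.0
    return abs(vol)

# A 2 m x 3 m rectangle measured in absolute coordinates
rect = [(0, 0, 0), (2, 0, 0), (2, 3, 0), (0, 3, 0)]
area = polygon_area_3d(rect)

# A unit cube as a closed mesh of 12 triangles (vertex index = 4x + 2y + z)
cube_v = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
cube_t = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5),
          (0, 4, 5), (0, 5, 1), (2, 3, 7), (2, 7, 6),
          (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]
volume = mesh_volume(cube_v, cube_t)
```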
In this way, the area or volume of an object in the image is measured three-dimensionally, by calculation, in the three-dimensional coordinate system in which the object exists, and the result can be displayed and output as needed (S011).
[0065] Fig. 16 shows an example image in which arbitrary points for obtaining a three-dimensional distance within the same image are designated with the 3D automatic surveying apparatus of this embodiment.
Fig. 16(a) is an arbitrary image for which the camera vector has been obtained (a camera vector image); in such an image, arbitrary points can be designated using a mouse or the like. Specifically, as shown in Fig. 16(b), any two points whose three-dimensional distance is to be obtained can be designated, and the two designated points are connected by a straight line.
The three-dimensional distance between the two designated points is then obtained by the calculation described above, and the result is output and displayed in a predetermined format.
[0066] As described above, the 3D automatic surveying apparatus 100 of this embodiment exploits the fact that high-precision absolute coordinates can be obtained for any measurement point, and can measure the three-dimensional distance between a designated start point and end point.
This makes high-precision three-dimensional distance measurement possible, free of distance restrictions, not only between two points within the same image but also between any two points designated across a plurality of frame images. Furthermore, by designating three or more points, the area or volume of any object or region within an image, or spanning a plurality of images, can also be measured three-dimensionally.
[0067] [Third Embodiment]
Next, a third embodiment of the 3D automatic surveying apparatus of the present invention will be described with reference to Fig. 17.
Fig. 17 is a block diagram showing the schematic configuration of the 3D automatic surveying apparatus according to the third embodiment of the present invention.
The 3D automatic surveying apparatus shown in the figure is a modified embodiment of the first embodiment described above, obtained by adding a planar unevenness surveying device 200 to the 3D automatic surveying apparatus shown in the first embodiment (see Figs. 1, 2, and 11).
Specifically, as shown in Fig. 17, the planar unevenness surveying device 200 of this embodiment includes a detailed plane image acquisition unit 201, a parallel image recording unit 202, a planar unevenness three-dimensional measurement unit 203, a coordinate integration unit 204, an integrated measurement coordinate recording unit 205, and an integrated measurement data display unit 206.
[0068] The detailed plane image acquisition unit 201 photographs, with a plurality of cameras mounted on a vehicle or the like, the uneven planar portions, such as road surfaces, that lie along the travel path.
The parallel image recording unit 202 records the plurality of parallax images captured by the detailed plane image acquisition unit 201.
The planar unevenness three-dimensional measurement unit 203 three-dimensionally measures the unevenness of the plane from the parallax images recorded in the parallel image recording unit 202.
The coordinate integration unit 204 reads out, from the measurement data recording unit 109 of the 3D automatic surveying apparatus 100 (see the first embodiment), the absolute three-dimensional data of the planar portion measured three-dimensionally by the planar unevenness three-dimensional measurement unit 203, and integrates those coordinates with the planar unevenness three-dimensional data. The coordinate-integrated data are recorded in the integrated measurement coordinate recording unit 205 and, as needed, are displayed and output via the integrated measurement data display unit 206.
[0069] With the planar unevenness surveying device 200 of this embodiment as described above, a 3D automatic surveying apparatus can be realized that, while based on the 3D automatic surveying apparatus 100 shown in the first embodiment, can additionally perform three-dimensional measurement of the unevenness of the road and its surroundings. To measure the unevenness of the road surface on which a vehicle carrying the on-board cameras travels, it is necessary, as in the first embodiment, to acquire the three-dimensional coordinates of the entire surroundings while also measuring the road surface as a surface and measuring its unevenness.
To measure the road surface as a surface, in the same manner as with the measurement points in the first embodiment described above, marks such as markers can, for example, be applied in detail to the road surface, and the three-dimensional coordinates obtained from images of that road surface. For a road surface provided with markers, as with the measurement points in the first embodiment, the three-dimensional relative coordinates are obtained from the images and, by supplying known absolute coordinates, three-dimensional position coordinates are obtained for the unevenness as well.
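The step of giving known absolute coordinates to relative coordinates can be sketched as a simple scale fit: if the true distance between two reference points is known, the ratio of that distance to their relative-coordinate separation rescales every reconstructed point. This minimal sketch (names, values, and the single-scale assumption are illustrative) ignores the rotation and translation alignment a full implementation would also apply.

```python
import numpy as np

def apply_known_scale(rel_points, ref_a, ref_b, known_distance):
    """Rescale relative 3D coordinates so that the distance between
    the two reference points ref_a and ref_b equals known_distance.

    rel_points     : (N, 3) relative coordinates from the image sequence
    ref_a, ref_b   : indices of the two reference points
    known_distance : surveyed absolute distance between them
    """
    pts = np.asarray(rel_points, dtype=float)
    rel_dist = np.linalg.norm(pts[ref_a] - pts[ref_b])
    scale = known_distance / rel_dist
    return pts * scale

# Relative reconstruction in which the two markers come out 2 units apart,
# while their surveyed separation is actually 10 m
rel = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
absolute = apply_known_scale(rel, 0, 1, 10.0)
```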
[0070] In the case of a road surface, however, unlike a building or the like, markers often cannot be applied, for reasons such as obstructing traffic, and even when they can, it is generally difficult to apply markers over the entire road surface densely enough for the unevenness to be measured.
In practice, therefore, the three-dimensional measurement of the unevenness must be performed from images of the road surface taken without marks such as markers.
Here, a pattern produced by projecting light onto the road surface can be treated as a mark. Accordingly, in this embodiment, when markers cannot be applied to the road surface, the road surface is photographed with a plurality of (for example, two) cameras and the parallax is detected, as a means of accurately measuring the unevenness of the road surface.
[0071] Specifically, the detailed plane image acquisition unit 201 photographs the desired road surface synchronously with a plurality of cameras installed in parallel, thereby acquiring images free of time lag, which are recorded in the parallel image recording unit 202. Three-dimensional coordinates are then acquired from the parallax of the recorded images, and the planar unevenness three-dimensional measurement unit 203 three-dimensionally measures the unevenness of the road surface.
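For the parallax step, the standard rectified-stereo relation can serve as a sketch: with focal length f (in pixels), baseline B between the parallel cameras, and disparity d for a road-surface patch, the depth is Z = f·B/d, so per-patch disparities map directly to per-patch heights. The numbers below are illustrative assumptions, not values from the patent.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from rectified stereo parallax: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Two cameras 0.5 m apart, focal length 1000 px:
# a flat patch at 100 px disparity vs. a bump at 101 px disparity
z_flat = depth_from_disparity(1000.0, 0.5, 100.0)   # 5.0 m
z_bump = depth_from_disparity(1000.0, 0.5, 101.0)   # ~4.95 m
height = z_flat - z_bump                            # ~5 cm of relief
```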
The unevenness obtained here consists only of relative values and carries no absolute coordinates, so as distortion within the overall scale it is still incomplete. Therefore, the overall scale is measured by the 3D automatic surveying apparatus 100 of the first embodiment, and only the close-range unevenness of the road surface is detected by the parallax of this embodiment; the two are then coordinate-integrated in the coordinate integration unit 204. In this way, the unevenness of the road surface can be expressed accurately.
[0072] In the preparatory work for measuring road surface unevenness in this embodiment as well, marking the road surface with markers as described above is preferable as a method of measuring three-dimensional data with high accuracy. For example, when measuring the unevenness of a plane whose texture is uniform and on which feature points are hard to find, such as road surface unevenness, applying markers makes accurate three-dimensional measurement possible even from the successive images of a single camera (see the first embodiment).
Unevenness measurement by parallax alone can achieve close-range accuracy but not long-range accuracy, owing to the limitation on the baseline between the cameras.
[0073] In this embodiment, therefore, the planar unevenness surveying device 200 is used only for the very-close-range measurement of road surface unevenness, obtaining the unevenness by image processing from the parallax of two cameras, while the absolute coordinate measurement at close, medium, and long range is performed by the single-camera marker method using the 3D automatic surveying apparatus 100 shown in the first embodiment; by coordinate-integrating the two, three-dimensional measurement becomes possible for road surface irregularities of all sizes, including unevenness, deflection, and distortion of the road surface.
Deflection of the road surface under a heavy load can be measured by comparing two measurements, one taken under load and one with no load.
[0074] [Fourth Embodiment]
Next, a fourth embodiment of the 3D automatic surveying apparatus of the present invention will be described with reference to Fig. 18.
Fig. 18 is a block diagram showing the schematic configuration of the 3D automatic surveying apparatus according to the fourth embodiment of the present invention.
The 3D automatic surveying apparatus shown in the figure is a modified embodiment of the first embodiment described above, obtained by adding a road surface three-dimensional map creation device 300 to the 3D automatic surveying apparatus shown in the first embodiment (see Figs. 1, 2, and 11).
The road surface three-dimensional map creation device 300 of this embodiment automatically extracts road marking portions from the 360-degree omnidirectional images obtained by the 3D automatic surveying apparatus 100 and 3D-surveys the road surface, thereby making it possible to create a three-dimensional map of a desired road surface.
[0075] Specifically, as shown in Fig. 18, the road surface three-dimensional map creation device 300 includes an image stabilization unit 301, a traveling direction control unit 302, an image vertical plane development unit 303, a road surface basic shape model generation unit 304, a road surface three-dimensional measurement unit 305, a road surface parameter determination unit 306, a road transparent CG generation unit 307, a composite road surface plane development unit 308, a road surface vector extraction unit 309, a road surface texture flexible coupling unit 310, a texture averaging unit 311, a target region cutout unit 312, a road marking recognition and coordinate acquisition unit 313, and a three-dimensional map generation unit 314.
[0076] The image stabilization unit 301 rotation-corrects the 360-degree omnidirectional images captured by the omnidirectional image capturing unit 101, based on the error-minimized camera vectors obtained by the error minimization processing unit 108 of the 3D automatic surveying apparatus 100, thereby correcting shake and stabilizing the images.
The traveling direction control unit 302 fixes the traveling direction of the images stabilized by the image stabilization unit 301 to a target direction, or controls their movement toward a target direction.
The image vertical plane development unit 303 develops the images whose traveling direction has been controlled by the traveling direction control unit 302 onto a vertical plane (south-pole plane development). That is, to generate the road surface three-dimensional model, processing is performed on images developed onto a vertical plane.
In a 360-degree omnidirectional image, all directions are equivalent and no optical axis exists; one might rather say that every direction is an optical axis. Therefore, to display a 360-degree omnidirectional image as a plane-developed image having the perspective of an image taken with an ordinary lens, it is necessary to set a virtual optical axis, determine the plane onto which to develop, and convert the image so that it has perspective on that plane.
For this reason, when processing structures built with the vertical direction as their reference, or road surfaces and road markings that, while referenced to the vertical, have some gradient, the work is simplified, and it is therefore advantageous, to develop the image onto a plane perpendicular to a coordinate axis so that the result has a linear scale, or further onto a plane that has a linear scale on the road surface itself, taking the inclination of the road surface into account. In general, vertical plane development is advantageous for acquiring the three-dimensional coordinates of the road, and road plane development is advantageous for processing the road surface and road markings.
In this embodiment, therefore, the image vertical plane development unit 303 develops the images onto a vertical plane for processing.
This vertical plane development processing is not strictly necessary for three-dimensional map generation, but since it simplifies the work, this embodiment includes the image vertical plane development unit 303 and performs it.
[0077] The road surface basic shape model generation unit 304 generates a basic shape model of the road surface in which the parameters of the road surface shape are left undetermined. The road surface three-dimensional measurement unit 305 measures the three-dimensional coordinates of the road surface from the road surface images developed onto the vertical plane by the image vertical plane development unit. Specifically, several locations in the road portion of each frame image are grouped into large blocks, and three-dimensional measurement is performed by correlation. Since the correlation here is taken over a large area, high accuracy can be achieved.
The road surface parameter determination unit 306 determines each parameter of the road surface model, thereby automatically determining the three-dimensional shape of the road surface.
[0078] The road transparent CG generation unit 307 acquires each parameter of the shape of the road surface from the road surface measurement data obtained by the road surface three-dimensional measurement unit, and generates a transparent CG of that road surface. That is, the road transparent CG generation unit 307 creates the transparent CG of the road surface from the road surface model whose parameters have been fixed and whose shape has thus been determined.
The composite road surface plane development unit 308 combines the transparent CG generated by the road transparent CG generation unit 307 with the road surface images stabilized in the traveling direction by the traveling direction control unit 302, and develops the image parallel to the road surface. That is, the composite road surface plane development unit 308 slightly fine-adjusts the development plane used for the images obtained in the preceding steps and develops the images on a plane parallel to the road. Since the road is not always a horizontal plane, a plane close to a linear plane (a plane on which things of equal length within the same image appear with the same length) is selected for the correlation processing of the next step.
[0079] The road surface vector extraction unit 309 extracts only the road surface vectors, by vector selection within the plane-developed image, and deletes the others. The road surface can be extracted by selecting the stationary vectors (the smallest movement vectors), and moving objects carrying movement vectors are deleted as well.
The road surface texture flexible coupling unit 310 acquires the road surface texture by the rubber-band theory for the road surface, and couples the road surface by the rubber-band theory for later processing. That is, the road surface texture flexible coupling unit 310 divides the road surface into blocks as needed, flexibly couples the characteristic portions of the road surface so as not to change the order of the textures, and sends its output to the texture averaging unit 311 of the next stage.
[0080] The texture averaging unit 311 extracts the road surface on the road surface of the transparent CG and performs averaging on the transparent CG, thereby eliminating noise. Since the distance between the cameras and the road surface is constant, superposition in a stationary coordinate system is possible, and averaging therefore becomes possible. The target region cutout unit 312 roughly cuts out, from the noise-reduced image produced by the texture averaging unit 311, the outlines of regions such as road surface figures (road markings and the like) and obstacles. For example, only an arbitrary road marking portion is extracted from the road surface histogram on the transparent CG. The purpose here is to cut the regions out broadly, and the cutout may be incomplete.
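Because each road patch stays at a fixed place in the stationary (transparent-CG) coordinate system, the averaging amounts to stacking the re-projected frames and taking the per-pixel mean, which suppresses zero-mean noise roughly in proportion to the square root of the frame count. A minimal NumPy sketch, with synthetic noise standing in for real registered frames:

```python
import numpy as np

def average_registered_frames(frames):
    """Per-pixel mean over frames already registered to the same
    stationary road-surface coordinate system."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    return stack.mean(axis=0)

# Synthetic example: a constant 100-intensity patch observed over
# 64 frames with additive zero-mean noise
rng = np.random.default_rng(0)
truth = np.full((8, 8), 100.0)
frames = [truth + rng.normal(0.0, 10.0, truth.shape) for _ in range(64)]

noisy_error = float(np.abs(frames[0] - truth).mean())
averaged_error = float(np.abs(average_registered_frames(frames) - truth).mean())
```

Averaging only works here because the road surface is static; moving objects must first be removed by the vector selection of the preceding stage.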
[0081] The road marking recognition and coordinate acquisition unit 313 recognizes the target object from the object region cut out by the target region cutout unit 312 and acquires its coordinates. For example, the extracted road marking portion is recognized by PRM, and its coordinates are determined; that is, PRM processing is performed on the region cut out by the target region cutout unit 312. Since the three-dimensional shape of the road surface has already been determined, determining the coordinates two-dimensionally improves the accuracy.
Here, PRM is an abbreviation of Parts Reconstruction Method (a 3D space recognition method), a technology for recognizing objects for which the present applicant has already filed a patent application (see International Application PCT/JP01/05387). Specifically, the PRM technology prepares in advance, as parts (operator parts), all of the expected shapes and attributes of the objects, compares those parts with the actual captured images, and selects the matching parts to recognize the objects. The object "parts" required for automatic guided travel and automatic driving of a vehicle are lanes, white lines, yellow lines, and pedestrian crossings as road markings, and speed signs, guide signs, and the like as road signs; since these are of fixed form, they can easily be recognized by the PRM technology. Moreover, when searching for an object within video for which the camera vectors have been obtained (CV video), the expected three-dimensional space in which the object exists can be limited to a narrow range, making recognition more efficient.
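The comparison of prepared operator parts with the captured images can be sketched as template matching by normalized cross-correlation; the tiny binary "part" and the exhaustive search below are illustrative stand-ins for real PRM operator parts and search strategies, not the patented method itself.

```python
import numpy as np

def match_part(image, part):
    """Slide a prepared part over the image and return the offset
    with the highest normalized cross-correlation score."""
    ih, iw = image.shape
    ph, pw = part.shape
    p = part - part.mean()
    best, best_pos = -np.inf, None
    for y in range(ih - ph + 1):
        for x in range(iw - pw + 1):
            w = image[y:y + ph, x:x + pw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * np.linalg.norm(p)
            if denom == 0:
                continue  # flat window: correlation undefined, skip
            score = float((w * p).sum() / denom)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

# A 3x3 "corner" part embedded in a 10x10 image at offset (4, 5)
part = np.array([[1., 1., 1.],
                 [1., 0., 0.],
                 [1., 0., 0.]])
image = np.zeros((10, 10))
image[4:7, 5:8] = part
pos, score = match_part(image, part)
```

Limiting the search to the expected space, as the text describes for CV video, corresponds to restricting the (y, x) loop ranges to a small candidate region.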
[0082] The output of the figures whose coordinates have been determined by the road marking recognition and coordinate acquisition unit 313 is treated as measurement points and is sent to the measurement point specification unit 104 of the 3D automatic surveying apparatus (see Fig. 18). In the 3D automatic surveying apparatus 100, the processing described above acquires the absolute coordinates of road surface figures such as road markings and of obstacles on the road surface, and the acquired absolute coordinates are output from the measurement data recording unit 110. All of the figures are thereby reconstructed in absolute coordinates and sent to the three-dimensional map generation unit 314 of the next step.
The three-dimensional map generation unit 314 reconstructs the output (absolute coordinates) from the measurement data recording unit, extracts and rearranges it as three-dimensional figures of a determined specification, and generates a three-dimensional map of the desired road surface.
[0083] Fig. 19 shows an example of generating a three-dimensional map based on video converted so as to be equivalent to video captured from above the road. The road video shown in the figure is 360-degree omnidirectional video (CV video) for which camera vector calculation has been performed by the 3D automatic surveying apparatus 100, and shows the road surface as observed from several meters above the ground rather than as a complete plan view.
When a three-dimensional map of a road is generated, the shape of the vicinity of the road surface is important, and high measurement accuracy is required. In general, it is known beforehand that a road has a structure of the kind shown in the cross-sectional view of Fig. 19(a), so three-dimensional measurement can be performed by anticipating that shape.
[0084] Furthermore, by exploiting the features of the 360-degree omnidirectional image and setting the road surface display so that the viewpoint direction is directly downward onto the road surface, matching & grip over a wide region becomes possible. Specifically, in an arbitrary viewing direction, matching & grip is normally limited to a region of about 15 x 15 pixels; in the directly-downward display, however, the viewpoint is nearly perpendicular to the road surface and the image moves between frames without changing shape, so the image distortion of each frame can be ignored. This makes matching & grip (M&G) possible over a wide region of, for example, 50 x 50 pixels or more, so that matching & grip can be performed even on road surfaces with few features, and the measurement accuracy improves.
Furthermore, since road markings (center line, shoulder line, etc.) are painted on the pavement surface according to fixed standards, their patterns can be prepared in advance as parts for the PRM operator (PRM Operator), and their three-dimensional positions can be detected by comparing the prepared operator parts against the video.
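As one way to picture the comparison step, the sketch below slides a prepared operator pattern over a grayscale road image and scores each position with normalized cross-correlation. This is only an illustrative stand-in: the patent does not disclose its matching algorithm, and the function name and window handling here are assumptions.

```python
import numpy as np

def match_operator(image, operator, stride=1):
    """Return the (row, col) where a prepared operator part best matches
    the image, scored by normalized cross-correlation (illustrative only;
    the patent does not specify the comparison method)."""
    H, W = image.shape
    h, w = operator.shape
    t = operator - operator.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, None
    for y in range(0, H - h + 1, stride):
        for x in range(0, W - w + 1, stride):
            patch = image[y:y + h, x:x + w]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0.0:
                continue  # flat patch or flat template: correlation undefined
            score = float((p * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

A larger search window (e.g. 50 × 50 pixels in the straight-down view, versus 15 × 15 in an arbitrary direction) simply means a larger `operator` array; the scoring is unchanged.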
[0085] Concretely, road-surface operators include patterns such as those shown in FIG. 19(c). Many other patterns not illustrated are also conceivable as operator parts; however, a three-dimensional map does not require measuring the entire road surface. It suffices to sample the road surface at appropriate intervals and complete the road cross-section, so the patterns shown in FIG. 19 can be said to be sufficient. Furthermore, by also preparing three-dimensional PRM operator parts (PRM 3D Operator) and matching three-dimensionally, even the step at a road curb, for example, can be reproduced accurately. [0086] FIG. 20 shows a three-dimensional map of the road shown in FIG. 19, viewed stereoscopically.
As shown in the figure, in footage of a paved road the PRM operator demonstrates its effectiveness more in recognizing three-dimensional road signs than in recognizing road-surface markings such as the center line shown in FIG. 19. That is, for road-sign recognition, as shown in FIG. 20(a), an expected road-sign space is assumed on the CV video, and the type, position, shape, and coordinates of the target road sign can be recognized within that limited space.
With CV video, the expected road-sign space can be composited onto the live-action image as CG, and the target road sign can be searched for within that limited range only.
Also, since the shape, size, and so on of road signs are normally standardized, by using the three-dimensional operators of the individual road signs prepared in advance as parts (see FIG. 20(b)), signs of the three-dimensionally determined size can be searched for and found within the expected road-sign space. The type, position, coordinates, and shape of the found sign are then recognized.
[0087] In this way, CV video can be handled as if its objects carried three-dimensional coordinates, which is extremely advantageous for searching. For objects whose shape is already fixed, such as road signs, the apparent size at a given three-dimensional position can be calculated, so using the PRM operator is advantageous; by preparing a variety of signs as PRM operator parts, the target sign can be recognized by finding the matching part among the prepared sign parts.
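The "apparent size at a given three-dimensional position" mentioned above can be pictured with a simple pinhole-camera projection. The patent only states that the size can be calculated; the pinhole model and the helper below are assumptions for illustration.

```python
def apparent_size_px(real_size_m: float, distance_m: float, focal_px: float) -> float:
    """Apparent size in pixels of an object of known real size seen at a
    given camera distance, under a simple pinhole model (an assumed model;
    the patent only states the apparent size 'can be calculated')."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return real_size_m * focal_px / distance_m
```

For example, a 0.6 m sign at 10 m with a 1000-pixel focal length spans 60 pixels, which tells the search how large a template to expect inside the road-sign space.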
As described above, according to the 3D automatic surveying device 100 of this embodiment, providing the road-surface three-dimensional map creation device 300 makes it possible, while keeping the 3D automatic surveying device 100 of the first embodiment described above as the basis, to further generate high-precision three-dimensional maps of arbitrary road surfaces and the like.
[0088] Preferred embodiments of the 3D automatic surveying device of the present invention have been shown and described above; however, the 3D automatic surveying device according to the present invention is not limited to the embodiments described above, and it goes without saying that various modifications are possible within the scope of the present invention.
For example, the 3D automatic surveying device 100 shown in the above embodiments and the preparation work units 10 and 20, the distance computation unit 112, the area/volume computation unit 113, the planar unevenness surveying device 200, and the road-surface three-dimensional map creation device 300 added to it can each be implemented in any combination; they are not limited to the combinations shown in the embodiments described above, and parts may be omitted as appropriate, or all the devices may be provided at the same time.

Industrial Applicability
The present invention can be used, for example, as an image surveying apparatus that obtains the positions, distances, and areas of desired measurement points based on moving-image video shot by a vehicle-mounted camera.

Claims

[1] A 3D automatic surveying device comprising:
an omnidirectional image capturing unit that uses a moving 360-degree omnidirectional camera to capture a moving image or continuous still images including desired measurement points and predetermined reference points whose three-dimensional absolute coordinates are known;
an image recording unit that records the images captured by the omnidirectional image capturing unit;
a feature point extraction unit that extracts, as feature points, visually distinctive portions other than the measurement points within the images recorded in the image recording unit;
a measurement point identification unit that automatically extracts the measurement points within the images recorded in the image recording unit;
a reference point identification unit that automatically extracts the reference points within the images recorded in the image recording unit;
a corresponding point tracking unit that tracks the measurement points, reference points, and feature points across the frame images and associates them;
a vector computation unit that computes three-dimensional relative coordinates for the measurement points, reference points, and feature points associated by the corresponding point tracking unit and, as necessary, for a camera vector indicating the position and rotation of the camera;
an error minimization processing unit that repeats the computation in the vector computation unit, repeating overlapping computations and applying statistical processing so as to minimize the error of the obtained three-dimensional relative coordinates;
an absolute coordinate acquisition unit that converts the three-dimensional relative coordinates obtained by the vector computation unit into the absolute coordinate system from the known three-dimensional absolute coordinates of the reference points, and assigns three-dimensional absolute coordinates to the measurement points, reference points, and feature points;
a measurement data recording unit that records the final absolute coordinates assigned to the measurement points, reference points, and feature points; and
a display unit that displays the measurement data recorded in the measurement data recording unit.
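The absolute coordinate acquisition step in claim 1 — converting relative coordinates to the absolute system using the known reference points — can be realized as a similarity transform (scale, rotation, translation) fitted to matched point pairs. The least-squares estimator below (Umeyama's method) is one standard choice; the claim itself does not prescribe a particular estimator, so treat this as a hedged sketch.

```python
import numpy as np

def similarity_transform(rel_pts, abs_pts):
    """Estimate scale s, rotation R, translation t so that
    abs ≈ s * R @ rel + t, from matched reference points
    (Umeyama least-squares fit; an illustrative choice only)."""
    rel = np.asarray(rel_pts, dtype=float)
    ab = np.asarray(abs_pts, dtype=float)
    mu_r, mu_a = rel.mean(axis=0), ab.mean(axis=0)
    X, Y = rel - mu_r, ab - mu_a
    cov = Y.T @ X / len(rel)            # cross-covariance of centered sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                    # avoid a reflection
    R = U @ S @ Vt
    var_r = (X ** 2).sum() / len(rel)   # variance of the relative set
    s = np.trace(np.diag(D) @ S) / var_r
    t = mu_a - s * R @ mu_r
    return s, R, t
```

At least three non-collinear reference points with known absolute coordinates are needed to fix the transform; once it is estimated, the same `s, R, t` map every measurement point and feature point into the absolute system.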
[2] A 3D automatic surveying device comprising:
an omnidirectional image capturing unit that uses a moving 360-degree omnidirectional camera to capture a moving image or continuous still images including desired measurement points and predetermined reference points whose three-dimensional absolute coordinates are known;
an image recording unit that records the images captured by the omnidirectional image capturing unit;
a feature point extraction unit that extracts, as feature points, visually distinctive portions within the images recorded in the image recording unit;
a reference point identification unit that automatically extracts the reference points within the images recorded in the image recording unit;
a corresponding point tracking unit that tracks the reference points and feature points across the frame images and associates them;
a vector computation unit that computes, from the reference points and feature points associated by the corresponding point tracking unit, three-dimensional relative coordinates for a camera vector indicating the position and rotation of the camera;
an error minimization processing unit that repeats the computation in the vector computation unit, repeating overlapping computations and applying statistical processing so as to minimize the error of the obtained three-dimensional relative coordinates;
an absolute coordinate acquisition unit that converts the three-dimensional relative coordinates of the camera obtained by the vector computation unit into the absolute coordinate system from the known three-dimensional absolute coordinates of the reference points, and assigns three-dimensional absolute coordinates;
a measurement point identification unit that automatically extracts the measurement points within the images recorded in the image recording unit;
a measurement point tracking unit that tracks the measurement points extracted by the measurement point identification unit across the frame images and associates them;
a measurement point measurement computation unit that computes measured values of the measurement points associated by the measurement point tracking unit from the camera vectors obtained by the vector computation unit;
a measurement data recording unit that records the absolute coordinates of the measurement points; and
a display unit that displays the measurement data recorded in the measurement data recording unit.
[3] The 3D automatic surveying device according to claim 1 or 2, wherein the reference points include, together with reference points whose three-dimensional absolute coordinates are known or in place of such reference points, length reference points whose length is known,
the vector computation unit computes the distance between the two length reference points, and
the error minimization processing unit repeats overlapping computations and applies statistical processing so that the distance between the two length reference points obtained by the computation in the vector computation unit matches the known length of the length reference points.
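Claim 3's length reference can be pictured with a short helper: once the two length-reference points have been reconstructed in relative coordinates, the ratio of their known separation to their computed separation is the scale factor the statistical processing should drive the model toward. This is a hedged sketch, not the patent's procedure.

```python
import math

def scale_from_length(p1, p2, known_length_m):
    """Scale factor that maps model units to metres, from one
    length-reference pair whose true separation is known
    (illustrative sketch of the claim-3 constraint)."""
    d = math.dist(p1, p2)
    if d == 0:
        raise ValueError("length-reference points coincide")
    return known_length_m / d
```

For example, if the pair is reconstructed 5 model units apart but is known to be 10 m apart, every relative coordinate should be scaled by 2 before (or as part of) the absolute-coordinate conversion.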
[4] The 3D automatic surveying device according to any one of claims 1 to 3, comprising an in-image preparation work unit having an in-image measurement point designation work unit that designates arbitrary measurement points within the images recorded in the image recording unit, and an in-image reference point designation work unit that designates arbitrary reference points within the images recorded in the image recording unit, wherein the in-image preparation work unit causes the measurement point identification unit and the reference point identification unit to extract the designated arbitrary measurement points and reference points.
[5] The 3D automatic surveying device according to any one of claims 1 to 4, wherein
the vector computation unit repeats a unit computation that obtains the desired three-dimensional relative coordinates, taking as unit images any two frame images Fn and Fn+m (m = frame interval) used for the three-dimensional relative coordinate computation of the measurement points, reference points, feature points, or camera vector, and
the error minimization processing unit, as n advances continuously with the progress of the images, adjusts the scales so as to minimize the error among the three-dimensional relative coordinates obtained by computing the same feature point multiple times, integrates them, and determines the final three-dimensional relative coordinates.
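One plausible form of the "statistical processing" in claim 5 — integrating the several estimates of the same feature point produced by overlapping unit computations — is an inverse-error weighted mean taken after the scales have been aligned. The weighting rule below is an assumption; the claim leaves the statistical method open.

```python
import numpy as np

def fuse_estimates(coords, errors):
    """Fuse repeated 3-D estimates of one feature point, weighting each
    estimate by the inverse of its error measure (illustrative choice;
    claim 5 does not prescribe the weighting)."""
    coords = np.asarray(coords, dtype=float)   # shape (k, 3): k estimates
    w = 1.0 / np.asarray(errors, dtype=float)  # larger error -> smaller weight
    return (coords * w[:, None]).sum(axis=0) / w.sum()
```

Estimates from well-conditioned frame pairs (small reported error) thus dominate the final coordinate, while poor estimates are down-weighted rather than discarded outright.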
[6] The 3D automatic surveying device according to claim 5, wherein the vector computation unit performs the unit computation with the frame interval m set, according to the distance from the camera to the measurement points, reference points, and feature points, so that m becomes larger as the distance from the camera becomes larger.
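Claim 6 says only that m should grow with the camera-to-point distance; the concrete rule is left open. The linear rule below, together with its base and reference constants, is purely illustrative — distant points move little between adjacent frames, so a wider frame baseline restores a usable parallax.

```python
def frame_interval(distance_m, base=1, ref_distance_m=5.0, max_m=30):
    """Pick the frame interval m for a unit computation so that farther
    points use wider-baseline frame pairs (claim 6). The linear rule
    and the constants are assumptions, not the patent's values."""
    m = max(base, round(base * distance_m / ref_distance_m))
    return min(m, max_m)
```

With these assumed constants, a point 5 m away is paired across adjacent frames, one 50 m away across a 10-frame baseline, and very distant points are capped at a 30-frame baseline.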
[7] The 3D automatic surveying device according to any one of claims 1 to 6, wherein the vector computation unit deletes feature points whose computed three-dimensional relative coordinates have a large error distribution and, as necessary, recomputes based on other feature points, raising the accuracy of the measurement point computation.
[8] The 3D automatic surveying device according to any one of claims 1 to 7, comprising a distance computation unit that, with an arbitrary measurement point or arbitrary feature point in any image recorded in the image recording unit designated as a start point and an arbitrary measurement point or arbitrary feature point in that image or in a different image designated as an end point, computes the three-dimensional distance between the designated start and end points, within the same image or across different images, based on the absolute coordinates recorded in the measurement data recording unit.
[9] The 3D automatic surveying device according to claim 8, comprising an area/volume computation unit that designates a plurality of points within any single image or across different images recorded in the image recording unit, and combines a plurality of the start-point-to-end-point three-dimensional distance measurements obtained by the distance computation unit to compute, by calculation, the area or volume of a desired object within the same image or across different images.
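Claim 9's area measurement — combining several point-to-point measurements between designated points — reduces, for a planar convex outline, to summing triangle areas fanned out from one vertex. The sketch below assumes the designated points already carry absolute 3-D coordinates and lie roughly in one plane; the claim does not fix how the measurements are combined.

```python
import numpy as np

def polygon_area_3d(vertices):
    """Area of a planar, convex polygon given its 3-D vertices in order,
    by fanning triangles from the first vertex (one illustrative way to
    combine the claim-9 distance measurements)."""
    pts = np.asarray(vertices, dtype=float)
    p0 = pts[0]
    area = 0.0
    for i in range(1, len(pts) - 1):
        # half the cross-product magnitude is the triangle's area
        area += 0.5 * np.linalg.norm(np.cross(pts[i] - p0, pts[i + 1] - p0))
    return area
```

A volume could be built the same way, e.g. by multiplying such a base area by a measured height, or by summing tetrahedra for a closed set of designated points.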
[10] The 3D automatic surveying device according to any one of claims 1 to 9, comprising a three-dimensional map generation device having:
a traveling direction control unit that fixes or controls, in the traveling direction, the images obtained by the omnidirectional image capturing unit, using the camera vectors obtained by the vector computation unit;
an image vertical plane development unit that develops the images stabilized in the traveling direction by the traveling direction control unit onto a vertical plane;
a road surface basic shape model generation unit that generates a basic shape model of the road surface in which the parameters of the road surface shape are left undetermined;
a road surface three-dimensional measurement unit that measures the three-dimensional coordinates of the road surface from the road surface images developed onto the vertical plane by the image vertical plane development unit;
a road transparent CG generation unit that acquires the parameters of the road surface shape from the road surface measurement data obtained by the road surface three-dimensional measurement unit, and generates a transparent CG of the road surface;
a composite road surface plane development unit that composites the transparent CG generated by the road transparent CG generation unit with the road surface images stabilized in the traveling direction by the traveling direction control unit, and develops the images parallel to the road surface;
a texture averaging unit that averages road surface textures over the images developed by the composite road surface plane development unit to reduce noise in the images;
a road surface texture flexible combining unit that, as necessary, divides the road surface into blocks, flexibly combines the characteristic portions of the road surface without changing the order of the textures, and sends its output to the texture averaging unit;
an object region cutout unit that roughly cuts out, from the images denoised by the texture averaging unit, the regions of road surface figures such as road markings and of obstacles and the like;
a road marking recognition and coordinate acquisition unit that recognizes the target objects from the object regions cut out by the object region cutout unit and acquires their coordinates; and
a three-dimensional map generation unit that inputs each vertex of the polygon constituting the target object whose coordinates have been acquired, as a measurement point, to the measurement point identification unit that obtains absolute coordinates, and reconstructs the output from the measurement data recording unit holding the acquired absolute coordinates to generate a three-dimensional map of the road surface.
PCT/JP2004/015766 2003-10-29 2004-10-19 3d automatic measuring apparatus WO2005040721A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005514989A JP4545093B2 (en) 2003-10-29 2004-10-19 3D automatic surveying device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2003-369300 2003-10-29
JP2003369300 2003-10-29

Publications (1)

Publication Number Publication Date
WO2005040721A1 true WO2005040721A1 (en) 2005-05-06

Family

ID=34510381

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2004/015766 WO2005040721A1 (en) 2003-10-29 2004-10-19 3d automatic measuring apparatus

Country Status (2)

Country Link
JP (1) JP4545093B2 (en)
WO (1) WO2005040721A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006113645A (en) * 2004-10-12 2006-04-27 Kajima Corp Moving locus analysis method
JP2007114916A (en) * 2005-10-19 2007-05-10 Kazuo Iwane Old/new video image coordinate integration unit
JP2007278844A (en) * 2006-04-06 2007-10-25 Topcon Corp Image processing apparatus and its processing method
CN100454291C (en) * 2005-06-07 2009-01-21 乐必峰软件公司 Method for detecting 3d measurement data using allowable error zone
CN102798380A (en) * 2012-07-09 2012-11-28 中国人民解放军国防科学技术大学 Method for measuring motion parameters of target in linear array image
CN103528571A (en) * 2013-10-12 2014-01-22 上海新跃仪表厂 Monocular stereo vision relative position/pose measuring method
CN104613941A (en) * 2015-01-30 2015-05-13 北京林业大学 Analysis method of terrestrial photograph Kappa and Omega angle with vertical base line
JP2019185404A (en) * 2018-04-10 2019-10-24 キヤノン株式会社 Image processing device, imaging device, image processing method, and image processing program
WO2020255266A1 (en) * 2019-06-18 2020-12-24 日本電信電話株式会社 Absolute coordinate acquisition method
CN112686989A (en) * 2021-01-04 2021-04-20 北京高因科技有限公司 Three-dimensional space roaming implementation method
CN113762428A (en) * 2021-11-10 2021-12-07 北京中科慧眼科技有限公司 Road surface bumping degree grade classification method and system
CN114061472A (en) * 2021-11-03 2022-02-18 常州市建筑科学研究院集团股份有限公司 Method for correcting measurement coordinate error based on target

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
JP5671416B2 (en) * 2011-07-04 2015-02-18 大成建設株式会社 Panorama image distance calculation device
JP2014225108A (en) 2013-05-16 2014-12-04 ソニー株式会社 Image processing apparatus, image processing method, and program

Citations (3)

Publication number Priority date Publication date Assignee Title
JP2000171248A (en) * 1998-10-02 2000-06-23 Asahi Optical Co Ltd Target for photogrammetry
JP2002310619A (en) * 2001-01-31 2002-10-23 Hewlett Packard Co <Hp> Measurement device
JP2003091716A (en) * 2001-09-17 2003-03-28 Japan Science & Technology Corp Three-dimensional space measurement data accumulating method and device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP3501936B2 (en) * 1998-01-14 2004-03-02 有三 大西 Displacement measuring method and displacement measuring device
JP4511147B2 (en) * 2003-10-02 2010-07-28 株式会社岩根研究所 3D shape generator


Non-Patent Citations (2)

Title
MIYAKAWA I ET AL: "2.5 Dimensional Space Reconstruction from Position Data Road Images and Omni-directional Images.", THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS., vol. 101, no. 125, 15 June 2001 (2001-06-15), pages 31 - 38, XP002996439 *
SATO T ET AL: "3-D Modeling of an Outdoor Scene from Monocular Image Sequences by Multi-baseline Stereo.", TRANSACTIONS OF THE VIRTUAL REALITY SOCIETY OF JAPAN., vol. 7, no. 2, 30 June 2002 (2002-06-30), pages 275 - 282, XP002996438 *

Cited By (17)

Publication number Priority date Publication date Assignee Title
JP2006113645A (en) * 2004-10-12 2006-04-27 Kajima Corp Moving locus analysis method
CN100454291C (en) * 2005-06-07 2009-01-21 乐必峰软件公司 Method for detecting 3d measurement data using allowable error zone
JP2007114916A (en) * 2005-10-19 2007-05-10 Kazuo Iwane Old/new video image coordinate integration unit
JP2007278844A (en) * 2006-04-06 2007-10-25 Topcon Corp Image processing apparatus and its processing method
CN102798380A (en) * 2012-07-09 2012-11-28 中国人民解放军国防科学技术大学 Method for measuring motion parameters of target in linear array image
CN103528571A (en) * 2013-10-12 2014-01-22 上海新跃仪表厂 Monocular stereo vision relative position/pose measuring method
CN104613941A (en) * 2015-01-30 2015-05-13 北京林业大学 Analysis method of terrestrial photograph Kappa and Omega angle with vertical base line
CN104613941B (en) * 2015-01-30 2017-02-22 北京林业大学 Analysis method of terrestrial photograph Kappa and Omega angle with vertical base line
JP2019185404A (en) * 2018-04-10 2019-10-24 キヤノン株式会社 Image processing device, imaging device, image processing method, and image processing program
JP7158881B2 (en) 2018-04-10 2022-10-24 キヤノン株式会社 Image processing device, imaging device, image processing method, and image processing program
WO2020255266A1 (en) * 2019-06-18 2020-12-24 日本電信電話株式会社 Absolute coordinate acquisition method
JPWO2020255266A1 (en) * 2019-06-18 2020-12-24
JP7227534B2 (en) 2019-06-18 2023-02-22 日本電信電話株式会社 How to get absolute coordinates
CN112686989A (en) * 2021-01-04 2021-04-20 北京高因科技有限公司 Three-dimensional space roaming implementation method
CN114061472A (en) * 2021-11-03 2022-02-18 常州市建筑科学研究院集团股份有限公司 Method for correcting measurement coordinate error based on target
CN114061472B (en) * 2021-11-03 2024-03-19 常州市建筑科学研究院集团股份有限公司 Method for correcting measurement coordinate error based on target
CN113762428A (en) * 2021-11-10 2021-12-07 北京中科慧眼科技有限公司 Road surface bumping degree grade classification method and system

Also Published As

Publication number Publication date
JPWO2005040721A1 (en) 2007-04-19
JP4545093B2 (en) 2010-09-15

Similar Documents

Publication Publication Date Title
CN105928498B (en) Method, the geodetic mapping and survey system, storage medium of information about object are provided
JP4767578B2 (en) High-precision CV calculation device, CV-type three-dimensional map generation device and CV-type navigation device equipped with this high-precision CV calculation device
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
US7206080B2 (en) Surface shape measurement apparatus, surface shape measurement method, surface state graphic apparatus
JP4980606B2 (en) Mobile automatic monitoring device
US20110206274A1 (en) Position and orientation estimation apparatus and position and orientation estimation method
US20130011013A1 (en) Measurement apparatus, measurement method, and feature identification apparatus
WO2005040721A1 (en) 3d automatic measuring apparatus
Rumpler et al. Automated end-to-end workflow for precise and geo-accurate reconstructions using fiducial markers
Erickson et al. The accuracy of photo-based three-dimensional scanning for collision reconstruction using 123D catch
JP2007089111A (en) Synthetic display device of two-dimensional drawing and video image
WO2022078442A1 (en) Method for 3d information acquisition based on fusion of optical scanning and smart vision
JP2003042732A (en) Apparatus, method and program for measurement of surface shape as well as surface-state mapping apparatus
JP4852006B2 (en) Spatial information database generation device and spatial information database generation program
RU2562368C1 (en) Three-dimensional (3d) mapping method
JP4624000B2 (en) Compound artificial intelligence device
JP4773794B2 (en) New and old video coordinate integration device
US20230351687A1 (en) Method for detecting and modeling of object on surface of road
US7046839B1 (en) Techniques for photogrammetric systems
US20220018950A1 (en) Indoor device localization
CN112257535B (en) Three-dimensional matching equipment and method for avoiding object
CN112304250B (en) Three-dimensional matching equipment and method between moving objects
KR20010087493A (en) A survey equipment and method for rock excavation surface
Uyanik et al. A method for determining 3D surface points of objects by a single camera and rotary stage
Asai et al. 3D modeling of outdoor scenes by integrating stop-and-go and continuous scanning of rangefinder

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2005514989

Country of ref document: JP

122 Ep: pct application non-entry in european phase