WO2015159835A1 - Image processing device, image processing method, and program

Image processing device, image processing method, and program

Info

Publication number
WO2015159835A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate system
marker
dimensional
position information
image
Prior art date
Application number
PCT/JP2015/061306
Other languages
French (fr)
Japanese (ja)
Inventor
大輔 村山
徳井 圭
Original Assignee
シャープ株式会社 (Sharp Corporation)
Priority date
Filing date
Publication date
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2015159835A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/245: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing

Definitions

  • the present invention relates to an image processing apparatus, an image processing method, and the like that calculate the length and area of a subject based on image information.
  • Conventionally, the length and area of a measurement object were measured manually by a user using a tape measure or a laser measuring instrument. When measurement is performed manually in this way, errors may be introduced by the measurer, or a measurement may be forgotten.
  • To address this, methods have been developed in which the object to be measured is imaged with an imaging device such as a digital camera, and the three-dimensional position information of the object is estimated from the acquired image information and used for measurement.
  • As a technique using an imaging device, for example, there is a technique that uses a single marker of known shape and size (Patent Document 1).
  • In this technique, the subject to be measured and the marker are imaged by an imaging device, and a plurality of images is acquired by repeating this from different imaging positions. The user then specifies the vertex positions of the measurement range on the acquired images. The three-dimensional position information of the subject at the positions specified by the user is estimated from the actual shape and dimensions of the marker and from the area and inclination of the marker appearing in each of the images, and the measurement value is calculated.
  • In the technique disclosed in Patent Document 1, the subject to be measured and the marker are imaged by an imaging device from a plurality of different imaging positions. At this time, the marker must appear in every one of the captured images, and the entire measurement target must be captured in every one of the images. In other words, the measurable range is limited to a range in which both the marker and the measurement target fit within the imaging range of the imaging apparatus.
  • In that case, the measurement range must be divided into a plurality of ranges, each of which falls within the imaging range of the imaging device. Measurement is then performed on each divided range, and the area of the whole floor is obtained by integrating the individual measurement results.
  • The present invention has been made in view of the above points, and an object of the present invention is to provide an image processing apparatus and an image processing method capable of measuring with high accuracy, by a method that places little burden on the user, even for a measurement range that does not fit within the imaging range of the imaging device.
  • One aspect of the present invention is an image processing device that calculates three-dimensional position information of measurement positions on a subject based on a plurality of pieces of image information obtained by imaging the subject to be measured and a marker at a plurality of imaging positions. The marker has a plurality of pointers, and the positional relationship between the pointers represents a three-dimensional coordinate system. The image processing device comprises: a three-dimensional position information calculation unit that calculates, based on the plurality of pieces of image information, the three-dimensional position information expressed in the three-dimensional coordinate system referenced to the device at each of the plurality of imaging positions; and a three-dimensional position information conversion unit that converts the coordinate system expressing the three-dimensional position information calculated by the three-dimensional position information calculation unit into the three-dimensional coordinate system represented by the marker.
  • With this configuration, the user designates the measurement positions in each of the captured images, and the calculation is performed in the three-dimensional coordinate system of the marker. Therefore, even in a situation where the subject cannot be captured in a single image, its length and area can be measured accurately.
  • In another aspect, the marker includes a first marker representing a first three-dimensional coordinate system and a second marker representing a second three-dimensional coordinate system different from the first three-dimensional coordinate system. The three-dimensional position information calculation unit calculates a first positional relationship between the first three-dimensional coordinate system and the second three-dimensional coordinate system based on image information, among the plurality of pieces of image information, in which the first marker and the second marker are imaged simultaneously. The three-dimensional position information conversion unit converts the coordinate system expressing the three-dimensional position information calculated from image information in which the second marker is imaged from the three-dimensional coordinate system referenced to the image processing device into the second three-dimensional coordinate system, and further converts it from the second three-dimensional coordinate system into the first three-dimensional coordinate system based on the first positional relationship.
  • With this configuration, all the calculated three-dimensional position information is converted into the three-dimensional coordinate system of one reference marker, and the length and area of the subject can be measured accurately based on the three-dimensional position information with the integrated coordinate system.
  • According to the present invention, even for a measurement range that does not fit within the imaging range of the imaging device, measurement can be performed with high accuracy by a method that places little burden on the user.
  • FIG. 1 is a schematic block diagram illustrating a configuration example of an image processing apparatus according to a first embodiment of the present invention. The other drawings are: a functional block diagram showing a configuration example of an image processing unit; a schematic block diagram showing a configuration example of an image processing apparatus using the measurement method with two markers according to a second embodiment of the present invention; a diagram showing an example of a measurement scene with one marker; a diagram showing images captured by the image processing apparatus; a flowchart showing the flow of the measurement process of the measurement method with one marker according to the first embodiment of the present invention; a diagram showing the relationship between parallax and distance; and an external view showing a configuration example of a marker.
  • FIG. 1A is a functional block diagram illustrating a schematic configuration example of the image processing apparatus 1 according to the first embodiment of the present invention.
  • The image processing apparatus 1 includes: an acquisition unit 10 that images the subject to acquire image information and calculates distance information corresponding to the image information; an image processing unit 11 that performs image processing based on the output information of the acquisition unit 10 and the input information of the input unit 14 and calculates a measurement value; a storage unit 12 that stores the output information of the acquisition unit 10 and the image processing unit 11; a display unit 13 that displays the output information of the acquisition unit 10 and the image processing unit 11 and the information stored in the storage unit 12; and an input unit 14 that receives user operations and outputs the input information to the image processing unit 11.
  • the acquisition unit 10 includes two imaging units 100 and 101, and a distance calculation unit 102 that calculates distance information based on the two pieces of image information acquired by the two imaging units 100 and 101.
  • the imaging units 100 and 101 are arranged side by side or vertically so that their optical axes are substantially parallel.
  • The imaging units 100 and 101 each include an optical system such as a lens module (not shown), an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor), an analog signal processing unit, an A/D (Analog/Digital) conversion unit, and the like, and output the signal from the image sensor as image information.
  • In this embodiment, two imaging units 100 and 101 having the same configuration are used as an example. However, as long as the two imaging units can capture the same area and correspondence can be established between their pixels, imaging units having different configurations, for example different resolutions or angles of view, may be used.
  • the distance calculation unit 102 calculates subject distance information by a stereo method based on the image information acquired by the imaging units 100 and 101.
  • In the stereo method, the two imaging units are arranged so that their optical axes are substantially parallel and capture substantially the same region; the parallax of corresponding pixels between the two obtained images is determined, and the distance is calculated based on this parallax.
  • obtaining corresponding pixels between two images is called stereo matching.
  • one of the two images is set as a standard image, and the other is set as a reference image.
  • the pixel is matched by scanning the reference image in the horizontal direction.
  • Pixel matching is performed in units of blocks centered on the pixel of interest: the SAD (Sum of Absolute Differences), the sum of the absolute differences of the pixel values within the block, is computed, and the block that minimizes the SAD value is taken as the match. Besides SAD, other methods such as SSD (Sum of Squared Differences), graph cuts, and DP (Dynamic Programming) matching can also be used.
  • the parallax value can be calculated even when the two imaging units are arranged in the vertical direction instead of the horizontal direction. In this case, the reference image may be scanned in the vertical direction instead of the horizontal direction.
  • Once a matching block is found, the parallax value of that pixel is known, and by performing this process for all pixels, the parallax value corresponding to each pixel of the standard image is obtained.
  • Since the parallax is obtained by stereo matching, it can only be obtained for the region imaged in common by the two images.
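  • As an illustration of the block matching described above, the following Python sketch computes a parallax value for each pixel of the standard image by searching horizontal shifts in the reference image and keeping the shift with the smallest SAD. The function name, block size, and search range are assumptions introduced for illustration and are not taken from this document.

      import numpy as np

      def sad_block_matching(standard, reference, block=7, max_disp=64):
          # standard, reference: grayscale images as 2-D float arrays of equal size.
          h, w = standard.shape
          half = block // 2
          disparity = np.zeros((h, w), dtype=np.float32)
          for y in range(half, h - half):
              for x in range(half, w - half):
                  base = standard[y - half:y + half + 1, x - half:x + half + 1]
                  best_sad, best_d = float("inf"), 0
                  # Scan the reference image horizontally; this sketch assumes the
                  # matching block lies at x - d in the reference image.
                  for d in range(0, min(max_disp, x - half) + 1):
                      cand = reference[y - half:y + half + 1,
                                       x - d - half:x - d + half + 1]
                      sad = np.abs(base - cand).sum()
                      if sad < best_sad:
                          best_sad, best_d = sad, d
                  disparity[y, x] = best_d
          return disparity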
  • The distance is calculated from the parallax as Z = f × B / (p × d), where Z is the distance value, f is the focal length of the two imaging units, B is the baseline length representing the distance between the two imaging units, p is the pixel pitch of the image sensor of the imaging unit, and d is the parallax value.
  • the distance and the parallax are in an inversely proportional relationship, not a linear relationship. That is, the closer the distance, the larger the parallax, the farther the distance, the smaller the parallax, and the closer the distance, the higher the resolution of the distance. Therefore, when the distance of the subject is obtained by the stereo method, the distance to the subject can be obtained with higher resolution as the image is taken closer to the subject.
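  • As a numerical illustration of this inverse relationship (the parameter values below are assumptions, not values from this document): with a focal length f = 4 mm, a baseline length B = 50 mm, and a pixel pitch p = 0.003 mm, a parallax of d = 50 pixels gives Z = (4 × 50) / (0.003 × 50) ≈ 1333 mm, while d = 25 pixels gives Z ≈ 2667 mm. Halving the parallax doubles the distance, and a one-pixel parallax error therefore corresponds to a much larger distance error at the longer range.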
  • In this embodiment, the reference of the three-dimensional coordinate system at each imaging position is the imaging unit 100. This changes depending on which imaging unit provides the standard image described above: if the standard image is the image of the imaging unit 101, the reference of the three-dimensional coordinate system is the imaging unit 101, and the rear principal point of the optical system of the imaging unit 101 becomes the origin of the three-dimensional coordinate system. Either imaging unit may serve as the reference, as long as the reference is fixed to one of them.
  • The image processing unit 11 includes, for example, a processor such as a CPU (Central Processing Unit) and a main storage device such as a RAM (Random Access Memory), and performs its processing by executing a program stored in the storage device.
  • the storage unit 12 is a storage device configured by, for example, a flash memory or a hard disk.
  • the display unit 13 is a display that uses, for example, a liquid crystal element or an organic EL (Electro Luminescence) as a pixel.
  • the input unit 14 is, for example, an input device such as a button or switch provided in the image processing apparatus 1 or a touch panel.
  • the display unit 13 and the input unit 14 can be integrated into one device by using a general touch panel such as a resistive film type or a capacitance type as the display unit 13. As a result, an operation such as touching the image displayed on the screen and designating the measurement position of the subject becomes possible.
  • the image processing apparatus 1 including the acquisition unit 10, the image processing unit 11, the storage unit 12, the display unit 13, and the input unit 14 will be described as an example.
  • However, the processing units may be separate devices. For example, image information and distance information acquired by a separately prepared device may be transferred, via a network or a storage device such as a flash memory, to an image processing device having the function of the image processing unit 11.
  • Similarly, the display unit 13 may be an external display device such as a liquid crystal monitor, and the input unit 14 may be an external input device such as a mouse or a keyboard; the same effects as those of the image processing apparatus 1 of this embodiment can still be obtained.
  • FIG. 2 is an overview diagram showing how the area of the subject is measured using one marker.
  • It shows a situation in which the marker 2 and the subject 3 are imaged by the image processing apparatus 1 having the acquisition unit 10.
  • a three-dimensional coordinate system Ma shown in FIG. 2 is a three-dimensional coordinate system based on the image processing apparatus 1 when the image processing apparatus 1 performs imaging at the imaging position a.
  • the three-dimensional coordinate system Mb is a three-dimensional coordinate system based on the image processing apparatus 1 when the image processing apparatus 1 performs imaging at the imaging position b.
  • the origins of the three-dimensional coordinate systems Ma and Mb coincide with the rear principal point position of the optical system of the imaging unit 100, and the optical axis of the imaging unit 100 coincides with the Z axis of each three-dimensional coordinate system.
  • the reference of the three-dimensional coordinate system at each imaging position is the imaging unit 100, but the imaging unit 101 may be used. As described above, the reference of the three-dimensional coordinate system changes depending on which image of the imaging unit 100 or 101 is used as the reference image when calculating the distance by the stereo method.
  • the three-dimensional coordinate system Om is a three-dimensional coordinate system based on the marker 2.
  • the origin of the three-dimensional coordinate system Om is a feature point (hereinafter also referred to as “pointer”) located at the center of the marker 2. Details of the marker 2 will be described later.
  • The measurement range is indicated by a thick line on the subject 3. To measure the area of the rectangular region that constitutes the measurement range, the three-dimensional position information of the four measurement positions a1, a2, b1, and b2 indicated by x marks is acquired. The measurement positions a1, a2, b1, and b2 are the vertices of the measurement range.
  • the measurement range on the subject 3 is a size that does not fit within the imaging range of the imaging units 100 and 101, and measurement is impossible with one imaging.
  • the image processing apparatus 1 picks up images from the two positions of the imaging position a and the imaging position b, acquires the three-dimensional position information of the measurement positions a1, a2, b1, and b2, and measures the area of the rectangular region a1a2b2b1.
  • FIGS. 3A and 3B show the images captured by the image processing apparatus 1 from the imaging positions a and b, respectively; each is the image captured by the imaging unit 100. Note that a total of four images is acquired, because imaging is performed by both imaging units 100 and 101 at each of the imaging positions a and b. In FIGS. 3A and 3B, the marker 2 and a part of the subject 3 appear: FIG. 3A contains the vertices a1 and a2 of the subject 3, and FIG. 3B contains the vertices b1 and b2.
  • The imaging positions a and b need only be set so that the marker 2 and the four vertices a1, a2, b1, and b2 of the measurement range are visible; otherwise, the imaging positions and the position of the marker 2 can be set freely.
  • the marker is fixed without being moved during the measurement of one measurement range. This is because coordinate conversion described later is performed based on the marker position.
  • the distance calculation unit 102 calculates distance information to the subject by a stereo method. This distance information is distance information corresponding to each pixel of the image of the imaging unit 100. Image information of the imaging unit 100 and distance information of the distance calculation unit 102 are output to the image processing unit 11.
  • Based on the image information captured at the imaging position a and the calculated distance information, the image processing unit 11 calculates the three-dimensional position information of the subject shown in FIG. 3A as three-dimensional coordinate values in the three-dimensional coordinate system Ma. Likewise, based on the image information captured at the imaging position b and the calculated distance information, it calculates the three-dimensional position information of the subject shown in FIG. 3B as three-dimensional coordinate values in the three-dimensional coordinate system Mb. At this point, the three-dimensional position information of the vertices a1 and a2 and that of the vertices b1 and b2 are expressed in different reference three-dimensional coordinate systems.
  • The image processing unit 11 therefore converts the three-dimensional position information of the vertices a1, a2, b1, and b2, which was calculated in different coordinate systems, into three-dimensional position information in the three-dimensional coordinate system Om referenced to the marker 2.
  • the coordinate system conversion method will be described later.
  • the converted three-dimensional position information of the vertices a1, a2, b1, and b2 is in the same three-dimensional coordinate system Om, and represents the positional relationship in the actual space, so that the area can be calculated.
  • the image processing unit 11 calculates the area of the rectangular region connecting the vertices based on the three-dimensional position information obtained by converting the coordinate system.
  • The method of calculating the area of a region formed by vertices with known three-dimensional positions is a well-known technique and will not be described in detail. For example, if the areas of the two triangular regions connecting the vertices a1, a2, and b1 and the vertices a2, b1, and b2 are calculated and added, the area of the rectangular region a1a2b2b1 is obtained.
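  • A minimal Python sketch of this area calculation is shown below; each triangle area is half the norm of the cross product of two edge vectors. The vertex coordinates are placeholders in the marker coordinate system Om, not values from this document.

      import numpy as np

      def triangle_area(p, q, r):
          # Area of a triangle from three 3-D vertices: half the cross-product norm.
          return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

      # Placeholder vertex coordinates in the marker coordinate system Om (metres).
      a1 = np.array([-2.0, 0.0, 1.0])
      a2 = np.array([-2.0, 0.0, 4.0])
      b1 = np.array([ 2.0, 0.0, 1.0])
      b2 = np.array([ 2.0, 0.0, 4.0])

      area = triangle_area(a1, a2, b1) + triangle_area(a2, b1, b2)
      print(area)  # 12.0 square metres for this placeholder rectangle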
  • the image processing unit 11 can output the calculated length or area value as measurement information. It is also possible to calculate and output information such as straight lines passing through the vertices and angles formed by line segments.
  • The area value calculated by the image processing unit 11 as described above is output to and stored in the storage unit 12. The measurement result can also be reported to the user by outputting it to the display unit 13 and showing it on the display. As a result, the measurement result can be confirmed later, or displayed on the display unit 13 together with other measurement results for confirmation.
  • The storage unit 12 may also store the image information, distance information, and other output of the acquisition unit 10. This makes it possible to perform the measurement later, based on the stored image information and distance information, rather than on the spot.
  • the user can confirm the captured image.
  • the user can determine the measurement position by specifying the point on the subject that appears in the image by the input unit 14 while confirming the display on the display unit 13.
  • the user designates the two-dimensional coordinate positions of the vertices a1 and a2 and the vertices b1 and b2 on the images of FIGS. 3A and 3B displayed on the display unit 13.
  • the information on the designated position input by the input unit 14 is output to the image processing unit 11, and the image processing unit 11 acquires the three-dimensional position information on the designated position, and further performs coordinate conversion processing.
  • As described above, the image processing apparatus 1 calculates the three-dimensional position information of the measurement positions on the subject from the acquired image information and distance information, and converts the three-dimensional position information, which is referenced to a plurality of different three-dimensional coordinate systems, into the three-dimensional coordinate system of the marker 2, which serves as the single integrated reference. The area of the measurement range is then calculated based on the three-dimensional position information expressed in this single three-dimensional coordinate system.
  • the measurement is performed by imaging from two imaging positions.
  • the number of images to be captured may be increased by adding imaging positions, and the measurement positions specified by the user may be increased.
  • measurement can be performed by converting the three-dimensional position information of the subject calculated at each imaging position into a three-dimensional coordinate system based on one marker by the same method as described above.
  • the measurement position on the subject and the position of the marker appearing on the image are specified by the user using the input unit 14 while confirming the image displayed on the display unit 13.
  • Even when the position-specifying accuracy of the user is low, the measurement position and the marker position can be determined with high accuracy. This is because the measurement position is likely to be a characteristic part such as a corner of a room or an edge of a wall, and the specified position is corrected by the image processing unit 11 based on the image information and the distance information.
  • Since the shape and color of the marker are known, the marker may also be detected automatically using a known image recognition technique such as template matching.
  • FIG. 1B is a functional block diagram showing a configuration example of the image processing unit 11, and such a function can be realized by either a hardware configuration or a software configuration.
  • the image processing unit 11 includes a measurement position designation input receiving unit 11-1, a three-dimensional position information calculation unit 11-2, a three-dimensional position information conversion unit 11-3, an area calculation unit 11-4, and a display control unit. 11-5.
  • the user installs the marker 2 near the center front of the measurement range as shown in FIG. 2 (step S101).
  • the marker installation position is appropriately set according to the measurement range of the subject, and is fixed until the measurement is completed once it is installed.
  • Next, the user moves the image processing apparatus 1 to the imaging position a (step S102), and images a part of the subject 3 including the vertices a1 and a2 of the rectangular region that constitutes the measurement range, together with the marker 2, using the two imaging units 100 and 101 of the acquisition unit 10 (step S103).
  • distance information is calculated by the distance calculation unit 102 based on the two pieces of captured image information.
  • The image processing apparatus 1 displays, of the two images captured by the imaging units 100 and 101, the standard image (in this embodiment, the image of the imaging unit 100) on the display unit 13, and the user, while confirming the displayed image, designates the vertices a1 and a2 of the measurement range using the input unit 14 (step S104). The input information is received by the measurement position designation input receiving unit 11-1.
  • In step S105, it is confirmed whether all the measurement positions, that is, all the vertex positions of the measurement range, have been specified. If they have, the process proceeds to step S106; if not, the process returns to step S102, and steps S102 to S105 are repeated until all measurement positions have been specified. Whether all the measurement positions have been designated can be confirmed, for example, by displaying on the display unit 13 a question asking whether further measurement positions are to be designated.
  • In this example, after the subject is imaged at the imaging position a and the vertices a1 and a2 are selected, the process returns once to step S102. The user then moves the image processing apparatus 1 to the imaging position b (step S102) and images a part of the subject 3 including the vertices b1 and b2 together with the marker 2 (step S103).
  • the image processing apparatus 1 displays the image of the imaging unit 100 on the display unit 13, and the user designates the vertices b1 and b2 of the measurement range on the displayed image by the input unit 14 (step S104). Then, it is confirmed again whether or not all the measurement positions have been designated (step S105). Since the designation has been completed, the process proceeds to step S106.
  • The three-dimensional position information calculation unit 11-2 of the image processing unit 11 then calculates the three-dimensional position information of the measurement positions a1, a2, b1, and b2 designated by the user, based on the image information and distance information output from the acquisition unit 10 (step S106).
  • The three-dimensional position information conversion unit 11-3 converts the three-dimensional position information of the measurement positions a1, a2, b1, and b2, which was calculated in different three-dimensional coordinate systems, into three-dimensional position information expressed in the three-dimensional coordinate system referenced to the marker 2 (step S107).
  • Finally, the area calculation unit 11-4 of the image processing unit 11 calculates the area (step S108), and the display control unit 11-5 displays the calculation result on the display unit 13 (step S109).
  • the image processing apparatus 1 can measure the area of the subject using one marker.
  • The output information of each of the above steps, that is, the captured image information, the measurement position information designated by the user, and the output information of the acquisition unit 10 and the image processing unit 11, is stored in the storage unit 12 and read out as appropriate.
  • In the flow described above, after step S104 it is confirmed in step S105 whether all the measurement positions have been specified, and if so, the process proceeds to step S106.
  • Alternatively, the processing of steps S106 and S107 may be performed as soon as step S104 is completed, with the confirmation of step S105 performed afterwards. That is, each time the user designates a measurement position on a captured image, the three-dimensional position information of that measurement position may be calculated, and whether all measurement positions have been designated may be confirmed afterwards. In that case, the calculated three-dimensional position information is stored in the storage unit 12, and after the three-dimensional position information of all measurement positions has been calculated, the process proceeds to step S108, where the stored information is read out to calculate the area.
  • It is also possible to execute the processing up to step S108 each time a measurement position is designated, calculating and displaying on the display unit 13 the area of the region formed by all the measurement positions designated so far; in that case, the confirmation of step S105 can be omitted.
  • the image processing apparatus 1 converts the three-dimensional position information of the subject calculated based on the three-dimensional coordinate system of the image processing apparatus 1 into a three-dimensional coordinate system based on the marker.
  • The image processing unit 11 detects the marker in the images captured by the imaging units 100 and 101 and detects the pointers, which are the feature points of the marker, thereby calculating the three-dimensional coordinate system referenced to the marker. The marker of this embodiment therefore has a shape suitable for representing a three-dimensional coordinate system, with feature points that can easily be detected in an image. The coordinate system conversion method will be described later.
  • FIG. 6 shows an example of an external view of the marker 2 in the present embodiment.
  • the marker 2 shown in FIG. 6 has three axes orthogonal to each other, and a sphere as a pointer is arranged at the intersection of the three axes and the tip of each axis.
  • The pointer at the intersection of the three axes represents the origin P0 of the three-dimensional coordinate system, and the line segments connecting the center of gravity of the origin pointer with the centers of gravity of the other pointers Px, Py, and Pz represent the axes of the three-dimensional coordinate system. In this way, the marker itself can represent a three-dimensional coordinate system.
  • the three-dimensional coordinate system represented by the marker 2 is a three-dimensional orthogonal coordinate system.
  • The line segments connecting the pointers may equivalently be regarded as straight lines passing through the pointers. If the marker 2 is placed on the floor surface, the XZ plane becomes parallel to the floor surface.
  • the pointer has a sphere shape so that the pointer has the same shape when viewed from any angle and direction. This facilitates marker detection from the image, that is, detection of each pointer. Furthermore, if the hue and saturation of each pointer are made different, each pointer can be easily distinguished and detected in the image, and the direction of the coordinate axis can be determined from any angle and direction.
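  • A minimal Python sketch of such hue-based pointer detection is given below; it assumes the captured image is already available as an HSV array, and the hue range for each pointer is an assumption chosen for illustration rather than a value from this document.

      import numpy as np

      def pointer_centroid(hsv_image, hue_range, min_saturation=0.4):
          # hsv_image: H x W x 3 float array with hue and saturation in [0, 1).
          # hue_range: (low, high) hue interval identifying one pointer colour.
          h, s = hsv_image[..., 0], hsv_image[..., 1]
          mask = (h >= hue_range[0]) & (h <= hue_range[1]) & (s >= min_saturation)
          vs, us = np.nonzero(mask)
          if len(us) == 0:
              return None  # this pointer is not visible in the image
          return float(us.mean()), float(vs.mean())  # (u, v) image coordinates

      # Hypothetical usage: origin = pointer_centroid(hsv, (0.95, 1.0))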
  • In this example, the pointers are arranged so that the axes are orthogonal, so that the marker itself represents a three-dimensional orthogonal coordinate system; however, the arrangement of the pointers is not limited to this. Even if the axes of the marker are not orthogonal, as long as the shape of the marker, that is, the positional relationship of the pointers, the position of the origin, and the angles between the axes are known, the axis directions can be corrected based on this prior information and handled as a three-dimensional orthogonal coordinate system. In that case, however, additional processing is required to correct the axis directions, and errors may be introduced during the correction, so it is desirable that the marker itself have a shape that can represent a three-dimensional orthogonal coordinate system, as in the example of FIG. 6.
  • the pointer spheres are connected by an axis.
  • the marker may have any configuration as long as the positional relationship between the pointers does not change.
  • a configuration may be adopted in which a pointer sphere is supported from below by a pedestal, or a cube having a known size may be used as a marker, and each vertex position of the cube may be set as a pointer.
  • When a cube is used as the marker, the vertices appearing in the image differ depending on the imaging angle and direction. In that case, the origin position serving as the reference of the three-dimensional coordinate system and the positions and directions of the axes can be calculated based on the dimension information of the cube, that is, the positional relationship information of its vertices; the vertex positions can be determined from the faces visible in the image, making the three-dimensional coordinate system easy to identify.
  • the image processing apparatus 1 converts the three-dimensional position information of the subject calculated in the three-dimensional coordinate system of the image processing apparatus 1 into a three-dimensional coordinate system based on the marker.
  • The coordinate conversion method will be described below with reference to the example of FIG. 7.
  • FIG. 7 is an external view showing a coordinate position relationship among the image processing apparatus 1, the marker 2, and the subject 4.
  • FIG. 7 shows a three-dimensional coordinate system M referenced to the image processing apparatus 1 at a certain imaging position, a two-dimensional coordinate system m representing the image coordinate system of the imaging unit 100 of the image processing apparatus 1, a three-dimensional coordinate system O referenced to the marker 2, and a point P on the subject 4. Also shown are i, j, and k, the unit vectors in the directions of the respective axes of the three-dimensional coordinate system O.
  • the two-dimensional coordinate system m is an image projection plane, and shows a state in which the subject 4 and the marker 2 are projected.
  • the Z axis of the three-dimensional coordinate system M coincides with the optical axis of the imaging unit 100 which is the reference in the image processing apparatus 1.
  • The origin of the two-dimensional coordinate system m is the upper left corner of the image projection plane; its horizontal axis direction coincides with the X-axis direction of the three-dimensional coordinate system M, and its vertical axis direction coincides with the Y-axis direction of the three-dimensional coordinate system M. All three-dimensional coordinate systems are represented as right-handed systems.
  • the relationship between the three-dimensional coordinate system M and the two-dimensional coordinate system m is shown as a perspective projection model. Since the perspective projection model is a known technique, a detailed description thereof is omitted.
  • The three-dimensional position information (X, Y, Z) of a point on the subject in the three-dimensional coordinate system M can be calculated from the two-dimensional position information (u, v) of the point projected onto the image projection plane in the two-dimensional coordinate system m and from the subject distance information Z obtained by the stereo method described above. The calculation is given by Formula (1):
      X = (u − w/2) × p × Z / f
      Y = (v − h/2) × p × Z / f
      Z = f × B / (p × d)          ...(1)
  • Here, f is the focal length of the imaging unit, p is the pixel pitch of the image sensor of the imaging unit, w and h are the horizontal and vertical resolutions of the captured image, B is the baseline length between the two imaging units, d is the parallax value obtained by the stereo method, and × is the multiplication operator. The formula for Z in Formula (1) is the same as the one given above for the stereo method. Note that, by substituting the expression for Z, X and Y can also be written directly in terms of the baseline length B and the parallax value d.
  • the image processing unit 11 calculates the three-dimensional position information in the three-dimensional coordinate system M from the two-dimensional position information in the two-dimensional coordinate system m of the point P on the subject and the marker 2 by Expression (1).
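  • A back-projection following Formula (1) can be sketched in Python as follows; the camera parameters below are placeholder assumptions for illustration, not values from this document.

      import numpy as np

      f = 4.0e-3         # focal length [m] (assumed)
      p = 3.0e-6         # pixel pitch [m] (assumed)
      B = 50.0e-3        # baseline length [m] (assumed)
      w, h = 1920, 1080  # image resolution [px] (assumed)

      def back_project(u, v, d):
          # Convert a pixel (u, v) with parallax d into 3-D coordinates in the
          # device coordinate system M, following Formula (1).
          Z = f * B / (p * d)
          X = (u - w / 2) * p * Z / f
          Y = (v - h / 2) * p * Z / f
          return np.array([X, Y, Z])

      P_M = back_project(u=1200, v=400, d=33.0)  # point P expressed in M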
  • the calculated 3D position information in the 3D coordinate system M is converted into 3D position information in the 3D coordinate system O by the following method.
  • the conversion from the three-dimensional coordinate system M to the three-dimensional coordinate system O can be expressed by a matrix operation of rotation and translation.
  • Let PM be the vector from the three-dimensional coordinate system M to the point P, OM the vector from M to the pointer at the origin position of the marker 2, and PO the vector from the three-dimensional coordinate system O to the point P. These vectors are written as tPM = (PMX, PMY, PMZ), tOM = (OMX, OMY, OMZ), and tPO = (POX, POY, POZ), where t denotes transposition.
  • The three-dimensional position information representing the vectors PM and OM is calculated by the above Formula (1). The three-dimensional position information of the point P in the three-dimensional coordinate system O, that is, PO, is then calculated by Formula (2). Formula (2) is expressed using a homogeneous coordinate system, and "·" is an operator representing an inner product operation.
  • the matrix included in the right side of Expression (2) is a transformation matrix, and the three-dimensional position information in the three-dimensional coordinate system M is converted into the three-dimensional position information in the three-dimensional coordinate system O by the transformation matrix.
  • the transformation matrix V is shown in Formula (3).
  • the transformation matrix V can be calculated from the position of the marker 2 in the image, and the unit vectors i, j, and k are calculated from the three-dimensional position information of each pointer of the marker 2.
  • a method for calculating unit vectors i, j, and k will be described below.
  • FIG. 8 shows an enlarged view of the region around the three-dimensional coordinate system M and the marker 2 in FIG. 7. As shown in FIG. 8, let g0 be the vector from the three-dimensional coordinate system M to the origin pointer of the marker 2, g1 the vector to the pointer in the X-axis direction, g2 the vector to the pointer in the Y-axis direction, and g3 the vector to the pointer in the Z-axis direction; then the unit vectors i, j, and k can be calculated by Formula (4).
  • the three-dimensional position information of the three-dimensional coordinate system based on the image processing apparatus can be calculated from the two-dimensional position information of the subject in the image. Furthermore, by calculating the conversion matrix, it is possible to convert the three-dimensional position information into a three-dimensional coordinate system based on the marker.
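  • The following Python sketch shows one standard way to realize the conversion described by Formulas (2) to (4): the pointer positions g0 to g3, measured in the coordinate system M, give the unit vectors i, j, and k, from which a homogeneous transform mapping a point from M into the marker coordinate system O is assembled. The exact matrix layout of the patent's Formulas (2) and (3) is not reproduced here, so this should be read as an illustrative assumption rather than the document's literal equations.

      import numpy as np

      def unit(v):
          return v / np.linalg.norm(v)

      def marker_transform(g0, g1, g2, g3):
          # g0..g3: 3-D positions of the origin, X-, Y-, and Z-axis pointers of the
          # marker, all expressed in the device coordinate system M. The marker
          # axes are assumed to be orthogonal, as in FIG. 6.
          i = unit(g1 - g0)   # unit vector of the marker X axis, seen from M
          j = unit(g2 - g0)   # unit vector of the marker Y axis, seen from M
          k = unit(g3 - g0)   # unit vector of the marker Z axis, seen from M
          R = np.vstack([i, j, k])   # rotation part: rows are i, j, k
          V = np.eye(4)
          V[:3, :3] = R
          V[:3, 3] = -R @ g0         # translation part moves the origin to g0
          return V

      def to_marker_coords(V, p_m):
          # Apply the homogeneous transform: P_O = V * P_M (Formula (2) style).
          return (V @ np.append(p_m, 1.0))[:3]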
  • In the second embodiment, the image processing apparatus 1 of FIG. 1A is used, as in the first embodiment.
  • the functional block diagram of the image processing unit 11 illustrated in FIG. 1B is replaced with a functional block diagram illustrating a configuration example of the image processing unit 11a illustrated in FIG. 1C. Details of the functional block will be described later.
  • The measurement range that does not fit within the imaging range of the imaging apparatus, as discussed in this embodiment, is a measurement range that would require multiple measurements with the single-marker measurement method; for example, a large room such as a hall, or a floor such as a corridor that includes corners, where an occluded region arises in a single shot.
  • FIG. 9A shows two markers 20 and 21
  • FIG. 9B shows two markers 21 and 22
  • FIG. 9C shows two markers 22 and 23.
  • the marker 22 in FIG. 9B is the same as the marker 20 in FIG. 9A, and indicates that the marker 20 has been moved from the position in FIG. 9A in the direction of the one-dot chain line L1 shown in FIG. 9B.
  • The marker 23 in FIG. 9C is the same marker as the marker 21 in FIGS. 9A and 9B, which has been moved to a new position.
  • In FIG. 9, the marker 20 (or 22) and the marker 21 (or 23) are drawn as different figures for ease of understanding, but their actual shape is the same as that of the marker 2 shown in FIG. 6. However, the color schemes of the two markers may be made different, which makes it possible to discriminate between the markers during the marker association described later. The method is not limited to changing the color scheme; any method that allows the two markers to be distinguished may be used, such as attaching a mark to one of the markers.
  • FIG. 9A shows imaging positions 90, 91, and 92, which indicate that the image processing apparatus 1 captures images from the respective imaging positions in the direction of the dashed arrows.
  • Also shown are the measurement positions 900, 901, and 902 specified by the user in the image captured at the imaging position 90, the measurement position 910 specified in the image captured at the imaging position 91, and the measurement positions 920 and 921 specified in the image captured at the imaging position 92.
  • Note that FIG. 9A is merely an example: when the position 900 appears in both the images captured at the imaging positions 90 and 91, it may be selected in either image. In this example it is selected in the image captured at the imaging position 90 and not in the image captured at the imaging position 91, but it could instead be selected only in the image at the imaging position 91, or in both images.
  • FIG. 9B shows an image pickup position 93, which shows that the image processing apparatus 1 picks up an image in the direction of the broken line arrow.
  • measurement positions 930 and 931 designated by the user in the image captured at the imaging position 93 are shown.
  • FIG. 9C shows imaging positions 94 and 95, which indicate that the image processing apparatus 1 captures images from the respective positions in the direction of the broken-line arrows.
  • measurement positions 940 and 941 specified by the user in the image captured at the imaging position 94 and measurement positions 950 and 951 specified by the user in the image captured at the imaging position 95 are shown.
  • the number of each imaging position described above represents the order of imaging, and imaging is performed in the order of imaging positions 90, 91, 92, 93, 94, and 95.
  • the marker and a part of the measurement range are imaged for each imaging position, and the user designates the measurement position from each of the captured images.
  • imaging is performed at imaging positions 90, 91, and 92 so that the marker 20 and the vertex position of the measurement range are shown. Then, the vertex position is specified in each image to determine the measurement position, and the three-dimensional position information is converted into the three-dimensional coordinate system of the marker 20 in the same manner as described above. That is, the measurement positions 900, 901, 902, 910, 920, and 921 are converted into three-dimensional position information represented by the three-dimensional coordinate system of the marker 20.
  • Here, both the marker 20 and the marker 21 appear in the image captured at the imaging position 92 in FIG. 9A, and the three-dimensional position information of each is expressed in the three-dimensional coordinate system referenced to the image processing apparatus 1 at the imaging position 92. Therefore, the positional relationship between the three-dimensional coordinate systems of the marker 20 and the marker 21 can be obtained, and three-dimensional position information expressed in the three-dimensional coordinate system of one marker can be converted into three-dimensional position information expressed in the three-dimensional coordinate system of the other marker. The coordinate conversion method between markers is described later.
  • the marker 20 in FIG. 9A is moved to the position of the marker 22 in FIG. 9B, and the marker 21, the marker 22, and the measurement position that has not yet been specified are imaged at the imaging position 93 in FIG. 9B.
  • the three-dimensional position information is calculated by designating the measurement positions 930 and 931 in the image of the imaging position 93 and converted into the three-dimensional coordinate system of the marker 21.
  • Since the positional relationship between the marker 20 and the marker 21 has already been obtained, the measurement positions 930 and 931 specified in the image of the imaging position 93 can further be converted into three-dimensional position information expressed in the three-dimensional coordinate system of the marker 20.
  • both the marker 21 and the marker 22 are shown in the image at the imaging position 93 in FIG. 9B, and the correspondence between the three-dimensional coordinate systems of the marker 21 and the marker 22 can be obtained in the same manner as described above. That is, the three-dimensional position information represented by the three-dimensional coordinate system of the marker 22 can be converted into the three-dimensional position information represented by the three-dimensional coordinate system of the marker 21.
  • the marker 21 in FIG. 9B is moved to the position of the marker 23 in FIG. 9C, and the marker 22, the marker 23, and the measurement position that has not yet been specified are imaged at the imaging position 94 in FIG. 9C.
  • The measurement positions 940 and 941 are specified in the image captured at the imaging position 94, their three-dimensional position information is calculated, and it is converted into the three-dimensional coordinate system of the marker 22. Similarly, the three-dimensional position information of the measurement positions 950 and 951 is calculated by designating them in the image captured at the imaging position 95, and is converted into the three-dimensional coordinate system of the marker 23.
  • both the marker 22 and the marker 23 are shown in the image at the imaging position 94 in FIG. 9C, and the correspondence between the three-dimensional coordinate systems of the marker 22 and the marker 23 can be obtained in the same manner as described above. That is, the three-dimensional position information represented by the three-dimensional coordinate system of the marker 23 can be converted into the three-dimensional position information represented by the three-dimensional coordinate system of the marker 22.
  • the three-dimensional coordinate systems indicated by the markers 20, 21, 22, and 23 are associated with each other based on the images at the imaging positions 92, 93, and 94, and can be converted into each other's coordinate system.
  • Accordingly, the plurality of measurement positions designated in the images captured at the plurality of imaging positions can all be converted into three-dimensional position information expressed in the three-dimensional coordinate system of one marker that serves as the reference for integration. That is, when the marker 20 is set as the reference marker in the above example, all the measurement positions shown in FIGS. 9A, 9B, and 9C can be calculated as three-dimensional position information expressed in the three-dimensional coordinate system of the reference marker 20.
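  • This chaining can be sketched in Python as the composition of pairwise marker-to-marker transforms; the matrix names below are illustrative assumptions, and identity matrices stand in for the values that would be estimated from the images in which two markers are visible simultaneously.

      import numpy as np

      def apply(T, p):
          # Apply a homogeneous 4x4 transform to a 3-D point.
          return (T @ np.append(p, 1.0))[:3]

      # Placeholder pairwise transforms between marker coordinate systems, as would
      # be obtained at imaging positions 92, 93, and 94. Identities are used here
      # purely so that the sketch runs.
      V_m21_to_m20 = np.eye(4)
      V_m22_to_m21 = np.eye(4)
      V_m23_to_m22 = np.eye(4)

      # A measurement position expressed in the marker 23 coordinate system is
      # mapped 23 -> 22 -> 21 -> 20, ending up in the reference marker 20 system.
      V_m23_to_m20 = V_m21_to_m20 @ V_m22_to_m21 @ V_m23_to_m22
      p_in_m23 = np.array([1.0, 0.0, 2.5])   # placeholder point
      p_in_m20 = apply(V_m23_to_m20, p_in_m23)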
  • the reference marker may be a marker other than the marker 20.
  • The marker installation positions and imaging positions are not limited to those shown in FIG. 9; they may be set in any way such that a marker appears in every image and every measurement position can be selected in an image captured at some imaging position. Further, when coordinate conversion between markers is to be performed, the positions are set so that two markers are captured at one imaging position, as at the imaging positions 92, 93, and 94 in FIG. 9.
  • Although the three-dimensional position information of all measurement positions is calculated by the above method, the connections between them are unknown, so it is not yet known over what region the area should be calculated. Therefore, the three-dimensional position information of all measurement positions selected by the user is projected onto the surface on which the measurement range exists, or onto a surface parallel to it, and an overhead view of the measurement positions as shown in FIG. 10A is created.
  • In this example, the measurement range is the floor surface, and since the reference marker 20 is installed on the floor surface, the XZ plane of the three-dimensional coordinate system of the reference marker 20 is parallel to the floor surface of the measurement range. The three-dimensional position information of all measurement positions is therefore projected onto the XZ plane, two-dimensional position information expressed in the two-dimensional coordinate system formed by the X and Z axes is calculated, and this two-dimensional coordinate system is used as the image coordinate system of the overhead view, making it possible to create an overhead view as if the floor surface were seen from vertically above.
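  • A compact Python sketch of this projection is shown below; the coordinate values and the drawing scale are placeholder assumptions for illustration.

      import numpy as np

      def to_overhead(points_in_ref_marker, scale=100.0):
          # Project measurement positions, given in the reference marker coordinate
          # system (whose XZ plane is assumed to lie on the floor), onto the floor.
          # Dropping the Y component projects each point onto the XZ plane;
          # scale (pixels per metre) is only used for drawing the overhead view.
          pts = np.asarray(points_in_ref_marker, dtype=float)
          return pts[:, [0, 2]] * scale

      # Placeholder measurement positions in the marker 20 coordinate system (metres).
      positions = [(0.0, 0.0, 1.0), (3.0, 0.02, 1.0), (3.0, 0.01, 5.0), (0.0, 0.0, 5.0)]
      overhead = to_overhead(positions)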
  • When the measurement range is, for example, a wall surface, the overhead view may instead be created by projecting the three-dimensional position information of all measurement positions onto a plane parallel to the wall surface; for example, the XY plane of the marker may be used as the projection plane. The view in this case is not a bird's-eye view looking down from above but a view in the direction perpendicular to the wall surface; it is called an overhead view here for convenience of explanation.
  • In other cases, it is sufficient to calculate the position information of the surface to be measured and to obtain its positional relationship to the three-dimensional coordinate system of the marker. Since the surface information can be calculated when the three-dimensional position information of three or more points on the surface is known, the user can designate three or more points on the surface of the measurement range in the image, from which the information on that surface can be calculated.
  • the image processing apparatus 1 connects the measurement positions shown in FIG. 10A in the order specified by the user, thereby creating an overhead view as shown in FIG. 10B and displaying it on the display unit 13.
  • The user then makes corrections as needed: for example, while confirming the display on the display unit 13, the user specifies and corrects erroneously connected or unconnected parts using the input unit 14, creating an overhead view in which the measurement positions are connected correctly, as shown in FIG. 10C.
  • The method of connecting the measurement positions is not limited to the above; the positional relationship between the measurement positions and surrounding objects may also be estimated based on the image information or the three-dimensional position information. For example, if there is a wall between two measurement positions, the connection order can be estimated so that those measurement positions are not connected through the wall.
  • Next, the image processing apparatus 1 divides the measurement range as shown in FIG. 10D, based on the three-dimensional position information of each measurement position and the connection information of the measurement positions in the overhead view of FIG. 10C. By dividing the range into triangular regions as shown in FIG. 10D, the area of each divided region can be calculated from the three-dimensional position information of its three vertices, and the total area is obtained by adding them. Any method may be used for the division, but the entire range must be covered by the divided regions and the divided regions must not overlap one another. If the overhead view of FIG. 10D is displayed on the display unit 13, then even if the region division by the image processing apparatus 1 is incorrect, for example if the same range is included in a plurality of divided regions, the user can specify and correct the relevant part using the input unit 14.
  • This saves the user the trouble of manually connecting the measurement positions and of dividing the measurement range into regions.
  • the image processing apparatus 1 calculates the area of each divided triangular region based on the three-dimensional position information of the measurement position, and calculates the area of the entire measurement range by adding all the calculation results.
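  • A short Python sketch of this summation over the divided triangular regions is given below; the vertex values and the division are placeholder assumptions for illustration (in the document, the division comes from the overhead-view connection and division step).

      import numpy as np

      def triangle_area_3d(p, q, r):
          return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

      def total_area(points, triangles):
          # points: 3-D measurement positions in the reference marker system.
          # triangles: index triples describing the division of the range.
          pts = [np.asarray(p, dtype=float) for p in points]
          return sum(triangle_area_3d(pts[a], pts[b], pts[c]) for a, b, c in triangles)

      # Placeholder example: an L-shaped floor described by six vertices and split
      # into four triangles.
      points = [(0, 0, 0), (4, 0, 0), (4, 0, 2), (2, 0, 2), (2, 0, 5), (0, 0, 5)]
      print(total_area(points, [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]))  # 14.0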
  • the area calculation method is not limited to the method of dividing the area as described above, and any method can be used as long as the area of the entire measurement range can be calculated based on the three-dimensional position information of the measurement position. I do not care.
  • Besides the area, the length of the line segments connecting the measurement positions, that is, the length of each side, the spacing between sides, and so on, can also be calculated based on the three-dimensional position information.
  • In this way, even for a measurement range that would otherwise require multiple measurements, the entire area can be calculated in a single measurement. Since all the three-dimensional position information is expressed in one three-dimensional coordinate system referenced to the reference marker, the user does not need to accurately keep track of the positional relationships between measurement positions, as would be necessary when performing multiple separate measurements, and the trouble of integrating a plurality of results is eliminated.
  • In addition, since the measurement position designated by the user is automatically corrected by the image processing unit 11 based on the image information and the distance information, the position can be specified with high accuracy and measurement is possible even when the user's designation accuracy is low.
  • the imaging and the movement of the marker are repeated as described above, it is possible to measure even a wider range or a complicated shape range than the example of FIG.
  • That is, by imaging while alternately exchanging which marker is moved and which is kept fixed, the coordinate system conversion from marker to marker can be repeated. If the marker set up first is taken as the reference marker, then even if the reference marker and the other marker (the non-reference marker) are subsequently moved alternately, the three-dimensional position information of every measurement position can be converted into the three-dimensional coordinate system of the reference marker before it was moved. Based on the three-dimensional position information with this integrated coordinate system, an overhead view can be created and the measurement performed.
  • the moved reference marker is handled in the same manner as other non-reference markers. That is, since the three-dimensional coordinate system of the reference marker after movement exists at a different position from the three-dimensional coordinate system of the reference marker before movement, it can be handled in the same manner as the three-dimensional coordinate system of the non-reference marker. Therefore, it is possible to convert the coordinate system from the three-dimensional coordinate system of the reference marker after movement to the three-dimensional coordinate system of the non-reference marker, and further to the three-dimensional coordinate system of the reference marker before movement. .
  • the method using two markers is described, but the number of markers is not limited to this.
  • the marker 22 is the same as the marker 20, but the marker 22 may be a marker different from the marker 20 or 21. That is, three markers may be used. Also in that case, coordinate conversion is possible from the positional relationship between the markers appearing in the image, and conversion into a three-dimensional coordinate system of one marker as a reference is possible. Even if more markers are used, if coordinate conversion is performed in the same manner as described above, all measurement positions can be converted into three-dimensional position information of one three-dimensional coordinate system, and measurement can be performed. However, it is desirable to use a minimum number of markers because the number of markers increases the cost and takes time and effort.
  • in the above example, measurement is performed by imaging in the order of the imaging positions 90 to 95 while moving the two markers in a certain direction, but the imaging order is not limited to this.
  • the imaging order of the imaging positions 90 and 91 may be reversed, or imaging may be performed in the order of the imaging positions 95 to 90 while moving the marker in the reverse order to the above, and any imaging order It does not matter.
  • the coordinate system is converted based on the positional relationship between the two markers that appear in the images at the imaging positions 92, 93, and 94.
  • once a marker is moved, it is difficult to return it to exactly the same position and orientation as before the movement. Therefore, for example, if the marker 20 is moved before the imaging at the imaging position 92, the correspondence between the coordinate systems of the marker 20 and the marker 21 cannot be established. For this reason, it is necessary to determine the movement timing and movement positions of the markers, the imaging order, the imaging positions, and the like so that the coordinate systems of all the markers can be associated with each other.
  • the image processing apparatus 1 can convert the coordinate system by associating the markers appearing in the images in the order of imaging.
  • alternatively, the user may input to the image processing apparatus 1 information such as which marker in which image is the same marker that has not moved; based on this input, all the markers are associated with each other and the coordinate system is converted.
  • all the three-dimensional position information is converted into the three-dimensional coordinate system of the reference marker.
  • the information may be converted into a three-dimensional coordinate system based on the image processing apparatus 1.
  • the reference three-dimensional coordinate system may be the three-dimensional coordinate system of the image processing apparatus 1.
  • in that case, all the three-dimensional position information is converted through the three-dimensional coordinate systems of the markers, and further converted into the coordinate system of the image processing apparatus 1 at the time the reference marker is first imaged. Conversion from the coordinate system of the reference marker shown in the image to the coordinate system of the image processing apparatus 1 can be performed based on the camera parameters.
  • the axial direction of the three-dimensional coordinate system of the image processing apparatus 1 changes depending on the orientation of the imaging unit 100 as a reference.
  • the attitude information of the image processing apparatus 1 can be determined by detecting the attitude of the imaging unit 100 using a triaxial acceleration sensor or the like, and the positional relationship with the projection plane can thereby be grasped. This saves the trouble of calculating the information of the surface in the measurement range from the image and estimating the positional relationship between the surface and the reference three-dimensional coordinate system.
  • examples of the three-axis acceleration sensor include capacitance type, piezoresistive type, and heat detection type sensors to which MEMS (Micro Electro Mechanical Systems: micro electro mechanical elements and their fabrication technology) is applied.
  • the capacitance type is a method for detecting a change in capacitance between the movable part and the fixed part of the sensor element.
  • the piezoresistive type is a method for detecting, with a piezoresistive element placed in the spring part that connects the movable part and the fixed part of the sensor element, the strain generated in the spring part by acceleration.
  • the heat detection type is a method in which a hot air current is generated in a housing by a heater and a convection change due to acceleration is detected by a thermal resistance or the like.
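As a hedged illustration of how the attitude information mentioned above might be derived, the sketch below estimates the gravity ("down") direction and tilt angles from a static three-axis accelerometer reading; the axis conventions and the function name are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def tilt_from_accelerometer(ax, ay, az):
    # When the device is (nearly) static, the accelerometer measures gravity.
    # The normalized reading gives the direction of "down" in the device's
    # coordinate system, from which pitch and roll angles can be derived.
    g = np.array([ax, ay, az], dtype=float)
    g /= np.linalg.norm(g)
    pitch = np.degrees(np.arctan2(-g[0], np.hypot(g[1], g[2])))
    roll = np.degrees(np.arctan2(g[1], g[2]))
    return g, pitch, roll

down, pitch_deg, roll_deg = tilt_from_accelerometer(0.0, 0.17, 0.98)
```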
  • the image processing unit 11a includes a measurement position designation input receiving unit 11a-1, a three-dimensional position information calculating unit 11a-2, an inter-marker association processing unit 11a-3, a three-dimensional position information converting unit 11a-4, a marker movement instruction receiving unit 11a-5, an overhead view creation / display control unit 11a-6, an overhead view correction control unit 11a-7, a measurement range region division processing unit 11a-8, an area calculation unit 11a-9, and a display control unit 11a-10.
  • the user installs the two markers 20 and 21 at, for example, the position shown in FIG. 9A (step S201).
  • the marker 20 is installed in the vicinity of the center of the lower region of FIG. 9A
  • the marker 21 is installed at a position where it can be imaged simultaneously with the marker 20, in the vicinity of the center of the upper region of FIG. 9A.
  • the user moves the image processing apparatus 1 to the imaging position 90 (step S202), and the image processing apparatus 1 images the marker 20 and a part of the measurement range (step S203).
  • the image information captured by the imaging unit 100 and the distance information calculated by the distance calculation unit 102 based on the image information of the imaging unit 100 and the imaging unit 101 are output to the image processing unit 11a.
  • the image information of the imaging unit 100 is displayed on the display unit 13. While confirming the image displayed on the display unit 13, the user designates the measurement positions 900 and 901 with the input unit 14 (step S204), and the measurement position designation input receiving unit 11a-1 receives the input.
  • the three-dimensional position information calculation unit 11a-2 of the image processing unit 11a calculates the three-dimensional position information of each measurement position by the method described above (step S205).
  • the marker-to-marker association processing unit 11a-3 of the image processing unit 11a detects a marker in the image, and associates the marker (step S206). Details of step S206 will be described later.
  • the three-dimensional position information conversion unit 11a-4 of the image processing unit 11a converts the three-dimensional position information of the measurement position calculated in step S205 into the three-dimensional coordinate system of the reference marker associated in step S206. (Step S207).
  • in step S208, it is confirmed whether or not all measurement positions in the measurement range have been designated. If the designation has not been completed, the process proceeds to step S209; if the designation has been completed, the process proceeds to step S211. In step S208, a question as to whether or not to continue designating measurement positions is displayed on the display unit 13, and the user is allowed to select.
  • in step S209, the marker movement instruction receiving unit 11a-5 confirms whether or not to move a marker. If no marker is moved, the process returns to step S202; if a marker is moved, the user moves the marker (step S210) and the process returns to step S202. In step S209, the question is displayed on the display unit 13 and the user is allowed to select.
  • since imaging at the imaging positions 91 and 92 has not been completed at this point, the process returns to step S202 without moving the marker 20. Then, the user moves the image processing apparatus 1 to the imaging position 91 (step S202) and executes steps S203 to S208, whereby the three-dimensional position information of the measurement positions 910 and 911 converted into the three-dimensional coordinate system of the marker 20 is acquired. Since all the measurement positions have not yet been designated at this point, the process proceeds to step S209, and since the imaging at the imaging position 92 is not completed, the process returns to step S202.
  • next, the user moves the image processing apparatus 1 to the imaging position 92 (step S202) and executes steps S203 to S208, whereby the three-dimensional position information of the measurement positions 920, 921, and 922 converted into the three-dimensional coordinate system of the marker 20 is acquired.
  • in step S209, since the imaging of the lower region in FIG. 9A and the designation of its measurement positions have been completed, the user moves the marker 20 to the position of the marker 22 in FIG. 9B (step S210). At this time, by specifying the marker 20 to be moved on the image of the imaging position 92 displayed on the display unit 13, information indicating which marker is the moved marker is input to the image processing unit 11a.
  • note that it is not always necessary to perform the confirmation of step S209. If it is determined in step S208 that the designation of the measurement positions has not been completed and a marker needs to be moved, the user may move the marker before the imaging of step S203. At this time, it is necessary to input to the image processing unit 11a information indicating which marker has been moved, by selecting it on the image.
  • next, the user moves the image processing apparatus 1 to the imaging position 93 (step S202) and executes steps S203 to S208, whereby the three-dimensional position information of the measurement positions 930, 931, and 932 converted into the three-dimensional coordinate system of the marker 20 is acquired. Since all the measurement positions have still not been designated at this time, the process proceeds to step S209 again. Since the marker 20 has already been moved, the process returns to step S202, and the user moves the image processing apparatus 1 to the imaging position 94 (step S202). Then, steps S203 to S208 are executed, and the three-dimensional position information of the measurement positions 940 and 941 converted into the three-dimensional coordinate system of the marker 20 is acquired. Since all the measurement positions have been designated by the above processing, the process proceeds to step S211.
  • in step S211, the overhead view creation / display control unit 11a-6 projects the three-dimensional position information of all the measurement positions onto the XZ plane of the marker 20, calculates the two-dimensional position information represented by the two-dimensional coordinate system of the X and Z axes, and creates an overhead view. Furthermore, the overhead view creation / display control unit 11a-6 connects the measurement positions and displays the overhead view as shown in FIG. 10B on the display unit 13 (step S211).
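A minimal sketch (illustrative only, names assumed) of the projection used to build the overhead view, assuming the measurement positions are already expressed in the reference marker's coordinate system with Y as the height axis:

```python
import numpy as np

def overhead_projection(points_marker):
    # points_marker: (N, 3) measurement positions already expressed in the
    # reference marker's coordinate system (X, Y, Z). Dropping the Y (height)
    # component projects them onto the marker's XZ plane, giving the 2D
    # coordinates used to draw the overhead view.
    pts = np.asarray(points_marker, dtype=float)
    return pts[:, [0, 2]]

# Example: three measurement positions at different heights map to 2D points.
print(overhead_projection([[1.0, 0.2, 2.0], [3.0, 0.0, 2.5], [3.0, 0.1, 5.0]]))
```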
  • while confirming the overhead view image displayed on the display unit 13, the user designates an erroneously connected portion or a non-connected portion with the input unit 14, and the overhead view correction control unit 11a-7 corrects the overhead view based on the designated information (step S212).
  • the corrected overhead view is as shown in FIG. 10C.
  • the measurement range region division processing unit 11a-8 divides the measurement range into triangular regions (step S213), creates an overhead view as shown in FIG. 10D, and displays it on the display unit 13. If the region division is incorrect, the user designates the erroneous portion with the input unit 14, and the measurement range region division processing unit 11a-8 performs correction based on the designated information.
  • the area calculation unit 11a-9 calculates an area for each divided region, and adds all the calculation results to calculate the area of the entire measurement range (step S214). Then, the display control unit 11a-10 displays the calculated result on the display unit 13 (step S215).
  • the image processing apparatus 1 can perform measurement using two markers even in a measurement range where there is a shielding area such as a corridor including a corner.
  • the bird's-eye view is created in step S211 after all the measurement positions are specified, but the present invention is not limited to such processing.
  • for example, each time a measurement position is designated, an overhead view may be created from only the measurement positions designated so far and displayed on the display unit 13.
  • Step S206 Marker association
  • a marker is detected from the captured image and two markers are associated with each other.
  • the image processing unit 11a distinguishes the first set reference marker from the other non-reference markers.
  • association is performed by calculating a transformation matrix of a three-dimensional coordinate system from the non-reference marker to the reference marker.
  • the image processing unit 11a sets, as a reference marker, the marker 20 that appears in the image at the imaging position 90 that is first imaged.
  • the reference marker may also be specified and set on the image by the user; for example, when a plurality of markers are shown in one image, the user may select the reference marker and designate it using the input unit 14.
  • alternatively, a priority order for becoming the reference marker may be given to the markers in advance, and the markers may be set so as to be distinguished by color or the like. Since only the marker 20 is shown in the image at the imaging position 91, it is associated as the same reference marker.
  • in the image at the imaging position 92, the image processing unit 11a detects the two markers and estimates which is the reference marker from a difference in the coloration of the two markers or the like. Then, a conversion matrix for converting from the three-dimensional coordinate system of the marker 21 to the three-dimensional coordinate system of the marker 20, which is the reference marker, is calculated by a method described later.
  • in the image at the imaging position 93, the marker 21 and the marker 22, which is the moved marker 20, are shown.
  • the marker 22 is the same as the marker 20, but since it is known which marker the user moved in steps S209 and S210, it can be seen that neither of the two markers detected from the image at the imaging position 93 is the reference marker (both are non-reference markers). It can also be seen that the marker 21 detected in the image at the imaging position 93 has not moved since the image was captured at the imaging position 92, whereas the marker 22 has been moved since the image was captured at the imaging position 92.
  • the image processing unit 11a calculates a conversion matrix for converting the three-dimensional coordinate system from the marker 22 to the marker 21 by a method described later. At this time, since the transformation matrix of the three-dimensional coordinate system from the marker 21 to the marker 20 has already been calculated as described above, the transformation matrix of the three-dimensional coordinate system from the marker 22 to the marker 20, which is the reference marker, can be calculated based on these two transformation matrices.
  • the marker 22 is shown in the image at the imaging position 94, and the marker 22 is obtained by moving the marker 20, and it is known that the marker 22 has not moved since the image was captured at the imaging position 93. Therefore, by applying the three-dimensional coordinate system conversion matrix from the marker 22 to the marker 20, the conversion to the reference marker is possible.
  • a conversion matrix for converting from the marker shown in each image to the three-dimensional coordinate system of the reference marker is calculated. That is, the marker is associated.
  • coordinate system transformation of the three-dimensional position information is performed in step S207.
  • even when the number of imaging positions and measurement positions is increased in order to measure a wider range, conversion to the reference marker is possible. The same conversion is possible when two or more markers are used. However, since it is necessary to calculate the conversion matrix to the reference marker via a marker that has not moved, all the markers must not be moved at the same time. Therefore, imaging and marker movement may be repeated while alternately exchanging the marker to be moved and the marker not to be moved.
  • the reference marker is selected from the markers that appear in the first captured image.
  • the present invention is not limited to this, and it is only necessary to calculate a conversion matrix from each marker to the reference marker.
  • the conversion matrix of the three-dimensional coordinate system from the marker 20 and the marker 22 to the marker 21 may be calculated using the marker 21 imaged at the imaging position 92 as a reference marker.
  • the area can be measured if the above-described overhead view and the like are prepared based on the marker 21.
  • let M_c denote the three-dimensional coordinate system with the image processing apparatus 1 at the imaging position c as a reference, O_K the three-dimensional coordinate system based on the marker K, and O_L the three-dimensional coordinate system based on the marker L.
  • let Q_Mc, Q_OK, and Q_OL denote the vectors to a point Q viewed from each of the three three-dimensional coordinate systems.
  • the transformation matrices V_K and V_L relating M_c to the three-dimensional coordinate systems represented by the two markers can be calculated by the above-described method using equation (3). Therefore, the relationship between Q_Mc, Q_OK, and Q_OL is expressed by equation (5).
  • V_KL is the resulting transformation matrix between the coordinate systems of the two markers; note that the superscript -1 represents an inverse matrix.
  • similarly, the conversion from the three-dimensional coordinate system of the marker J to the three-dimensional coordinate system of the marker K, which is the reference marker, is expressed by equation (7).
  • V_KJ is the transformation matrix of the three-dimensional coordinate system from the marker J to the reference marker K.
  • a conversion matrix between two markers appearing in an image can be calculated. Furthermore, it is possible to calculate a conversion matrix for converting a three-dimensional coordinate system of a marker imaged at a different imaging position into a three-dimensional coordinate system of a reference marker via a marker that has not moved.
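Since equations (3) and (5) to (7) themselves are not reproduced in this text, the following numpy sketch is only a hedged reconstruction of the chaining described above: it assumes that V_K and V_L are 4x4 homogeneous transforms mapping marker coordinates into the camera coordinate system M_c of an image in which both markers appear, so that a marker-to-marker transform needs one matrix inverse and transforms to the reference marker can be chained through a marker that has not moved. All names and values are illustrative.

```python
import numpy as np

def make_transform(rotation, translation):
    # 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation.
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# V_K, V_L: assumed transforms from the coordinate systems of markers K and L
# into the camera coordinate system M_c of one imaging position.
V_K = make_transform(np.eye(3), [1.0, 0.0, 3.0])
V_L = make_transform(np.eye(3), [-0.5, 0.0, 4.0])

# Marker-to-marker transform: a point expressed in L's coordinate system,
# re-expressed in K's coordinate system (one inverse, as in the text).
V_KL = np.linalg.inv(V_K) @ V_L

# If another image relates a marker J to L, the transform to the reference
# marker K chains through the marker L that has not moved.
V_LJ = make_transform(np.eye(3), [0.2, 0.0, 1.5])
V_KJ = V_KL @ V_LJ

q_J = np.array([0.0, 0.0, 0.0, 1.0])   # a measurement position in J's system
q_K = V_KJ @ q_J                        # the same point in the reference system
```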
  • as described above, the coordinate systems of a plurality of markers can be converted based on the image information, and all the three-dimensional position information can be integrated into the coordinate system of the reference marker. Therefore, it is possible to integrate and associate the positional relationships of the subject appearing in a plurality of images captured at different positions into one coordinate system. Thereby, the user can grasp the measurement range without the trouble of keeping track of the positional relationships between individual measurements, and the burden on the user is reduced.
  • since the measurement position specified by the user is corrected to a characteristic position such as a corner or edge of the subject by image recognition technology, even if the user's specification accuracy is low, measurement position deviation is eliminated and measurement can be performed with high accuracy.
  • measurement positions at different positions are all designated in images captured at different imaging positions.
  • the user may specify a previously designated measurement position again while repeating the imaging and measurement position designation. It is also assumed that the same measurement position is photographed from a plurality of imaging positions, three-dimensional position information of the measurement position is calculated for each, and measurement is performed using the information calculated with the highest accuracy.
  • the image processing apparatus 5 in the present embodiment uses the measurement position with the shorter distance from the imaging position for measurement. This is because, as described above with respect to the stereo system, when distance information is calculated using the stereo system, distance information is calculated with higher resolution for a subject with a shorter distance. That is, the closer the distance is, the higher the accuracy of calculating the three-dimensional position information and the accuracy of the measurement result is improved.
  • the image processing apparatus 5 of the present embodiment has the same configuration as the image processing apparatus 1 of the first embodiment, except that an image processing unit 51 to which the processing described below is added is provided in place of the image processing unit 11 shown in FIG. 1A.
  • the image processing apparatus 5 of the present embodiment will be described with reference to the example of FIG.
  • the measurement position 921 is shown in both images of the imaging position 90 and the imaging position 92, and therefore can be specified in both images.
  • by converting the three-dimensional position information of the measurement position 921 specified in each image into the three-dimensional coordinate system of the marker 20, it can be seen that the specified measurement positions are the same.
  • when the conversion of the three-dimensional coordinate system is performed in step S207, the image processing unit 51 determines whether or not the same measurement position has already been designated.
  • a threshold value is set, and when the distance between the two measurement positions is equal to or less than the threshold value, it is determined that the measurement positions are the same.
  • the distance information at each imaging position is compared, and the measurement position with the shorter distance from each imaging position to the measurement position is prioritized.
  • the distance from the imaging position 90 to the measurement position 921 is compared with the distance from the imaging position 92 to the measurement position 921; here, the distance from the imaging position 90 is the shorter one.
  • therefore, the three-dimensional position information of the measurement position 921 designated in the image at the imaging position 90 is used for measurement.
  • since the distance acquisition method is not limited to the stereo method, it is preferable that the image processing unit 51 selects the measurement position with the highest distance calculation accuracy in accordance with the distance acquisition method used.
  • the reliability is calculated at the measurement position of each image, and the value with the highest reliability is used.
  • the reliability includes the evaluation value of stereo matching and the difference from the peripheral distance value.
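A rough sketch of selecting, among duplicate designations of the same measurement position, the one observed from the shorter camera distance. This is only an illustration under stated assumptions: positions are expressed in the reference marker's coordinate system, the unit is metres, and the duplicate threshold is arbitrary; none of the names come from the patent.

```python
import numpy as np

def merge_duplicate_measurements(candidates, threshold=0.05):
    # candidates: list of (point_3d, camera_to_point_distance) pairs.
    # Points closer together than `threshold` are treated as the same position,
    # and the one observed from the shorter camera distance is kept, since a
    # shorter distance gives a finer disparity (distance) resolution.
    kept = []
    for point, cam_dist in candidates:
        point = np.asarray(point, dtype=float)
        for i, (p, d) in enumerate(kept):
            if np.linalg.norm(point - p) <= threshold:
                if cam_dist < d:
                    kept[i] = (point, cam_dist)
                break
        else:
            kept.append((point, cam_dist))
    return [p for p, _ in kept]
```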
  • the image processing apparatus 5 of the present embodiment even when the user designates the same position in three-dimensional space on different images, the calculation accuracy of the three-dimensional position information is automatically higher. Since the measurement position is selected, measurement can be performed with high accuracy based on more accurate three-dimensional position information.
  • the image processing apparatus 7 of the present embodiment fixes at least one of two or more markers as an absolute reference marker that is not moved, and performs imaging while moving the other markers. Then, the three-dimensional position information of all the designated measurement positions is converted into a three-dimensional coordinate system based on the absolute reference marker. This reduces the amount of processing when re-execution is necessary, such as when an imaging position or measurement position is specified incorrectly during measurement, thereby reducing the burden on the user.
  • the image processing device 7 of the present embodiment can be realized by the same configuration as the image processing device 1 of the first and second embodiments or the image processing device 5 of the third embodiment. In the following description, it is assumed that the image processing device 7 has the same configuration as the image processing device 1.
  • the user places a marker in the measurement range and takes an image with the acquisition unit 10 of the image processing device 7, and the image processing unit 11 of the image processing device 7 detects an absolute reference marker that appears in the first imaged image.
  • the user repeats imaging while alternately moving markers (non-reference markers) other than the absolute reference marker as in the first embodiment, and designates all vertex positions in the measurement range.
  • the image processing unit 11 calculates the three-dimensional position information of all the designated measurement positions, and further converts all the three-dimensional position information into a three-dimensional coordinate system based on the absolute reference marker. Based on the three-dimensional position information obtained by converting the coordinate system, the area is calculated in the same manner as the method described in the first embodiment.
  • for calculating the area, the image processing unit 11 creates an overhead view based on the three-dimensional position information, the user designates and corrects the range to be measured with the input unit 14 of the image processing device 7 while checking the information displayed on the display unit 13, and the image processing unit 11 calculates the area.
  • the image processing unit 11 detects whether or not the absolute reference marker is included in each captured image, and if it is detected, stores the detection timing in the storage unit 12 of the image processing device 7.
  • the first detection timing is when the absolute reference marker is first detected
  • the second detection timing is when the absolute reference marker is detected next. Thereafter, the detection timing is stored in the storage unit every time it is detected.
  • for example, detection of the absolute reference marker in the first captured image is the first detection timing,
  • detection at the third captured image is the second detection timing
  • detection at the fifth captured image is the third detection timing.
  • the measurement position designated in the first to third captured images between the first and second detection timings is coordinate-converted via a marker appearing in the first to third captured images.
  • the three-dimensional position information of these measurement positions is converted into a three-dimensional coordinate system based on the absolute reference marker at the first detection timing.
  • the measurement position designated in the 3rd to 5th captured images between the second and third detection timings is coordinate-converted via a marker appearing in the 3rd to 5th captured images.
  • the three-dimensional position information of these measurement positions is converted into a three-dimensional coordinate system based on the absolute reference marker at the second detection timing.
  • the measurement positions specified in the fifth and sixth captured images are coordinate-converted via markers appearing in the respective captured images, and converted into a three-dimensional coordinate system based on the absolute reference marker at the third detection timing. Is done.
  • the three-dimensional position information of the measurement positions designated between two detection timings is converted, in the same manner as described above, into the coordinate system based on the absolute reference marker appearing in the first detected image of that interval.
  • the image processing apparatus 7 varies the image used for coordinate system conversion, that is, the marker used during coordinate conversion, according to the detection timing of the absolute reference marker, and uses the absolute reference marker as a reference. Convert to a 3D coordinate system.
  • all converted three-dimensional position information is information based on the same coordinate system.
  • the image processing unit can calculate the area by associating all the three-dimensional position information.
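The grouping by detection timing could look roughly like the following sketch; the frame data structure is hypothetical and only illustrates that each segment starts and ends at an image in which the absolute reference marker is detected, with the closing image shared with the next segment, as in the example above.

```python
def split_by_detection_timing(frames):
    # frames: list of dicts like {"has_abs_marker": bool, "measurements": [...]},
    # in capture order (hypothetical structure for illustration only).
    # Measurements in each segment would be coordinate-converted via the markers
    # appearing in that segment's images only.
    segments, current = [], None
    for frame in frames:
        if frame["has_abs_marker"]:
            if current:
                current.append(frame)   # detection image closes the previous segment
                segments.append(current)
            current = [frame]           # ... and also opens the next one
        elif current is not None:
            current.append(frame)
    if current:
        segments.append(current)
    return segments
```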
  • the marker installation and imaging can be performed again only in the detection timing section including the image obtained by imaging the erroneously moved marker.
  • the omitted portion may simply be imaged additionally while capturing the absolute reference marker and a non-reference marker in the image. If the absolute reference marker has not moved, the three-dimensional position information of the measurement positions specified in the additionally captured images can also be converted into the coordinate system based on the same absolute reference marker. As described above, even when there is a measurement omission, additional imaging and measurement position designation can be performed easily.
  • FIG. 12 shows an overhead view of a long passage with a corner such that two passages are connected.
  • FIG. 12 shows an absolute reference marker 60 and movable non-reference markers 61 and 62.
  • the imaging positions 70, 71, 72, 73, 74, 75, 76, 77, 78, and 79 are indicated by broken-line circle shapes, and broken-line arrows extending from the respective imaging positions indicate the imaging directions.
  • alternate long and short dash line arrows L11, L12, L13, and L14 indicate the directions in which the markers 61 and 62 are moved, and the positions of the moved markers 61 and 62 are indicated by broken-line figures at the ends of the respective alternate long and short dash arrows.
  • the markers 60, 61, 62 all have the same shape as the marker 2 shown in FIGS. 2 and 6 of the first embodiment. However, in order to make it easier to detect that the markers are different in the image to be captured, the color scheme of the markers is different. Further, a cross indicates a measurement position, and one of them is a measurement position 80.
  • the user first places the absolute reference marker 60 near the center of the measurement range. Also, the markers 61 and 62 are installed at the positions shown in the figure. The absolute reference marker 60 is not moved until it is confirmed that there is no correction such as re-imaging after completion of the measurement.
  • the absolute reference marker 60 and the marker 61 are sequentially imaged at the imaging position 70, the markers 61 and 62 at the imaging position 71, and the marker 62 at the imaging position 72.
  • the marker 62 is moved in the direction of the one-dot chain line arrow L11, and the markers 61 and 62 are sequentially imaged at the imaging position 73, and the marker 62 is sequentially imaged at the imaging position 74.
  • the markers 61 and 62 are moved from the right side region to the left side region in the overhead view in the direction of dashed-dotted arrows L12 and L13.
  • the absolute reference marker 60 and the marker 61 are sequentially imaged at the imaging position 75
  • the markers 61 and 62 are sequentially imaged at the imaging position 76
  • the marker 62 is sequentially imaged at the imaging position 77.
  • the marker 62 is moved from the lower left portion of the overhead view to the upper left portion in the direction of a one-dot chain line arrow L14, and the markers 61 and 62 are sequentially imaged at the imaging position 78 and the marker 62 is sequentially imaged at the imaging position 79.
  • the measurement position of the left region of the overhead view of FIG. 12 is designated.
  • the absolute reference marker 60 is reflected in the images taken at the imaging positions 70 and 75, and the image processing unit 11 detects the absolute reference marker 60 and stores it in the storage unit 12 as the first and second detection timings.
  • the marker is moved and imaged by the above procedure, and the user designates the measurement position in the captured image. Marker detection and measurement position designation are performed in the same manner as described in the first embodiment.
  • the image processing unit 11 converts the measurement positions designated in each image into the three-dimensional coordinate system with the absolute reference marker 60 as a reference.
  • the measurement positions designated in the images captured between the first detection timing and the second detection timing, that is, in the images from the imaging positions 70 to 75, are coordinate-converted via the markers shown in the images from the imaging positions 70 to 75.
  • the measurement positions designated in the images captured after the second detection timing that is, in the images captured from the imaging positions 75 to 79, are coordinated via the markers shown in the images from the imaging positions 75 to 79. Conversion is performed.
  • the image used for coordinate conversion that is, the marker used during coordinate conversion differs between the right region and the left region in the overhead view of FIG.
  • the image processing unit 11 creates an overhead view based on the coordinate-converted three-dimensional position information and displays it on the display unit 13.
  • the user specifies and corrects the measurement range and the like with the input unit 14 while confirming the display, and the image processing unit 11 calculates the area.
  • Each processing such as coordinate conversion, overhead view creation, user designation, and area calculation is performed in the same manner as the method described in the first embodiment.
  • when a correction is necessary, for example, the markers 61 and 62 are set again at the positions indicated by the solid-line figures in FIG. 12. Then, imaging is performed at the imaging positions 70, 71, and 72, and the measurement positions are designated again. The three-dimensional position information of these measurement positions is coordinate-converted into the coordinate system of the absolute reference marker 60 and integrated with the other designated measurement positions to calculate the area. As described above, since the absolute reference marker 60 is not moved, only the necessary portions need to be corrected or added.
  • the three-dimensional position information is converted into the three-dimensional coordinate system of the absolute reference marker that does not move, so that the measurement position and the like can be corrected. This eliminates the need to redo all measurements, improving the convenience for the user. Furthermore, even if there is a measurement omission, it is not necessary to redo all measurements from the beginning, and additional imaging and measurement positions can be specified, reducing the burden on the user.
  • measurement may be performed by imaging while moving one non-reference marker around one absolute reference marker, or three or more non-reference markers may be used.
  • measurement is performed with one absolute reference marker and a plurality of moving markers for each area, and finally coordinate conversion between the plurality of absolute reference markers is performed to integrate the three-dimensional position information. You may make it measure.
  • Each component of the present invention can be arbitrarily selected, and an invention having a selected configuration is also included in the present invention.
  • the image processing apparatus 1, 5 or 7 described in each of the above embodiments includes, for example, a processor such as a DSP (Digital Signal Processor) and a CPU (Central Processing Unit), a main storage device such as a RAM (Random Access Memory), and the like.
  • the processing of each processing unit described above can be realized by executing a program stored in the storage device.
  • the above processing may be realized by hardware by providing a programmable integrated circuit such as an FPGA (Field Programmable Gate Array) or an integrated circuit dedicated to the above processing.
  • the acquisition unit in each of the above embodiments acquires image information and distance information by the two imaging units and the distance calculation unit.
  • however, any device may be used as long as the image information and the distance information of the subject corresponding to each pixel of the image information are input to the image processing unit.
  • image information and distance information may be acquired by an imaging device and a distance measuring device.
  • as the distance measuring device, for example, a technique using infrared rays, typified by the TOF (Time of Flight) method, may be used.
  • in the TOF method, light that cannot be visually recognized by humans, such as infrared light, is emitted from a light source such as an LED (Light Emitting Diode), and the elapsed time from when the light is emitted until it is reflected by the surface of the subject and returns is measured.
  • the distance to the subject is measured based on this elapsed time.
  • there is a method of directly measuring the time until pulsed laser light is reflected by the surface of the object and returns, and a method of modulating the irradiated light (for example, infrared light) and calculating the distance based on the phase difference between the phase of the irradiated light and the phase of the light reflected back.
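For illustration only (not from the patent), the two TOF variants reduce to the following simple relations under idealized conditions; the modulation frequency and time values are assumed examples.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance_from_time(round_trip_seconds):
    # Direct time-of-flight: distance from the round-trip time of a light pulse.
    return C * round_trip_seconds / 2.0

def tof_distance_from_phase(phase_rad, f_mod_hz):
    # Phase-shift variant: the emitted (e.g. infrared) light is modulated at
    # f_mod_hz and the distance follows from the measured phase difference.
    # Unambiguous only while the phase difference stays below 2*pi.
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

print(tof_distance_from_time(20e-9))           # ~3.0 m
print(tof_distance_from_phase(math.pi, 30e6))  # ~2.5 m
```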
  • the image is acquired by the acquisition unit, the distance information is calculated by the distance calculation unit, and then the distance information is input to the image processing unit.
  • the user may specify a measurement position on the image, and the distance calculation unit may calculate distance information for only the specified position. As a result, it is possible to measure without calculating distance information of a subject in a range not used for measurement, and thus it is possible to reduce the processing amount.

Abstract

An image processing device for calculating three-dimensional position information of a measurement position on a subject to be measured, on the basis of a plurality of pieces of image information of the subject and a marker captured from a plurality of imaging positions, characterized in that the marker has a plurality of pointers whose positional relationships express a three-dimensional coordinate system, and in that the device has a three-dimensional position information calculation unit for calculating, on the basis of the plurality of pieces of image information, three-dimensional position information expressed by the three-dimensional coordinate system of each of the plurality of imaging positions, and a three-dimensional position information conversion unit for converting the coordinate system expressing the three-dimensional position information calculated by the three-dimensional position information calculation unit from the three-dimensional coordinate system of each imaging position to the three-dimensional coordinate system expressed by the marker.

Description

Image processing apparatus, image processing method, and program
The present invention relates to an image processing apparatus, an image processing method, and the like that calculate the length and area of a subject based on image information.
For example, in home renovations and cleaning services, there is a demand for measuring the length and area of parts such as floors, walls, and windows in order to estimate the construction fee according to the size of the construction area.
Conventionally, the area was measured by a user manually measuring the length of the measurement object with a tape measure or laser measuring instrument and calculating by hand. If measurement is performed manually in this way, there is a possibility that an error may be caused by the measurer or that a measurement may be forgotten. In view of this, a method has been developed in which an object to be measured is imaged using an imaging device such as a digital camera, and the three-dimensional position information of the object is estimated from the acquired image information and used for measurement.
As a technique using an imaging device, for example, there is a technique using one marker having a known shape and size (Patent Document 1). In this method, a subject to be measured and a marker are imaged by an imaging device, and a plurality of images are acquired by performing this multiple times from different imaging positions. Then, the user specifies the vertex positions of the measurement range on the acquired images. The three-dimensional position information of the subject at the positions specified by the user is estimated based on the actual shape and dimensions of the marker and on the area and inclination information of the marker appearing in each of the plurality of images, and the area of the measurement range is calculated based on this information.
JP 2010-286450 A
In the technique disclosed in Patent Document 1, a subject to be measured and a marker are imaged by an imaging device from a plurality of different imaging positions. At this time, the marker must appear in all of the plurality of captured images, and the entire measurement target must be captured in all of the plurality of images. That is, the measurable range is a range in which the marker and the measurement target can be imaged within the imaging range of the imaging apparatus.
Therefore, when measuring a large subject with an imaging device having a narrow angle of view, it is necessary to take an image at a distance from the subject in order to keep the subject within the imaging range of the imaging device. However, in a scene where a sufficient distance from the subject cannot be taken, such as when measuring indoors, it is impossible to obtain the area with a single measurement.
In such a scene, for example a large room or a long corridor floor, when the area is to be measured by the technique of Patent Document 1, the measurement range is first divided into a plurality of ranges that fall within the imaging range of the imaging device. Measurement is then performed in each divided range, and the area of the entire floor can be acquired by integrating the measurement results.
However, this method requires the user to accurately grasp the positional relationship in each measurement, repeat the measurement, and integrate the results.
As described above, when using the technique of Patent Document 1, there is a problem that the user himself or herself must accurately grasp the measurement positions and manage the correspondence, and the burden is extremely large. In addition, there is a problem that the error of the integrated final measurement result becomes large, for example, when the accuracy of the user's specification of the measurement range is low or when there is a measurement omission.
In addition, even when measuring a subject having a size that falls within the imaging range of the imaging apparatus, there may be cases where, due to an obstruction, all the vertex positions of the measurement range cannot be captured at once from any single imaging position. Examples are the floor area of a corridor including a corner, or the floor area of a non-rectangular room that includes a recessed portion on the wall side. In these cases as well, it is necessary to measure the entire floor by dividing it into a plurality of ranges as described above, so the same problems as above arise.
The present invention has been made in view of the above points, and an object of the present invention is to provide an image processing apparatus and an image processing method capable of measuring, with high accuracy and by a method with little burden on the user, even a measurement range that does not fall within the imaging range of the imaging device.
According to one aspect of the present invention, there is provided an image processing apparatus that calculates three-dimensional position information of a measurement position on a subject to be measured, based on a plurality of pieces of image information obtained by imaging the subject and a marker at a plurality of imaging positions, wherein the marker has a plurality of pointers whose positional relationship represents a three-dimensional coordinate system, the image processing apparatus comprising: a three-dimensional position information calculation unit that calculates, based on the plurality of pieces of image information, the three-dimensional position information represented by the three-dimensional coordinate system of each of the plurality of imaging positions; and a three-dimensional position information conversion unit that converts the coordinate system representing the three-dimensional position information calculated by the three-dimensional position information calculation unit from the three-dimensional coordinate system of each imaging position into the three-dimensional coordinate system represented by the marker.
According to this invention, a marker and a part of the measurement range are imaged at each imaging position, the user designates measurement positions in each of the captured images, and the calculation is performed with the three-dimensional coordinate system of the marker as a reference; therefore, even in a situation where the subject cannot be captured in a single image, the length and area of the subject can be accurately measured.
Further, in the present invention, the markers include a plurality of markers including a first marker representing a first three-dimensional coordinate system and a second marker representing a second three-dimensional coordinate system different from the first three-dimensional coordinate system; the three-dimensional position information calculation unit calculates a first positional relationship between the first three-dimensional coordinate system and the second three-dimensional coordinate system based on image information, among the plurality of pieces of image information, in which the first marker and the second marker are imaged simultaneously; and the three-dimensional position information conversion unit converts the coordinate system representing the three-dimensional position information calculated by the three-dimensional position information calculation unit from the three-dimensional coordinate system based on the image processing apparatus into the second three-dimensional coordinate system, based on image information in which the second marker is imaged, and further converts the coordinate system from the second three-dimensional coordinate system into the first three-dimensional coordinate system based on the first positional relationship.
According to this invention, all the calculated three-dimensional position information is converted into the three-dimensional coordinate system of one reference marker, and the length and area of the subject can be accurately measured based on the three-dimensional position information whose coordinate systems have been integrated.
This specification includes the contents described in the specification and/or drawings of Japanese Patent Application No. 2014-085659, which is the basis of the priority of the present application.
According to the present invention, even in a measurement range that does not fall within the imaging range of the imaging device, it is possible to measure with high accuracy by a method with less burden on the user.
The drawings are briefly described as follows:
  • A schematic block diagram illustrating a configuration example of an image processing apparatus according to the first embodiment of the present invention.
  • A functional block diagram showing a configuration example of an image processing unit.
  • A schematic block diagram showing a configuration example of an image processing apparatus using a measurement method with two markers according to the second embodiment of the present invention.
  • A diagram showing an example of a measurement scene with one marker.
  • A diagram showing an image captured by the image processing apparatus.
  • A flowchart showing the flow of measurement processing in the measurement method using one marker according to the first embodiment of the present invention.
  • A diagram showing the relationship between parallax and distance.
  • An external view showing a configuration example of a marker.
  • A diagram showing the coordinate positional relationship between the image processing apparatus, a marker, and a subject.
  • A diagram showing vectors to a marker in a three-dimensional coordinate system.
  • An overhead view of the measurement range according to the first embodiment of the present invention.
  • An overhead view of the measurement range according to the first embodiment of the present invention.
  • An overhead view of the measurement range according to the first embodiment of the present invention.
  • An overhead view of the measurement positions according to the first embodiment of the present invention.
  • An overhead view in which the measurement positions according to the first embodiment of the present invention are connected in the order of designation.
  • An overhead view of the measurement range according to the first embodiment of the present invention.
  • An overhead view in which the measurement range according to the first embodiment of the present invention is divided into triangular regions.
  • A flowchart showing the flow of measurement processing in the measurement method using two markers according to the second embodiment of the present invention.
  • An overhead view of the measurement range according to the second embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. Note that the expressions in the drawings are exaggerated for easy understanding and may differ from actual ones.
(First embodiment)
FIG. 1A is a functional block diagram illustrating a schematic configuration example of the image processing apparatus 1 according to the first embodiment of the present invention.
The image processing apparatus 1 according to the present embodiment includes: an acquisition unit 10 that images a subject to acquire image information and calculates distance information corresponding to the image information; an image processing unit 11 that performs image processing based on the output information of the acquisition unit 10 and the input information of the input unit 14 and calculates measurement values; a storage unit 12 that stores the output information of the acquisition unit 10 and the image processing unit 11; a display unit 13 that displays the output information of the acquisition unit 10 and the image processing unit 11 and the information stored in the storage unit 12; and an input unit 14 that receives user operations and outputs input information to the image processing unit 11.
The acquisition unit 10 includes two imaging units 100 and 101, and a distance calculation unit 102 that calculates distance information based on the two pieces of image information acquired by the two imaging units 100 and 101.
The imaging units 100 and 101 are arranged side by side or one above the other so that their optical axes are substantially parallel. The imaging units 100 and 101 each include an optical system such as a lens module (not shown) and an image sensor such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor). They also include an analog signal processing unit, an A/D (Analog/Digital) conversion unit, and the like, and output the signal from the image sensor as image information. In the present embodiment, as an example, the two imaging units 100 and 101 have the same configuration; however, imaging units having different configurations, such as different resolutions or angles of view, may be used as long as the two imaging units can capture the same area and correspondence between pixels can be established.
 距離算出部102は、撮像部100および101で取得された画像情報に基づき、ステレオ方式によって被写体の距離情報を算出する。 The distance calculation unit 102 calculates subject distance information by a stereo method based on the image information acquired by the imaging units 100 and 101.
 ステレオ方式は、2つの撮像部の光軸が略平行になるように並べ、2つの撮像部でほぼ同じ領域を撮像し、得られた2つの画像間で対応する画素の視差を求め、視差を基に距離を算出するものである。ステレオ方式において、2つの画像間で対応する画素を求めることはステレオマッチングと呼ばれる。 The stereo system is arranged so that the optical axes of the two imaging units are substantially parallel, and the two imaging units capture substantially the same region, obtain the parallax of the corresponding pixels between the two obtained images, and calculate the parallax. Based on this, the distance is calculated. In the stereo method, obtaining corresponding pixels between two images is called stereo matching.
 In stereo matching, one of the two images is set as the base image and the other as the reference image. When the imaging units are arranged side by side horizontally, a pixel in the base image is matched by scanning the reference image in the horizontal direction. Pixel matching is performed in units of blocks centered on the pixel of interest: the SAD (Sum of Absolute Differences), the sum of the absolute differences of the pixels in a block, is computed, and the block that minimizes the SAD value is selected, thereby determining the pixel in the reference image that corresponds to the pixel of interest in the base image. Besides SAD, there are other matching methods such as SSD (Sum of Squared Differences), graph cuts, and DP (Dynamic Programming) matching. The disparity value can also be calculated when the two imaging units are arranged vertically rather than horizontally; in that case, the reference image is scanned in the vertical direction instead of the horizontal direction.
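 As an illustration only (not part of the disclosed apparatus), a minimal SAD block-matching sketch along these lines could look as follows in Python with NumPy; the window size and search range are arbitrary assumptions.

```python
import numpy as np

def sad_disparity(base, ref, y, x, max_disp=64, half_win=4):
    """Estimate the disparity of base[y, x] by SAD block matching.

    base, ref: grayscale images (2-D float arrays) from the two imaging units.
    The reference image is scanned horizontally and the disparity minimizing
    the sum of absolute differences over the block is returned.
    """
    y0, y1 = y - half_win, y + half_win + 1
    x0, x1 = x - half_win, x + half_win + 1
    block = base[y0:y1, x0:x1]

    best_d, best_sad = 0, np.inf
    for d in range(max_disp):
        if x0 - d < 0:          # candidate block would leave the image
            break
        cand = ref[y0:y1, x0 - d:x1 - d]
        sad = np.abs(block - cand).sum()
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d
```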
 When a corresponding pixel is found by stereo matching, the disparity value of that pixel is known; if this is performed for all pixels, a disparity value corresponding to each pixel of the base image is obtained. Note, however, that since disparity is obtained by stereo matching, it can be computed only for the region that is captured in common by the two images.
 Based on the calculated disparity value, the distance between the image processing apparatus 1 and the subject can be calculated by the equation Z = (f × B) / (p × d), where Z is the distance value, f is the focal length of the imaging units, B is the baseline length representing the distance between the two imaging units, p is the pixel pitch of the image sensors of the imaging units, and d is the disparity value.
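 A one-line helper applying this relation is sketched below; the numeric values of f, B, and p are assumptions for illustration and would have to come from the actual camera calibration.

```python
def disparity_to_distance(d, f=4.0e-3, B=50.0e-3, p=2.0e-6):
    """Z = (f * B) / (p * d): distance [m] from disparity d [pixels].

    f: focal length [m], B: baseline [m], p: pixel pitch [m].
    """
    return (f * B) / (p * d)

# With the assumed parameters, a disparity of 50 pixels gives
# disparity_to_distance(50) == 2.0 metres.
```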
 There is a correlation between distance and disparity, as shown in FIG. 5. In the graph of FIG. 5, the horizontal axis represents distance and the vertical axis represents disparity. As shown in FIG. 5, distance and disparity are inversely proportional rather than linearly related. That is, the shorter the distance, the larger the disparity; the longer the distance, the smaller the disparity; and the shorter the distance, the higher the distance resolution. Therefore, when the distance to a subject is obtained by the stereo method, the closer the subject is imaged, the higher the resolution with which the distance to the subject can be obtained.
 In the present embodiment, the imaging unit 100 is used as the reference of the three-dimensional coordinate system at the imaging position. This depends on which imaging unit's image is used as the base image described above. That is, if the image of the imaging unit 101 is used as the base image, the reference of the three-dimensional coordinate system becomes the imaging unit 101, and the rear principal point position of the optical system of the imaging unit 101 becomes the origin of the three-dimensional coordinate system. Either imaging unit may serve as the reference as long as it is fixed to one of them.
 In the above description, the method of calculating disparity values for all pixels of the base image has been explained. However, to calculate an area or a length, it suffices to calculate the three-dimensional coordinates of the measurement positions and the marker; the amount of computation can therefore be reduced by calculating disparity values only for the measurement positions and the marker.
 The image processing unit 11 is a processing device that includes, for example, a processor such as a CPU (Central Processing Unit) and a main storage device such as a RAM (Random Access Memory), and performs processing by executing a program stored in the storage device.
 The storage unit 12 is a storage device constituted by, for example, a flash memory or a hard disk.
 The display unit 13 is a display whose pixels are formed by, for example, liquid crystal elements or organic EL (Electro Luminescence) elements.
 The input unit 14 is, for example, an input device such as buttons or switches provided on the image processing apparatus 1, or a touch panel. In the case of a touch panel, by using a common touch panel of the resistive or capacitive type as the display unit 13, the display unit 13 and the input unit 14 can be combined into a single device. This makes it possible, for example, to touch an image displayed on the screen to designate a measurement position on the subject.
 In this embodiment, the image processing apparatus 1 including the acquisition unit 10, the image processing unit 11, the storage unit 12, the display unit 13, and the input unit 14 is described as an example, but the processing units may be separate devices. For example, image information and distance information acquired by a separately prepared device may be transferred via a network, or via a storage device such as a flash memory, to an image processing device having the functions of the image processing unit 11. The display unit 13 may be an external display device such as a liquid crystal monitor, and the input unit 14 may be an external input device such as a mouse or keyboard; the same effects as those of the image processing apparatus 1 of the present embodiment can still be obtained.
 In the following, first, as the first embodiment, a method of performing measurement by imaging a plurality of times using one marker as the integration reference for three-dimensional position information will be described.
 <Measurement method using one marker>
 FIG. 2 is an overview diagram showing how the area of a subject is measured using one marker. FIG. 2 shows the marker 2 and the subject 3 being imaged by the image processing apparatus 1 having the acquisition unit 10.
 The three-dimensional coordinate system Ma shown in FIG. 2 is a three-dimensional coordinate system referenced to the image processing apparatus 1 when imaging is performed by the image processing apparatus 1 at the imaging position a.
 Similarly, the three-dimensional coordinate system Mb is a three-dimensional coordinate system referenced to the image processing apparatus 1 when imaging is performed by the image processing apparatus 1 at the imaging position b.
 The origins of the three-dimensional coordinate systems Ma and Mb coincide with the rear principal point position of the optical system of the imaging unit 100, and the optical axis of the imaging unit 100 coincides with the Z axis of each three-dimensional coordinate system. In the present embodiment, the imaging unit 100 is used as the reference of the three-dimensional coordinate system at each imaging position, but the imaging unit 101 may be used instead. As described above, the reference of the three-dimensional coordinate system changes depending on which of the images of the imaging units 100 and 101 is used as the base image when calculating distance by the stereo method.
 The three-dimensional coordinate system Om is a three-dimensional coordinate system referenced to the marker 2. The origin of the three-dimensional coordinate system Om is a feature point (hereinafter also referred to as a "pointer") located at the center of the marker 2. Details of the marker 2 will be described later.
 In the example of FIG. 2, the measurement range is indicated by a thick line on the subject 3. To measure the area of the rectangular region constituting the measurement range, the three-dimensional position information of the four measurement positions a1, a2, b1, and b2 indicated by x marks is acquired. The measurement positions a1, a2, b1, and b2 are the vertices of the measurement range. The measurement range on the subject 3 is too large to fit within the imaging range of the imaging units 100 and 101, so measurement with a single capture is impossible. Therefore, the image processing apparatus 1 captures images from the two positions, imaging position a and imaging position b, acquires the three-dimensional position information of the measurement positions a1, a2, b1, and b2, and measures the area of the rectangular region a1a2b2b1.
 FIGS. 3A and 3B show the images captured by the image processing apparatus 1 from the imaging positions a and b, respectively; each shows the image captured by the imaging unit 100. Note that at each of the imaging positions a and b, imaging is performed by both the imaging units 100 and 101, so a total of four images are acquired.
 FIGS. 3A and 3B show that the marker 2 and part of the subject 3 are captured. FIG. 3A shows the vertices a1 and a2 of the subject 3, and FIG. 3B shows the vertices b1 and b2 of the subject 3. The imaging positions a and b only need to be set so that the marker 2 and the four vertices a1, a2, b1, and b2 of the measurement range are captured; the imaging positions and the position of the marker 2 can be set freely. In the example of the present embodiment, however, the marker is kept fixed and is not moved while one measurement range is being measured. This is because the coordinate transformation described later is performed with the marker position as the reference.
 As described above, images are captured by the imaging units 100 and 101 at each of the imaging positions a and b. Based on these two pieces of image information, the distance calculation unit 102 calculates the distance information to the subject by the stereo method. This distance information corresponds to each pixel of the image of the imaging unit 100. The image information of the imaging unit 100 and the distance information of the distance calculation unit 102 are output to the image processing unit 11.
 Based on the image information captured at the imaging position a and the calculated distance information, the image processing unit 11 calculates the three-dimensional position information of the subject shown in FIG. 3A as three-dimensional coordinate values in the three-dimensional coordinate system Ma. Likewise, based on the image information captured at the imaging position b and the calculated distance information, it calculates the three-dimensional position information of the subject shown in FIG. 3B as three-dimensional coordinate values in the three-dimensional coordinate system Mb. At this point, the calculated three-dimensional position information of the vertices a1 and a2 and that of the vertices b1 and b2 are expressed in different reference three-dimensional coordinate systems, so the area cannot be measured as they are.
 Therefore, the image processing unit 11 of the present embodiment converts the three-dimensional position information of the vertices a1, a2 and the vertices b1, b2, which was calculated in different coordinate systems, into three-dimensional position information in the three-dimensional coordinate system Om referenced to the marker 2. The coordinate system conversion method will be described later. The converted three-dimensional position information of the vertices a1, a2, b1, and b2 is expressed in the same three-dimensional coordinate system Om and represents the positional relationship in real space, so the area can be calculated.
 The image processing unit 11 calculates the area of the rectangular region connecting the vertices based on the three-dimensional position information whose coordinate system has been converted. Calculating the area of a region formed by vertices whose three-dimensional positions are known is a well-known technique, so a detailed description is omitted; for example, the area of the rectangular region a1a2b2b1 can be obtained by calculating and adding the areas of two triangular regions, one connecting the vertices a1, a2, b1 and the other connecting the vertices a2, b1, b2.
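 As a minimal sketch of this step (illustrative, not the claimed implementation), the triangle areas can be computed from the converted three-dimensional coordinates with cross products:

```python
import numpy as np

def triangle_area(p0, p1, p2):
    """Area of the triangle spanned by three 3-D points."""
    return 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))

def quad_area(a1, a2, b1, b2):
    """Area of the rectangular region a1-a2-b2-b1, split into the two
    triangles (a1, a2, b1) and (a2, b1, b2) as described in the text."""
    a1, a2, b1, b2 = map(np.asarray, (a1, a2, b1, b2))
    return triangle_area(a1, a2, b1) + triangle_area(a2, b1, b2)
```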
 In the above example, the area of a rectangular region surrounded by four vertices is obtained; however, if three or more measurement positions are designated, the area of a triangular region surrounded by three vertices can be measured, and if two or more points are designated, the distance between two points, that is, a length, can also be calculated. The image processing unit 11 can therefore output the calculated length or area values as measurement information. It is also possible to calculate and output information such as the angles formed by straight lines or line segments passing through the vertices.
 The area value calculated by the image processing unit 11 as described above is output to and stored in the storage unit 12. It can also be output to the display unit 13 and shown on the display to inform the user of the measurement result. This makes it possible to check the measurement result later, or to display it on the display unit 13 together with other measurement results for confirmation. In addition to the measurement information, the storage unit 12 may also store the image information and distance information of the acquisition unit 10. This makes it possible to perform the measurement later based on the image information and the distance information, rather than on the spot.
 By displaying the image information of the imaging unit 100 on the display unit 13 in addition to the measurement result, the user can check the captured image. This allows the user to determine the measurement positions by designating points on the subject in the image with the input unit 14 while checking the display on the display unit 13. In the above example, the user designates the two-dimensional coordinate positions of the vertices a1, a2 and the vertices b1, b2 on the images of FIGS. 3A and 3B displayed on the display unit 13. The information on the designated positions input through the input unit 14 is output to the image processing unit 11, which acquires the three-dimensional position information of the designated positions and then performs the coordinate conversion processing.
 As described above, the image processing apparatus 1 of the present embodiment calculates the three-dimensional position information of the measurement positions on the subject from the acquired image information and distance information, converts the plural pieces of three-dimensional position information referenced to different three-dimensional coordinate systems into the three-dimensional coordinate system of the marker 2, which serves as the single integration reference, and then calculates the area of the measurement range based on the three-dimensional position information expressed in that single three-dimensional coordinate system.
 In the above example, measurement is performed by imaging from two imaging positions, but additional imaging positions may be used to increase the number of captured images and the number of measurement positions designated by the user. In this case as well, measurement becomes possible by converting the three-dimensional position information of the subject calculated at each imaging position into the three-dimensional coordinate system referenced to the single marker, using the same method as described above.
 The measurement positions on the subject and the position of the marker appearing in the image are designated by the user with the input unit 14 while checking the image displayed on the display unit 13. At this time, by correcting the position designated by the user to the most characteristic position in the vicinity of the designated position, or to a position at a corner or edge of the subject, the measurement positions and the marker position can be designated with high accuracy even when the user's designation accuracy is low. This is because measurement positions are likely to be characteristic parts such as the corners of a room or the edges of a wall. The correction of the designated position is performed by the image processing unit 11 based on the image information and the distance information. Since the shape and color of the marker are known, the marker may also be detected automatically using a known image recognition technique such as template matching.
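 One plausible way to realize such a correction (an assumption for illustration only; the text does not specify the method) is to snap the designated pixel to the strongest image-gradient response in a small neighborhood:

```python
import numpy as np

def snap_to_feature(gray, u, v, radius=10):
    """Move a user-designated pixel (u, v) to the strongest edge/corner-like
    response within a (2*radius + 1) square window, using gradient magnitude
    as a simple feature-strength measure (an assumed heuristic)."""
    gy, gx = np.gradient(gray.astype(float))
    strength = np.hypot(gx, gy)
    h, w = gray.shape
    y0, y1 = max(v - radius, 0), min(v + radius + 1, h)
    x0, x1 = max(u - radius, 0), min(u + radius + 1, w)
    window = strength[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return x0 + dx, y0 + dy   # corrected (u, v)
```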
(Processing procedure of the measurement method using one marker)
 The processing procedure of the measurement method using one marker described above will be explained below with reference to the functional block diagram of FIG. 1B and the flowchart of FIG. 4. FIG. 1B is a functional block diagram showing a configuration example of the image processing unit 11; such functions can be realized by either a hardware configuration or a software configuration. The image processing unit 11 includes a measurement position designation input receiving unit 11-1, a three-dimensional position information calculation unit 11-2, a three-dimensional position information conversion unit 11-3, an area calculation unit 11-4, and a display control unit 11-5.
 First, the user installs the marker 2 near the front center of the measurement range as shown in FIG. 2 (step S101). In the present embodiment, the installation position of the marker is set appropriately according to the measurement range of the subject, and once installed, the marker is kept fixed until the measurement is completed.
 Next, the user moves the image processing apparatus 1 to the imaging position a (step S102) and images the marker 2 and the part of the subject 3 that includes the vertices a1 and a2 of the rectangular region constituting the measurement range, using the two imaging units 100 and 101 of the acquisition unit 10 (step S103). At this time, the distance information is calculated by the distance calculation unit 102 based on the two pieces of captured image information.
 The image processing apparatus 1 displays on the display unit 13 the image used as the base among the two images captured by the imaging units 100 and 101 (in this embodiment, the image of the imaging unit 100), and the user designates the vertices a1 and a2 of the measurement range with the input unit 14 while checking the displayed image (step S104). The input information is received by the measurement position designation input receiving unit 11-1.
 Next, it is confirmed whether all the measurement positions, that is, all the vertex positions of the measurement range, have been designated (step S105). If so, the process proceeds to step S106; if not, the process returns to step S102. If not all measurement positions have been designated, steps S102 to S105 are repeated until they are. Whether all measurement positions have been designated is determined by, for example, displaying on the display unit 13 a question asking whether to continue designating measurement positions and letting the user choose.
 In the example of FIG. 2, after the subject is imaged at the imaging position a and the vertices a1 and a2 are selected, the process returns once to step S102. The user then moves the image processing apparatus 1 to the imaging position b (step S102), and the part of the subject 3 including the vertices b1 and b2 and the marker 2 are imaged by the image processing apparatus 1 (step S103).
 The image processing apparatus 1 displays the image of the imaging unit 100 on the display unit 13, and the user designates the vertices b1 and b2 of the measurement range on the displayed image with the input unit 14 (step S104). It is then confirmed again whether all measurement positions have been designated (step S105); since the designation is now complete, the process proceeds to step S106.
 Next, the three-dimensional position information calculation unit 11-2 of the image processing unit 11 calculates the three-dimensional position information of the measurement positions a1, a2 and the measurement positions b1, b2 designated by the user, based on the image information and the distance information output by the acquisition unit 10 (step S106).
 Then, the three-dimensional position information conversion unit 11-3 converts the three-dimensional position information of the measurement positions a1, a2 and the measurement positions b1, b2, which was calculated in different three-dimensional coordinate systems, into three-dimensional position information expressed in the three-dimensional coordinate system referenced to the marker 2 (step S107).
 Based on the converted three-dimensional position information of the measurement positions a1, a2, b1, and b2, the area calculation unit 11-4 of the image processing unit 11 calculates the area (step S108) and outputs the calculation result to the display unit 13, and the display control unit 11-5 displays it on the display unit 13 (step S109).
 By the above procedure, the image processing apparatus 1 can measure the area of the subject using one marker.
 The output information of each of the above steps, that is, the captured image information, the measurement position information designated by the user, and the output information of the acquisition unit 10 and the image processing unit 11, is stored in the storage unit 12 and read out as appropriate.
 In the above description, the processing procedure was such that after step S104 it is confirmed in step S105 whether all measurement positions have been designated, and the process proceeds to step S106 when they have; however, the procedure is not limited to this. For example, the processing of steps S106 and S107 may be performed as soon as step S104 is finished, and the confirmation of step S105 may be performed afterwards. That is, at the moment the user designates a measurement position on the captured image, the three-dimensional position information of that measurement position may be calculated, and it may then be confirmed whether all measurement positions have been designated. In this case, the calculated three-dimensional position information is stored in the storage unit 12, and after the three-dimensional position information of all measurement positions has been calculated, the process proceeds to step S108, where the image processing unit 11 reads the information from the storage unit 12 and calculates the area.
 Furthermore, for example, the processing up to step S108 may be executed as soon as three or more measurement positions have been designated, so that the area of the region formed by the already designated measurement positions is calculated. If, every time a measurement position is designated, the area of the region formed by all measurement positions designated so far is calculated and displayed on the display unit 13, the step of performing the confirmation of step S105 each time can be omitted.
(About the marker)
 As described above, the image processing apparatus 1 of the present embodiment converts the three-dimensional position information of the subject, calculated with reference to the three-dimensional coordinate system of the image processing apparatus 1, into a three-dimensional coordinate system referenced to the marker. To do this, the image processing unit 11 detects the marker in the images captured by the imaging units 100 and 101 and detects the pointers, which are feature points included in the marker, thereby calculating the three-dimensional coordinate system referenced to the marker. The marker of the present embodiment therefore has a shape that can represent a three-dimensional coordinate system and feature points that are easy to detect from an image. The coordinate system conversion method will be described later.
 FIG. 6 shows an example of the external appearance of the marker 2 in the present embodiment. The marker 2 shown in FIG. 6 has three mutually orthogonal axes, and spheres serving as pointers are arranged at the intersection of the three axes and at the tip of each axis. The pointer at the intersection of the three axes represents the origin P0 of the three-dimensional coordinate system, and the line segments connecting the centroid of the origin pointer with the centroids of the other pointers Px, Py, and Pz represent the X, Y, and Z axes of the three-dimensional coordinate system. By making the three line segments connecting the pointers orthogonal in this way, the marker can represent a three-dimensional coordinate system. Since the axes of the marker 2 are mutually orthogonal, the three-dimensional coordinate system represented by the marker 2 is a three-dimensional orthogonal coordinate system. The line segments connecting the pointers may equivalently be regarded as straight lines passing through the pointers. If the marker 2 is placed on the floor, its XZ plane is parallel to the floor surface.
 In the example of FIG. 6, the pointers are spherical so that they have the same shape when viewed from any angle or direction. This makes it easy to detect the marker, that is, each pointer, in the image. Furthermore, if the hue and saturation of each pointer are made different, each pointer can be distinguished and detected more easily in the image, and the directions of the coordinate axes can be determined from any viewing angle or direction.
 As described above, in the example of FIG. 6 the pointers are arranged so that the axes are orthogonal, so that the marker itself represents a three-dimensional orthogonal coordinate system; however, the arrangement of the pointers is not limited to this. Even when the axes of the marker are not orthogonal, if the shape of the marker, that is, the arrangement of the pointers, is known and the position of the origin and the angles between the axes are known, the axis directions can be corrected based on this prior information and the marker can still be treated as defining a three-dimensional orthogonal coordinate system. In that case, however, the processing for correcting the axis directions increases and errors may occur during the correction, so it is preferable that the marker itself have a shape that represents a three-dimensional orthogonal coordinate system, as in the example of FIG. 6.
 In the example of FIG. 6, the pointer spheres are connected by shafts, but the marker may have any configuration as long as the positional relationship between the pointers does not change. For example, the pointer spheres may be supported from below by a pedestal, or a cube of known dimensions may be used as the marker with each vertex of the cube set as a pointer. When a cube is used as the marker, however, the vertices that appear in the image may differ depending on the imaging angle and direction. It is therefore necessary to align the origin position serving as the reference of the three-dimensional coordinate system and the positions and directions of the axes based on the dimension information of the cube, that is, the information on the positional relationship of the vertices. In this case, if the faces of the cube are given different colors or different marks, the vertex positions can be determined from the information of the faces that appear in the image, and the three-dimensional coordinate system can be identified easily.
 Regardless of the marker configuration, it is desirable that the marker have a shape such that all pointers are easily captured in the image when the marker is imaged from different imaging positions.
(Conversion of the three-dimensional coordinate system)
 As described above, the image processing apparatus 1 of the present embodiment converts the three-dimensional position information of the subject calculated in the three-dimensional coordinate system of the image processing apparatus 1 into a three-dimensional coordinate system referenced to the marker. The coordinate conversion method is described below using the example of FIG. 7.
 FIG. 7 is an external view showing the coordinate positional relationship among the image processing apparatus 1, the marker 2, and a subject 4. FIG. 7 shows a three-dimensional coordinate system M referenced to the image processing apparatus 1 at a certain imaging position, a two-dimensional coordinate system m representing the image coordinate system of the imaging unit 100 included in the image processing apparatus 1, a three-dimensional coordinate system O referenced to the marker 2, and a point P on the subject 4. In the three-dimensional coordinate system O referenced to the marker 2, the unit vectors i, j, and k in the respective axis directions are shown. The two-dimensional coordinate system m is the image projection plane, onto which the subject 4 and the marker 2 are shown projected.
 Here, the Z axis of the three-dimensional coordinate system M coincides with the optical axis of the imaging unit 100, which is used as the reference in the image processing apparatus 1. The origin of the two-dimensional coordinate system m is the upper left corner of the image projection plane; the horizontal axis direction of the two-dimensional coordinate system m coincides with the X-axis direction of the three-dimensional coordinate system M, and the vertical axis direction of the two-dimensional coordinate system m coincides with the Y-axis direction of the three-dimensional coordinate system M.
 In the example of FIG. 7, all three-dimensional coordinate systems are right-handed. For the sake of explanation, the relationship between the three-dimensional coordinate system M and the two-dimensional coordinate system m is illustrated as a perspective projection model. Since the perspective projection model is a known technique, a detailed description is omitted.
 The three-dimensional position information (X, Y, Z) of a point on the subject in the three-dimensional coordinate system M can be calculated from the two-dimensional position information (u, v) of the point projected onto the image projection plane in the two-dimensional coordinate system m and the distance information Z of the subject obtained by the stereo method described above. The calculation is given by Formula (1); note that the expression for Z in Formula (1) is the same as that described above for the stereo method.

 X = (u - w/2) × p × Z / f
 Y = (v - h/2) × p × Z / f     ... (1)
 Z = (f × B) / (p × d)
 Here, f in Formula (1) is the focal length of the imaging unit, p is the pixel pitch of the image sensor of the imaging unit, w is the horizontal resolution of the image captured by the imaging unit, h is the vertical resolution, B is the baseline length between the two imaging units, and d is the disparity value obtained by the stereo method. The symbol × is the multiplication operator. As shown in Formula (1), X and Y may also be calculated by an expression in which the baseline length B and the disparity value d are substituted for Z.
 Using Formula (1), the image processing unit 11 calculates the three-dimensional position information in the three-dimensional coordinate system M from the two-dimensional position information, in the two-dimensional coordinate system m, of the point P on the subject and of the marker 2.
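 A small sketch of this back-projection follows; it is illustrative only, the parameter values are assumptions, and the image origin is taken at the top-left corner as stated in the text.

```python
import numpy as np

def pixel_to_3d(u, v, d, f=4.0e-3, p=2.0e-6, w=1920, h=1080, B=50.0e-3):
    """Back-project pixel (u, v) with disparity d into the camera coordinate
    system M, following Formula (1): Z from the disparity, X and Y from the
    offset of (u, v) relative to the image centre."""
    Z = (f * B) / (p * d)
    X = (u - w / 2.0) * p * Z / f
    Y = (v - h / 2.0) * p * Z / f
    return np.array([X, Y, Z])
```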
 Next, the calculated three-dimensional position information in the three-dimensional coordinate system M is converted into three-dimensional position information in the three-dimensional coordinate system O by the following method.
 The conversion from the three-dimensional coordinate system M to the three-dimensional coordinate system O can be expressed by a matrix operation consisting of a rotation and a translation. Let P_M be the vector from the origin of the three-dimensional coordinate system M to the point P, O_M be the vector from that origin to the pointer at the origin position of the marker 2, and P_O be the vector from the origin of the three-dimensional coordinate system O to the point P. These vectors are written as tP_M = (P_MX, P_MY, P_MZ), tO_M = (O_MX, O_MY, O_MZ), and tP_O = (P_OX, P_OY, P_OZ), where t denotes transposition.
 The unit vectors i, j, and k are written as ti = (ix, iy, iz), tj = (jx, jy, jz), and tk = (kx, ky, kz). The three-dimensional position information representing the vectors P_M and O_M is calculated by Formula (1) above. The three-dimensional position information of the point P in the three-dimensional coordinate system O, that is, P_O, is calculated by Formula (2).

 ( P_OX )   ( ix  iy  iz  -i·O_M ) ( P_MX )
 ( P_OY ) = ( jx  jy  jz  -j·O_M ) ( P_MY )     ... (2)
 ( P_OZ )   ( kx  ky  kz  -k·O_M ) ( P_MZ )
 (  1   )   (  0   0   0     1   ) (  1   )
 Here, Formula (2) is expressed using a homogeneous coordinate system, and · is the operator representing the inner product.
 The matrix contained in the right-hand side of Formula (2) is the transformation matrix: the three-dimensional position information in the three-dimensional coordinate system M is converted into three-dimensional position information in the three-dimensional coordinate system O by this transformation matrix. The transformation matrix V is shown in Formula (3).

       ( ix  iy  iz  -i·O_M )
 V  =  ( jx  jy  jz  -j·O_M )     ... (3)
       ( kx  ky  kz  -k·O_M )
       (  0   0   0     1   )
 The transformation matrix V can be calculated from the position of the marker 2 in the image, and the unit vectors i, j, and k are calculated from the three-dimensional position information of the pointers of the marker 2. The method of calculating the unit vectors i, j, and k is described below.
 The two-dimensional position information of each pointer of the marker 2 in the two-dimensional coordinate system m is obtained by detecting each pointer in the image, and the three-dimensional position information of each pointer in the three-dimensional coordinate system M is calculated by Formula (1) above. FIG. 8 is an enlarged view of the region of the three-dimensional coordinate system M and the marker 2 in FIG. 7. As shown in FIG. 8, let g0 be the vector from the three-dimensional coordinate system M to the origin pointer of the marker 2, g1 the vector to the pointer in the X-axis direction, g2 the vector to the pointer in the Y-axis direction, and g3 the vector to the pointer in the Z-axis direction; then the unit vectors i, j, and k can be calculated by Formula (4).

 i = (g1 - g0) / |g1 - g0|
 j = (g2 - g0) / |g2 - g0|     ... (4)
 k = (g3 - g0) / |g3 - g0|
 Here, | | denotes the absolute value, and the denominator in each expression of Formula (4) represents the distance between the pointers. The vector g0 is identical to the vector O_M. The transformation matrix V can then be calculated from O_M and i, j, k.
 By the above method, the three-dimensional position information in the three-dimensional coordinate system referenced to the image processing apparatus can be calculated from the two-dimensional position information of the subject in the image. Furthermore, by calculating the transformation matrix, the three-dimensional position information can be converted into the three-dimensional coordinate system referenced to the marker.
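 Putting Formulas (2) to (4) together, a compact sketch (assumed helper code, not the claimed implementation) could build V from the four pointer positions and apply it to a point:

```python
import numpy as np

def marker_transform(g0, g1, g2, g3):
    """Build the 4x4 matrix V mapping camera coordinates (system M) to
    marker coordinates (system O) from the pointer positions g0..g3
    (origin, X-, Y-, and Z-axis pointers) expressed in system M."""
    g0, g1, g2, g3 = map(np.asarray, (g0, g1, g2, g3))
    i = (g1 - g0) / np.linalg.norm(g1 - g0)
    j = (g2 - g0) / np.linalg.norm(g2 - g0)
    k = (g3 - g0) / np.linalg.norm(g3 - g0)
    R = np.vstack([i, j, k])          # rotation rows, as in Formula (3)
    V = np.eye(4)
    V[:3, :3] = R
    V[:3, 3] = -R @ g0                # translation terms -i.O_M, -j.O_M, -k.O_M
    return V

def to_marker_coords(V, p_m):
    """Convert a point from system M to system O using V (Formula (2))."""
    p_h = np.append(np.asarray(p_m, dtype=float), 1.0)
    return (V @ p_h)[:3]
```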
(Second embodiment)
 Next, a method according to the second embodiment will be described, in which a measurement range that does not fit within the imaging range of the imaging apparatus is measured using two markers.
 <Measurement method using two markers>
 In the present embodiment, the image processing apparatus 1 of FIG. 1A is used, as in the first embodiment. However, the functional block diagram of the image processing unit 11 shown in FIG. 1B is replaced by the functional block diagram of FIG. 1C, which shows a configuration example of an image processing unit 11a. The details of the functional blocks will be described later.
 The measurement range that does not fit within the imaging range of the imaging apparatus, as described in the present embodiment, is a measurement range that would require multiple measurements with the above single-marker measurement method. Examples include a large room such as a hall, or a floor such as a corridor including a corner, where an occluded region would occur with a single capture.
 In the following, the measurement method using two markers will be described taking as an example the corridor with a corner shown in FIG. 9. FIGS. 9A, 9B, and 9C are overhead views of a corridor including a corner, each showing the same region. FIG. 9A shows two markers 20 and 21, FIG. 9B shows two markers 21 and 22, and FIG. 9C shows two markers 22 and 23. The marker 22 in FIG. 9B is the same marker as the marker 20 in FIG. 9A, indicating that the marker 20 has been moved from its position in FIG. 9A in the direction of the dashed-dotted line L1 shown in FIG. 9B. Similarly, the marker 23 in FIG. 9C is the same marker as the marker 21 in FIGS. 9A and 9B, indicating that the marker 21 has been moved from its position in FIGS. 9A and 9B in the direction of the dashed-dotted line L2 shown in FIG. 9C. The marker 20 (or marker 22) and the marker 21 (or marker 23) are drawn as different figures for ease of explanation, but both actually have the same shape as the marker 2 shown in FIG. 6. However, to make it easy to detect in the captured image that the two markers are different, their color schemes may, for example, be made different. This makes it possible to distinguish the markers when associating them as described later. The method is not limited to different color schemes; any method may be used as long as the two markers can be distinguished, such as putting a distinguishing mark on a marker.
 FIG. 9A shows imaging positions 90, 91, and 92, and indicates that the image processing apparatus 1 captures images from each imaging position in the direction of the dashed arrows. It also shows the measurement positions 900, 901, and 902 designated by the user in the image captured at the imaging position 90, the measurement position 910 designated by the user in the image captured at the imaging position 91, and the measurement positions 920 and 921 designated by the user in the image captured at the imaging position 92.
 Note that FIG. 9A is merely an example; when the measurement position 900 appears in the images of both the imaging positions 90 and 91, it may be selected from either image. In this example, 900 has already been selected in the image of the imaging position 90, so the position 900 is not selected in the image captured at the imaging position 91. Alternatively, 900 may be selected at the imaging position 91 instead of in the image of the imaging position 90, or it may be selected in both images.
 FIG. 9B shows an imaging position 93 and indicates that the image processing apparatus 1 captures an image in the direction of the dashed arrow. It also shows the measurement positions 930 and 931 designated by the user in the image captured at the imaging position 93.
 FIG. 9C shows imaging positions 94 and 95 and indicates that the image processing apparatus 1 captures images from each position in the direction of the dashed arrows. It also shows the measurement positions 940 and 941 designated by the user in the image captured at the imaging position 94, and the measurement positions 950 and 951 designated by the user in the image captured at the imaging position 95.
 The numbers of the imaging positions described above indicate the order of imaging; imaging is performed in the order of the imaging positions 90, 91, 92, 93, 94, and 95.
 In measurement with two markers as well, as in the single-marker measurement method, a marker and part of the measurement range are imaged at each imaging position, and the user designates measurement positions in each captured image.
 First, as shown in FIG. 9A, imaging is performed at the imaging positions 90, 91, and 92 so that the marker 20 and the vertex positions of the measurement range are captured. The measurement positions are then determined by designating the vertex positions in each image, and their three-dimensional position information is converted into the three-dimensional coordinate system of the marker 20 in the same manner as described above. That is, the measurement positions 900, 901, 902, 910, 920, and 921 are converted into three-dimensional position information expressed in the three-dimensional coordinate system of the marker 20.
 Here, both the marker 20 and the marker 21 appear in the image at the imaging position 92 in FIG. 9A, and the three-dimensional position information of each is expressed in the three-dimensional coordinate system referenced to the image processing apparatus 1 at the imaging position 92. The positional relationship between the three-dimensional coordinate systems of the marker 20 and the marker 21 can therefore be obtained, and three-dimensional position information expressed in the three-dimensional coordinate system of one marker can be converted into three-dimensional position information expressed in the three-dimensional coordinate system of the other marker. The method of coordinate conversion between markers will be described later.
 Next, the marker 20 in FIG. 9A is moved to the position of the marker 22 in FIG. 9B, and the marker 21, the marker 22, and the measurement positions not yet designated are imaged at the imaging position 93 in FIG. 9B. The measurement positions 930 and 931 are then designated in the image of the imaging position 93, and their three-dimensional position information is calculated and converted into the three-dimensional coordinate system of the marker 21. Furthermore, since conversion of the three-dimensional coordinate system from the marker 21 to the marker 20 is possible as described above, the measurement positions 930 and 931 designated in the image of the imaging position 93 are converted into three-dimensional position information expressed in the three-dimensional coordinate system of the marker 20.
 Here, both the marker 21 and the marker 22 appear in the image at the imaging position 93 in FIG. 9B, and the correspondence between the three-dimensional coordinate systems of the marker 21 and the marker 22 can be obtained in the same way as above. That is, three-dimensional position information expressed in the three-dimensional coordinate system of the marker 22 can be converted into three-dimensional position information expressed in the three-dimensional coordinate system of the marker 21.
 Next, the marker 21 in FIG. 9B is moved to the position of the marker 23 in FIG. 9C, and the marker 22, the marker 23, and the measurement positions not yet designated are imaged at the imaging position 94 in FIG. 9C. The measurement positions 940 and 941 are then designated in the image of the imaging position 94, and their three-dimensional position information is calculated and converted into the three-dimensional coordinate system of the marker 22. Furthermore, the measurement positions 950 and 951 are designated in the image captured at the imaging position 95, and their three-dimensional position information is calculated and converted into the three-dimensional coordinate system of the marker 23.
 Here, both marker 22 and marker 23 appear in the image captured at imaging position 94 in FIG. 9C, so the correspondence between the three-dimensional coordinate systems of marker 22 and marker 23 can be obtained in the same manner as above. That is, three-dimensional position information expressed in the three-dimensional coordinate system of marker 23 can be converted into three-dimensional position information expressed in the three-dimensional coordinate system of marker 22.
 As described above, the three-dimensional coordinate systems indicated by markers 20, 21, 22, and 23 are associated with one another on the basis of the images captured at imaging positions 92, 93, and 94, and conversion between any of these coordinate systems is possible.
 By the above method, the plurality of measurement positions specified on the images captured at the plurality of imaging positions can be converted into three-dimensional position information expressed in the three-dimensional coordinate system of a single marker that serves as the reference for integration. That is, when marker 20 is set as the reference marker in the above example, all of the measurement positions shown in FIGS. 9A, 9B, and 9C can be calculated as three-dimensional position information expressed in the three-dimensional coordinate system of the reference marker 20. The reference marker may be a marker other than marker 20. Setting the marker that is installed first as the reference marker makes it possible to adjust the installation positions of the other, non-reference markers in accordance with the position of the reference marker.
 The marker installation positions and imaging positions shown in FIG. 9 are not limited to those illustrated; they may be set in any way as long as a marker appears in every image and every measurement position can be selected from an image captured at one of the imaging positions. When coordinate conversion between markers is to be performed, the positions are set so that two markers appear in the image from a single imaging position, as at imaging positions 92, 93, and 94 in FIG. 9.
 Next, an overhead view of the measurement range is created in order to determine the region whose area is to be calculated. The above method yields the three-dimensional position information of all measurement positions, but how those positions are connected is unknown, so the region whose area should be calculated cannot yet be identified. Therefore, the three-dimensional position information of all measurement positions selected by the user is projected onto the plane in which the measurement range lies, or onto a plane parallel to it, to create an overhead view of the measurement positions as shown in FIG. 10A. In this case the measurement range is the floor, so if the reference marker 20 is placed on the floor, the XZ plane of the three-dimensional coordinate system of the reference marker 20 is parallel to the floor of the measurement range. Accordingly, the three-dimensional position information of all measurement positions is projected onto the XZ plane, two-dimensional position information expressed in the two-dimensional coordinate system of the X and Z axes is calculated, and this two-dimensional coordinate system is used as the image coordinate system of the overhead view, producing an overhead view as if the floor were seen from directly above.
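 As an illustration of this projection step, the following minimal NumPy sketch simply drops the Y (height) component of points that are already expressed in the reference marker's coordinate system; the function name and the array layout are assumptions made only for this example.

```python
import numpy as np

def overhead_view_xz(points_marker_frame):
    """Project 3-D measurement positions, already expressed in the reference
    marker's coordinate system, onto the marker's XZ plane.

    points_marker_frame: (N, 3) array of [X, Y, Z] coordinates.
    Returns an (N, 2) array of [X, Z] coordinates that can be drawn as an
    overhead view of a floor-parallel measurement range.
    """
    pts = np.asarray(points_marker_frame, dtype=float)
    return pts[:, [0, 2]]   # keep X and Z, drop the height component Y
```

 For the wall-measurement case described next, the same idea applies with the XY plane, or with a plane fitted to three or more user-specified points, used as the projection plane instead.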
 For example, when measuring the area of a wall standing perpendicular to the floor, the overhead view may be created by projecting the three-dimensional position information of all measurement positions onto a plane parallel to the wall. If the marker is placed on the floor so that the X axis of its three-dimensional coordinate system is parallel to the wall of the measurement range, the XY plane of the marker may be used as the projection plane. The resulting figure in this case is not a view looking down from directly above but a view taken perpendicular to the wall; for convenience of explanation it is still referred to as an overhead view. When the plane of the measurement range is not parallel to any plane defined by two axes of the marker, the position information of the plane to be measured is calculated, its positional relationship with the three-dimensional coordinate system of the marker is obtained, and the three-dimensional position information is projected onto the plane of the measurement range on the basis of that relationship. Since the information of a plane can be calculated if the three-dimensional position information of three or more points on it is known, the plane of the measurement range can be obtained, for example, by having the user specify three or more points on that plane in the image.
 In the state of FIG. 10A, the connections between the measurement positions are unknown, so the measurement range is still undetermined. The image processing apparatus 1 therefore connects the measurement positions shown in FIG. 10A in the order in which the user specified them, creates an overhead view as shown in FIG. 10B, and displays it on the display unit 13. However, the specified order does not necessarily match the actual connections between the measurement positions, so the user makes corrections. For example, while checking the display on the display unit 13, the user uses the input unit 14 to indicate incorrectly connected parts or parts that are not connected, producing an overhead view in which the measurement positions are connected correctly as shown in FIG. 10C. Connecting the measurement points in the order specified by the user and correcting only the erroneous parts in this way reduces the user's work, which is preferable. The method of connecting the measurement positions is not limited to the above; the positional relationship between a measurement position and surrounding objects may be estimated on the basis of the image information or the three-dimensional position information. For example, if a wall exists between two measurement positions, the connection order can be estimated on the premise that the measurement positions are not connected through the wall, as illustrated in the sketch below.
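 As one possible illustration of this kind of estimation, the sketch below rules out a direct connection between two measurement positions when the segment between them crosses a wall; representing walls as 2-D segments in the overhead-view coordinates is an assumption made only for this example.

```python
def segments_intersect(p1, p2, q1, q2):
    """True if the 2-D segments p1-p2 and q1-q2 properly intersect."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def may_connect(a, b, walls):
    """Two measurement positions are not connected directly if the segment
    between them crosses any known wall segment (overhead-view coordinates)."""
    return not any(segments_intersect(a, b, w0, w1) for w0, w1 in walls)
```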
 Then, to calculate the area, the image processing apparatus 1 divides the measurement range as shown in FIG. 10D on the basis of the three-dimensional position information of each measurement position and the connection information of the measurement positions in the overhead view of FIG. 10C. By dividing the range into triangular regions as in FIG. 10D, the area of each divided region can be calculated from the three-dimensional position information of its three vertices, and the total area can be obtained by adding them. Any division method may be used, but the entire range must be included in one of the divided regions and the divided regions must not overlap one another. If the overhead view of FIG. 10D is displayed on the display unit 13, then even when the region division by the image processing apparatus 1 is wrong and the same range is included in a plurality of divided regions, the user can indicate and correct that part with the input unit 14.
 As described above, if the image processing apparatus 1 automatically connects the measurement positions and divides the region, the user is spared the work of manually connecting the measurement points and dividing the measurement range into regions.
 Finally, the image processing apparatus 1 calculates the area of each divided triangular region on the basis of the three-dimensional position information of the measurement positions, and calculates the area of the entire measurement range by adding all of the results. The area calculation method is not limited to the region division described above; any method may be used as long as the area of the entire measurement range can be calculated from the three-dimensional position information of the measurement positions. When measuring the length of a line segment connecting measurement positions, that is, the length of each side or the distance between sides, region division is unnecessary, and the value can be calculated from the three-dimensional position information of the relevant measurement positions.
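 A minimal sketch of this triangle-based area computation is shown below, assuming the measurement positions and a non-overlapping triangulation (for example, the result of step S213) are given; the function and parameter names are illustrative.

```python
import numpy as np

def triangle_area_3d(a, b, c):
    """Area of the triangle spanned by three 3-D vertices."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def total_area(vertices, triangles):
    """Sum the areas of the divided triangular regions.

    vertices : (N, 3) array of measurement positions expressed in the
               reference marker's coordinate system.
    triangles: iterable of (i, j, k) vertex-index triples describing a
               non-overlapping division that covers the whole range.
    """
    v = np.asarray(vertices, dtype=float)
    return sum(triangle_area_3d(v[i], v[j], v[k]) for i, j, k in triangles)
```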
 As described above, by using two markers, the entire area can be calculated in a single measurement session even for a measurement range that would otherwise require multiple separate measurements. In addition, since all of the three-dimensional position information is expressed in a single three-dimensional coordinate system whose reference is the reference marker, the user is spared the work, required when performing multiple separate measurements, of accurately grasping the positional relationships of the measurement positions and integrating the results.
 Furthermore, as described in the first embodiment, if the measurement position specified by the user is automatically corrected by the image processing unit 11 on the basis of the image information and the distance information, measurement with a precisely specified position is possible even when the user's specification accuracy is low.
 Furthermore, if imaging and marker movement are repeated as described above, measurement is possible even over a range wider or more complexly shaped than in the example of FIG. 9. By imaging while alternately exchanging the marker to be moved and the marker to be kept fixed, the coordinate system conversion from marker to marker can be repeated. If the marker installed first is set as the reference marker and imaged, and imaging then continues while the reference marker and the other markers (non-reference markers) are moved alternately, the three-dimensional position information of all measurement positions selected from the images can be converted into the three-dimensional coordinate system of the reference marker before its movement. An overhead view can then be created and the measurement performed on the basis of the three-dimensional position information with the coordinate systems thus integrated.
 When the initially set reference marker is moved, the reference marker after the movement is handled in the same way as the other non-reference markers. That is, because the three-dimensional coordinate system of the reference marker after the movement is located at a position different from that of the reference marker before the movement, it can be treated in the same way as the three-dimensional coordinate system of a non-reference marker. It is therefore possible to convert from the three-dimensional coordinate system of the moved reference marker to that of a non-reference marker, and then further to the three-dimensional coordinate system of the reference marker before the movement.
 Although a method using two markers has been described above, the number of markers is not limited to two. In the above example, marker 22 was the same physical marker as marker 20, but marker 22 may instead be a marker different from markers 20 and 21; that is, three markers may be used. In that case as well, coordinate conversion is possible from the positional relationship between the markers appearing in an image, and the information can be converted into the three-dimensional coordinate system of the single marker used as the reference. Even if more markers are used, performing coordinate conversion in the same way allows all measurement positions to be converted into three-dimensional position information in a single three-dimensional coordinate system and measured. However, since a larger number of markers increases the cost and is more troublesome to carry, it is desirable to use the minimum number of markers.
 In the above example, measurement was performed by moving the two markers and imaging in order from imaging position 90 to 95 while moving in a fixed direction, but the imaging order is not limited to this. For example, the imaging order of imaging positions 90 and 91 may be reversed, or imaging may proceed from imaging position 95 to 90 while the markers are moved in the reverse order; any imaging order may be used.
 However, in the above example the coordinate systems were converted on the basis of the positional relationship between the two markers appearing in the images at imaging positions 92, 93, and 94, and it is difficult to return a marker that has been moved to exactly the same position and orientation as before the movement. Therefore, if, for example, marker 20 is moved before the image at imaging position 92 is captured, the correspondence between the coordinate systems of marker 20 and marker 21 can no longer be established. For this reason, the marker movement timing and movement positions, the imaging order, the imaging positions, and so on must be determined so that the coordinate systems of all markers can be associated and converted.
 When the movement directions of the markers and of the imaging position are fixed as in the above example, the image processing apparatus 1 can perform the coordinate system conversion by associating the markers appearing in the images in the order in which they were captured. When the imaging order or the marker movement timing is not fixed as in the above example, the user may input to the image processing apparatus 1 information such as which images contain the same marker that has not been moved, so that all markers can be associated and the coordinate systems converted.
 In the above description, all three-dimensional position information was converted into the three-dimensional coordinate system of the reference marker, but it may instead be converted into a three-dimensional coordinate system whose reference is the image processing apparatus 1. That is, the reference three-dimensional coordinate system may be the three-dimensional coordinate system of the image processing apparatus 1. In this case, all three-dimensional position information is converted via the three-dimensional coordinate systems of the markers in the same way as above, and then further converted into the coordinate system of the image processing apparatus 1 at the time the reference marker was first imaged. The conversion from the coordinate system of the reference marker appearing in the image to the coordinate system of the image processing apparatus 1 can be performed on the basis of the camera parameters. Once all three-dimensional position information has been converted into the coordinate system of the image processing apparatus 1, measurement can be performed in the same manner as above. The axis directions of the three-dimensional coordinate system of the image processing apparatus 1 change with the orientation of the imaging unit 100 used as the reference. For example, when the projection plane used for creating the overhead view is a horizontal plane such as a floor, detecting the orientation of the imaging unit 100 with a three-axis acceleration sensor or the like provides the orientation information of the image processing apparatus 1, from which its positional relationship with the projection plane can be determined. This eliminates the work of calculating the plane of the measurement range from the image and estimating the positional relationship between that plane and the reference three-dimensional coordinate system. Three-axis acceleration sensors include capacitance-type, piezoresistive, and heat-detection sensors based on MEMS (Micro Electro Mechanical Systems) technology. The capacitance type detects the change in capacitance between the movable part and the fixed part of the sensor element; the piezoresistive type detects, with a piezoresistive element arranged on the spring connecting the movable part and the fixed part of the sensor element, the strain generated in the spring by acceleration; and the heat-detection type generates a hot air current inside the housing with a heater and detects the change in convection caused by acceleration with a thermal resistance or the like.
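 As one possible sketch of how such a sensor reading could be used here, the code below builds a horizontal projection basis from a gravity-direction estimate expressed in the device coordinate system; the sensor convention, names, and thresholds are assumptions made only for illustration.

```python
import numpy as np

def horizontal_projection_basis(gravity_cam):
    """Two orthonormal axes spanning a horizontal plane, expressed in the
    device (camera) coordinate system, derived from an accelerometer-based
    estimate of the gravity direction."""
    g = np.asarray(gravity_cam, dtype=float)
    up = -g / np.linalg.norm(g)                 # vertical direction
    seed = np.array([1.0, 0.0, 0.0])
    if abs(up @ seed) > 0.9:                    # avoid a near-parallel seed axis
        seed = np.array([0.0, 0.0, 1.0])
    u = np.cross(up, seed); u /= np.linalg.norm(u)
    v = np.cross(up, u)
    return u, v

def project_horizontal(points_cam, gravity_cam):
    """Overhead-view coordinates of device-frame points on a horizontal plane."""
    u, v = horizontal_projection_basis(gravity_cam)
    p = np.asarray(points_cam, dtype=float)
    return np.stack([p @ u, p @ v], axis=1)
```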
(Processing procedure of the measurement method using two markers)
 The processing procedure of the measurement method using the two markers described above is explained below with reference to the flowchart of FIG. 11 and the functional block diagram of the image processing unit 11a in FIG. 1C. The image processing unit 11a includes a measurement position specification input reception unit 11a-1, a three-dimensional position information calculation unit 11a-2, an inter-marker association processing unit 11a-3, a three-dimensional position information conversion unit 11a-4, a marker movement instruction reception unit 11a-5, an overhead view creation and display control unit 11a-6, an overhead view correction control unit 11a-7, a measurement range region division processing unit 11a-8, an area calculation unit 11a-9, and a display control unit 11a-10.
 First, the user installs the two markers 20 and 21, for example, at the positions shown in FIG. 9A (step S201). At this time, marker 20 is installed near the center of the lower region of FIG. 9A, and marker 21 is installed at a position where it can be imaged simultaneously with marker 20, near the center of the upper region of FIG. 9A.
 Next, the user moves the image processing apparatus 1 to imaging position 90 (step S202), and the image processing apparatus 1 images marker 20 and part of the measurement range (step S203). The image information captured by the imaging unit 100, and the distance information calculated by the distance calculation unit 102 on the basis of the image information from the imaging unit 100 and the imaging unit 101, are output to the image processing unit 11a. The image information from the imaging unit 100 is also displayed on the display unit 13. While checking the image displayed on the display unit 13, the user specifies the measurement positions 900 and 901 with the input unit 14 (step S204); the measurement position specification input reception unit 11a-1 receives the input, and the three-dimensional position information calculation unit 11a-2 of the image processing unit 11a calculates the three-dimensional position information of each measurement position by the method described above (step S205). The inter-marker association processing unit 11a-3 of the image processing unit 11a then detects the markers in the image and associates them (step S206). The details of step S206 will be described later. Next, the three-dimensional position information conversion unit 11a-4 of the image processing unit 11a converts the three-dimensional position information of the measurement positions calculated in step S205 into the three-dimensional coordinate system of the reference marker associated in step S206 (step S207).
 Next, it is checked whether all measurement positions in the measurement range have been specified (step S208); if the specification is not finished, the process proceeds to step S209, and if it is finished, the process proceeds to step S211. In step S208, the user is asked to choose, for example by displaying on the display unit 13 a question asking whether to continue specifying measurement positions.
 At this point, not all measurement positions have been specified yet, so the process proceeds to step S209. In step S209, the marker movement instruction reception unit 11a-5 checks whether a marker is to be moved (step S209); if no marker is to be moved, the process returns to step S202, and if a marker is to be moved, the user moves it (step S210) and the process returns to step S202. The check in step S209 is performed by displaying a question on the display unit 13 and having the user choose.
 Next, since imaging at imaging positions 91 and 92 has not yet been performed at this point, the process returns to step S202 without moving marker 20. The user moves the image processing apparatus 1 to imaging position 91 (step S202) and executes steps S203 to S208, thereby acquiring the three-dimensional position information of measurement positions 910 and 911 converted into the three-dimensional coordinate system of marker 20. At this point the specification of all measurement positions is still not finished, so the process proceeds to step S209, and since imaging at imaging position 92 is not finished, the process returns to step S202. In the same way as above, the user moves the image processing apparatus 1 to imaging position 92 (step S202) and executes steps S203 to S208, thereby acquiring the three-dimensional position information of measurement positions 920, 921, and 922 converted into the three-dimensional coordinate system of marker 20.
 Next, since all measurement positions have still not been specified at this point, the process proceeds to step S209. Because the imaging of the lower region of FIG. 9A and the specification of its measurement positions are finished, the user moves marker 20 to the position of marker 22 in FIG. 9B (step S210). At this time, by specifying the marker 20 to be moved on the image of imaging position 92 displayed on the display unit 13, information indicating which marker is the moved marker is input to the image processing unit 11a.
 In the actual processing, the check in step S209 does not necessarily have to be performed; after it is determined in step S208 that the specification of measurement positions is not finished, the user may move a marker before the imaging in step S203 whenever a marker movement is needed. In that case, information indicating which marker was moved must be input to the image processing unit 11a, for example by selecting it on the image.
 Next, the user moves the image processing apparatus 1 to imaging position 93 (step S202) and executes steps S203 to S208, thereby acquiring the three-dimensional position information of measurement positions 930, 931, and 932 converted into the three-dimensional coordinate system of marker 20. Since all measurement positions have still not been specified at this point, the process proceeds to step S209 again. Because marker 20 has already been moved, the process returns to step S202, and the user moves the image processing apparatus 1 to imaging position 94 (step S202). Steps S203 to S208 are then executed, and the three-dimensional position information of measurement positions 940 and 941 converted into the three-dimensional coordinate system of marker 20 is acquired. Since all measurement positions have now been specified by the above processing, the process proceeds to step S211.
 In step S211, the overhead view creation and display control unit 11a-6 projects the three-dimensional position information of all measurement positions onto the XZ plane of marker 20 and calculates two-dimensional position information expressed in the two-dimensional coordinate system of the X and Z axes to create an overhead view. The overhead view creation and display control unit 11a-6 further connects the measurement positions and displays an overhead view such as that shown in FIG. 10B on the display unit 13 (step S211).
 While checking the overhead view image displayed on the display unit 13, the user uses the input unit 14 to indicate incorrectly connected parts or unconnected parts, and the overhead view correction control unit 11a-7 corrects the overhead view on the basis of the indicated information (step S212). The corrected overhead view is as shown in FIG. 10C.
 On the basis of the connection information of the measurement positions in the corrected overhead view, the measurement range region division processing unit 11a-8 divides the measurement range into triangular regions (step S213), creates an overhead view such as that shown in FIG. 10D, and displays it on the display unit 13. If the region division is incorrect, the user indicates the incorrect part with the input unit 14, and the measurement range region division processing unit 11a-8 makes the correction on the basis of the indicated information.
 Then, on the basis of the region division in step S213, the area calculation unit 11a-9 calculates the area of each divided region and adds all of the results to calculate the area of the entire measurement range (step S214). The display control unit 11a-10 then causes the display unit 13 to display the calculated result (step S215).
 Through the above processing procedure, the image processing apparatus 1 of the present embodiment can, using two markers, perform measurement even over a measurement range containing an occluded area, such as a corridor with a corner.
 In the above processing procedure, the overhead view is created in step S211 after all measurement positions have been specified, but the processing is not limited to this. For example, each time a measurement position is specified, an overhead view may be created from only the measurement positions specified so far and displayed on the display unit 13. In this way, the user can perform measurement while checking which ranges of measurement positions have already been specified and which have not. Since the user can determine the measurement positions and the measurement range while checking the overhead view, omissions in specifying measurement positions can be eliminated.
(Step S206: Marker association)
 In step S206, the markers are detected from the captured image and the two markers are associated with each other. The image processing unit 11a distinguishes the initially set reference marker from the other, non-reference markers, and performs the association by calculating the transformation matrix of the three-dimensional coordinate system from each non-reference marker to the reference marker.
 Marker association is described below using the example of FIG. 9 above.
 First, the image processing unit 11a sets marker 20, which appears in the image captured first at imaging position 90, as the reference marker. The reference marker may instead be specified and set by the user on the image; for example, when a plurality of markers appear in one image, the user may select the marker to be used as the reference and specify it with the input unit 14. Alternatively, priorities for the initial reference marker may be assigned in advance, and the reference marker may be set by distinguishing the markers by color or the like. Since only marker 20 appears in the image at imaging position 91, it is associated as being identical to the reference marker.
 Next, since markers 20 and 21 appear in the image at imaging position 92, the image processing unit 11a detects the two markers and estimates which is the reference marker from differences such as their color schemes. Then, by the method described later, it calculates the transformation matrix for converting from the three-dimensional coordinate system of marker 21 to the three-dimensional coordinate system of marker 20, the reference marker.
 Next, the image at imaging position 93 contains marker 21 and marker 22, which is marker 20 after being moved. Marker 22 is the same physical marker as marker 20, but because it is known from steps S209 and S210 which marker the user moved, it is known that neither of the two markers detected in the image at imaging position 93 is the reference marker (both are non-reference markers). It is also known that marker 21 detected in the image at imaging position 93 has not moved since it was imaged at imaging position 92, and that marker 22 is marker 20 moved after being imaged at imaging position 92.
 The image processing unit 11a calculates, by the method described later, the transformation matrix for converting the three-dimensional coordinate system from marker 22 to marker 21. Since the transformation matrix of the three-dimensional coordinate system from marker 21 to marker 20 has already been calculated as described above, the transformation matrix from marker 22 to the reference marker, marker 20, can be calculated from these two transformation matrices.
 Finally, marker 22 appears in the image at imaging position 94; marker 22 is marker 20 after being moved, and it is known that it has not moved since it was imaged at imaging position 93. Therefore, by applying the above transformation matrix of the three-dimensional coordinate system from marker 22 to marker 20, conversion to the reference marker is possible.
 By the above procedure, the transformation matrix for converting from the marker appearing in each image to the three-dimensional coordinate system of the reference marker is calculated; that is, the markers are associated. In step S207, the coordinate system of the three-dimensional position information is converted on the basis of the calculated transformation matrices.
 As described above, if the image processing unit 11a keeps track of the reference marker and the non-reference markers and distinguishes the marker that has been moved from the marker that has not, conversion to the reference marker remains possible even when the number of imaging positions and measurement positions is increased to measure a wider range. Conversion is possible in the same way when two or more markers are used. However, because the transformation matrix to the reference marker must be calculated via a marker that has not moved, all markers cannot be moved at the same time. Imaging and marker movement should therefore be repeated while the marker to be moved and the marker to be left in place are exchanged alternately.
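 The bookkeeping described here could be sketched as follows, assuming 4x4 homogeneous transforms from the camera frame into each marker's coordinate system are available per image (as in equations (5) to (7) below), that a marker physically reused at a new position is registered under a new id, and that at least one marker visible in each image has not moved since its transform to the reference was recorded; all names are illustrative.

```python
import numpy as np

def update_to_reference(to_ref, image_markers):
    """Maintain, per marker id, the 4x4 transform from that marker's
    coordinate system to the reference marker's coordinate system.

    to_ref        : dict marker_id -> 4x4 ndarray (the reference marker
                    id maps to the identity matrix)
    image_markers : dict marker_id -> 4x4 transform from the current
                    camera frame into that marker's coordinate system,
                    estimated for the markers visible in one image
    """
    known = [m for m in image_markers if m in to_ref]
    if not known:
        raise ValueError("no unmoved marker linked to the reference is visible")
    anchor = known[0]                      # a marker that has not moved
    V_anchor = image_markers[anchor]
    for m, V_m in image_markers.items():
        if m not in to_ref:
            # marker m -> anchor marker -> reference marker
            V_anchor_m = V_anchor @ np.linalg.inv(V_m)
            to_ref[m] = to_ref[anchor] @ V_anchor_m
    return to_ref
```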
 In the above description, the reference marker was selected from the markers appearing in the first captured image, but this is not a limitation; it suffices that the transformation matrix from each marker to the reference marker can be calculated. For example, marker 21 imaged at imaging position 92 may be used as the reference marker, and the transformation matrices of the three-dimensional coordinate systems from marker 20 and marker 22 to marker 21 may be calculated. In this case, the area can be measured if the overhead view and the like described above are also created with marker 21 as the reference.
(Coordinate conversion between markers)
 A method of converting three-dimensional position information from the three-dimensional coordinate system of one marker to the three-dimensional coordinate system of another marker, when a plurality of markers are imaged for measurement, is described below.
 For example, suppose that two markers, marker K and marker L, appear in the image captured at imaging position c. Let M_c be the three-dimensional coordinate system whose reference is the image processing apparatus 1 at imaging position c, O_K the three-dimensional coordinate system whose reference is marker K, and O_L the three-dimensional coordinate system whose reference is marker L. Further, for a point Q on the imaged subject, let Q_Mc, Q_OK, and Q_OL be the vectors to the point Q as seen from each of these three coordinate systems. By the method described above, the transformation matrices V_K and V_L into the three-dimensional coordinate systems represented by the two markers can be calculated using equation (3). The relationship between Q_Mc, Q_OK, and Q_OL is therefore expressed by equation (5):

  Q_OK = V_K Q_Mc,  Q_OL = V_L Q_Mc   (5)
 When marker K is used as the reference marker, the conversion of the three-dimensional coordinate system from marker L to marker K is expressed by equation (6):

  Q_OK = V_K V_L^(-1) Q_OL = V_KL Q_OL   (6)

 V_KL is the transformation matrix, and the superscript -1 denotes an inverse matrix.
 Furthermore, when marker L and a new marker J are imaged from another imaging position after the imaging at imaging position c, without moving marker L, the transformation matrix V_LJ of the three-dimensional coordinate system from marker J to marker L can be calculated by equation (6). If Q_OJ denotes the vector to the point Q as seen from the three-dimensional coordinate system of marker J, the conversion from the three-dimensional coordinate system of marker J to that of marker K, the reference marker, is expressed by equation (7):

  Q_OK = V_KL V_LJ Q_OJ = V_KJ Q_OJ   (7)

 V_KJ is the transformation matrix of the three-dimensional coordinate system from marker J to the reference marker.
 As described above, the transformation matrix between two markers appearing in an image can be calculated, and the transformation matrix that converts the three-dimensional coordinate system of a marker imaged at a different imaging position into that of the reference marker can be calculated via a marker that has not been moved.
 As described above, according to the image processing apparatus 1 of the first and second embodiments, the coordinate systems of a plurality of markers can be converted into one another on the basis of image information, and all three-dimensional position information can be converted into the coordinate system of the reference marker, integrating the coordinate systems. Consequently, the positional relationships of subjects appearing in a plurality of images captured at different positions can be integrated into and associated in a single coordinate system. This relieves the user of the work of accurately grasping positional relationships such as the measurement range and of integrating the results, thereby reducing the user's burden.
 In addition, since the measurement position specified by the user is corrected by image recognition technology to a characteristic position such as a corner or edge of the subject, measurement can be performed with high accuracy, without deviation of the measurement position, even when the user's specification accuracy is low.
 In addition, since an overhead view is created on the basis of the three-dimensional position information of the measurement positions specified by the user, the user can perform measurement while checking the positions that have already been specified, so omissions in specifying measurement positions can be eliminated.
(Third embodiment)
 In the examples described in the first and second embodiments, the measurement positions specified in images captured at different imaging positions were all different. When measurement is actually performed by the measurement method described in the first embodiment, for example, the user may, while repeating imaging and measurement position specification, specify again a measurement position that has already been specified. A case is also conceivable in which the same measurement position is imaged from a plurality of imaging positions, its three-dimensional position information is calculated for each, and the information calculated with the highest accuracy is used for measurement.
 Accordingly, when the user specifies the same measurement position more than once, the image processing apparatus 5 of the present embodiment uses for measurement the instance of the measurement position whose distance from its imaging position is smaller. This is because, as described above for the stereo method, when distance information is calculated by the stereo method, the distance information of a closer subject is calculated with higher resolution. In other words, the smaller the distance, the higher the calculation accuracy of the three-dimensional position information and the higher the accuracy of the measurement result. The image processing apparatus 5 of the present embodiment has the same configuration as the image processing apparatus 1 of the first embodiment, except that the image processing unit 11 of FIG. 1 is replaced with an image processing unit 51 and the processing described below is added.
 The image processing apparatus 5 of the present embodiment is described below using the example of FIG. 9. In the case of FIG. 9A, measurement position 921, for example, appears in the images from both imaging position 90 and imaging position 92, and can therefore be specified in both images. If the user has selected measurement position 921 in both images, converting the three-dimensional position information of the measurement position 921 specified in each image into the three-dimensional coordinate system of marker 20 reveals that the specified measurement positions are the same position. When performing the conversion of the three-dimensional coordinate system in processing step S207 described above, the image processing unit 51 determines whether the position is an already specified measurement position. Strictly speaking, however, the coordinate-converted three-dimensional position information may not match exactly, owing to factors such as the user's specification accuracy. A threshold is therefore set, and when the distance between the two measurement positions is equal to or less than the threshold, they are judged to be the same measurement position. When the same position has already been specified, the distance information at the respective imaging positions is compared, and the instance of the measurement position whose distance from its imaging position is smaller is given priority.
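 A minimal sketch of this duplicate handling is shown below; the 0.05 m threshold, the dictionary layout, and the function name are assumptions chosen only for illustration.

```python
import numpy as np

def merge_duplicate_positions(existing, candidate, threshold=0.05):
    """Treat a newly specified measurement position as a duplicate of an
    existing one if they are closer than `threshold` in the reference
    marker's coordinate system, and keep the instance whose distance from
    its imaging position is smaller (the more accurate stereo estimate).

    existing : list of dicts {"point": (3,) array, "camera_distance": float}
    candidate: dict with the same keys
    """
    c_point = np.asarray(candidate["point"], dtype=float)
    for i, e in enumerate(existing):
        if np.linalg.norm(np.asarray(e["point"], dtype=float) - c_point) <= threshold:
            if candidate["camera_distance"] < e["camera_distance"]:
                existing[i] = candidate      # the closer observation wins
            return existing                  # duplicate handled, do not append
    existing.append(candidate)               # genuinely new measurement position
    return existing
```

 When a distance acquisition method other than stereo is used, the comparison key could instead be a reliability value, as described next.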
 In the example of FIG. 9, the distance from imaging position 90 to measurement position 921 is compared with the distance from imaging position 92 to measurement position 921; since the distance from imaging position 90 is smaller, the three-dimensional position information of measurement position 921 specified in the image from imaging position 90 is used for measurement.
 Since the distance acquisition method is not limited to the stereo method, it is preferable that the image processing unit 51 select the measurement position with the highest distance calculation accuracy in accordance with the distance acquisition method used. For example, a reliability may be calculated for the measurement position in each image and the value with the highest reliability used. Possible reliability measures include the stereo matching evaluation value and the deviation from the surrounding distance values.
 As described above, according to the image processing apparatus 5 of the present embodiment, even when the user specifies the same three-dimensional position on different images, the instance of the measurement position with the higher calculation accuracy of the three-dimensional position information is selected automatically, so measurement can be performed with high accuracy on the basis of more accurate three-dimensional position information.
 In addition, the user does not need to remember all of the measurement positions that have already been specified and can specify measurement positions in each image without being conscious of them, so the user's burden can be reduced.
(Fourth embodiment)
 In the measurement method using two markers of the second embodiment, it was described that by alternately moving the reference marker and the other marker while imaging, measurement is possible even for a measurement range that does not fit within a single imaging range.
 In contrast, the image processing apparatus 7 of the present embodiment uses at least one of two or more markers as an absolute reference marker whose position is fixed and which is not moved, and performs imaging while moving the other markers. The three-dimensional position information of all specified measurement positions is then converted into a three-dimensional coordinate system whose reference is the absolute reference marker. This reduces the amount of processing when a redo becomes necessary, for example when an imaging position or a measurement position has been specified incorrectly partway through the measurement, and thus reduces the user's burden. The image processing apparatus 7 of the present embodiment can be realized with the same configuration as the image processing apparatus 1 of the first and second embodiments or the image processing apparatus 5 of the third embodiment; in the following description, the image processing apparatus 7 is assumed to have the same configuration as the image processing apparatus 1.
 As an example, a method of measuring an area with the image processing apparatus 7 of the present embodiment using an absolute reference marker and two other movable markers is described below.
 First, the user places the markers in the measurement range and performs imaging with the acquisition unit 10 of the image processing apparatus 7, and the image processing unit 11 of the image processing apparatus 7 detects the absolute reference marker appearing in the first captured image. The user repeats imaging while alternately moving the markers other than the absolute reference marker (the non-reference markers), in the same manner as in the first embodiment, and specifies all vertex positions of the measurement range. The image processing unit 11 calculates the three-dimensional position information of all specified measurement positions and further converts all of the three-dimensional position information into the three-dimensional coordinate system whose reference is the absolute reference marker. On the basis of the three-dimensional position information with the converted coordinate system, the area is calculated in the same way as the method described in the first embodiment: the image processing unit 11 creates an overhead view from the three-dimensional position information, the user specifies and corrects the range to be measured with the input unit 14 of the image processing apparatus 7 while checking the information displayed on the display unit 13 of the image processing apparatus 7, and the image processing unit calculates the area.
 When imaging is performed while the markers are moved in the above processing, the image processing unit 11 detects whether the absolute reference marker appears in the captured image, and if it is detected, the timing of the detection is stored in the storage unit 12 of the image processing apparatus 7.
 The time at which the absolute reference marker is first detected is referred to as the first detection timing, and the time at which it is next detected is referred to as the second detection timing. Thereafter, each detection timing is likewise stored in the storage unit.
 For example, in a measurement consisting of six captures in total, if the absolute reference marker is detected in each of the images acquired in the first, third, and fifth captures, the detection in the first captured image is the first detection timing, the detection in the third captured image is the second detection timing, and the detection in the fifth captured image is the third detection timing.
 In this case, the measurement positions designated in the first through third captured images, which lie between the first and second detection timings, are coordinate-converted via the markers appearing in the first through third captured images. The three-dimensional position information of these measurement positions is converted into the three-dimensional coordinate system referenced to the absolute reference marker at the first detection timing.
 Similarly, the measurement positions designated in the third through fifth captured images, which lie between the second and third detection timings, are coordinate-converted via the markers appearing in the third through fifth captured images. The three-dimensional position information of these measurement positions is converted into the three-dimensional coordinate system referenced to the absolute reference marker at the second detection timing.
 The measurement positions designated in the fifth and sixth captured images are coordinate-converted via the markers appearing in those images and converted into the three-dimensional coordinate system referenced to the absolute reference marker at the third detection timing.
 If there is a fourth or later detection timing, then in the same manner the three-dimensional position information of the measurement positions designated between two consecutive detection timings is converted into the coordinate system referenced to the absolute reference marker appearing in the earlier of the two detected images.
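 As a concrete illustration of the grouping described in the preceding example, the captures could be split into segments delimited by detections of the absolute reference marker roughly as sketched below; the list-based representation is an assumption made only for illustration.

  def split_by_detection(capture_indices, detection_indices):
      # capture_indices   : ordered capture numbers, e.g. [1, 2, 3, 4, 5, 6]
      # detection_indices : captures in which the absolute reference marker was detected, e.g. [1, 3, 5]
      # Returns (reference_capture, captures_in_segment) pairs; the measurement positions of each
      # segment are later converted via the absolute reference marker pose seen in reference_capture.
      segments = []
      for i, start in enumerate(detection_indices):
          end = detection_indices[i + 1] if i + 1 < len(detection_indices) else capture_indices[-1]
          segments.append((start, [c for c in capture_indices if start <= c <= end]))
      return segments

  # For six captures with detections at captures 1, 3 and 5 this yields
  # [(1, [1, 2, 3]), (3, [3, 4, 5]), (5, [5, 6])], matching the example above.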
 As described above, the image processing device 7 of the present embodiment selects, according to the detection timing of the absolute reference marker, the images used for the coordinate system conversion, that is, the markers through which the coordinate conversion is performed, and converts the information into the three-dimensional coordinate system referenced to the absolute reference marker.
 Because the position of the absolute reference marker is always fixed even when the captured images differ, all of the converted three-dimensional position information is expressed with respect to the same coordinate system. The image processing unit can therefore associate all of the three-dimensional position information and calculate the area.
 Therefore, when, for example, a measurement position designated in the first through third captures is to be corrected, the fourth through sixth captures need not be redone; only the necessary part is imaged again and the measurement position is re-designated.
 As described above, by performing the coordinate conversion in segments delimited by the detection timings of the absolute reference marker, even when an imaging position or a measurement position was specified incorrectly, only the segment containing the error needs to be corrected.
 For example, even if a marker moved at some point was placed in the wrong position, the marker placement and imaging can be redone only for the detection-timing segment that contains the images of the incorrectly moved marker.
 Also, if it turns out after all imaging is finished that part of the measurement range was missed, the missed part can be imaged additionally while keeping the absolute reference marker and a non-reference marker in view. As long as the absolute reference marker has not been moved, the three-dimensional position information of the measurement positions designated in the additional captured images can also be converted into the coordinate system referenced to the same absolute reference marker. In this way, even when part of the measurement has been missed, additional imaging and designation of measurement positions can be performed easily.
 A specific processing procedure is described below using the example of FIG. 12.
 FIG. 12 shows an overhead view of a long passage with a corner, as if two passages were joined. FIG. 12 shows an absolute reference marker 60 and movable non-reference markers 61 and 62. Imaging positions 70, 71, 72, 73, 74, 75, 76, 77, 78, and 79 are indicated by dashed circles, and the dashed arrow extending from each imaging position indicates the imaging direction. Dash-dotted arrows L11, L12, L13, and L14 indicate the directions in which the markers 61 and 62 are moved, and the positions of the markers 61 and 62 after each move are shown in dashed outline at the tips of the dash-dotted arrows. The markers 60, 61, and 62 all have the same shape as the marker 2 shown in FIGS. 2 and 6 of the first embodiment; however, the markers are given different color schemes so that it is easy to detect that they are different markers in the captured images. Crosses indicate measurement positions, one of which is measurement position 80.
 In the example of FIG. 12, the user first places the absolute reference marker 60 near the center of the measurement range and places the markers 61 and 62 at the positions shown in the figure. The absolute reference marker 60 is not moved until it has been confirmed, after the measurement is completed, that no correction such as re-imaging is required.
 Next, the absolute reference marker 60 and the marker 61 are imaged at imaging position 70, the markers 61 and 62 at imaging position 71, and the marker 62 at imaging position 72, in that order.
 Subsequently, the marker 62 is moved in the direction of the dash-dotted arrow L11, and the markers 61 and 62 are imaged at imaging position 73 and the marker 62 at imaging position 74, in that order.
 In the images captured at imaging positions 70 to 74, the measurement positions in the right-hand region of the overhead view of FIG. 12 are designated.
 Next, the markers 61 and 62 are moved from the right-hand region to the left-hand region of the overhead view in the directions of the dash-dotted arrows L12 and L13. The absolute reference marker 60 and the marker 61 are then imaged at imaging position 75, the markers 61 and 62 at imaging position 76, and the marker 62 at imaging position 77, in that order.
 Subsequently, the marker 62 is moved from the lower left to the upper left of the overhead view in the direction of the dash-dotted arrow L14, and the markers 61 and 62 are imaged at imaging position 78 and the marker 62 at imaging position 79, in that order.
 In the images captured at imaging positions 75 to 79, the measurement positions in the left-hand region of the overhead view of FIG. 12 are designated.
 Here, the absolute reference marker 60 appears in the images captured at imaging positions 70 and 75, so the image processing unit 11 detects the absolute reference marker 60 and stores these as the first and second detection timings in the storage unit 12.
 The markers are moved and imaged by the above procedure, and the user designates the measurement positions in the captured images. Marker detection and designation of measurement positions are performed in the same manner as described in the first embodiment.
 The image processing unit 11 converts the measurement positions designated in each image into the three-dimensional coordinate system referenced to the absolute reference marker 60. Here, the measurement positions designated in the images captured between the first detection timing and the second detection timing, that is, in the images from imaging positions 70 to 75, are coordinate-converted via the markers appearing in the images from imaging positions 70 to 75. The measurement positions designated in the images captured at and after the second detection timing, that is, in the images from imaging positions 75 to 79, are coordinate-converted via the markers appearing in the images from imaging positions 75 to 79.
 In other words, the images used for the coordinate conversion, that is, the markers through which the coordinate conversion is performed, differ between the right-hand region and the left-hand region of the overhead view of FIG. 12.
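 Converting "via the markers appearing in the images" amounts to composing the rigid transformations estimated between consecutive marker frames until the frame of the absolute reference marker is reached. A minimal sketch using 4x4 homogeneous matrices follows; the ordering of the hypothetical transforms in the comment only illustrates one possible chain for imaging position 72, and the names are assumptions.

  import numpy as np

  def compose(transforms):
      # transforms : iterable of 4x4 homogeneous matrices, ordered from the target frame
      #              outward, e.g. [T_60_from_61, T_61_from_62, T_62_from_cam].
      # The product maps a camera-frame point into the absolute-reference-marker frame.
      T = np.eye(4)
      for t in transforms:
          T = T @ t
      return T

  # Hypothetical chain for a point designated at imaging position 72:
  # p_abs = compose([T_60_from_61, T_61_from_62, T_62_from_cam72]) @ p_cam72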
 The image processing unit 11 creates an overhead view based on the coordinate-converted three-dimensional position information and displays it on the display unit 13. The user designates and corrects the measurement range and the like with the input unit 14 while checking the display, and the image processing unit 11 calculates the area. The individual processes such as coordinate conversion, overhead view creation, designation by the user, and area calculation are performed in the same manner as described in the first embodiment.
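 The disclosure calculates the area as in the first embodiment without fixing a particular formula; once the converted measurement positions have been projected onto the floor plane for the overhead view, one standard choice is the shoelace formula, sketched here as an assumption rather than as the claimed method.

  def polygon_area(vertices_2d):
      # vertices_2d : (x, y) vertices on the projection plane, ordered along the boundary
      area = 0.0
      n = len(vertices_2d)
      for i in range(n):
          x1, y1 = vertices_2d[i]
          x2, y2 = vertices_2d[(i + 1) % n]
          area += x1 * y2 - x2 * y1
      return abs(area) / 2.0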
 Here, if, for example, the designated location of measurement position 80 is to be corrected, or if the user forgot to designate measurement position 80, the markers 61 and 62 are placed again at the positions shown by the solid outlines in FIG. 12, imaging is performed at imaging positions 70, 71, and 72, and the measurement position is designated again. The three-dimensional position information of that measurement position is then coordinate-converted into the coordinate system of the absolute reference marker 60 and integrated with the other already-designated measurement positions to calculate the area. In this way, because the absolute reference marker 60 has not been moved, only the necessary part can be corrected or added.
 As described above, according to the image processing device 7 of the present embodiment, the three-dimensional position information is converted into the three-dimensional coordinate system of the absolute reference marker, which is not moved, so that even when a measurement position or the like must be corrected, not all of the measurements need to be redone, which improves convenience for the user. Furthermore, even when part of the measurement has been missed, all of the measurements need not be redone from the beginning; additional imaging and designation of measurement positions are possible, which reduces the burden on the user.
 In the above example, the case of using one absolute reference marker and two movable non-reference markers has been described, but the present invention is not limited to this. For example, measurement may be performed by imaging while moving a single non-reference marker around one absolute reference marker, or three or more non-reference markers may be used. Furthermore, for a very large measurement range, each region may be measured with one absolute reference marker and a plurality of moving markers, and finally coordinate conversion between the plurality of absolute reference markers may be performed to integrate the three-dimensional position information for the measurement.
 In each of the embodiments described above, the configurations and the like illustrated in the accompanying drawings are not limiting and may be changed as appropriate within a range in which the effects of the present invention are obtained. In addition, the embodiments may be modified as appropriate without departing from the scope of the object of the present invention.
 Each component of the present invention can be selected or omitted as desired, and an invention having the selected configuration is also included in the present invention.
 The image processing device 1, 5, or 7 described in each of the above embodiments can include, for example, a processor such as a DSP (Digital Signal Processor) or a CPU (Central Processing Unit) and a main storage device such as a RAM (Random Access Memory), and the processing of each processing unit described above can be realized by executing a program stored in the storage device. Alternatively, the above processing may be realized in hardware by providing a programmable integrated circuit such as an FPGA (Field Programmable Gate Array) or an integrated circuit dedicated to the above processing.
 Although the acquisition unit in each of the above embodiments acquires the image information and the distance information with two imaging units and a distance calculation unit, any device may be used as long as image information and the distance information of the subject corresponding to each pixel of the image information are input to the image processing unit. For example, the image information and the distance information may be acquired with an imaging device and a distance measuring device. As the distance measuring device, a method using infrared light, typified by the TOF (Time of Flight) method, may be used. In the TOF method, light that cannot be perceived by humans, such as infrared light, is emitted from a light source such as an LED (Light Emitting Diode), and the elapsed time from when the light is emitted until it is reflected by the surface of the subject and returns is measured. The distance to the subject is measured based on this elapsed time. By performing this measurement for each finely divided region, not just a single point but various parts of the subject can be ranged. Methods for measuring the elapsed time include directly measuring the time until pulsed laser light is reflected by the surface of the subject and returns, and modulating the emitted light (for example, infrared light) and calculating the distance from the phase difference between the phase of the emitted light and the phase of the reflected, returned light.
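 For the phase-difference variant mentioned above, the distance follows directly from the modulation frequency and the measured phase shift; the short sketch below states that relation, with the parameter names chosen only for illustration.

  import math

  def tof_distance(phase_shift_rad, modulation_freq_hz, c=299_792_458.0):
      # The emitted light travels to the subject and back, so the round-trip distance
      # is c * delta_phi / (2 * pi * f_mod); the one-way distance is half of that.
      return c * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)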
 In the image processing device 1, 5, or 7 described in each of the above embodiments, an image is acquired by the acquisition unit and the distance information is calculated by the distance calculation unit before the distance information is input to the image processing unit; however, the user may instead designate the measurement positions and the like on the image after image acquisition, and the distance calculation unit may calculate the distance information only for the designated positions. This allows measurement without calculating distance information for parts of the subject that are not used for the measurement, so the amount of processing can be reduced.
DESCRIPTION OF SYMBOLS
1, 5, 7 ... image processing device
10 ... acquisition unit; 100, 101 ... imaging unit; 102 ... distance calculation unit
11, 11a, 51 ... image processing unit; 11-1, 11a-1 ... measurement position designation input reception unit
11-2, 11a-2 ... three-dimensional position information calculation unit
11-3, 11a-4 ... three-dimensional position information conversion unit
11-4, 11a-9 ... area calculation unit
11-5, 11a-10 ... display control unit
11a-3 ... inter-marker association processing unit
11a-5 ... marker movement instruction reception unit
11a-6 ... overhead view creation display control unit
11a-7 ... overhead view correction control unit
11a-8 ... measurement range area division processing unit
12 ... storage unit
13 ... display unit
14 ... input unit
2, 20, 21, 22, K, L, 61, 62 ... marker
60 ... absolute reference marker
3, 4 ... subject
M, Mc, O, O_K, O_L ... three-dimensional coordinate system
m ... two-dimensional coordinate system
P, Q ... points on the subject
i, j, k ... unit vector
P_M, O_M, P_O, g0, g1, g2, g3, Q_Mc, Q_OK, Q_OL ... vector
70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 90, 91, 92, 93, 94 ... imaging position
a1, a2, b1, b3, 80, 900, 901, 910, 911, 920, 921, 922, 930, 931, 932, 940, 941 ... measurement position
 All publications, patents, and patent applications cited in this specification are incorporated herein by reference in their entirety.

Claims (10)

  1.  An image processing device that calculates three-dimensional position information of a measurement position on a subject to be measured, based on a plurality of pieces of image information obtained by imaging the subject and a marker at a plurality of imaging positions, wherein
     the marker has a plurality of pointers and the positional relationship of the plurality of pointers represents a three-dimensional coordinate system,
     the image processing device comprising:
     a three-dimensional position information calculation unit that calculates, based on the plurality of pieces of image information, the three-dimensional position information expressed in a three-dimensional coordinate system for each of the plurality of imaging positions; and
     a three-dimensional position information conversion unit that converts the coordinate system representing the three-dimensional position information calculated by the three-dimensional position information calculation unit from the three-dimensional coordinate system for each imaging position into the three-dimensional coordinate system represented by the marker.
  2.  The image processing device according to claim 1, wherein
     the marker comprises a plurality of markers including a first marker representing a first three-dimensional coordinate system and a second marker representing a second three-dimensional coordinate system different from the first three-dimensional coordinate system,
     the three-dimensional position information calculation unit calculates a first positional relationship between the first three-dimensional coordinate system and the second three-dimensional coordinate system based on image information, among the plurality of pieces of image information, in which the first marker and the second marker are imaged simultaneously, and
     the three-dimensional position information conversion unit converts the coordinate system representing the three-dimensional position information calculated by the three-dimensional position information calculation unit into the second three-dimensional coordinate system based on image information, among the plurality of pieces of image information, in which the second marker is imaged, and further converts the coordinate system from the second three-dimensional coordinate system into the first three-dimensional coordinate system based on the first positional relationship.
  3.  The image processing device according to claim 2, wherein
     the plurality of markers includes a third marker representing a third three-dimensional coordinate system different from the first three-dimensional coordinate system and the second three-dimensional coordinate system,
     the three-dimensional position information calculation unit calculates a second positional relationship between the second three-dimensional coordinate system and the third three-dimensional coordinate system based on image information, among the plurality of pieces of image information, in which the second marker and the third marker are imaged simultaneously, and
     the three-dimensional position information conversion unit converts the coordinate system representing the three-dimensional position information calculated by the three-dimensional position information calculation unit into the third three-dimensional coordinate system based on image information, among the plurality of pieces of image information, in which the third marker is imaged, further converts the coordinate system from the third three-dimensional coordinate system into the second three-dimensional coordinate system based on the second positional relationship, and further converts the coordinate system from the second three-dimensional coordinate system into the first three-dimensional coordinate system based on the first positional relationship.
  4.  The image processing device according to claim 3, wherein the third marker is the first marker whose installation position has been moved.
  5.  The image processing device according to any one of claims 1 to 4, wherein
     the three-dimensional position information calculation unit calculates a third positional relationship between the three-dimensional coordinate system represented by the marker and a two-dimensional coordinate system representing a plane on the subject that includes the measurement position, and
     the image processing device further comprises an overhead view creation display control unit that calculates two-dimensional position information by projecting the three-dimensional position information whose coordinate system has been converted by the three-dimensional position information conversion unit onto the plane on the subject, or onto a plane parallel to the plane on the subject, based on the third positional relationship, and creates an overhead view based on the two-dimensional position information.
  6.  The image processing device according to claim 5, wherein
     each time the three-dimensional position information calculation unit calculates the three-dimensional position information of each of a plurality of different measurement positions on the subject and the three-dimensional position information conversion unit converts the coordinate system representing the three-dimensional position information of each of the plurality of measurement positions,
     the overhead view creation display control unit updates the overhead view by calculating two-dimensional position information obtained by projecting the three-dimensional position information whose coordinate system has been converted so far onto the plane on the subject, or onto a plane parallel to the plane on the subject, and by creating an overhead view based on the two-dimensional position information.
  7.  The image processing device according to any one of claims 1 to 6, wherein
     the three-dimensional position information calculation unit calculates, for an identical measurement position on the subject appearing in two or more pieces of image information among the plurality of pieces of image information, the three-dimensional position information based on each of the two or more pieces of image information, and adopts, as the three-dimensional position information of the identical measurement position, the piece of calculated three-dimensional position information having the highest distance resolution among the plurality of pieces of three-dimensional position information calculated for the identical measurement position.
  8.  An image processing method for calculating three-dimensional position information of a measurement position on a subject to be measured, based on a plurality of pieces of image information obtained by imaging the subject and a marker at a plurality of imaging positions, wherein
     the marker has a plurality of pointers and the positional relationship of the plurality of pointers represents a three-dimensional coordinate system,
     the image processing method comprising:
     a three-dimensional position information calculation step of calculating, based on the plurality of pieces of image information, the three-dimensional position information expressed in a three-dimensional coordinate system for each of the plurality of imaging positions; and
     a three-dimensional position information conversion step of converting the coordinate system representing the three-dimensional position information calculated in the three-dimensional position information calculation step from the three-dimensional coordinate system for each imaging position into the three-dimensional coordinate system represented by the marker.
  9.  The image processing method according to claim 8, wherein
     the marker comprises a plurality of markers including a first marker representing a first three-dimensional coordinate system and a second marker representing a second three-dimensional coordinate system different from the first three-dimensional coordinate system,
     the three-dimensional position information calculation step calculates a first positional relationship between the first three-dimensional coordinate system and the second three-dimensional coordinate system based on image information, among the plurality of pieces of image information, in which the first marker and the second marker are imaged simultaneously, and
     the three-dimensional position information conversion step converts the coordinate system representing the three-dimensional position information calculated in the three-dimensional position information calculation step into the second three-dimensional coordinate system based on image information, among the plurality of pieces of image information, in which the second marker is imaged, and further converts the coordinate system from the second three-dimensional coordinate system into the first three-dimensional coordinate system based on the first positional relationship.
  10.  A program for causing a computer to execute the image processing method according to claim 8 or 9.
PCT/JP2015/061306 2014-04-17 2015-04-13 Image processing device, image processing method, and program WO2015159835A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-085659 2014-04-17
JP2014085659 2014-04-17

Publications (1)

Publication Number Publication Date
WO2015159835A1 true WO2015159835A1 (en) 2015-10-22

Family

ID=54324041

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/061306 WO2015159835A1 (en) 2014-04-17 2015-04-13 Image processing device, image processing method, and program

Country Status (1)

Country Link
WO (1) WO2015159835A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08159762A (en) * 1994-12-01 1996-06-21 Asahi Koyo Kk Method and apparatus for extracting three-dimensional data and stereo image forming apparatus
JP2003185434A (en) * 2001-12-21 2003-07-03 Pentax Corp Photogrammetric system and method and recording medium storing photogrammetric program
JP2004110459A (en) * 2002-09-19 2004-04-08 Shigenori Tanaka Three-dimensional model space creating device, three-dimensional model space creating method, three-dimensional model space creating program, and content transmitting server
JP2008224626A (en) * 2007-03-15 2008-09-25 Canon Inc Information processor, method for processing information, and calibration tool
JP2010286450A (en) * 2009-06-15 2010-12-24 Matsuno Design Tenpo Kenchiku Kk Device, system and method for calculation of subject area
JP2011080845A (en) * 2009-10-06 2011-04-21 Topcon Corp Method and apparatus for creating three-dimensional data

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017169365A1 (en) * 2016-03-29 2017-10-05 Kyb株式会社 Road surface displacement detection device and suspension control method
JPWO2017169365A1 (en) * 2016-03-29 2019-02-07 Kyb株式会社 Road surface displacement detection device and suspension control method
JP2017191022A (en) * 2016-04-14 2017-10-19 有限会社ネットライズ Method for imparting actual dimension to three-dimensional point group data, and position measurement of duct or the like using the same
JP2019194625A (en) * 2019-08-01 2019-11-07 株式会社メルカリ Program, information processing method, and information processing apparatus
JP2021148566A (en) * 2020-03-18 2021-09-27 Jfeスチール株式会社 Inspection system and inspection method during coke oven furnace construction, and coke oven furnace construction method
JP7287319B2 (en) 2020-03-18 2023-06-06 Jfeスチール株式会社 Inspection system and inspection method for building coke oven, and method for building coke oven

Similar Documents

Publication Publication Date Title
JP6465789B2 (en) Program, apparatus and method for calculating internal parameters of depth camera
JP5018980B2 (en) Imaging apparatus, length measurement method, and program
JP6537237B2 (en) INFORMATION PROCESSING APPARATUS AND METHOD
JP4774824B2 (en) Method for confirming measurement target range in three-dimensional measurement processing, method for setting measurement target range, and apparatus for performing each method
JP6503906B2 (en) Image processing apparatus, image processing method and image processing program
US10977857B2 (en) Apparatus and method of three-dimensional reverse modeling of building structure by using photographic images
KR20150112362A (en) Imaging processing method and apparatus for calibrating depth of depth sensor
JP2009053147A (en) Three-dimensional measuring method and three-dimensional measuring device
CN105378794A (en) 3d recording device, method for producing 3d image, and method for setting up 3d recording device
JP6282098B2 (en) Calibration apparatus and method
JP2012058076A (en) Three-dimensional measurement device and three-dimensional measurement method
JPH10221072A (en) System and method for photogrammetry
JP6413595B2 (en) Image processing apparatus, system, image processing method, and program
CN113538589A (en) System and method for efficient 3D reconstruction of objects using a telecentric line scan camera
CN110419208B (en) Imaging system, imaging control method, image processing apparatus, and computer readable medium
WO2015159835A1 (en) Image processing device, image processing method, and program
WO2017199285A1 (en) Image processing device and image processing method
US11295478B2 (en) Stereo camera calibration method and image processing device for stereo camera
JP2011203108A (en) Apparatus and method for measuring three-dimensional distance
US20210183092A1 (en) Measuring apparatus, measuring method and microscope system
JP5987549B2 (en) System and method for measuring installation accuracy of construction members
Wohlfeil et al. Automatic camera system calibration with a chessboard enabling full image coverage
WO2021070415A1 (en) Correction parameter calculation method, displacement amount calculation method, correction parameter calculation device, and displacement amount calculation device
JP2007033087A (en) Calibration device and method
JP6374812B2 (en) 3D model processing apparatus and camera calibration system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15780223

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15780223

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP