WO2006075528A1 - Dispositif de mesure d'objet tridimensionnel - Google Patents

Dispositif de mesure d'objet tridimensionnel (Three-dimensional object measuring device)

Info

Publication number
WO2006075528A1
WO2006075528A1 (PCT/JP2005/024098)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
pedestal
dimensional object
mirror
Prior art date
Application number
PCT/JP2005/024098
Other languages
English (en)
Japanese (ja)
Inventor
Yoshitsugu Manabe
Kunihiro Chihara
Yuuki Uranishi
Original Assignee
National University Corporation NARA Institute of Science and Technology
Priority date
Filing date
Publication date
Application filed by National University Corporation NARA Institute of Science and Technology filed Critical National University Corporation NARA Institute of Science and Technology
Priority to JP2006552886A priority Critical patent/JP4742190B2/ja
Publication of WO2006075528A1 publication Critical patent/WO2006075528A1/fr


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images

Definitions

  • The present invention relates to a three-dimensional object measuring apparatus that photographs a three-dimensional object from a plurality of angles and, based on the correspondence between a feature point in one image and the feature point in another image that corresponds to it (the corresponding feature point), performs stereo measurement processing of the three-dimensional object using the plurality of images and generates three-dimensional image data of the object.
  • One typical kind of educational content is the electronic picture book of insects and animals.
  • There are electronic pictorial books that digitize information on insects and animals and can be browsed via the Internet, but many current ones merely take two-dimensional photographs and text published in existing pictorial books as digital data and compile them into a searchable, browsable database.
  • A three-dimensional electronic pictorial book would make it possible to see the structure and movement of insects and animals, and the development of such electronic pictorial books is desired.
  • the former active measurement method includes a direct measurement method using a laser or an ultrasonic wave, and a pattern projection method in which a pattern is projected by a projector and the surface shape is measured by distortion of the pattern.
  • the latter passive measurement method includes a stereo measurement method that uses two or more cameras for stereo measurement. A method for stereo measurement using a single camera using a mirror has also been proposed.
  • Patent Document 1 JP 2001-241928 A
  • the conventional active measurement method has a problem that a measurement range is narrow as a first problem.
  • In direct measurement methods using lasers or ultrasonic waves, and in pattern projection methods in which a pattern is projected by a projector, only the range that is directly exposed to the irradiation light and that can be observed by a measuring instrument such as a camera can be measured.
  • the conventional active measurement method has a problem that image data of patterns and colors cannot be obtained as a second problem.
  • The conventional active measurement method measures the external shape of a three-dimensional object, and since pattern and color data cannot be obtained directly, such data must be prepared separately.
  • That is, this active measurement method mainly generates 3D wireframes and 3D polygons from a 3D object, and pattern and color data must be prepared separately as texture images and developed images; generating 3D image data therefore requires many processes and much labor.
  • The conventional passive measurement method has the problem that the scale of the apparatus increases. To obtain 3D image data from captured images of a 3D object, which are essentially 2D images, multiple 2D images of the object taken from multiple viewpoints are necessary: a large number of cameras must be prepared around the object, and its entire circumference must be photographed up, down, left and right. Thus, to obtain 3D image data that can be viewed as an electronic picture book, the apparatus inevitably becomes large.
  • the conventional passive measurement method has a problem that when a large number of images are prepared from a large number of viewpoints, it is difficult to match feature points.
  • the stereo measurement method it is necessary to perform stereo measurement between a large number of images taken from a large number of viewpoints.
  • Feature points such as singular points and edge portions are selected, and matching is performed using the feature points as clues.
  • However, 3D objects with complicated shapes such as insects and animals have many feature points, and as the number of feature points increases, the amount of computation for matching them increases rapidly, making it difficult to establish correspondences.
  • The device of Patent Document 1 alleviates the first problem of the passive measurement method to some extent, but multiple cameras are still required to obtain images covering the entire circumference of the object up, down, left and right, and the second problem of passive measurement cannot be alleviated by this device and remains a problem.
  • a catadioptric stereo vision system that measures the three-dimensional shape of an object with a single camera using a reflection image or a refraction image by a mirror or a prism is known.
  • A catadioptric stereo vision system is a system in which multiple light rays traveling on different paths starting from the same point on the surface of an object are made incident on a single camera using optical equipment, so that the camera captures images equivalent to observations from multiple viewpoints; stereo vision is thus realized with a single camera.
  • This catadioptric stereo vision system has the advantage that images with parallax are obtained with a single camera, so no inter-camera parameter correction processing is required and the images of each viewpoint are captured synchronously. Moreover, by arranging the camera and mirror so that the epipolar lines coincide with the scanning lines, the range in which corresponding points exist can be limited, as in stereo vision systems using a plurality of cameras.
  • However, whole-circumference measurement of an object based on the catadioptric stereo vision system has the problem that it is difficult to obtain a correct whole-circumference shape, owing to irregular reflections by the mirror and the expansion of the corresponding point search range.
  • The object of the present invention is to provide a 3D object measuring device with which, using a small apparatus and a small number of cameras (one or two), captured images covering the entire upper, lower, left and right circumference of a three-dimensional object can be obtained with a small number of shots, correspondence between feature points across images can be established easily, multi-viewpoint images from all directions of the object can be obtained with a single shooting, and the 3D shape of the object's entire circumference can be measured.
  • The three-dimensional object measuring apparatus of the present invention comprises: a pedestal on which the three-dimensional object to be photographed is placed; a specular cylindrical body surrounding the pedestal, the cylindrical inner wall surface of which is a mirror; a fisheye lens facing the pedestal and arranged so that its lens optical axis coincides with the cylindrical central axis of the specular cylindrical body; and photographic recording means for recording the image obtained through the fisheye lens. The direct image, obtained by directly viewing the three-dimensional object on the pedestal, is captured together with the reflection images of the three-dimensional object appearing on the mirror inner wall of the cylinder.
  • Since a fisheye lens can capture an image over a full viewing angle radially centered on the lens optical axis, the direct image of a three-dimensional object placed on the pedestal facing the fisheye lens can be photographed together with the reflection images of the object appearing on the mirror of the inner side wall of the mirror cylinder. Due to the nature of the cylindrical mirror surface, these reflection images are reflections of the front views seen from omnidirectional viewpoints surrounding the periphery of the 3D object, so front views from all around can be obtained at once.
  • The 3D object to be photographed may be fixedly supported on the pedestal by fixing and supporting means, for example by piercing it with a wire.
  • the tabletop may also be used as a pedestal.
  • The mirror image may be a once-reflected image, reflected once by the mirror surface before being captured by the fisheye lens, a so-called twice-reflected image, reflected twice, or in general an n-times reflected image (n a natural number), reflected n times by the mirror surface before being captured by the fisheye lens.
  • If still images that are continuous in time series are obtained, they can be handled as a moving image, so 3D moving image data on the movement of the target object can also be obtained.
  • The apparatus comprises feature point extraction means for extracting and determining feature points on the photographed image recorded by the photographing recording means of the camera;
  • corresponding feature point search means for searching the reflection image for, and determining, the corresponding feature point that corresponds to a feature point extracted on the direct image by the feature point extraction means, or for searching the direct image for the corresponding feature point that corresponds to a feature point extracted on the reflection image;
  • and 3D image data generation means for performing stereo measurement processing of the direct image and the reflection image based on the correspondence between the feature points and the corresponding feature points, and generating 3D image data of the 3D object from the direct image and the reflection image.
  • Preferably, the corresponding feature point search means searches for the corresponding feature point on the extension of the line connecting the center of the photographed image and the feature point.
  • Since the reflected image is developed radially from the center of the pedestal (the cylinder center), corresponding feature points always lie on the same straight line, which has the advantage of simplifying the corresponding point search.
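  • Because corresponding feature points lie on the ray from the image center through the feature point, the 2D correspondence search collapses to a 1D scan along that ray. A minimal sketch in Python (the function name and pixel coordinates are illustrative, not from the patent):

```python
import math

def radial_search_candidates(center, feature, r_max, step=1.0):
    """Enumerate candidate corresponding points along the extension of the
    line from the image center through a feature point (the radial
    epipolar line of this catadioptric setup)."""
    cx, cy = center
    fx, fy = feature
    r0 = math.hypot(fx - cx, fy - cy)
    # Unit direction from the image center through the feature point.
    ux, uy = (fx - cx) / r0, (fy - cy) / r0
    candidates = []
    r = r0 + step  # search outward, beyond the feature itself
    while r <= r_max:
        candidates.append((cx + ux * r, cy + uy * r))
        r += step
    return candidates

# A feature seen in the direct image at (120, 100) of a 200x200 image:
pts = radial_search_candidates(center=(100, 100), feature=(120, 100), r_max=90)
# Every candidate stays on the same ray through the image center.
assert all(abs(y - 100.0) < 1e-9 for _, y in pts)
```

  • An actual matcher would score each candidate against the feature's neighborhood (e.g. by patch similarity), but the point here is only that the search domain is one-dimensional.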
  • Preferably, the pedestal is formed of a transparent material so that the object is visible from below the pedestal, and the camera comprises two cameras, a first camera and a second camera, arranged so that the fisheye lens of the first camera and the fisheye lens of the second camera face each other across the pedestal.
  • Preferably, the camera can move along the cylindrical central axis relative to the mirror cylinder, so that the distance between the pedestal and the fisheye lens is variable.
  • The position at which the reflection image of the 3D object appears on the mirror surface of the side wall of the mirror cylinder varies with the size (height, thickness) and shape of the 3D object being measured. To capture the direct image and the reflection images appropriately, it is preferable that the position of the fisheye lens (the distance between the pedestal and the fisheye lens) can be adjusted; the distance between the pedestal and the fisheye lens is therefore made variable.
  • Preferably, the pedestal, the specular cylindrical body and the camera can be moved relative to each other; with the pedestal fixed, the mirror cylinder and the camera together can be rotated freely in three-dimensional space about the center of the pedestal.
  • When a three-dimensional object has a sharply overhanging side shape, the fisheye lens cannot obtain a direct image of that portion with the mirror cylinder and camera fixed.
  • When the 3D object has a somewhat deep concave shape on its top surface, a direct image can be obtained with the mirror cylinder and camera fixed, but the concave portion of the reflected image is hidden behind the wall of the concavity.
  • Therefore, by keeping the pedestal fixed and rotating the mirror cylinder and camera together in three-dimensional space about the center of the pedestal, the arrangement can be adjusted so that the direct image and the reflected image of such a portion, whether the overhanging side shape or the concave shape, are photographed from a different angle, and the images necessary for stereo measurement of that portion can be obtained.
  • A fisheye lens can capture images over all viewing angles radially about the lens optical axis, and in combination with the mirror cylindrical body it can photograph a three-dimensional object placed on the pedestal from many directions at once.
  • That is, the direct image of the 3D object placed on the pedestal facing the fisheye lens and the reflection images of the object appearing on the mirror surface of the side wall of the mirror cylinder can be photographed at the same time.
  • any means for supporting the three-dimensional object may be used as long as it can support the object, for example, a pedestal, a hanging thread, a wire, and a needle.
  • the pedestal is not limited to a small plate-like one incorporated in the apparatus casing, but includes a large one such as a desk or a table.
  • For example, the mirror cylinder, described later, may simply be placed on a desk, the desk serving as the pedestal supporting the object.
  • the support means for the three-dimensional object will be described as a pedestal that is a small disk-like plate.
  • FIG. 1 is a diagram schematically illustrating the basic configuration of the three-dimensional object measuring apparatus 100 according to the first embodiment.
  • Reference numeral 10 denotes a pedestal on which a three-dimensional object to be photographed is placed.
  • the shape of the pedestal 10 is not limited, but is, for example, a disk shape.
  • The pedestal 10 need only be large enough for the 3D object to be photographed to be placed on it.
  • Reference numeral 20 denotes a mirror cylindrical body.
  • The mirror surface cylindrical body 20 has a cylindrical shape with a perfectly circular cross section, and its inner wall surface is a mirror. That is, the mirror surface cylindrical body referred to in the present invention is a body whose inner wall surface is cylindrical and mirrored.
  • The outer wall surface need not be cylindrical or mirrored.
  • For example, the outer shape may be a square pillar of plastic material, with a cylindrical hollow inside whose inner wall is a mirror surface.
  • the inner diameter of the mirror cylindrical body 20 is large enough to accommodate the pedestal 10 therein.
  • the pedestal 10 is accommodated in the bottom surface portion of the mirror cylindrical body 20.
  • Assuming the height of the 3D object to be imaged is L, the height of the mirror cylindrical body 20 from the pedestal 10 need only be at least L.
  • In practice, a height of about 2L to 10L from the pedestal 10 is sufficient.
  • Reference numeral 30 denotes a camera.
  • the camera 30 includes a fisheye lens 31 and a photographing recording means 32.
  • the fisheye lens 31 faces the pedestal 10 and is arranged so that the lens optical axis coincides with the cylindrical central axis of the mirror cylindrical body 20.
  • the fisheye lens 31 is disposed on the upper surface of the mirror cylinder 20 and is installed downward so as to face the pedestal 10.
  • the central axis (lens optical axis) of the fisheye lens 31 is arranged so as to coincide with the central axis (cylindrical central axis) of the cylindrical mirror body 20.
  • The photographing and recording means 32 is not particularly limited as long as it can record the image obtained through the fisheye lens 31; it may be so-called analog image recording means that exposes and records on analog film, or so-called digital recording means that receives the light with a light receiving element such as a CCD (Charge Coupled Device) and records it as digital data.
  • the photographing and recording means 32 is a digital image recording means.
  • the image data processing device 40 receives a two-dimensional image taken by the camera 30, and generates three-dimensional image data by stereo measurement processing.
  • the image data processing device 40 includes a feature point extracting unit 41, a corresponding feature point searching unit 42, and a three-dimensional image data generating unit 43 (not shown in FIG. 1), which will be described later.
  • In procedure 1, 2D images of the target 3D object are acquired from multiple viewpoints (each at a different angle).
  • In procedure 2, matching is performed between feature points of the two or more acquired 2D images so that each element of the 3D object is associated; in procedure 3, 3D image data is calculated by difference calculation or inference based on stereo measurement processing. These procedures must be followed.
  • The direct image is the image obtained by receiving light rays directly incident on the fisheye lens 31 from the 3D object on the pedestal 10, that is, the image of the 3D object directly visible from the viewpoint of the fisheye lens 31.
  • A reflected image is an image obtained by receiving a light beam that is emitted from the three-dimensional object on the pedestal 10, reflected on the inner wall mirror surface of the mirror cylinder 20, and incident on the fisheye lens 31; that is, the reflection of the three-dimensional object seen on the inner wall mirror from the viewpoint of the fisheye lens 31.
  • The once-reflected image R1 is reflected once on the inner wall mirror surface of the mirror cylinder 20 before entering the fisheye lens 31, and the twice-reflected image R2 is reflected twice before entering the fisheye lens 31.
  • FIG. 2 is a diagram schematically showing a state of light reception of the direct image and the reflected image in the camera 30 in the longitudinal section of the three-dimensional object measuring apparatus 100.
  • In Fig. 2, we focus only on a point A on the 3D object 1.
  • Light emitted from a point A of the three-dimensional object 1 placed on the pedestal 10 and directly incident on the fisheye lens 31 is an optical path AO that directly forms an image.
  • Light that is emitted from the point A, reflected once on the inner wall mirror surface of the mirror cylindrical body 20, and incident on the fisheye lens 31 is an optical path A1 that forms a once reflected image.
  • Light emitted from point A, reflected twice on the inner wall mirror surface of the mirror cylinder 20 and incident on the fisheye lens 31, follows the optical path A2 that forms the twice-reflected image.
  • FIG. 2 shows optical paths up to the twice-reflecting path A2; optical paths with more reflections are omitted.
  • FIG. 2 shows the optical paths focusing only on point A, but light emitted from all points on the surface of the object 1 enters the fisheye lens 31 according to the same principle.
  • FIG. 3 is a diagram schematically showing an image obtained by photographing the object 1 having a cone shape placed on the pedestal 10 by the camera 30.
  • a point A on object 1 is shown as a small dot with a white circle for reference.
  • The image seen in the center of Fig. 3 is the direct image D0 of object 1, showing the cone shape viewed from directly above.
  • The circle immediately surrounding the direct image D0 is the pedestal 10 and the edge E of the mirror cylinder 20.
  • The ring surrounding the edge E is the once-reflected image R1 of object 1 reflected on the inner wall mirror surface of the mirror cylinder 20; the cone-shaped side surface appears all around the inner wall mirror.
  • The outer ring surrounding the once-reflected image R1 is the twice-reflected image R2 of object 1 reflected on the inner wall mirror surface of the mirror cylinder 20; here too the cone-shaped side surface appears all around, but in the twice-reflected image R2 the image is point-symmetric (inverted).
  • The image of point A on object 1 appears on the right side in the direct image D0 (image point G0) and on the right side in the once-reflected image R1 (image point G1), but on the left side in the twice-reflected image R2 (image point G2).
  • the reflection images (reflection image R3, etc.) of three or more reflections are not shown.
  • In the three-dimensional object measuring apparatus 100 of the present invention, a plurality of images, that is, a direct image and reflected images, are obtained in one shooting as described above.
  • Next, the reason why the direct image and reflected images obtained by the apparatus 100 can be used for three-dimensional stereo measurement, as images photographed from a plurality of viewpoints at different angles, will be explained.
  • FIG. 4 shows how the optical path A0 that directly forms an image of point A of the object 1 described in FIG. 2, the optical path A1 that forms the once-reflected image, and the optical path A2 that forms the twice-reflected image are unfolded into their equivalent photographing viewpoints.
  • Although the actual fisheye lens 31 is at point F0, the optical path A1 corresponding to the once-reflected image R1 is equivalent to an optical path A1′ incident on the photographing viewpoint F1; similarly, the optical path A2 corresponding to the twice-reflected image R2 is equivalent to an optical path A2′ incident on the photographing viewpoint F2.
  • In other words, point A in the once-reflected image R1 is equivalent to point A in the image that would be obtained if the object 1 were photographed by a virtual camera 30′ at the photographing viewpoint F1, and point A in the twice-reflected image R2 is equivalent to point A in the image that would be obtained if the object 1 were photographed by a virtual camera 30″ at the photographing viewpoint F2.
  • The photographing viewpoint F1 is at a position 2r away from the fisheye lens 31, and the photographing viewpoint F2 at a position 4r away, r being the radius of the mirror cylinder.
  • The once-reflected image R1 appearing on the inner wall mirror surface of the mirror cylinder 20 in Fig. 3 is equivalent to photographing object 1 while moving the virtual camera 30′ 360° around a circumference of radius 2r centered on the cylinder's central axis, and collecting only the central portion of each captured image.
  • Likewise, the twice-reflected image R2 is equivalent to photographing object 1 while moving the virtual camera 30″ around a circumference of radius 4r and collecting only the central portion of each captured image.
  • In other words, the direct image D0, the reflected image R1, and the reflected image R2 obtained by the 3D object measuring apparatus 100 of the present invention are equivalent to a direct image from the actual viewpoint F0 where the fisheye lens 31 is arranged, plus images taken from innumerable viewpoints (as many as the resolution allows) arranged around circumferences of radius 2r and 4r from the cylinder's central axis.
  • Since each reflected image is equivalent to a collection of only the central, directly frontal portions of the images shot from each viewpoint, if we focus on one point on object 1 (for example, point A), the image data usable for stereo measurement is not innumerable: there are only three image points, the one in the direct image (image point G0), the one in the once-reflected image R1 captured from a viewpoint at radius 2r (image point G1), and the one in the twice-reflected image R2 captured from a viewpoint at radius 4r (image point G2). In other words, in this example the image data used for stereo measurement processing is equivalent to obtaining 2D images from three viewpoints at one time.
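  • The equivalence above can be captured numerically. The text places F1 at 2r and F2 at 4r; if that pattern continues, the virtual viewpoint for the n-times reflected image lies at 2nr (an extrapolation, stated here as an assumption). A sketch with an arbitrary example radius:

```python
def virtual_viewpoint_distance(n, r):
    """Distance from the real viewpoint F0 to the virtual viewpoint Fn that
    is equivalent to the n-times reflected image, for a mirror cylinder of
    radius r. Per the text, F1 lies 2r away and F2 lies 4r away; the
    general form 2*n*r is an assumed extrapolation."""
    return 2 * n * r

r = 0.5  # assumed cylinder radius (arbitrary units)
baselines = [virtual_viewpoint_distance(n, r) for n in (1, 2)]
assert baselines == [1.0, 2.0]  # 2r and 4r, as stated in the text
```

  • These distances act as the stereo baselines between the real view D0 and the virtual views behind R1 and R2.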
  • For a fisheye lens, the relationship between the angle θ that a ray incident on the lens forms with the optical axis and the distance d from the image center at which the ray is projected on the image plane can be expressed as Equation 1 below:

    d = f · g(θ)   (Equation 1)

  • FIG. 9 shows the relationship between the angle θ and the distance d from the image center.
  • Here, f is the focal length, and the function g(θ) is the projection characteristic of the fisheye lens, which varies from lens to lens.
  • Thus, the angle of a ray incident on the camera can be determined from the distance of the point of interest from the image center in the captured image.
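  • Equation 1 can be sketched as follows. The equidistant model g(θ) = θ is assumed purely for illustration; the patent only states that g varies from lens to lens:

```python
import math

def distance_from_center(theta, f, g=lambda t: t):
    """Equation 1: d = f * g(theta). g is the lens's projection
    characteristic; the equidistant model g(theta) = theta is assumed
    here as an example."""
    return f * g(theta)

def incident_angle(d, f):
    """Invert the (assumed) equidistant model: theta = d / f."""
    return d / f

f = 8.0                   # assumed focal length, arbitrary units
theta = math.radians(45)  # ray at 45 degrees to the optical axis
d = distance_from_center(theta, f)
assert math.isclose(incident_angle(d, f), theta)
```

  • In practice g would be calibrated per lens, after which the incident angle of any pixel follows from its distance to the image center, exactly as the text describes.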
  • In procedure 2, stereo measurement processing is performed between the acquired 2D images and each element of the 3D object is associated, but it is not necessary to perform the matching for all pixels; doing so would make the computational cost enormous. Usually, the image data is converted to the frequency domain by DCT or Fourier transform, singular points and edges in the image are extracted, and representative ones are selected as feature points; the images are then searched and matched using those feature points. Here, a feature point in another image that corresponds to a feature point in one image is called a corresponding feature point.
  • In the present invention, the matching process between feature points can be executed using the geometric relationship between the direct image and the reflected image, which greatly reduces the calculation cost.
  • The image captured by the camera 30 includes the direct image D0 and the reflected images (here, the once-reflected image R1 and the twice-reflected image R2).
  • There is an important geometric relationship between the direct image D0 and the reflected images (R1 and R2): corresponding feature points lie on the same straight line drawn through the center of the image.
  • Using this geometric relationship, the corresponding feature points corresponding to the feature points are searched efficiently.
  • Fig. 5 shows the direct image of a pyramid-shaped object placed at the center of the pedestal, together with the reflected images appearing on the inner wall mirror of the mirror cylinder.
  • The radial lines drawn in the image were added in post-processing so that the correspondence between the direct image and the reflected images can be confirmed visually; no such lines appear in the image as originally captured.
  • Figure 6 shows that the geometric relationship between feature points is maintained even when the object is placed at a position offset from the center of the pedestal. Compared with Fig. 5, the object is placed at a position biased toward the upper right, away from the pedestal center.
  • Although the reflected images on the inner wall mirror surface of the mirror cylindrical body are greatly distorted compared with Fig. 5, it can be seen that corresponding feature points still lie on straight lines passing through the center of the image. In Fig. 6 we focus on two feature points in the direct image and, for clarity, draw in post-processing two straight lines connecting each feature point to the image center; the corresponding feature points in the once-reflected and twice-reflected images lie on these lines.
  • Figure 7 shows an example of the trajectory of a ray that leaves a point on the object, is reflected twice by the mirror, and enters the camera.
  • Figure 7(a) is observed from the side, and (b) from directly above.
  • Since the optical axis of the camera and the central axis of the cylindrical mirror coincide, a light ray must pass through the central axis of the cylindrical mirror in order to enter the camera.
  • When the 3D object measuring apparatus of the present invention is viewed from directly above as in Fig. 7(b), the two tangents of the cylindrical mirror facing each other across the central axis of the cylinder are always parallel.
  • Observed from above, a light beam passing through the center of the cylinder always strikes these mirror surfaces with parallel tangents perpendicularly. Since rays starting from the same point therefore always lie on the same straight line, the epipolar line of the virtual camera is a straight line passing through the center of the cylinder.
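  • The top-view argument can be checked numerically: a ray leaving the cylinder axis travels along the surface normal of the mirror it hits, so specular reflection exactly reverses it and the reflected ray passes back through the axis. A sketch (coordinates and radius are illustrative):

```python
import math

def reflect_through_axis(direction, radius):
    """Top view of the mirror cylinder: a ray leaving the axis (origin)
    along `direction` hits the circular mirror of the given radius;
    reflect it about the inward surface normal."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    # Hit point on the circle; the inward normal there is radial.
    hx, hy = radius * ux, radius * uy
    nx, ny = -ux, -uy
    # Specular reflection: r = d - 2 (d . n) n
    dot = ux * nx + uy * ny  # = -1 for a ray along the radius
    rx = ux - 2 * dot * nx
    ry = uy - 2 * dot * ny
    return (hx, hy), (rx, ry)

hit, refl = reflect_through_axis((3.0, 4.0), radius=1.0)
# The reflected direction is exactly reversed, so the ray returns through
# the axis -- consistent with the radial epipolar lines described above.
assert math.isclose(refl[0], -0.6) and math.isclose(refl[1], -0.8)
```

  • This is the top-view projection only; in the side view (Fig. 7(a)) the ray also changes height, but the in-plane direction stays on the same line through the center.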
  • FIG. 8 shows a simulation image obtained by photographing a virtual object.
  • An image can be obtained in which the image directly visible from the camera is in the center of the captured image and the image reflected by the cylindrical mirror is present around it.
  • The reflected images are arranged concentrically in order of increasing distance from the image center.
  • The object in Fig. 8(a) has a conical shape; the ray incident on the camera from point A on the object surface is observed only on the broken line connecting the image center and point A.
  • Therefore, the search range for a set of corresponding points in the captured image can be limited to a straight line passing through the image center, which reduces the amount of calculation and reduces false detections of corresponding points.
  • Similarly, the object in Fig. 8(b) has a quadrangular pyramid shape, and the ray incident on the camera from point B on the object surface is observed only on the broken line connecting the image center and point B.
  • step 3 stereo measurement processing of the direct image and the reflected image is performed based on the correspondence between the feature points obtained in step 2 and the corresponding feature points.
  • Generate 3D image data of Jetato For example, it is performed as 3D image data by direct calculation or inference calculation between direct image data and reflected image data.
  • The procedure of step 3 is not particularly limited, and any algorithm that performs stereo measurement processing can be widely applied.
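As one hedged illustration of what the stereo computation in step 3 can look like (the text fixes no specific algorithm), depth can be recovered by intersecting the viewing ray of the direct image with the ray from the mirrored virtual viewpoint of the reflected image. All coordinates below are hypothetical.

```python
import numpy as np

def intersect_rays(p1, d1, p2, d2):
    """Least-squares intersection of two 2-D rays p_i + t_i * d_i,
    solved as a small linear system A @ (t1, t2) = p2 - p1."""
    A = np.column_stack([np.asarray(d1, float), -np.asarray(d2, float)])
    b = np.asarray(p2, float) - np.asarray(p1, float)
    t = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.asarray(p1, float) + t[0] * np.asarray(d1, float)

# Ray from the real camera and a ray from a mirrored virtual viewpoint
point = intersect_rays((0.0, 100.0), (0.2, -1.0), (45.0, 100.0), (-0.2, -1.0))
```

With a symmetric pair of rays from heights of 100, the intersection lands midway between the two viewpoints; a full implementation would build the virtual-viewpoint rays from the mirror geometry.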
  • FIG. 12 is a block diagram showing components of the 3D image data processing device 40 in the 3D object measuring device 100 of the present invention.
  • The above algorithm for generating 3D image data based on stereo measurement can be concretely realized using an information processing system combining general-purpose personal computer resources and software modules, or using dedicated hardware implemented as a semiconductor circuit.
  • [0044] Reference numeral 41 denotes feature point extraction means for extracting and determining feature points on a captured image photographed by the photographing/recording means 32 of the camera 30.
  • The feature point extraction algorithm is not particularly limited; any effective algorithm can be widely applied.
  • For example, an algorithm may be used that converts the image data into the frequency domain by a DCT transform and selects singular points and edges in the image as feature points.
  • Reference numeral 42 denotes corresponding feature point search means for searching the reflected image for, and determining, the corresponding feature points that correspond to the feature points extracted from the direct image by the feature point extraction means 41, or conversely for searching the direct image for the corresponding feature points that correspond to the feature points extracted from the reflected image.
  • The corresponding feature point search means 42 incorporates an algorithm that performs the search using the geometric relationship described above, that is, it searches for a corresponding feature point along the extension of the line connecting the center of the image and the feature point.
  • [0046] Reference numeral 43 denotes three-dimensional image data generation means, the part that performs stereo measurement processing between the direct image and the reflected image based on the correspondence between the feature points and corresponding feature points obtained by the corresponding feature point search means 42, and generates 3D image data of the object from the two-dimensional direct image and reflected image.
  • The corresponding point search means 42 can apply an existing stereo measurement algorithm; besides SSD, a wide range of algorithms such as DP matching, a nonlinear matching technique, can be applied.
  • In this embodiment, the captured image is first expanded in polar coordinates for ease of processing. Then, from the polar-expanded image, a small area centered on one point is taken as the search source area, and a process is performed to find the area considered most similar to the search source area.
  • SSD (Sum of Squared Differences) is used as the similarity measure; however, when determining the similarity between regions, the distortion of the reflected image should be corrected.
  • Since the magnitude of the distortion of the reflected image depends on the normal direction of the object plane, which is an unknown parameter, the distortion cannot be corrected analytically. Therefore, in this embodiment, in addition to moving the search area, the most similar area is determined by calculating the SSD value while dynamically scaling, in the radial direction, the reflected-image region whose similarity is examined.
  • The pair of center points of the most similar areas is taken as the set of corresponding points.
  • The algorithm of the corresponding feature point search means 42 will be described separately as (1) polar coordinate expansion of the captured image, (2) size normalization of the search target region, (3) limitation of the search target region, and (4) SSD value calculation.
  • Figure 10 (a) shows an example of the image that is the result of polar coordinate expansion.
  • As described above, a set of corresponding points always exists on the same straight line passing through the center of an image captured by the three-dimensional object measurement apparatus of the present invention. Therefore, by expanding the captured image into a polar coordinate image, with the vertical axis being the distance from the image center and the horizontal axis being the angle, the range searched for the point corresponding to a point in the image is limited to the broken-line area shown in Fig. 10(b), where (u, v) is the coordinate value in the image before polar expansion and (t, w) is the coordinate value after polar expansion.
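The polar expansion can be sketched as below: a minimal nearest-neighbour sketch in NumPy, with arbitrary output resolutions; a real implementation would interpolate.

```python
import numpy as np

def polar_expand(img, n_r, n_theta):
    """Expand a captured image about its center into polar coordinates:
    rows index w (distance from the image center), columns index t
    (angle); nearest-neighbour sampling for brevity."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0, min(cx, cy), n_r)
    th = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, th, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

polar = polar_expand(np.arange(10000).reshape(100, 100), n_r=50, n_theta=90)
```

After this remapping, the radial epipolar line becomes a vertical column, so the corresponding-point search reduces to a 1-D search along w.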
  • the size of the reflected image in the captured image in the w-axis direction varies depending on the incident angle of the light beam and the normal direction of the object plane.
  • For example, the w-axis size of an image that is reflected twice by the mirror before entering the camera is smaller than that of an image reflected once.
  • the normal direction of the object plane is unknown at the measurement stage, it is impossible to analytically correct the change in the image size in the w-axis direction.
  • Therefore, the SSD value is calculated while dynamically expanding and reducing, in the w-axis direction, the local region whose similarity is examined.
  • the size of the reference area (Source) and the evaluation target area (Target) must be the same during the search.
  • the color of each pixel is expressed in the RGB color system.
  • Given the color (r, g, b) of each pixel and the Target scale s, with the Source scale taken as 1.0, it can be calculated from Equation 5 below.
  • trunc(x) is a function that truncates the decimal part of x.
  • The values w_S^upper and w_S^lower denote the upper and lower end values of the region in which the Source exists.
  • The point w_n obtained by Equation 6 above represents, as a linear ratio, the position in the Target corresponding to a position in the Source.
  • Although the distortion of the reflected image in the w-axis direction is generally non-linear, the error in approximating this non-linear distortion with a linear ratio is locally small.
  • In Equation 8, a is a constant that represents the size of the search range; the larger a is, the wider the search range becomes.
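Since Equations 5 through 8 are not reproduced in this text, the following is only an assumed reading of the size normalization: the position in the Target corresponding to a Source position is computed by a local linear ratio under scale s, with trunc() discarding the fractional part. All window bounds below are hypothetical.

```python
import math

def map_source_to_target(w, w_s_lower, w_s_upper, w_t_lower, s):
    """Map a w-coordinate in the Source window to the Target window,
    scaling the window height by s (Source scale taken as 1.0) and
    truncating the fractional part as trunc(x) does."""
    ratio = (w - w_s_lower) / (w_s_upper - w_s_lower)  # linear position in Source
    target_span = s * (w_s_upper - w_s_lower)          # scaled Target window height
    return math.trunc(w_t_lower + ratio * target_span)

w_n = map_source_to_target(7, 5, 10, 20, s=1.5)
```

With a Source window of w in [5, 10] and a Target window starting at w = 20, the Source point w = 7 maps to w_n = 23 at scale 1.5.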
  • This 3D object measurement device uses the SSD value between the Source and Target areas as the evaluation measure. The SSD value d between the Source area S and the Target area T is obtained by Equation 9 below, where I_S and I_T denote the pixel values of the Source and the Target, respectively.
  • FIG. 9 shows an example in which the image directly observed from the camera is the entire source set S.
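The SSD evaluation itself is standard and can be written directly; Equation 9 for color reduces to this per-channel sum, shown here as a single-channel sketch with hypothetical 2x2 windows.

```python
import numpy as np

def ssd(source, target):
    """Sum of squared differences between two equally sized regions,
    as in Equation 9 (single channel)."""
    s = np.asarray(source, dtype=float)
    t = np.asarray(target, dtype=float)
    assert s.shape == t.shape, "Source and Target must be the same size"
    return float(np.sum((s - t) ** 2))

d = ssd([[10, 20], [30, 40]], [[12, 20], [30, 37]])  # 2**2 + 3**2 = 13.0
```

The equal-size assertion reflects the requirement, stated above, that Source and Target have the same size during the search; the dynamic w-axis scaling is what makes that possible.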
  • With the configuration described above, a direct image obtained by directly photographing the three-dimensional object placed on the pedestal facing the fisheye lens and the reflected images of the three-dimensional object mirrored in the inner wall of the mirror cylindrical body can be captured at the same time, and the feature points and the corresponding feature points between the direct image and the reflected image can easily be found.
  • 3D image data can be generated from 2D image data based on stereo measurement processing.
  • The three-dimensional object measuring apparatus 100a of the second embodiment differs from the three-dimensional object measuring apparatus 100 of the first embodiment in that the pedestal 10 is formed of a transparent material and, as shown in the figure, two cameras, a first camera 30a and a second camera 30b, are provided, with the fisheye lens 31a of the first camera 30a and the fisheye lens 31b of the second camera 30b arranged to face each other across the pedestal 10.
  • Since the three-dimensional object measuring apparatus 100a of the second embodiment includes not only the camera 30a above the mirror cylindrical body 20 but also the camera 30b below it, both an upper image and a lower image of the object 1, each comprising a direct image D0 and reflected images Rn, are captured at the same time.
  • To photograph the object 1 from below, the pedestal 10 is made of a transparent material such as a glass plate, and the mirror cylindrical body 20 needs to be extended below the pedestal 10.
  • The three-dimensional object measuring apparatus 100b of the third embodiment differs from the three-dimensional object measuring apparatus 100 of the first embodiment in that the camera 30 is movable with respect to the mirror cylindrical body 20.
  • the camera 30 can move up and down along the central axis of the mirror cylinder 20, and the distance between the base 10 and the fisheye lens 31 is variable.
  • The camera 30 may be moved by hand by the user; however, if the camera casing is combined with a stepping motor mechanism so that the movement of the camera 30 can be controlled according to the user's operation input, the distance between the fisheye lens 31 and the object on the pedestal 10 can be adjusted accurately.
  • FIG. 15 is a diagram schematically illustrating advantages and disadvantages when the distance between the base 10 and the fisheye lens 31 is small.
  • When the distance between the pedestal 10 and the fisheye lens 31 is reduced, the camera 30 magnifies the object, and a detailed direct image D0 of the object surface can be obtained.
  • the incident angle to the fisheye lens 31 of the optical path A1 that forms the one-time reflected image R1 of the object reflected on the mirror surface of the inner wall of the mirror cylindrical body 20 is increased. That is, the height of the shooting viewpoint F1 is lowered, and the once reflected image R1 corresponds to an image obtained by shooting the object from a viewpoint at a low position in the oblique direction.
  • Similarly, the angle of incidence on the fisheye lens 31 of the optical path A2 that forms the twice reflected image R2 of the object reflected on the inner wall mirror surface of the mirror cylinder 20 becomes larger, and the twice reflected image R2 corresponds to an image of the object from an even lower oblique viewpoint.
  • FIG. 16 is a diagram schematically illustrating merits and demerits when the distance between the base 10 and the fisheye lens 31 is large.
  • the incident angle to the fisheye lens 31 of the optical path A1 forming the one-time reflected image R1 of the object reflected on the mirror surface of the inner wall of the mirror cylindrical body 20 is smaller than that in the case of FIG.
  • That is, the height of the photographing viewpoint F1 is increased, and the once reflected image R1 corresponds to an image obtained by photographing the object from a high viewpoint in an oblique direction.
  • Similarly, the angle of incidence on the fisheye lens 31 of the optical path A2 forming the twice reflected image R2 of the object reflected on the inner wall mirror surface of the mirror cylinder 20 changes, and the twice reflected image R2 corresponds to an image of the object viewed obliquely from a lower viewpoint.
  • In addition, the difference between the incident angle of the once reflected image R1 and that of the twice reflected image R2 is larger than in the case of Fig. 15, so the difference between the twice reflected image R2 and the once reflected image R1 is greater, and the amount of information in the two-dimensional images used for stereo measurement is larger than in the case of Fig. 15.
  • The advantages and disadvantages that occur when the distance between the pedestal 10 and the fisheye lens 31 is large and those that occur when it is small are thus in a trade-off relationship. Furthermore, since the size (height) of the object actually being 3D-measured also has an effect, what must be considered is the distance between the fisheye lens 31 and the object 1, not merely the distance between the pedestal 10 and the fisheye lens 31.
  • For this reason, in the three-dimensional object measuring apparatus 100b of the third embodiment, the camera 30 is movable up and down along the central axis of the mirror cylinder 20, so that the distance between the pedestal 10 and the fisheye lens 31 is variable.
  • To manipulate the size of the direct image D0 in particular, an optical zoom mechanism could instead be mounted on the camera. With zoom, the resolution of the direct image D0 can be manipulated, but the heights of the photographing viewpoint F1 of the reflected image R1 and the photographing viewpoint F2 of the reflected image R2 cannot.
  • Moreover, when the zoom mechanism is used, the center image appears large but the surrounding images can no longer be captured, and there is a risk that the higher-order reflected images Rn, and in some cases even the twice reflected image R2 and the once reflected image R1, will be lost from the captured image.
  • The three-dimensional object measuring apparatus 100b of the third embodiment instead attaches importance to manipulating the heights of the photographing viewpoint F1 of the reflected image R1 and the photographing viewpoint F2 of the reflected image R2 while taking the height of the object 1 into consideration; therefore the camera 30 can move up and down along the central axis of the mirror cylindrical body 20, and the distance between the pedestal 10 and the fisheye lens 31 is variable.
  • Note that the above description is not intended to exclude a camera 30 equipped with an optical zoom mechanism; it does not mean that the camera 30 of the three-dimensional object measuring apparatus 100 of the present invention cannot be equipped with an optical zoom mechanism or a digital zoom mechanism.
  • The three-dimensional object measurement apparatus 100c of the fourth embodiment differs from the three-dimensional object measurement apparatus 100 of the first embodiment in enabling relative movement of the mirror cylindrical body 20 and the camera 30 with respect to the pedestal 10: with the center of the pedestal 10 as the center of rotation, the mirror cylinder 20 and the camera 30 can rotate freely as a unit in three-dimensional space.
  • The mirror cylinder 20 may be rotated by the user by hand; however, if it is combined with a stepping motor mechanism so that the movement of the mirror cylinder 20 can be controlled according to the user's operation input, the direction of the photographing viewpoint of the camera 30 can be adjusted accurately to an angle that enables photographing of a part that cannot be photographed in the basic posture.
  • Even when the object 1 to be three-dimensionally measured has a blind spot in the basic posture because of its shape or orientation, so that a portion occurs for which neither the direct image D0 nor a reflected image Rn can be obtained, the three-dimensional object measuring apparatus 100c can change the angle of the mirror cylindrical body 20 and the camera 30 as a unit with respect to the pedestal 10, so that a direct image D0 or reflected image Rn is obtained even for a part that was in a blind spot and for which a captured image could not be obtained.
  • The first case is where the object 1 has a somewhat deep concave shape.
  • A direct image D0 of that part can be obtained, but in the reflected image R1 and the reflected image R2 obtained from the oblique photographing viewpoints F1 and F2, the bottom surface of the concave portion may not appear because it falls in the blind spot of the surrounding rim.
  • When the concave shape is on the side, the reflected image R1 and the reflected image R2 obtained from the oblique photographing viewpoints F1 and F2 facing that part can be obtained, but in the direct image D0 the bottom surface of the concave shape may not appear, falling in the blind spot of the surrounding rim.
  • The second case is where the side surface of the object 1 is nearly vertical.
  • In this case, the reflected image R1 and the reflected image R2 obtained from the oblique photographing viewpoints F1 and F2 facing the part can be obtained, but the direct image D0 views the part at too shallow an angle, and an effective image may not be obtained.
  • Since the direct image D0 is important information in 3D stereo measurement, it goes without saying that the shooting direction should be devised so that a direct image D0 can be obtained.
  • The third case is where the shape of the object 1 is complicated and a part of it lies behind other parts.
  • the image DO can be obtained directly, the reflected image R1 and the reflected image R2 obtained from the oblique photographing viewpoint F1 and the photographing viewpoint F2 may not be reflected behind other parts.
  • Even when a part occurs for which a direct image D0 or reflected image Rn is not obtained because a blind spot is formed depending on the shape and orientation of the object 1, the 3D object measuring device 100c of the fourth embodiment changes the angle of the mirror cylindrical body 20 and the camera 30 as a unit with respect to the pedestal 10 and obtains an image from another angle, so that the necessary direct image D0 or reflected image Rn is obtained for that part.
  • FIG. 18 is a diagram schematically showing light reception of the direct image and the reflected image in the camera 30 in the longitudinal section of the three-dimensional object measuring apparatus 100c, as in FIG.
  • In the basic posture, the optical path B0 corresponding to the direct image D0 cannot directly enter the fisheye lens 31 from the point B on the side surface portion of the object 1.
  • the direct image DO does not include the image of the point B on the side surface portion, or even if it is included, effective information cannot be obtained because the shooting angle is extremely shallow.
  • the once reflected optical path B1 corresponding to the reflected image R1 is incident on the fisheye lens 31, and the image of the point B on the side surface portion is included in the once reflected image R1.
  • the twice-reflection optical path B2 corresponding to the reflection image R2 is incident on the fish-eye lens 31, and the image of the point B on the side surface portion is included in the twice-reflection image R2.
  • That is, the plurality of reflected images R1 and R2 include image information regarding the point B, but information regarding the point B is not obtained in the direct image D0.
  • FIG. 19 shows a state in which, from the basic posture of FIG. 18, with the pedestal 10 fixed and the center of the pedestal 10 as the center of rotation, the mirror cylinder 20 and the camera 30 are rotated clockwise as a unit in the vertical plane.
  • the point B on the side surface portion of the object 1 is a position that can be seen well from the photographing viewpoint of the camera 30, and the optical path B0 corresponding to the direct image D0 directly enters the fisheye lens 31. That is, the image of the point B on the side surface portion is effectively included in the direct image D0.
  • The once reflected optical path B1 corresponding to the reflected image R1 is also incident on the fisheye lens 31, and the image of the point B is included.
  • the image of the point B on the side portion is also included in the twice reflected image R2.
  • The posture in FIG. 19 is an example of rotation for obtaining a direct image D0 of the steep right side of the object 1; to obtain a direct image D0 of another side, such as the left side, the front, or the back of the object 1, the mirror cylinder 20 and the camera 30 are rotated separately so that the target side faces the fisheye lens 31.
  • The rotation control means is not limited; for example, a stepping motor or the like may be used.
  • the object of the 3D object measurement apparatus of the present invention is to obtain a plurality of images used for 3D stereo measurement with a small number of imaging operations.
  • Although one additional shot at a different angle is required, multiple images of a part that is difficult to photograph because of the shape and posture of the object can still be obtained with just two shots, so this can be said to be in line with the object of the present invention of obtaining, with a small number of shots, the plurality of images used for three-dimensional stereo measurement.
  • In Example 5, a prototype of the three-dimensional object measurement apparatus of the present invention was manufactured and the shape of an actual object was measured. The object measurement environment of the fifth embodiment, together with the results of shape measurement using actually captured images, is described below.
  • The camera used for the measurement was an Opteon Depict D1E; a 1024 × 1024 (pixels) image cut out from an image captured at 1392 × 1040 (pixels) was used.
  • a ring light was used as the light source.
  • The cylindrical mirror has an inner diameter of 90 (mm) and a height of 100 (mm). The camera used in this example can obtain only grayscale images; therefore, instead of Equation 9 above, the following Equation 10 is used.
  • In Equation 10, I_S(t, w) and I_T(t, w) are the grayscale values obtained from the Source and the Target, respectively; the range of possible values is 0 to 255 for both.
  • a conical object shown in FIG. 20 is used as a measurement target.
  • the cone-shaped object has a bottom diameter of 56 (mm) and a height of 34 (mm), and the cone surface has a grayscale scene image as a texture.
  • When the cone-shaped object shown in FIG. 20 was photographed with the three-dimensional object measuring apparatus of the present invention, the captured image shown in FIG. 21 was obtained.
  • the shape of the cone object was measured from this image.
  • the SSD window size in this embodiment was set to 5 ⁇ 5 (pixels).
  • The scale s in Equation 5 above was varied in steps of 0.1 from 0.5 to 2.0, and the shape was measured by stereo vision using the image directly observed by the camera and the image reflected once by the cylindrical mirror.
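The scale sweep described here can be sketched as follows: for each s from 0.5 to 2.0 in steps of 0.1, a Target window of height s times the Source height is resampled to the Source height (nearest-neighbour) and scored by SSD, and the best-scoring scale is kept. This is a 1-D sketch on synthetic data, not the measured images.

```python
import numpy as np

def best_scale(source, target_strip, scales):
    """Return (scale, ssd) minimizing the SSD between the Source and a
    resampled Target window of height scale * len(source)."""
    n = len(source)
    best = None
    for s in scales:
        m = max(1, int(round(float(s) * n)))   # scaled Target window height
        if m > len(target_strip):
            continue                           # window falls outside the strip
        idx = np.round(np.linspace(0, m - 1, n)).astype(int)
        cand = np.asarray(target_strip[:m], float)[idx]
        d = float(np.sum((np.asarray(source, float) - cand) ** 2))
        if best is None or d < best[1]:
            best = (round(float(s), 1), d)
    return best

source = np.array([1.0, 2.0, 3.0, 4.0])
strip = np.repeat(source, 2)                   # the same profile stretched 2x
result = best_scale(source, strip, np.arange(0.5, 2.01, 0.1))
```

For this synthetic strip, which is the Source profile stretched by a factor of two, the sweep finds a scale whose resampled window matches the Source exactly (SSD of zero).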
  • FIG. 22 shows the result of cone shape measurement by the three-dimensional object measurement apparatus of the present invention.
  • FIG. 23 shows a scatter diagram in which the horizontal axis represents the distance from the center of the image and the vertical axis represents the height of the cone shape measured by the three-dimensional object measuring apparatus of the present invention.
  • It has been shown that this 3D object measurement device can measure the entire shape of an object with a simple device configuration consisting only of a camera and a cylindrical mirror and a simple shooting process of capturing just one image, and that it is useful for whole-circumference measurement.
  • Industrial applicability [0087]
  • the three-dimensional object measurement apparatus of the present invention can obtain a multi-viewpoint image from all directions of an object by one shooting by putting an object in a cylindrical mirror and shooting the object from above with a camera.
  • This 3D object measuring device can be used not only in the education field, but also in the medical field and academic research field.
  • the 3D object measuring device of the present invention allows the user to easily measure the shape of personal belongings or to efficiently measure the entire circumference of many objects because of the simplicity of the device configuration and imaging process. It is useful for the application.
  • Since the entire shape of the object can be measured from a single image, continuous capture makes it possible to record the entire shape of a moving subject, such as an animal, over time.
  • FIG. 1 A diagram schematically showing the basic configuration of the three-dimensional object measuring apparatus according to the first embodiment.
  • FIG. 2 A diagram schematically showing, in longitudinal section, a direct image and a reflected image received by the camera.
  • FIG. 3 A diagram schematically showing an image obtained by photographing a cone-shaped object placed on a pedestal with a camera.
  • FIG.5 Diagram showing a direct image of a pyramid-shaped object placed at the center of the pedestal and a reflection image reflected on the mirror surface of the inner wall of the mirror cylinder
  • FIG. 6 Diagram showing that the geometric relationship of feature points is maintained even when the object is placed at a position deviated from the center of the pedestal.
  • FIG. 7 A diagram showing an example of the trajectory of a ray that leaves a single point on the object and enters the camera after being reflected twice by the mirror.
  • Figure 19 Diagram showing a state where the mirror cylinder and the camera are rotated 45 degrees clockwise in the vertical plane with the center of the pedestal as the center of rotational movement.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The three-dimensional object measuring device comprises a pedestal on which the three-dimensional object to be imaged is placed, a mirror-surfaced cylinder whose inner wall surrounds the pedestal and has a mirror surface, a fisheye lens facing the pedestal with its optical axis aligned with the central axis of the mirror cylinder, and a camera having imaging/recording means for recording the image formed by the fisheye lens. The imaging/recording means of the camera captures a direct image of the three-dimensional object as seen placed on the pedestal and a reflected image of the three-dimensional object on the pedestal mirrored in the mirror surface of the inner wall of the cylinder. As a result, images covering the top and bottom, the left and right, and the entire circumference of the three-dimensional object used for stereoscopic measurement can be obtained from a single shot, the correspondence between feature points of the images can be determined easily, a multi-viewpoint image of the object from all directions can be obtained by a single imaging operation, and the three-dimensional whole-circumference shape of the object can be measured from one image by catadioptric stereo.
PCT/JP2005/024098 2005-01-13 2005-12-28 Dispositif de mesure d'objet tridimensionnel WO2006075528A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006552886A JP4742190B2 (ja) 2005-01-13 2005-12-28 3次元オブジェクト計測装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005-006538 2005-01-13
JP2005006538 2005-01-13

Publications (1)

Publication Number Publication Date
WO2006075528A1 true WO2006075528A1 (fr) 2006-07-20

Family

ID=36677556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/024098 WO2006075528A1 (fr) 2005-01-13 2005-12-28 Dispositif de mesure d'objet tridimensionnel

Country Status (2)

Country Link
JP (1) JP4742190B2 (fr)
WO (1) WO2006075528A1 (fr)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013539273A (ja) * 2010-08-09 2013-10-17 クゥアルコム・インコーポレイテッド 立体カメラのためのオートフォーカス
US9294672B2 (en) 2014-06-20 2016-03-22 Qualcomm Incorporated Multi-camera system using folded optics free from parallax and tilt artifacts
US9374516B2 (en) 2014-04-04 2016-06-21 Qualcomm Incorporated Auto-focus in low-profile folded optics multi-camera system
US9386222B2 (en) 2014-06-20 2016-07-05 Qualcomm Incorporated Multi-camera system using folded optics free from parallax artifacts
US9383550B2 (en) 2014-04-04 2016-07-05 Qualcomm Incorporated Auto-focus in low-profile folded optics multi-camera system
US9398264B2 (en) 2012-10-19 2016-07-19 Qualcomm Incorporated Multi-camera system using folded optics
US9438889B2 (en) 2011-09-21 2016-09-06 Qualcomm Incorporated System and method for improving methods of manufacturing stereoscopic image sensors
US9541740B2 (en) 2014-06-20 2017-01-10 Qualcomm Incorporated Folded optic array camera using refractive prisms
US9549107B2 (en) 2014-06-20 2017-01-17 Qualcomm Incorporated Autofocus for folded optic array cameras
US9819863B2 (en) 2014-06-20 2017-11-14 Qualcomm Incorporated Wide field of view array camera for hemispheric and spherical imaging
US9832381B2 (en) 2014-10-31 2017-11-28 Qualcomm Incorporated Optical image stabilization for thin cameras
US10013764B2 (en) 2014-06-19 2018-07-03 Qualcomm Incorporated Local adaptive histogram equalization
US10178373B2 (en) 2013-08-16 2019-01-08 Qualcomm Incorporated Stereo yaw correction using autofocus feedback
CN113688846A (zh) * 2021-08-24 2021-11-23 成都睿琪科技有限责任公司 物体尺寸识别方法、可读存储介质及物体尺寸识别系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102517054B1 (ko) * 2020-08-13 2023-03-31 연세대학교 산학협력단 원통형 컨볼루션 네트워크 연산 장치와 이를 이용한 객체 인식 및 시점 추정장치 및 방법
CN114279361B (zh) * 2021-12-27 2023-08-22 哈尔滨工业大学芜湖机器人产业技术研究院 一种筒形零件内壁缺陷尺寸三维测量系统及其测量方法

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001330915A (ja) * 2000-05-23 2001-11-30 Olympus Optical Co Ltd 立体画像撮影方法及び撮影補助具
JP2002095016A (ja) * 2000-09-20 2002-03-29 Fuji Photo Film Co Ltd 画像撮像装置及び画像撮像方法

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001330915A (ja) * 2000-05-23 2001-11-30 Olympus Optical Co Ltd 立体画像撮影方法及び撮影補助具
JP2002095016A (ja) * 2000-09-20 2002-03-29 Fuji Photo Film Co Ltd 画像撮像装置及び画像撮像方法

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9485495B2 (en) 2010-08-09 2016-11-01 Qualcomm Incorporated Autofocus for stereo images
JP2013539273A (ja) * 2010-08-09 2013-10-17 クゥアルコム・インコーポレイテッド 立体カメラのためのオートフォーカス
US9438889B2 (en) 2011-09-21 2016-09-06 Qualcomm Incorporated System and method for improving methods of manufacturing stereoscopic image sensors
US9838601B2 (en) 2012-10-19 2017-12-05 Qualcomm Incorporated Multi-camera system using folded optics
US10165183B2 (en) 2012-10-19 2018-12-25 Qualcomm Incorporated Multi-camera system using folded optics
US9398264B2 (en) 2012-10-19 2016-07-19 Qualcomm Incorporated Multi-camera system using folded optics
US10178373B2 (en) 2013-08-16 2019-01-08 Qualcomm Incorporated Stereo yaw correction using autofocus feedback
US9374516B2 (en) 2014-04-04 2016-06-21 Qualcomm Incorporated Auto-focus in low-profile folded optics multi-camera system
US9383550B2 (en) 2014-04-04 2016-07-05 Qualcomm Incorporated Auto-focus in low-profile folded optics multi-camera system
US9973680B2 (en) 2014-04-04 2018-05-15 Qualcomm Incorporated Auto-focus in low-profile folded optics multi-camera system
US9860434B2 (en) 2014-04-04 2018-01-02 Qualcomm Incorporated Auto-focus in low-profile folded optics multi-camera system
US10013764B2 (en) 2014-06-19 2018-07-03 Qualcomm Incorporated Local adaptive histogram equalization
US9819863B2 (en) 2014-06-20 2017-11-14 Qualcomm Incorporated Wide field of view array camera for hemispheric and spherical imaging
US9843723B2 (en) 2014-06-20 2017-12-12 Qualcomm Incorporated Parallax free multi-camera system capable of capturing full spherical images
US9854182B2 (en) 2014-06-20 2017-12-26 Qualcomm Incorporated Folded optic array camera using refractive prisms
US9733458B2 (en) 2014-06-20 2017-08-15 Qualcomm Incorporated Multi-camera system using folded optics free from parallax artifacts
US9549107B2 (en) 2014-06-20 2017-01-17 Qualcomm Incorporated Autofocus for folded optic array cameras
US9541740B2 (en) 2014-06-20 2017-01-10 Qualcomm Incorporated Folded optic array camera using refractive prisms
US10084958B2 (en) 2014-06-20 2018-09-25 Qualcomm Incorporated Multi-camera system using folded optics free from parallax and tilt artifacts
US9386222B2 (en) 2014-06-20 2016-07-05 Qualcomm Incorporated Multi-camera system using folded optics free from parallax artifacts
US9294672B2 (en) 2014-06-20 2016-03-22 Qualcomm Incorporated Multi-camera system using folded optics free from parallax and tilt artifacts
US9832381B2 (en) 2014-10-31 2017-11-28 Qualcomm Incorporated Optical image stabilization for thin cameras
CN113688846A (zh) * 2021-08-24 2021-11-23 Chengdu Ruiqi Technology Co., Ltd. Object size recognition method, readable storage medium, and object size recognition system
CN113688846B (zh) * 2021-08-24 2023-11-03 Chengdu Ruiqi Technology Co., Ltd. Object size recognition method, readable storage medium, and object size recognition system

Also Published As

Publication number Publication date
JP4742190B2 (ja) 2011-08-10
JPWO2006075528A1 (ja) 2008-08-07

Similar Documents

Publication Publication Date Title
JP4742190B2 (ja) Three-dimensional object measuring apparatus
CN111060023B (zh) Device and method for high-precision 3D information acquisition
CN111292364B (zh) Method for fast image matching during three-dimensional model construction
CN110543871B (zh) Point-cloud-based 3D comparison measurement method
Grossberg et al. The raxel imaging model and ray-based calibration
US7176960B1 (en) System and methods for generating spherical mosaic images
JP5872818B2 (ja) Positioning processing device, positioning processing method, and image processing device
US20130335535A1 (en) Digital 3d camera using periodic illumination
JP6862569B2 (ja) Virtual ray tracing method and dynamic refocusing display system for light fields
US7042508B2 (en) Method for presenting fisheye-camera images
JP2004536351A (ja) Method for capturing a panoramic image using a rectangular image sensor
CN111340959B (zh) Histogram-matching-based seamless texture mapping method for three-dimensional models
JP2019525509A (ja) Horizontal-parallax stereo panorama capture method
WO2008034942A1 (fr) Method and apparatus for forming panoramic stereoscopic images
JPH1195344A (ja) Omnidirectional stereo image capturing apparatus
CN106934110B (zh) Back-projection method and device for reconstructing a light field from a focal stack
CN113538552B (zh) Image-sorting-based method for matching 3D information composite images
US10222596B2 (en) Omnidirectional catadioptric lens structure
Bazeille et al. Light-field image acquisition from a conventional camera: design of a four minilens ring device
Zhao et al. Removal of parasitic image due to metal specularity based on digital micromirror device camera
JPH05346950A (ja) Method and apparatus for sensing a three-dimensional scene
WO2001022728A1 (fr) Systems and methods for generating spherical mosaic images
JP7166607B2 (ja) Method for manufacturing and method for designing an omnidirectional camera device
CN208353467U (zh) Panoramic image data processing and synthesis system
Kontogianni et al. Evaluating the Effect of Using Mirrors in 3D Reconstruction of Small Artefacts

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
WWE Wipo information: entry into national phase
Ref document number: 2006552886
Country of ref document: JP
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 05822273
Country of ref document: EP
Kind code of ref document: A1
WWW Wipo information: withdrawn in national office
Ref document number: 5822273
Country of ref document: EP