WO2020059181A1 - Imaging device and imaging method - Google Patents

Imaging device and imaging method

Info

Publication number
WO2020059181A1
Authority
WO
WIPO (PCT)
Prior art keywords
pattern
image
imaging device
sensor
imaging
Prior art date
Application number
PCT/JP2019/010263
Other languages
French (fr)
Japanese (ja)
Inventor
悠介 中村
啓太 山口
和幸 田島
Original Assignee
株式会社日立製作所
Priority date
Filing date
Publication date
Application filed by 株式会社日立製作所 (Hitachi, Ltd.)
Publication of WO2020059181A1 publication Critical patent/WO2020059181A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • the present invention relates to an imaging device and an imaging method.
  • The present invention claims priority from Japanese Patent Application No. 2018-173930, filed on September 18, 2018; for designated states that permit incorporation by reference, the contents described in that application are incorporated into this application by reference.
  • As background art in this technical field, there is JP-A-2018-61109 (Patent Document 1).
  • This publication describes an imaging device comprising “a modulator having a first pattern and modulating the intensity of light, an image sensor that converts light transmitted through the modulator into image data and outputs the image data, and an image processing unit that restores an image based on a cross-correlation operation between the image data and pattern data indicating a second pattern.”
  • An object of the present invention is to provide a technique for obtaining a correct developed image by correcting assembly errors of the imaging device, manufacturing errors of the photographing pattern, and distortion associated with focus adjustment.
  • To solve the above problem, an imaging device according to one aspect of the present invention includes: an image sensor that converts light into an electric signal and generates a sensor image; a modulator that modulates, based on a photographing pattern, the intensity of the light detected by the image sensor; and a parameter storage unit that stores parameters used to execute a predetermined correction process on a plurality of the sensor images captured with different photographing patterns.
  • FIG. 1 is a diagram illustrating a configuration example of an imaging device according to a first embodiment of the present invention.
  • FIG. 3 is a diagram illustrating a configuration example of an imaging module according to the first embodiment.
  • FIG. 4 is a diagram illustrating a configuration example of another imaging module according to the first embodiment.
  • FIG. 4 is a diagram illustrating an example of a photographing pattern and a developing pattern according to the first embodiment.
  • FIG. 6 is a diagram illustrating another example of the photographing pattern and the developing pattern according to the first embodiment.
  • FIG. 9 is a diagram illustrating an example in which an in-plane shift occurs in a projected image from a pattern substrate surface to an image sensor due to obliquely incident parallel light.
  • FIG. 3 is a diagram illustrating an example of a projected image of a photographing pattern.
  • FIG. 4 is a diagram illustrating an example of a development pattern.
  • FIG. 6 is a diagram illustrating an example of a developed image by a correlation development method.
  • It is a diagram illustrating an example of moiré fringes by the moiré development method.
  • FIG. 4 is a diagram illustrating an example of a developed image by a moiré developing method.
  • FIG. 9 is a diagram illustrating an example of a combination of imaging patterns of an initial phase in a fringe scan.
  • FIG. 4 is a diagram illustrating an example of a shooting pattern of a space division fringe scan.
  • FIG. 9 is a diagram illustrating an example of a processing flow of fringe scan.
  • FIG. 7 is a diagram illustrating an example of a processing flow of a development process by a correlation development method.
  • FIG. 4 is a diagram illustrating an example of a processing flow of a development process by a moire development method.
  • FIG. 4 is a diagram illustrating an example of projection of a shooting pattern when an object is at an infinite distance.
  • FIG. 9 is a diagram illustrating an example of an enlarged projection of a photographing pattern when an object is at a finite distance.
  • FIG. 6 is a diagram illustrating a configuration example of an imaging device according to a second embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an example of a positional relationship between a point light source and an imaging device.
  • It is a diagram illustrating an example of a sensor image for each distance of a point light source.
  • FIG. 9 is a diagram illustrating an example of pattern center coordinates when a point light source is at infinity.
  • FIG. 9 is a diagram illustrating an example of pattern center coordinates when a point light source is at a finite distance.
  • FIG. 9 is a diagram illustrating an example of correcting a sensor image when a point light source is at a finite distance.
  • FIG. 5 is a diagram illustrating an example of a sensor image when a shooting pattern is rotating.
  • FIG. 9 is a diagram illustrating an example of pattern center coordinates when a shooting pattern is rotating.
  • It is a diagram illustrating an example of a state where the photographing pattern is inclined in the thickness direction.
  • It is a diagram illustrating an example of a sensor image in a state where the photographing pattern is inclined in the thickness direction.
  • It is a diagram illustrating an example of pattern center coordinates in a state where the photographing pattern is inclined in the thickness direction.
  • FIG. 9 is a diagram illustrating a configuration example of an imaging device according to a third embodiment of the present invention.
  • FIG. 14 is a diagram illustrating a configuration example of an imaging device according to a modification of the third embodiment of the present invention.
  • FIG. 9 is a diagram illustrating an example of a processing flow of calibration.
  • FIG. 11 is a diagram illustrating an example of a sensor image distortion correction processing flow.
  • FIG. 14 is a diagram illustrating a configuration example of an imaging device according to a fourth embodiment of the present invention.
  • FIG. 7 is a diagram illustrating an example of a distortion correction processing flow of a development pattern.
  • It is a diagram illustrating a configuration example of an imaging device according to a fifth embodiment of the present invention.
  • FIG. 9 is a diagram illustrating an example of an image when dirt or the like is attached to a shooting pattern.
  • It is a diagram illustrating an example of the luminance distribution of an ideal sensor image.
  • FIG. 7 is a diagram illustrating an example of a luminance distribution of a defective sensor image.
  • FIG. 9 is a diagram illustrating an example of defect detection by inversion pattern synthesis.
  • In the following embodiments, when referring to the number of elements (including counts, numerical values, amounts, ranges, etc.), the number is not limited to the specific number mentioned, and may be more or less than that number, except where explicitly stated or clearly limited to a specific number in principle.
  • Likewise, the constituent elements (including element steps) are not necessarily essential, unless explicitly stated or considered clearly essential in principle.
  • There is an imaging method that achieves a thinner and lower-cost device by obtaining an object image without using a lens.
  • In such a method, however, the computation for solving the inverse problem by signal processing becomes complicated and the processing load is high, which raises the hardware requirements of the information device.
  • FIG. 1 is a diagram showing a configuration example of an imaging apparatus according to a first embodiment of the present invention.
  • the image capturing apparatus 101 acquires an image of an external object without using a lens to form an image.
  • As shown in FIG. 1, the imaging device 101 is composed of an imaging module 102, a fringe scan processing unit 106, an image processing unit 107, and a controller 108. FIG. 2 shows an example of the imaging module 102.
  • FIG. 2 is a diagram illustrating a configuration of the imaging module according to the first embodiment.
  • the imaging module 102 includes an image sensor 103, a pattern substrate 104, and an imaging pattern 105.
  • the pattern substrate 104 is fixed in close contact with the light receiving surface of the image sensor 103, and a pattern 105 for photographing is formed on the pattern substrate 104.
  • the pattern substrate 104 is made of a material that is transparent to visible light, such as glass or plastic.
  • the photographing pattern 105 is a concentric lattice pattern in which the distance between the lattice patterns, that is, the pitch, is reduced in inverse proportion to the radius from the center toward the outside.
  • the imaging pattern 105 is formed by depositing a metal such as aluminum or chromium by, for example, a sputtering method used in a semiconductor process. Shading is given by the pattern with and without the metal deposited. Note that the formation of the photographing pattern 105 is not limited to this, and may be formed by shading, for example, by printing with an inkjet printer or the like. Further, here, the visible light has been described as an example.
  • When photographing far-infrared rays, for example, the pattern substrate 104 may be made of a material transparent to far-infrared rays, such as germanium, silicon, or chalcogenide.
  • In general, a material transparent to the wavelength to be photographed may be used for the pattern substrate 104, and a material that blocks that wavelength, such as a metal, may be used for the imaging pattern 105.
  • the pattern substrate 104 and the imaging pattern 105 can also be said to be modulators that modulate the intensity of light incident on the image sensor 103. Note that, here, a method of forming the imaging pattern 105 on the pattern substrate 104 in order to realize the imaging module 102 has been described. However, the imaging module 102 may be realized by a configuration as shown in FIG.
  • FIG. 3 is a diagram illustrating a configuration example of another imaging module according to the first embodiment.
  • the imaging pattern 105 is formed in a thin film and is held by the support member 301.
  • The angle of view can be changed by the thickness of the pattern substrate 104. Therefore, for example, if the imaging module has the configuration shown in FIG. 3 together with a function of changing the length of the support member 301, the angle of view can be changed at the time of shooting.
  • the pixels 103a which are light receiving elements, are regularly arranged in a grid pattern.
  • the image sensor 103 converts a light image received by the pixel 103a into an image signal which is an electric signal.
  • the intensity of light transmitted through the imaging pattern 105 is modulated by the pattern, and the transmitted light is received by the image sensor 103.
  • the image sensor 103 is, for example, a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
  • the image signal output from the image sensor 103 is subjected to processing such as noise removal by the fringe scan processing unit 106, and the data processed by the image processing unit 107 is output to the controller 108.
  • the controller 108 converts the data format so as to conform to an interface such as USB (Universal Serial Bus) and outputs the converted data.
  • The photographing pattern 105 is a concentric pattern whose pitch becomes finer in inverse proportion to the radius from the center; it is defined by equation (1) using the radius r measured from the reference coordinates at the center of the concentric circles and a coefficient β. A pattern whose transmittance is modulated in this way is called a Gabor zone plate (GZP), and its binarized version is called a Fresnel zone plate (FZP).
  • FIG. 4 is a diagram showing an example of a photographing pattern and a developing pattern according to the first embodiment. Specifically, FIG. 4 is an example of a Gabor zone plate represented by the above equation (1).
  • FIG. 5 is a diagram showing another example of the photographing pattern and the developing pattern according to the first embodiment. Specifically, this is an example of a Fresnel zone plate using a pattern obtained as a result of binarizing Expression (1) with a threshold value of 1.
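  • As a point of reference, the two patterns can be generated numerically. The sketch below assumes the commonly used normalized form (1 + cos(βr² + Φ))/2 for the Gabor zone plate and a mid-level binarization threshold; the patent's exact equation (1) and its threshold convention are not reproduced here, and the function names are illustrative only.

```python
import numpy as np

def gabor_zone_plate(size, beta, phase=0.0):
    """Gabor zone plate (GZP): concentric pattern whose pitch falls off
    in inverse proportion to the radius.  `phase` plays the role of the
    initial phase used later in the fringe scan."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2].astype(float)
    r2 = x ** 2 + y ** 2
    return 0.5 * (1.0 + np.cos(beta * r2 + phase))

def fresnel_zone_plate(size, beta, phase=0.0):
    """Fresnel zone plate (FZP): the GZP binarized at its mid level
    (an assumed threshold)."""
    return (gabor_zone_plate(size, beta, phase) >= 0.5).astype(float)
```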
  • Consider the pattern substrate 104 of thickness d on which the photographing pattern 105 is formed, with a parallel beam incident at an angle θ₀ in the x-axis direction, as shown in FIG. 6.
  • When the refraction angle in the pattern substrate 104 is θ, the projected image of the pattern shifts on the image sensor by k = d·tan θ, as in equation (2).
  • FIG. 7 shows an example of a projected image of the photographing pattern 105.
  • FIG. 7 is a view showing an example of the projected image of the photographing pattern. As shown in FIG. 7, when parallel light is incident as in FIG. 6, the image of the photographing pattern 105 is projected onto the image sensor 103 shifted by k, as in equation (2) above. This is the output of the imaging module 102.
  • The image processing unit 107 performs the development processing. Here, the development processing by the correlation development method and by the moiré development method will be described.
  • In the correlation development method, the image processing unit 107 calculates the cross-correlation function between the projected image of the photographing pattern 105 shown in FIG. 7 and the development pattern 801 shown in FIG. 8, whereby a bright point with shift amount k can be obtained.
  • If this cross-correlation is computed directly as a two-dimensional convolution, the amount of calculation becomes large.
  • FIG. 8 is a diagram showing an example of a development pattern.
  • The development pattern 801 is a pattern similar to the Gabor zone plate shown in FIG. 4 or the Fresnel zone plate (FZP) shown in FIG. 5. In the present embodiment, the development pattern 801 need not exist as a physical entity; it may exist only as information used in the image processing.
  • FIG. 9 is a diagram showing an example of a developed image by the correlation developing method.
  • When the image is developed by the correlation development method, as described above, a developed image in which the bright point is shifted by k is obtained.
  • When the development pattern 801 uses a Gabor zone plate or a Fresnel zone plate like the imaging pattern 105, it is expressed using an initial phase Φ.
  • Here, F represents the Fourier transform operation, u is the frequency coordinate in the x direction, and δ(·) is the delta function.
  • the equation after the Fourier transform is also a Fresnel zone plate or a Gabor zone plate. Therefore, the development pattern after Fourier transform may be directly generated based on this equation. As a result, the amount of calculation can be reduced.
  • The exponential term exp(-iku) is the signal component; when this term is Fourier-transformed, it yields a bright point.
  • This bright point indicates a light beam at infinity, and is nothing other than the captured image obtained by the imaging device 101 of FIG. 1.
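  • For reference, this step can be written out explicitly. The derivation below assumes a symmetric Fourier-pair convention (the patent's own equation numbering and conventions are not reproduced here):

```latex
\mathcal{F}^{-1}\!\left[e^{-iku}\right](x)
  = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-iku}\, e^{iux}\, du
  = \delta(x - k)
```

so the developed image is a delta-function bright point displaced by the shift k, consistent with the description above.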
  • The pattern is not limited to a Fresnel zone plate or a Gabor zone plate; for example, a random pattern may be used as long as the autocorrelation function of the pattern has a single peak.
  • In the moiré development method, moiré fringes as shown in FIG. 10 are generated by multiplying the projected image of the photographing pattern 105 shown in FIG. 7 by the development pattern 801 shown in FIG. 8; Fourier-transforming these fringes yields a bright spot with a shift amount of kβ/π, as shown in FIG. 11.
  • FIG. 10 is a diagram showing an example of moire fringes by the moire development method. Specifically, as shown in FIG. 10, the result of multiplication of the projected image of the photographing pattern 105 shown in FIG. 7 and the development pattern 801 shown in FIG. 8 is obtained as moire fringes.
  • The third term of this expansion is the signal component; it shows that straight, equally spaced fringes are formed over the entire overlapping region, running in the direction of the displacement between the two patterns.
  • a fringe generated at a relatively low spatial frequency due to the superposition of such fringes is called a Moire fringe.
  • Here again, F represents the Fourier transform operation, u is the frequency coordinate in the x direction, and δ(·) is the delta function.
  • FIG. 11 is a view showing an example of a developed image by the moiré developing method.
  • the moire fringe may be realized by a pattern other than the Fresnel zone plate or the Gabor zone plate, for example, an elliptical pattern.
  • FIG. 12 shows an example of a plurality of patterns.
  • FIG. 12 is a diagram showing an example of a combination of imaging patterns of an initial phase in a fringe scan.
  • By this fringe scan operation, a complex sensor image is obtained.
  • The initial phases Φ may be set to any values that equally divide the angle between 0 and 2π, and are not limited to the phases shown here.
  • a method of switching patterns by time division in fringe scan processing and a method of switching patterns by space division can be considered.
  • In the time division fringe scan, for example, a liquid crystal display element capable of electrically switching and displaying the plurality of initial phases shown in FIG. 12 may be used as the photographing pattern 105.
  • the switching timing of the liquid crystal display element and the shutter timing of the image sensor 103 are controlled in synchronization, and after acquiring four images in time series, the fringe scan processing unit 106 performs a fringe scan operation.
  • In the space division fringe scan, the fringe scan processing unit 106 divides the image into four images corresponding to the respective initial-phase patterns, and then performs the fringe scan operation.
  • FIG. 13 is a diagram showing an example of a photographing pattern of the space division fringe scan.
  • FIG. 14 is a diagram showing an example of a processing flow of the fringe scan.
  • the fringe scan processing unit 106 acquires sensor images based on a plurality of photographing patterns output from the image sensor 103.
  • When the space division fringe scan is adopted, the acquired sensor image needs to be divided according to the individual photographing patterns, so a plurality of sensor images are obtained by dividing the image into predetermined areas (step 1401).
  • When the time division fringe scan is adopted, no division is performed, because a plurality of sensor images with different photographing patterns are obtained sequentially over time.
  • the fringe scan processing unit 106 initializes a complex sensor image for output (step 1402).
  • the fringe scan processing unit 106 acquires the sensor image of the first initial phase ⁇ (step 1403).
  • the fringe scan processing unit 106 multiplies exp (i ⁇ ) according to the initial phase ⁇ (step 1404).
  • The fringe scan processing unit 106 adds the multiplication result to the complex sensor image (step 1405). The processing of steps 1403 to 1405 is repeated for every initial phase (step 1406).
  • Finally, the fringe scan processing unit 106 outputs the complex sensor image (step 1407).
  • the processing by the fringe scan processing unit 106 in steps 1401 to 1407 described above corresponds to the above equation (10).
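  • As a point of reference, the synthesis in steps 1402 to 1407 can be sketched as follows. This is a minimal numpy sketch assuming the sensor images are already divided per initial phase; array shapes, normalization, and function names are assumptions, not taken from the patent.

```python
import numpy as np

def fringe_scan(sensor_images, phases):
    """Synthesize a complex sensor image from sensor images captured
    with photographing patterns of different initial phases
    (steps 1402-1407)."""
    complex_image = np.zeros(sensor_images[0].shape, dtype=np.complex128)  # step 1402
    for img, phi in zip(sensor_images, phases):  # steps 1403 and 1406 (loop over phases)
        complex_image += img * np.exp(1j * phi)  # steps 1404-1405
    return complex_image                         # step 1407

# Example: four initial phases equally dividing 0 to 2*pi
# phases = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
```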
  • image processing in the image processing unit 107 will be described.
  • FIG. 15 is a diagram showing an example of a processing flow of the development processing by the correlation development method.
  • First, the image processing unit 107 obtains the complex sensor image output from the fringe scan processing unit 106, and performs a two-dimensional fast Fourier transform (FFT) on it (step 1501).
  • the image processing unit 107 generates a predetermined development pattern 801 to be used for the development process, and multiplies the complex sensor image subjected to the two-dimensional FFT operation (step 1502).
  • the image processing unit 107 performs an inverse two-dimensional FFT operation (step 1503).
  • the result of this operation is a complex number.
  • The image processing unit 107 then takes the absolute value or the real part of the inverse two-dimensional FFT result, thereby converting the image to be photographed into real values and developing it (step 1504).
  • the image processing unit 107 performs contrast enhancement processing on the obtained developed image (step 1505). Further, the image processing unit 107 performs color balance adjustment (step 1506) and the like, and outputs the captured image.
  • the above is the development processing by the correlation development method.
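  • As a point of reference, a minimal sketch of this FFT-based flow is given below. Whether the generated development pattern already incorporates the complex conjugation that turns the frequency-domain product into a cross-correlation is not stated in the text, so the conjugation here is an assumption, as are the function names.

```python
import numpy as np

def correlation_develop(complex_sensor_image, develop_pattern):
    """Correlation development (steps 1501-1504): cross-correlation
    with the development pattern computed via FFT."""
    spec = np.fft.fft2(complex_sensor_image)        # step 1501: 2-D FFT of sensor image
    spec *= np.conj(np.fft.fft2(develop_pattern))   # step 1502: multiply (conjugate -> correlation)
    developed = np.fft.ifft2(spec)                  # step 1503: inverse 2-D FFT
    return np.abs(developed)                        # step 1504: realization (absolute value)
```

Contrast enhancement (step 1505) and color balance adjustment (step 1506) would follow as ordinary post-processing.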
  • FIG. 16 is a diagram showing an example of a processing flow of a developing process by the moiré developing method.
  • The image processing unit 107 acquires the complex sensor image output from the fringe scan processing unit 106, generates the predetermined development pattern 801 used for the development processing, and multiplies the complex sensor image by it (step 1601).
  • the image processing unit 107 obtains a frequency spectrum by a two-dimensional FFT operation (step 1602).
  • the image processing unit 107 cuts out data of a necessary frequency region from the frequency spectrum obtained in step 1602 (step 1603).
  • The subsequent realization processing (step 1504), contrast enhancement processing (step 1505), and color balance adjustment (step 1506) are the same as steps 1504 to 1506 shown in FIG. 15.
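  • For reference, a minimal sketch of the moiré flow follows. The `crop` argument selecting the frequency region is hypothetical; the patent does not specify how the region is chosen.

```python
import numpy as np

def moire_develop(complex_sensor_image, develop_pattern, crop):
    """Moire development (steps 1601-1603): multiply in the spatial
    domain, FFT, then cut out the required frequency region.
    `crop` is a (row_slice, col_slice) pair."""
    moire = complex_sensor_image * develop_pattern   # step 1601: moire fringes
    spectrum = np.fft.fftshift(np.fft.fft2(moire))   # step 1602: frequency spectrum
    signal = spectrum[crop]                          # step 1603: cut out signal region
    return np.abs(signal)                            # realization as in step 1504
```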
  • FIG. 17 shows the manner in which the photographic pattern 105 is projected onto the image sensor 103 when the subject described above is sufficiently far (in the case of infinity).
  • FIG. 17 is a diagram illustrating an example of projection of a photographing pattern when an object is at an infinite distance.
  • a spherical wave from a point 1701 constituting a distant object becomes a plane wave while propagating a sufficiently long distance and irradiates the imaging pattern 105.
  • When the projected image 1702 is projected on the image sensor 103, it has almost the same shape as the photographing pattern 105.
  • a single luminescent spot can be obtained by performing development processing on the projection image 1702 using the development pattern.
  • imaging of an object at a finite distance will be described.
  • FIG. 18 is a diagram showing an example of an enlarged projection of the photographing pattern when the object is at a finite distance.
  • the projection of the imaging pattern 105 onto the image sensor 103 is enlarged more than the imaging pattern 105.
  • A spherical wave from a point 1801 constituting the object irradiates the imaging pattern 105, and the projected image 1802 projected on the image sensor 103 is enlarged almost uniformly. This enlargement factor α can be calculated using the distance f from the photographing pattern 105 to the point 1801.
  • If the development pattern 801 is enlarged in accordance with this uniformly enlarged projected image of the photographing pattern 105, a single bright point can again be obtained for the enlarged projected image 1802.
  • To this end, it suffices to correct the coefficient β of the development pattern 801 to β/α².
  • light from point 1801 at a distance that is not necessarily infinity can be selectively reproduced.
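  • For reference, this geometry can be written as formulas. The form of α below is an assumption consistent with the text (a point at distance f projecting a pattern located at thickness d from the sensor), since the patent's own equations are not reproduced here:

```latex
\alpha = \frac{f + d}{f}, \qquad \beta' = \frac{\beta}{\alpha^{2}}
```

Setting f in these formulas selects the distance at which the development focuses, which is what the focus setting unit described next exploits.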
  • FIG. 19 shows the configuration in this case.
  • FIG. 19 is a diagram showing a configuration of an imaging device according to the second embodiment of the present invention.
  • the imaging device according to the second embodiment basically has the same configuration as the imaging device according to the first embodiment. However, what differs from the first embodiment is the presence of a focus setting unit 1901.
  • the focus setting unit 1901 receives the setting of the focus distance by using a knob provided on the imaging apparatus 101 or a GUI (Graphical User Interface) of a smartphone, and outputs the focus distance information to the image processing unit 107.
  • the fact that focus adjustment after shooting is possible means that the image processing unit 107 has depth information, and various functions such as autofocus and distance measurement can be realized in the image processing unit 107. .
  • the processing using the development pattern 801 can be performed independently, and the processing can be simplified.
  • FIG. 20 is a diagram illustrating an example of a positional relationship between a point light source and an imaging device. As shown in FIG. 20, a case where the object to be imaged is at a far distance f (point light source 2001) and a case where it is at a short distance f '(point light source 2001') are considered.
  • FIG. 21 is a diagram illustrating an example of a sensor image for each distance of a point light source.
  • Light from the object to be imaged passes through the photographing pattern 105 of the space division fringe scan, and the image projected onto the image sensor 103 changes depending on whether the object is at the distance f or at the distance f′.
  • In the space division fringe scan, the fringe scan processing unit 106 divides the image into four in correspondence with the respective initial-phase patterns, but the position of each pattern changes with the distance. For example, the center of the concentric circles in the first quadrant 1302 shifts to the upper right in the first quadrant 2102 at the distance f′. Thus, if each quadrant is always divided at the same place, a mismatch occurs and the effect of the fringe scan is reduced.
  • If the sensor image is corrected so that the pattern centers coincide, the data may simply be divided into four images, one per quadrant.
  • the center of each pattern when the center of the sensor image in FIG. 21 is arranged at the origin is indicated by x.
  • FIG. 22 is a diagram showing an example of pattern center coordinates when the point light source is at infinity.
  • the coordinates of the pattern center 2201 in the first quadrant are represented by (x0, y0).
  • FIG. 23 is a diagram showing an example of the pattern center coordinates when the point light source is at a finite distance.
  • the coordinates of the pattern center 2301 in the first quadrant are represented by (x1, y1).
  • To convert the coordinates (x1, y1) of the pattern center 2301 to the coordinates (x0, y0) of the pattern center 2201, the coordinate conversion may be performed using a matrix M that specifies the amount of movement on the sensor surface.
  • FIG. 24 shows a sensor image obtained as a result of the conversion.
  • FIG. 24 is a diagram illustrating an example of correcting the sensor image when the point light source is at a finite distance. As shown in FIG. 24, the image is reduced so that the same position is always the center of each pattern. Therefore, in this example, dividing the image into four quadrants yields the sensor image of each quadrant. Since the image is reduced, the obtained sensor image is smaller than the image size before correction. In this case, the margin area 2401 may be filled with a constant such as 0, or marked as a NAN value and excluded from use.
  • FIG. 25 shows an example of a sensor image in a case where the photographing pattern 105 is attached with a shift.
  • FIG. 25 is a diagram illustrating an example of a sensor image when the shooting pattern is rotating.
  • an example of a sensor image in a case where the photographing pattern 105 is attached by being rotated by an angle ⁇ about the origin is shown.
  • the center coordinates 2501 to 2504 of each pattern are calculated by cross-correlation calculation with a reference pattern (for example, an image obtained by enlarging the photographing pattern 105 at an enlargement ratio ⁇ obtained from the subject distance).
  • the coordinates of the center of gravity O of four points of the center coordinates 2501 to 2504 are calculated, and are arranged so that the center of gravity O overlaps the origin as shown in FIG.
  • FIG. 26 is a diagram showing an example of the pattern center coordinates when the imaging pattern is rotating. At this time, to convert the coordinates (x1, y1) of the pattern center 2601 in the first quadrant of FIG. 26 to the coordinates (x0, y0) of the pattern center 2201 in FIG. 22, the coordinate transformation may be performed using a matrix H that specifies the amount of rotational movement on the sensor surface, as in equation (18).
  • the reason why the matrix M is used in the equation (18) is to correct an image according to the distance f of the point light source 2001 used.
  • the image of FIG. 25 is corrected by rotating the angle ⁇ , and the image always has the same pattern center. Therefore, in the example of FIGS. 25 and 26, the image may be divided in each quadrant.
  • The matrix H shown in equations (18) and (19) is a 2 × 2 matrix, generally called an affine matrix, and its elements can be calculated from the correspondence of the four coordinate pairs (in this example, the center coordinates 2201 to 2204 and the center coordinates 2501 to 2504 of each pattern). Strictly, two point correspondences suffice to determine the matrix, but a more accurate matrix can be obtained by applying the least squares method to all four points.
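  • As a point of reference, such a least-squares fit can be sketched as follows; the function name and the assumption that the two point sets are ordered correspondingly are illustrative, not from the patent.

```python
import numpy as np

def estimate_linear_map(src_pts, dst_pts):
    """Estimate the 2x2 matrix H with dst ~= H @ src by least squares
    from corresponding pattern-center coordinates."""
    src = np.asarray(src_pts, dtype=float)  # e.g. measured centers 2501-2504
    dst = np.asarray(dst_pts, dtype=float)  # e.g. reference centers 2201-2204
    # Each row is a point (x, y); solve src @ H.T = dst in the least-squares sense.
    H_T, _, _, _ = np.linalg.lstsq(src, dst, rcond=None)
    return H_T.T
```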
  • FIG. 27 shows an example of a sensor image when the point light source 2001 is photographed in this state.
  • FIG. 28 is a diagram illustrating an example of a sensor image in a state where the photographing pattern is inclined in the thickness direction.
  • the sensor image has a trapezoidal distortion.
  • the trapezoidal distortion cannot be completely corrected by the affine matrix as in the equation (18), but the trapezoidal distortion can be corrected more easily and accurately by using the homography matrix.
  • the center coordinates 2801 to 2804 of each pattern are calculated by cross-correlation calculation with a reference pattern (for example, an image obtained by enlarging the photographing pattern 105 at an enlargement ratio ⁇ obtained from the subject distance).
  • the coordinates of the center of gravity O of the four points of the center coordinates 2801 to 2804 are calculated, and are arranged so that the center of gravity O overlaps the origin as shown in FIG.
  • FIG. 29 is a diagram illustrating an example of the pattern center coordinates in a state where the photographing pattern is inclined in the thickness direction. At this time, to convert the coordinates (x1, y1) of the pattern center 2801 in the first quadrant of FIG. 29 to the coordinates (x0, y0) of the pattern center 2201 in FIG. 22, the coordinates may be converted using a matrix H that specifies the amount of movement on the sensor surface, as in equation (20).
  • the matrix H is a matrix having a 3 ⁇ 3 element generally called a homography matrix, and elements are determined from a relationship of four coordinates (in this example, center coordinates 2201 to 2204 and center coordinates 2801 to 2804 of each pattern). It is possible to calculate.
  • the reason why the matrix M is used is to correct the image according to the distance f 'of the point light source 2001' used. As a result, the image in FIG. 28 is corrected for trapezoidal distortion (tilt) and rotation, and is always an image having the same pattern center. Therefore, in this example, the image may be divided in each quadrant.
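  • For reference, a homography can be estimated from four point correspondences with the standard direct linear transform (DLT); the patent only states that the elements can be computed from four coordinate pairs, not how, so the following is a sketch.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src -> dst from (at least)
    four point correspondences via the DLT method."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, taken from the SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1
```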
  • FIG. 30 shows a configuration for realizing distortion correction.
  • FIG. 30 is a diagram illustrating a configuration example of an imaging device according to a third embodiment of the present invention.
  • the imaging device shown in FIG. 30 is basically the same as the imaging device according to the second embodiment shown in FIG. 19, but is partially different. The following mainly describes the differences.
  • The imaging apparatus includes a correction value storage unit 3001, serving as a parameter storage unit, that stores measured correction parameters including the matrix H and the matrix M for distortion correction, and a fringe scan processing unit 3002 that performs the distortion correction using those correction parameters.
  • the correction value storage unit 3001 may be included in the imaging module 102.
  • The correction value storage unit 3001 operates as a parameter output unit that outputs the correction values, that is, the parameters used for correction, in response to a request via an API (Application Programming Interface).
  • FIG. 31 is a diagram illustrating a configuration example of an imaging device according to a modification of the third embodiment of the present invention.
  • The form in which the correction value storage unit 3001 is provided in the imaging module 102, as shown in FIG. 31, is more versatile.
  • In either configuration, the method of calculating the correction values (the calibration method) is common. First, this calibration method will be described.
  • FIG. 32 is a diagram illustrating an example of a processing flow of calibration.
  • The operator performing the calibration places a point light source 2001 on the optical axis of the imaging module 102 and irradiates it toward the imaging module 102, as shown in FIG. 20 (step 3201).
  • The point light source 2001 may be any light source whose distance to the subject (or that it is at infinity) is known, such as an LED (Light Emitting Diode) light source or the sun.
  • the operator of the calibration causes the imaging module 102 to acquire a sensor image (Step 3202).
  • Next, the fringe scan processing unit 3002 calculates the center coordinates of the FZA (Fresnel Zone Aperture) (step 3203). More specifically, the fringe scan processing unit 3002 calculates the center coordinates of each pattern (e.g., the center coordinates 2801 to 2804) by a cross-correlation operation with a reference pattern (for example, an image obtained by enlarging the photographing pattern 105 at the enlargement ratio α obtained from the subject distance).
  • the fringe scan processing unit 3002 calculates the center of gravity O from the central coordinate group (step 3204). Then, the fringe scan processing unit 3002 arranges the center of gravity O at the origin.
  • Then, the fringe scan processing unit 3002 calculates the correction matrix H (step 3205). Specifically, the fringe scan processing unit 3002 calculates the 3 × 3 elements of the homography matrix H from the relationship between the displacements of the four center coordinates (e.g., the rotation and trapezoidal distortion between the center coordinates 2201 to 2204 and the center coordinates 2801 to 2804).
  • the fringe scan processing unit 3002 stores the correction matrix H and the position of the center of gravity O as correction parameters in the correction value storage unit 3001 (step 3206).
  • the matrix H for correcting the sensor image and the position of the center of gravity O can be specified, and stored in the correction value storage unit 3001 as a parameter.
  • the fringe scan processing unit 3002 performs distortion correction processing.
  • FIG. 33 is a diagram illustrating an example of a sensor image distortion correction processing flow.
  • the fringe scan processing unit 3002 reads the position of the center of gravity O from the correction value storage unit 3001, and moves the center of gravity O of the sensor image to the coordinates serving as the origin (step 3301).
  • the fringe scan processing unit 3002 performs correction using the matrix M (step 3302). Specifically, the fringe scan processing unit 3002 obtains information on the focusing distance f ′ from the focus setting unit 1901 and performs correction using a matrix M corresponding to the distance f ′.
  • the fringe scan processing unit 3002 performs correction using the matrix H (step 3303). Specifically, the fringe scan processing unit 3002 performs correction using the matrix H acquired from the correction value storage unit 3001.
  • Here, the correction by the matrix H and the movement of the position of the center of gravity O are processed separately. However, when a 3 × 3 homography matrix is used as in equation (20), the correction amount for the movement of the center of gravity O can be included in the matrix, so the two do not necessarily have to be processed separately.
  • the above is the imaging apparatus according to the third embodiment.
  • According to the method and configuration of the third embodiment, errors in assembling the imaging apparatus and errors in manufacturing the photographing pattern are corrected, and distortion associated with focus adjustment is corrected, so that a high-precision space division fringe scan can be performed.
  • In the third embodiment, the distortion is corrected before the fringe scan of the sensor image (in the fringe scan processing unit 3002, the processes of steps 3301 to 3303 are performed before the processes of steps 1401 to 1407).
  • However, when the sensor image is transmitted to another device wirelessly or by wire, it may be desirable that the transmitted information be unprocessed, to avoid loss of information. That is, it may be preferable to perform the distortion correction processing after the fringe scan.
  • In the fourth embodiment, a method of correcting the distortion after the fringe scan of the sensor image will be described.
  • FIG. 34 is a diagram illustrating a configuration example of an imaging device according to a fourth embodiment of the present invention.
  • the imaging device according to the fourth embodiment is substantially the same as the imaging device according to the third embodiment, but is partially different.
  • the imaging device according to the fourth embodiment is different from the imaging device according to the third embodiment in that the information in the correction value storage unit 3001 is read and used by the image processing unit 3401. That is, the imaging apparatus according to the fourth embodiment does not correct the distortion of the sensor image, but adds the distortion corresponding to the sensor image to the development pattern 801.
  • In the third embodiment, the matrix H for converting the coordinates (x1, y1) into the coordinates (x0, y0) was calculated.
  • Here, conversely, a matrix H′ and a matrix M′ for the reverse transformation, converting the coordinates (x0, y0) into the coordinates (x1, y1), are calculated.
  • the flow of the development process in this case is the same as the flow of the development process by the correlation development method in FIG. 15 described above, but is partially different.
  • the generation of the development pattern 801 in step 1502 of the development processing by the correlation development method is performed according to the processing flow shown in FIG.
  • FIG. 35 is a diagram showing an example of a distortion correction processing flow of a developing pattern.
  • the distortion correction processing for the development pattern is started in step 1502 of the development processing using the correlation development method.
  • As a premise of the distortion addition processing, the matrix H′ of equation (21) is calculated in advance, similarly to the calibration processing flow illustrated in FIG. 32. Specifically, the image processing unit 3401 calculates in advance the 3 × 3 elements of the homography matrix H′ from the relationship of the four coordinate pairs (e.g., the center coordinates 2201 to 2204 and the center coordinates 2801 to 2804).
  • First, the image processing unit 3401 generates a concentric pattern image for the space division fringe scan (step 3501).
  • Next, the image processing unit 3401 moves the center of gravity O (step 3502). Specifically, the image processing unit 3401 reads the position of the center of gravity O from the correction value storage unit 3001, and moves the center of the concentric pattern image for the space division fringe scan from the origin by the position of the center of gravity O.
  • the image processing unit 3401 performs the correction using the matrix H ′ acquired from the correction value storage unit 3001 (step 3503). Specifically, a concentric pattern image for space division fringe scanning is converted using the matrix H 'read from the correction value storage unit 3001.
  • the image processing unit 3401 performs the correction using the matrix M ′ obtained from the correction value storage unit 3001 (Step 3504). Specifically, the image processing unit 3401 obtains the focusing distance f ′ from the focus setting unit 1901, obtains information of a matrix M ′ corresponding to the distance f ′ from the correction value storage unit 3001, and performs correction. carry out.
  • the image processing unit 3401 divides the pattern image into four (Step 3505).
  • The subsequent processing, in which the pattern is multiplied by the complex sensor image subjected to the two-dimensional FFT operation, is the same as the development processing of FIG. 15.
  • Note that, since the development pattern 801 is distorted in this approach, the moiré development method no longer produces moiré fringes at a single frequency when developing a point light source, which blurs the image. Therefore, the correlation development method has a higher affinity with this embodiment and is more desirable than the moiré development method.
  • the above is the imaging apparatus according to the fourth embodiment. According to the method and configuration according to the fourth embodiment, it is possible to correct an error in assembling an imaging device and an error in manufacturing a photographing pattern, and correct distortion related to focus adjustment, thereby achieving high-precision space. A divided fringe scan can be performed.
  • the method of correcting an error in assembling the imaging device and an error in manufacturing a photographing pattern and correcting a distortion related to focus adjustment has been described. However, no correction has been taken into account for defects, dirt, and dust in the imaging pattern and the image sensor.
  • In the fifth embodiment, a method of compensating for such defects will be described. First, the problem posed by these defects will be clarified.
  • FIG. 37 is a diagram showing an example of an image in the case where dirt or the like is attached to the photographing pattern.
  • In the time division fringe scan, a defect appears at the same position in all four patterns, so the defect is recorded at a single position in the sensor image.
  • In the space division fringe scan, however, a defect such as the defect 3801 exists only in the quadrant of the pattern 3803, so consistency with the other patterns cannot be obtained.
  • FIG. 38 is a diagram showing an example of the luminance distribution of an ideal sensor image. That is, as shown in the pattern 3802, a signal component in which the luminance decreases in the mask portion of the concentric pattern can be obtained.
  • FIG. 39 is a diagram illustrating an example of the luminance distribution of a defective sensor image. That is, as shown in a pattern 3803, a signal component whose luminance is reduced in the mask portion of the concentric pattern can be obtained, but the luminance is reduced irrespective of the mask portion due to a part of the defect 3801. Note that, of course, the degree of decrease in luminance differs depending on the degree of the defect 3801. That is, if the defect is minor, the degree of decrease in luminance is small, and if the defect is large, the degree of decrease in luminance is large.
  • FIG. 36 shows the configuration of the fifth embodiment.
  • FIG. 36 is a diagram illustrating a configuration example of an imaging device according to a fifth embodiment of the present invention.
  • the imaging device according to the fifth embodiment is basically the same as the imaging device according to the fourth embodiment, but is partially different.
  • The imaging apparatus differs from that of the fourth embodiment in that a defect detection unit 3601 is provided and the image processing unit 3602 performs defect compensation processing.
  • The defect detection unit 3601 acquires the sensor image from the fringe scan processing unit 106 and, when it detects an area whose luminance falls below the defect detection threshold as shown in FIGS. 38 and 39, outputs a defect signal specifying that area.
  • The defect signal is stored in the correction value storage unit 3001; that is, information specifying the position for which the defect signal was output is stored in the correction value storage unit 3001.
  • Such information is not limited to coordinates specifying the defective sensor positions one by one; it may instead specify a rectangular area on the sensor, or the center and radius of the smallest circle containing the defective part.
  • the simplest defect detection method by the defect detection unit 3601 is as described above.
  • Alternatively, the defect may be detected by adding two sensor images whose patterns are black-and-white inversions of each other. This processing is desirably performed between steps 1401 and 1402 of the fringe scan processing in FIG. 14.
  • FIG. 40 shows the result of performing this addition on the signals shown in FIGS. 38 and 39.
  • FIG. 40 is a diagram showing an example of defect detection by inversion pattern synthesis.
  • When this addition is performed, the normal signal components cancel out, and only the luminance fluctuation component caused by the defect remains.
  • Using defect detection thresholds set at ± a constant around the average value, the defect detection unit 3601 outputs a defect signal for the areas where the remaining luminance falls below or above these thresholds, and stores it in the correction value storage unit 3001. According to this method, stable defect detection can be performed regardless of the subject. This defect detection may be performed on every sensor image, or on each pair of images with mutually inverted patterns.
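  • As a point of reference, this inverted-pattern detection can be sketched as follows; the relative width of the threshold band is a hypothetical parameter, since the patent does not give a value.

```python
import numpy as np

def detect_defects(img_phase_0, img_phase_pi, band=0.1):
    """Defect detection by inverted-pattern synthesis: adding two sensor
    images whose patterns are black/white inverted cancels the normal
    signal, leaving only defect-induced luminance fluctuations."""
    combined = img_phase_0.astype(float) + img_phase_pi.astype(float)
    mean = combined.mean()
    lo, hi = mean * (1.0 - band), mean * (1.0 + band)  # thresholds at +/- a constant
    return (combined < lo) | (combined > hi)           # boolean defect mask
```

The returned mask corresponds to the defect signal; downstream, the masked positions can be set to NAN so that they are excluded from the development processing.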
  • Using the defect signal output from the defect detection unit 3601, the image processing unit 3602 performs a data mask process, marking the portion of the development pattern corresponding to the defect signal as a NAN value so that it is not used.
  • At this time, it is desirable to correct the position corresponding to the defect signal using the matrix H′ and the matrix M′, as in the fourth embodiment.
  • Alternatively, the defect signal may be input not to the image processing unit 3602 but to the fringe scan processing unit 3002 of FIG. 30 of the third embodiment, and the same position in all the patterns of the sensor image (positions whose relative position to the pattern center is the same) may be marked as a NAN value and excluded from use.
  • The fifth embodiment has been described above. According to the imaging apparatus of the fifth embodiment, errors in assembling the imaging apparatus and errors in manufacturing the imaging pattern are corrected, and in addition to correcting distortion associated with focus adjustment, correction for defects, dirt, and dust on the imaging pattern and the image sensor is performed, so that an even more accurate space division fringe scan can be carried out.
  • each of the above-described configurations, functions, processing units, processing means, and the like may be partially or entirely realized by hardware, for example, by designing an integrated circuit.
  • the above-described configurations, functions, and the like may be realized by software by a processor interpreting and executing a program that realizes each function.
  • Information such as programs, tables, and files for realizing each function can be stored in a memory, a hard disk, a recording device such as an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.
  • control lines and information lines are shown as necessary for the description, and do not necessarily indicate all control lines and information lines on a product. In fact, it can be considered that almost all components are connected to each other.

Abstract

Provided is technology for obtaining accurate developed images by correcting imaging device assembly errors, photographing pattern manufacturing errors, and focus adjustment distortions. An imaging device characterized by comprising: an image sensor for converting light into electrical signals and generating sensor images; a modulator for modulating the intensity of the light detected by the image sensor on the basis of a photographing pattern; and a parameter storage unit for storing parameters used to execute a prescribed correction process upon a plurality of sensor images, from among said sensor images, that have been photographed using different photographing patterns.

Description

Imaging device and imaging method
 The present invention relates to an imaging device and an imaging method. The present invention claims priority from Japanese Patent Application No. 2018-173930, filed on September 18, 2018; for designated states that permit incorporation by reference, the contents described in that application are incorporated into this application by reference.
 As background art in this technical field, there is JP-A-2018-61109 (Patent Document 1). This publication describes an imaging device characterized by having “a modulator having a first pattern and modulating the intensity of light, an image sensor that converts light transmitted through the modulator into image data and outputs the image data, and an image processing unit that restores an image based on a cross-correlation operation between the image data and pattern data indicating a second pattern.” It further describes a technique of cancelling noise from an image obtained by “spatially dividing patterns in which the initial phase of the modulator differs” and “dividing the image data output from the image sensor into regions corresponding to the pattern arrangement of the modulator.”
JP 2018-61109 A
 In the technique described in Patent Document 1, if the patterns with different initial phases are arranged with a shift (for example, among patterns A, B, C, and D, the center position of pattern B is shifted, or pattern C is rotated), or if the upper surface of the substrate is not parallel to the sensor surface (for example, it is inclined), the projection pattern received by the image sensor is displaced or distorted, and a correct developed image cannot be obtained even when the image is developed.
 An object of the present invention is to provide a technique for obtaining a correct developed image by correcting assembly errors of the imaging device, manufacturing errors of the photographing pattern, and distortion associated with focus adjustment.
 The present application includes a plurality of means for solving at least a part of the above problems; an example is as follows. To solve the above problem, an imaging device according to one aspect of the present invention includes: an image sensor that converts light into an electric signal and generates a sensor image; a modulator that modulates, based on a photographing pattern, the intensity of the light detected by the image sensor; and a parameter storage unit that stores parameters used to execute a predetermined correction process on a plurality of the sensor images captured with different photographing patterns.
 According to the present invention, it is possible to provide a technique for obtaining a correct developed image by correcting assembly errors of the imaging device, manufacturing errors of the photographing pattern, and distortion associated with focus adjustment. Problems, configurations, and effects other than those described above will become apparent from the following description of the embodiments.
FIG. 1 is a diagram illustrating a configuration example of an imaging device according to a first embodiment of the present invention.
It is a diagram illustrating a configuration example of an imaging module according to the first embodiment.
It is a diagram illustrating a configuration example of another imaging module according to the first embodiment.
It is a diagram illustrating an example of a photographing pattern and a development pattern according to the first embodiment.
It is a diagram illustrating another example of the photographing pattern and the development pattern according to the first embodiment.
It is a diagram illustrating an example in which an in-plane shift occurs in a projected image from the pattern substrate surface to the image sensor due to obliquely incident parallel light.
It is a diagram illustrating an example of a projected image of the photographing pattern.
It is a diagram illustrating an example of a development pattern.
It is a diagram illustrating an example of a developed image by the correlation development method.
It is a diagram illustrating an example of moiré fringes by the moiré development method.
It is a diagram illustrating an example of a developed image by the moiré development method.
It is a diagram illustrating an example of a combination of initial-phase photographing patterns in a fringe scan.
It is a diagram illustrating an example of a photographing pattern of the space division fringe scan.
It is a diagram illustrating an example of a processing flow of the fringe scan.
It is a diagram illustrating an example of a processing flow of the development processing by the correlation development method.
It is a diagram illustrating an example of a processing flow of the development processing by the moiré development method.
It is a diagram illustrating an example of projection of the photographing pattern when an object is at an infinite distance.
It is a diagram illustrating an example of an enlarged projection of the photographing pattern when an object is at a finite distance.
It is a diagram illustrating a configuration example of an imaging device according to a second embodiment of the present invention.
It is a diagram illustrating an example of a positional relationship between a point light source and the imaging device.
It is a diagram illustrating an example of sensor images for each distance of a point light source.
It is a diagram illustrating an example of pattern center coordinates when the point light source is at infinity.
It is a diagram illustrating an example of pattern center coordinates when the point light source is at a finite distance.
It is a diagram illustrating an example of correcting a sensor image when the point light source is at a finite distance.
It is a diagram illustrating an example of a sensor image when the photographing pattern is rotating.
It is a diagram illustrating an example of pattern center coordinates when the photographing pattern is rotating.
It is a diagram illustrating an example of a state where the photographing pattern is inclined in the thickness direction.
It is a diagram illustrating an example of a sensor image in a state where the photographing pattern is inclined in the thickness direction.
It is a diagram illustrating an example of pattern center coordinates in a state where the photographing pattern is inclined in the thickness direction.
It is a diagram illustrating a configuration example of an imaging device according to a third embodiment of the present invention.
It is a diagram illustrating a configuration example of an imaging device according to a modification of the third embodiment of the present invention.
It is a diagram illustrating an example of a processing flow of calibration.
It is a diagram illustrating an example of a sensor image distortion correction processing flow.
It is a diagram illustrating a configuration example of an imaging device according to a fourth embodiment of the present invention.
It is a diagram illustrating an example of a distortion correction processing flow of a development pattern.
It is a diagram illustrating a configuration example of an imaging device according to a fifth embodiment of the present invention.
It is a diagram illustrating an example of an image when dirt or the like adheres to the photographing pattern.
It is a diagram illustrating an example of the luminance distribution of an ideal sensor image.
It is a diagram illustrating an example of the luminance distribution of a defective sensor image.
It is a diagram illustrating an example of defect detection by inverted-pattern synthesis.
In the following embodiments, the description will, when necessary for convenience, be divided into a plurality of sections or embodiments; unless explicitly stated otherwise, these are not unrelated to one another, and one is in the relationship of a modification, detail, supplementary explanation, or the like of part or all of another.
Also, in the following embodiments, when the number of elements or the like (including counts, numerical values, amounts, ranges, and the like) is mentioned, it is not limited to that specific number, and may be greater or less than the specific number, except where explicitly stated or where it is clearly limited to a specific number in principle.
Furthermore, in the following embodiments, the constituent elements (including element steps and the like) are, needless to say, not necessarily essential, except where explicitly stated or where they are considered clearly essential in principle.
Similarly, in the following embodiments, when the shapes, positional relationships, and the like of the constituent elements are mentioned, they include shapes and the like that are substantially approximate or similar, except where explicitly stated or where it is clearly considered otherwise in principle. The same applies to the numerical values and ranges mentioned above.
In all the drawings for describing the embodiments, the same members are in principle denoted by the same reference numerals, and repeated description thereof is omitted. Embodiments of the present invention will be described below with reference to the drawings.
In general, digital cameras mounted on information devices such as in-vehicle cameras, wearable devices, and smartphones are often required to be thin and low in cost. For example, imaging methods have been proposed that achieve thinness and low cost by obtaining an object image without using a lens. One such technique attaches a special grating pattern in front of an image sensor and obtains an image of the object by solving the inverse problem of developing an image from the projection pattern received by the image sensor. With this method, however, the computation for solving the inverse problem by signal processing is complicated and the processing load is high, which raises the hardware requirements of the information device.
〈Principle of imaging an object at infinity〉 FIG. 1 is a diagram illustrating a configuration example of an imaging device according to a first embodiment of the present invention. The imaging device 101 acquires an image of an object in the outside world without using a lens to form an image and, as shown in FIG. 1, comprises an imaging module 102, a fringe scan processing unit 106, an image processing unit 107, and a controller 108. FIG. 2 shows an example of the imaging module 102.
FIG. 2 is a diagram illustrating the configuration of the imaging module according to the first embodiment. The imaging module 102 comprises an image sensor 103, a pattern substrate 104, and an imaging pattern 105. The pattern substrate 104 is fixed in close contact with the light-receiving surface of the image sensor 103, and the imaging pattern 105 is formed on the pattern substrate 104. The pattern substrate 104 is made of a material transparent to visible light, such as glass or plastic.
The imaging pattern 105 is a concentric grating pattern whose interval, that is, pitch, narrows toward the outside in inverse proportion to the radius from the center. The imaging pattern 105 is formed by depositing a metal such as aluminum or chromium by, for example, the sputtering method used in semiconductor processes, and shading is produced by the contrast between the regions where metal is deposited and the regions where it is not. The formation of the imaging pattern 105 is not limited to this; for example, it may be formed with shading by printing with an inkjet printer or the like. Furthermore, although visible light has been taken as an example here, when imaging far-infrared rays, for example, the pattern substrate 104 may be made of a material transparent to far-infrared rays, such as germanium, silicon, or chalcogenide. In general, a material transparent to the wavelength to be imaged may be used for the substrate, and a material that blocks that wavelength, such as metal, may be used for the imaging pattern 105.
The pattern substrate 104 and the imaging pattern 105 can also be regarded as a modulator that modulates the intensity of light incident on the image sensor 103. Although a method of forming the imaging pattern 105 on the pattern substrate 104 has been described here as a way of realizing the imaging module 102, the module can also be realized by the configuration shown in FIG. 3.
FIG. 3 is a diagram illustrating a configuration example of another imaging module according to the first embodiment. In the configuration example shown in FIG. 3, the imaging pattern 105 is formed in a thin film and held by support members 301.
In this device, the imaging angle of view can be changed by the thickness of the pattern substrate 104. Therefore, if, for example, the module has the configuration of FIG. 3 and a function of changing the length of the support members 301, it is also possible to change the angle of view at the time of shooting.
On the surface of the image sensor 103, pixels 103a, which are light-receiving elements, are regularly arranged in a grid. The image sensor 103 converts the optical image received by the pixels 103a into an image signal, which is an electric signal.
The intensity of the light passing through the imaging pattern 105 is modulated by the pattern, and the transmitted light is received by the image sensor 103. The image sensor 103 is, for example, a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
The image signal output from the image sensor 103 is subjected to processing such as noise removal by the fringe scan processing unit 106, and the data processed by the image processing unit 107 is output to the controller 108. When outputting to a host computer or an external recording medium, the controller 108 converts the data format to conform to an interface such as USB (Universal Serial Bus) before output.
Next, the principle of imaging in the imaging device 101 will be described. The imaging pattern 105 is a concentric pattern whose pitch becomes finer in inverse proportion to the radius from the center. Using the radius r from the reference coordinates at the center of the concentric circles, a coefficient β, and the initial phase Φ of the transmittance distribution, the pattern can be defined as

  I(r) = 1 + cos(βr² + Φ)   (1)

The imaging pattern 105 is assumed to be transmittance-modulated in proportion to this expression. In the following, for simplicity, only the x-axis direction is described with equations; by considering the y-axis direction in the same way, the discussion can be extended to two dimensions.
A plate having such fringes is called a Gabor zone plate (GZP) or a Fresnel zone plate (FZP).
FIG. 4 is a diagram illustrating an example of an imaging pattern and a development pattern according to the first embodiment. Specifically, FIG. 4 is an example of a Gabor zone plate expressed by equation (1) above.
FIG. 5 is a diagram illustrating another example of the imaging pattern and the development pattern according to the first embodiment. Specifically, it is an example of a Fresnel zone plate using the pattern obtained by binarizing equation (1) above with a threshold of 1.
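For concreteness, the following is a minimal numerical sketch (not part of the patent text) of how a Gabor zone plate as in equation (1) and its binarized Fresnel-zone-plate counterpart could be generated; the array size, pixel pitch, and the values of beta and phi are illustrative assumptions.

import numpy as np

def gabor_zone_plate(n=512, pitch=1.0, beta=0.05, phi=0.0):
    """Transmittance 1 + cos(beta*r^2 + phi) as in eq. (1), scaled to [0, 1]."""
    ax = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(ax, ax)
    r2 = x**2 + y**2
    return (1.0 + np.cos(beta * r2 + phi)) / 2.0

def fresnel_zone_plate(n=512, pitch=1.0, beta=0.05, phi=0.0):
    """Binarize at threshold 1 of eq. (1), i.e. 0.5 after scaling to [0, 1]."""
    return (gabor_zone_plate(n, pitch, beta, phi) >= 0.5).astype(float)

gzp = gabor_zone_plate()
fzp = fresnel_zone_plate()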
Suppose that parallel light is incident at an angle θ₀ in the x-axis direction, as shown in FIG. 6, on the pattern substrate 104 of thickness d on which the imaging pattern 105 is formed. With the refraction angle in the pattern substrate 104 denoted θ, in geometrical-optics terms the light multiplied by the transmittance of the surface grating enters the image sensor 103 shifted by k = d·tanθ. At this time, a projected image having an intensity distribution

  I_F(x) = 1 + cos(β(x + k)² + Φ)   (2)

is detected on the image sensor 103, where Φ is the initial phase of the transmittance distribution of equation (1). FIG. 7 shows an example of the projected image of this imaging pattern 105.
FIG. 7 is a diagram illustrating an example of the projected image of the imaging pattern. As shown in FIG. 7, when parallel light is incident through the imaging pattern 105 as in FIG. 6, the pattern is projected onto the image sensor 103 shifted by k, as in equation (2) above. This is the output from the imaging module 102.
Next, development processing is performed in the image processing unit 107; of the possible methods, development by the correlation development method and by the moiré development method will be described.
In the correlation development method, the image processing unit 107 computes the cross-correlation function between the projected image of the imaging pattern 105 shown in FIG. 7 and the development pattern 801 shown in FIG. 8, thereby obtaining a bright spot with shift amount k as shown in FIG. 9. In general, performing the cross-correlation operation as a two-dimensional convolution requires a large amount of computation.
FIG. 8 is a diagram illustrating an example of a development pattern. Specifically, the development pattern 801 has a pattern similar to the Gabor zone plate shown in FIG. 4 or the Fresnel zone plate (FZP) shown in FIG. 5. That is, in this embodiment, the development pattern 801 need not exist as a physical entity; it only needs to exist as information used in the image processing.
FIG. 9 is a diagram illustrating an example of an image developed by the correlation development method. When developed by the correlation development method, as described above, a developed image in which a bright spot is shifted by k is obtained.
Here, the principle of performing the cross-correlation operation not as a two-dimensional convolution but by using the Fourier transform is explained with equations. First, since the development pattern 801 is a Gabor zone plate or a Fresnel zone plate like the imaging pattern 105, it can be expressed with the initial phase Φ as

  I_B(x) = cos(βx² + Φ)   (3)

Because the development pattern 801 is used only within the image processing, it does not need to be offset by 1 as in equation (1), and there is no problem even if it takes negative values.
The Fourier transforms of equations (2) and (3) are, respectively,

  F[I_F(x)] = δ(u) + (1/2)·√(π/β)·e^{−iku}·e^{i(Φ + π/4 − u²/4β)} + (1/2)·√(π/β)·e^{iku}·e^{−i(Φ + π/4 − u²/4β)}   (4)

  F[I_B(x)] = (1/2)·√(π/β)·e^{i(Φ + π/4 − u²/4β)} + (1/2)·√(π/β)·e^{−i(Φ + π/4 − u²/4β)}   (5)

where F denotes the Fourier-transform operation, u is the frequency coordinate in the x direction, and δ(·) is the delta function. What is important here is that the transformed expressions are again Fresnel zone plates or Gabor zone plates (quadratic-phase chirps). Therefore, the Fourier-transformed development pattern may be generated directly from these expressions, which reduces the amount of computation.
Next, multiplying equation (4) by equation (5) gives

  F[I_F(x)]·F[I_B(x)] = (π/4β)·e^{−iku} + (noise terms)   (6)

The term exp(−iku) in this exponential expression is the signal component; Fourier-transforming this term yields

  F[e^{−iku}] = 2π·δ(x − k)   (7)

so that a bright spot is obtained at the position k on the original x-axis. This bright spot represents the light flux at infinity, and is nothing other than the image captured by the imaging device 101 of FIG. 1.
In such a correlation development method, as long as the autocorrelation function of the pattern has a single peak, it may be realized with a pattern not limited to the Fresnel zone plate or the Gabor zone plate, for example a random pattern.
Next, in the moiré development method, moiré fringes as shown in FIG. 10 are generated by multiplying the projected image of the imaging pattern 105 shown in FIG. 7 by the development pattern 801 shown in FIG. 8; Fourier-transforming these fringes yields a bright spot with a shift amount of kβ/π, as shown in FIG. 11.
FIG. 10 is a diagram illustrating an example of moiré fringes produced by the moiré development method. Specifically, as shown in FIG. 10, the result of multiplying the projected image of the imaging pattern 105 shown in FIG. 7 by the development pattern 801 shown in FIG. 8 is obtained as moiré fringes.
Expressed as an equation, these moiré fringes are

  I_F(x)·I_B(x) = cos(βx² + Φ) + (1/2)·cos(β(x + k)² + βx² + 2Φ) + (1/2)·cos(2βkx + βk²)   (8)

The third term of this expansion is the signal component; it creates straight, equally spaced fringes across the entire overlapping region, oriented in the direction of the shift between the two patterns. Fringes arising at a relatively low spatial frequency from such a superposition of fringes are called moiré fringes.
The two-dimensional Fourier transform of this third term is

  F[(1/2)·cos(2βkx + βk²)] = (1/4)·[e^{iβk²}·δ(u − kβ/π) + e^{−iβk²}·δ(u + kβ/π)]   (9)

where F denotes the Fourier-transform operation, u is the frequency coordinate in the x direction, and δ(·) is the delta function. This result shows that in the spatial frequency spectrum of the moiré fringes, peaks occur at the positions u = ±kβ/π. These bright spots represent the light flux at infinity, and are nothing other than the image captured by the imaging device 101 of FIG. 1.
FIG. 11 is a diagram illustrating an example of an image developed by the moiré development method. In the developed image shown in FIG. 11, bright spots are obtained at the positions u = ±kβ/π.
In the moiré development method, as long as the moiré fringes obtained by the pattern shift have a single frequency, it may be realized with a pattern not limited to the Fresnel zone plate or the Gabor zone plate, for example an elliptical pattern.
〈Noise cancellation〉 The discussion of the conversion from equation (6) to equation (7) and from equation (8) to equation (9) focused on the signal component, but in practice the terms other than the signal component hinder development as noise. Noise cancellation based on fringe scanning is therefore effective.
For fringe scanning, it is necessary to use, as the imaging pattern 105, a plurality of patterns with different initial phases Φ. FIG. 12 shows an example of such a plurality of patterns.
FIG. 12 is a diagram illustrating an example of combinations of initial phases of imaging patterns in fringe scanning. Here, when the sensor images captured with the four phases Φ = 0, π/2, π, and 3π/2, that is, phases shifted by π/2 from one another, are combined according to the following equation, a complex-valued sensor image (complex sensor image) is obtained:

  I_CF(x) = Σ_Φ I_F(x, Φ)·e^{iΦ} = 2·e^{iβ(x+k)²}   (10)

Here, the complex development pattern 801 can be expressed as

  I_CB(x) = e^{−iβx²}   (11)

Since the development pattern 801 is used within the fringe scan processing, there is no problem even if it is complex-valued.
In the moiré development method, multiplying equation (10) by equation (11) gives

  I_CF(x)·I_CB(x) = 2·e^{iβ(2kx + k²)} = 2·e^{iβk²}·e^{2iβkx}   (12)

The term exp(2iβkx) in this exponential expression is the signal component, and since no unnecessary terms such as those in equation (8) arise, it can be seen that the noise has been canceled.
Similarly, checking the correlation development method, the Fourier transforms of equations (10) and (11) are, respectively,

  F[I_CF(x)] = 2·√(π/β)·e^{iπ/4}·e^{−iku}·e^{−iu²/4β}   (13)

  F[I_CB(x)] = √(π/β)·e^{−iπ/4}·e^{iu²/4β}   (14)

Next, multiplying equation (13) by equation (14) gives

  F[I_CF(x)]·F[I_CB(x)] = (2π/β)·e^{−iku}   (15)

The term exp(−iku) in this exponential expression is the signal component, and since no unnecessary terms such as those in equation (8) arise, it can be seen that the noise has been canceled.
In the above example, the explanation used four patterns with phases shifted by π/2, namely Φ = 0, π/2, π, and 3π/2, but Φ need only be set so as to divide the angles between 0 and 2π equally, and is not limited to these phases.
As methods for realizing imaging with the plurality of patterns described above, a method of switching patterns by time division and a method of switching patterns by space division are conceivable in the fringe scan processing.
To realize time-division fringe scanning, a liquid crystal display element capable of electrically switching among and displaying the plurality of initial phases shown in FIG. 12, for example, may be used as the imaging pattern 105 of FIG. 1. The switching timing of this liquid crystal display element and the shutter timing of the image sensor 103 are controlled in synchronization, and after four images are acquired in time series, the fringe scan processing unit 106 performs the fringe scan operation.
In contrast, to realize space-division fringe scanning, it is necessary to use an imaging pattern 105 having a plurality of initial phases, as shown in FIG. 13. After one image is acquired as a whole, the fringe scan processing unit 106 divides it into four images corresponding to the patterns of the respective initial phases and performs the fringe scan operation.
FIG. 13 is a diagram illustrating an example of an imaging pattern for space-division fringe scanning. The imaging pattern 1300 for space-division fringe scanning is an application of the imaging pattern 105; it is divided spatially on the XY plane (a plane parallel to the image sensor 103) to form an imaging pattern containing a plurality of initial phases (the four phases Φ = 0, π/2, π, and 3π/2). Next, the fringe scan operation in the fringe scan processing unit 106 will be described.
FIG. 14 is a diagram illustrating an example of the processing flow of fringe scanning. First, the fringe scan processing unit 106 acquires the sensor images produced by the plurality of imaging patterns output from the image sensor 103. When space-division fringe scanning is adopted, the acquired sensor image must be divided according to the individual imaging patterns, so it is divided into predetermined regions to obtain a plurality of sensor images (step 1401). When time-division fringe scanning is adopted, a plurality of sensor images with different imaging patterns are obtained as time passes, so no division is performed.

Next, the fringe scan processing unit 106 initializes the complex sensor image for output (step 1402).

Then, the fringe scan processing unit 106 acquires the sensor image of the first initial phase Φ (step 1403),

multiplies it by exp(iΦ) corresponding to that initial phase Φ (step 1404),

and adds the multiplication result to the complex sensor image (step 1405).

The fringe scan processing unit 106 repeats the processing from step 1403 to step 1405 for the number of initial phases used (step 1406). For example, in a fringe scan using the four phases shown in FIG. 12, the fringe scan processing unit 106 repeats it a total of four times, once for each of the initial phases Φ = 0, π/2, π, and 3π/2.

Finally, the fringe scan processing unit 106 outputs the complex sensor image (step 1407). The processing of steps 1401 to 1407 by the fringe scan processing unit 106 corresponds to equation (10) above; a code sketch of this flow is given below.
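As an illustration, here is a minimal numpy sketch of the fringe scan combination of steps 1401 to 1407 (an assumption-laden reading, not the patent's implementation); the quadrant split assumes the space-division layout of FIG. 13 with one initial phase per quadrant.

import numpy as np

def fringe_scan_combine(images, phases):
    """Steps 1402-1407: accumulate the sum of I_phi * exp(i*phi)."""
    complex_img = np.zeros_like(images[0], dtype=complex)   # step 1402
    for img, phi in zip(images, phases):                    # steps 1403, 1406
        complex_img += img * np.exp(1j * phi)               # steps 1404-1405
    return complex_img                                      # step 1407

def split_quadrants(sensor_image):
    """Step 1401 for space division: crop the four quadrant sub-images."""
    h, w = sensor_image.shape
    return [sensor_image[:h//2, :w//2], sensor_image[:h//2, w//2:],
            sensor_image[h//2:, :w//2], sensor_image[h//2:, w//2:]]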
Next, the image processing in the image processing unit 107 will be described. FIG. 15 is a diagram illustrating an example of the processing flow of development by the correlation development method. First, the image processing unit 107 acquires the complex sensor image output from the fringe scan processing unit 106 and performs a two-dimensional fast Fourier transform (FFT) operation on it (step 1501).

Next, the image processing unit 107 generates the predetermined development pattern 801 used for development and multiplies the two-dimensionally FFT'd complex sensor image by it (step 1502).

Then, the image processing unit 107 performs an inverse two-dimensional FFT operation (step 1503). The result of this operation is complex-valued.

Therefore, the image processing unit 107 takes the absolute value, or extracts the real part, of the result of the inverse two-dimensional FFT operation, converting the image of the subject to real values and developing it (step 1504).

After that, the image processing unit 107 performs contrast enhancement processing on the obtained developed image (step 1505), further performs color balance adjustment (step 1506) and the like, and outputs the result as the captured image. The above is the development processing by the correlation development method; a code sketch follows.
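The following is a minimal sketch of steps 1501 to 1504, assuming the complex development pattern of equation (11) and obtaining its Fourier-domain form by an FFT; steps 1505 and 1506 are omitted, and the square image shape, beta, and FFT conventions are illustrative assumptions.

import numpy as np

def develop_correlation(complex_img, beta, phi=0.0):
    """Correlation development: FFT, multiply by the Fourier-domain pattern,
    inverse FFT, then take the magnitude to get a real-valued image."""
    n = complex_img.shape[0]
    ax = np.arange(n) - n / 2
    x, y = np.meshgrid(ax, ax)
    pattern = np.exp(-1j * (beta * (x**2 + y**2) + phi))   # eq. (11) analogue
    spec = np.fft.fft2(complex_img)                        # step 1501
    spec *= np.fft.fft2(np.fft.ifftshift(pattern))         # step 1502
    img = np.fft.fftshift(np.fft.ifft2(spec))              # step 1503
    return np.abs(img)                                     # step 1504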
FIG. 16 is a diagram illustrating an example of the processing flow of development by the moiré development method. First, the image processing unit 107 acquires the complex sensor image output from the fringe scan processing unit 106, generates the predetermined development pattern 801 used for development, and multiplies the complex sensor image by it (step 1601).

Then, the image processing unit 107 obtains the frequency spectrum by a two-dimensional FFT operation (step 1602),

and cuts out the data of the necessary frequency region from the frequency spectrum obtained in step 1602 (step 1603).

The subsequent realization processing of step 1504, contrast enhancement processing of step 1505, and color balance adjustment processing of step 1506 are the same as the processing of steps 1504 to 1506 shown in FIG. 15, so their description is omitted; a code sketch of steps 1601 to 1603 follows.
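Correspondingly, a minimal sketch of steps 1601 to 1603 under the same assumptions as the previous sketch; the crop half-width is a hypothetical parameter.

import numpy as np

def develop_moire(complex_img, beta, phi=0.0, crop=None):
    """Moire development: multiply by the development pattern in the spatial
    domain, FFT to get the spectrum, and cut out the needed frequency region."""
    n = complex_img.shape[0]
    ax = np.arange(n) - n / 2
    x, y = np.meshgrid(ax, ax)
    pattern = np.exp(-1j * (beta * (x**2 + y**2) + phi))
    spec = np.fft.fftshift(np.fft.fft2(complex_img * pattern))  # steps 1601-1602
    if crop is not None:                                        # step 1603
        c = n // 2
        spec = spec[c - crop:c + crop, c - crop:c + crop]
    return np.abs(spec)                                         # as in step 1504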
〈Principle of imaging an object at a finite distance〉 Next, FIG. 17 shows how the imaging pattern 105 is projected onto the image sensor 103 in the case described so far, where the subject is sufficiently far away (at infinity).
FIG. 17 is a diagram illustrating an example of projection of the imaging pattern when the object is at an infinite distance. A spherical wave from a point 1701 constituting a distant object becomes a plane wave while propagating over a sufficiently long distance and illuminates the imaging pattern 105; when its projected image 1702 is cast onto the image sensor 103, the projected image has almost the same shape as the imaging pattern 105. As a result, a single bright spot can be obtained by performing development processing on the projected image 1702 using the development pattern. In contrast, imaging of an object at a finite distance will now be described.
FIG. 18 is a diagram illustrating an example of enlarged projection of the imaging pattern when the object is at a finite distance. When the object to be imaged is at a finite distance, the projection of the imaging pattern 105 onto the image sensor 103 is enlarged relative to the imaging pattern 105. When a spherical wave from a point 1801 constituting the object illuminates the imaging pattern 105 and its projected image 1802 is cast onto the image sensor 103, the projected image is enlarged almost uniformly. The enlargement factor α can be calculated, using the distance f from the imaging pattern 105 to the point 1801 and the substrate thickness d, as

  α = (f + d) / f   (16)

Therefore, if an object at a finite distance is developed using the development pattern designed for parallel light as it is, a single bright spot cannot be obtained.
Therefore, if the development pattern 801 is enlarged to match the uniformly enlarged projected image of the imaging pattern 105, a single bright spot can again be obtained for the enlarged projected image 1802. This correction can be made by setting the coefficient of the development pattern 801 to β/α². This makes it possible to selectively reproduce the light from the point 1801 at a distance that is not necessarily infinite, and thereby to shoot with the focus on an arbitrary position. That is, when shooting with the focus on the subject, the enlargement magnification of the development pattern 801 need only be determined accordingly.
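As a small illustration (assuming equation (16) for the enlargement factor), the coefficient used when focusing on a subject at distance f could be computed as follows; f and d must be in the same length units.

def refocus_beta(beta, f, d):
    """Coefficient for an object at distance f: alpha = (f + d) / f as in
    eq. (16), giving the corrected development coefficient beta / alpha**2."""
    alpha = (f + d) / f
    return beta / alpha**2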
Furthermore, with this configuration, it is also possible to focus on an arbitrary distance after shooting. The configuration for this case is shown in FIG. 19.
FIG. 19 is a diagram illustrating the configuration of an imaging device according to a second embodiment of the present invention. The imaging device according to the second embodiment basically has the same configuration as the imaging device according to the first embodiment. What differs from the first embodiment is the presence of a focus setting unit 1901. The focus setting unit 1901 accepts the setting of the focus distance via a knob provided on the imaging device 101, a GUI (Graphical User Interface) of a smartphone, or the like, and outputs the focus distance information to the image processing unit 107.
Furthermore, the fact that focus adjustment after shooting is possible in this way means that depth information is available, and various functions such as autofocus and distance measurement can be realized in the image processing unit 107.
To realize functions such as this focus setting, it is necessary to change the coefficient β of the development pattern 801 freely. By performing the fringe scan operation as in the processing of the fringe scan processing unit 106 described in this embodiment, the processing using the development pattern 801 can be performed independently, which simplifies the processing.
〈Problem of space-division fringe scanning: subject distance〉 As explained with reference to FIG. 18, when the object to be imaged is at a finite distance, the projection of the imaging pattern 105 onto the image sensor 103 is enlarged relative to the imaging pattern 105. This can be a problem when space-division fringe scanning is used.
FIG. 20 is a diagram illustrating an example of the positional relationship between a point light source and the imaging device. As in FIG. 20, consider the case where the object to be imaged is at a far distance f (point light source 2001) and the case where it is at a near distance f′ (point light source 2001′).
FIG. 21 is a diagram illustrating examples of sensor images for different distances of a point light source. As shown in FIG. 21, when light from the object illuminates the imaging pattern 105 for space-division fringe scanning and its shadow is projected onto the image sensor 103, the projected image changes depending on whether the distance between the object and the imaging pattern is f or f′. In space-division fringe scanning, the fringe scan processing unit 106 must divide the image into four parts corresponding to the patterns of the respective initial phases, but the center of the pattern in each quadrant moves according to the subject distance f or f′. For example, the center of the concentric circles of the first quadrant 1302 is shifted to the upper right in the first quadrant 2102 at the distance f′. Thus, if each quadrant is always divided at the same place, a misalignment occurs and the effect of the fringe scan is reduced.
To solve this problem, the data need only be divided into the four quadrant images after the image has been corrected using the enlargement factor α of equation (16). FIGS. 22 and 23 each mark with × the center of each pattern when the center of the sensor image of FIG. 21 is placed at the origin.
FIG. 22 is a diagram illustrating an example of pattern center coordinates when the point light source is at infinity. In FIG. 22, the coordinates of the pattern center 2201 in the first quadrant are expressed as (x0, y0).
FIG. 23 is a diagram illustrating an example of pattern center coordinates when the point light source is at a finite distance. In FIG. 23, the coordinates of the pattern center 2301 in the first quadrant are expressed as (x1, y1).
Since the coordinates (x1, y1) of the pattern center 2301 in FIG. 23 are the coordinates (x0, y0) of the pattern center 2201 in FIG. 22 enlarged by the factor α, converting (x1, y1) into (x0, y0) is a coordinate transformation using a matrix M that specifies the amount of movement on the sensor surface:

  (x0, y0)ᵀ = M·(x1, y1)ᵀ,  M = [ 1/α  0 ; 0  1/α ]   (17)

FIG. 24 shows the sensor image obtained as a result of this conversion.
FIG. 24 is a diagram illustrating an example of correcting a sensor image when the point light source is at a finite distance. As shown in FIG. 24, the image is reduced so that the same positions are always the pattern centers; in this example, dividing the image into four quadrants then yields the sensor image of each quadrant. Since the image is reduced, the resulting sensor image is smaller than the image size before correction. In this case, the margin region 2401 may be filled with a constant such as 0, or treated as NaN values and excluded from use.
In this way, by performing the coordinate transformation about the center of the concentric circles using the matrix M shown in equation (17), the projection enlargement problem that arises when the point light source is at a finite distance, that is, when the subject distance is finite, can be resolved; a code sketch follows.
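A minimal sketch of this correction, assuming nearest-neighbour resampling and the scaling matrix M of equation (17); the fill value for the margin region 2401 is a parameter, as discussed above.

import numpy as np

def correct_magnification(sensor_image, alpha, fill=0.0):
    """Shrink the sensor image by 1/alpha about its center (matrix M of
    eq. (17)) via inverse mapping; out-of-range pixels become the margin."""
    h, w = sensor_image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Inverse of the 1/alpha scaling: each output pixel samples dest * alpha.
    xs = (xx - w / 2) * alpha + w / 2
    ys = (yy - h / 2) * alpha + h / 2
    out = np.full((h, w), fill, dtype=float)
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    out[valid] = sensor_image[ys[valid].astype(int), xs[valid].astype(int)]
    return out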
〈Problem of space-division fringe scanning: pattern fixing accuracy〉 However, correcting only the shift due to the subject distance may not be sufficient. This is the case when the accuracy with which the imaging device 101 is assembled, or the accuracy with which the imaging pattern 105 is fabricated, is insufficient and a misalignment has occurred. FIG. 25 shows an example of a sensor image when the imaging pattern 105 is attached with a misalignment.
FIG. 25 is a diagram illustrating an example of a sensor image when the imaging pattern is rotated. This example shows a sensor image in the case where the imaging pattern 105 is attached rotated by an angle −θ about the origin.
A method of correcting this will now be described. First, the center coordinates 2501 to 2504 of each pattern are calculated by a cross-correlation operation with a reference pattern (for example, an image of the imaging pattern 105 enlarged by the factor α obtained from the subject distance). Next, the coordinates of the centroid O of the four center coordinates 2501 to 2504 are calculated, and the image is placed so that the centroid O coincides with the origin, as shown in FIG. 26.
FIG. 26 is a diagram illustrating an example of pattern center coordinates when the imaging pattern is rotated. In this case, converting the coordinates (x1, y1) of the pattern center 2601 in the first quadrant of FIG. 26 into the coordinates (x0, y0) of the pattern center 2201 of FIG. 22 is a coordinate transformation
using a matrix H that specifies the amount of rotational movement on the sensor surface:

  (x0, y0)ᵀ = M·H·(x1, y1)ᵀ   (18)

  H = [ cosθ  −sinθ ; sinθ  cosθ ]   (19)

The matrix M is used in equation (18) in order to correct the image according to the distance f of the point light source 2001 in use. As a result, the image of FIG. 25 is corrected by rotating it by the angle θ, so that the image always has the same pattern centers; in the example of FIGS. 25 and 26, the image can then simply be divided by quadrant.
The matrix H shown in equations (18) and (19) is a matrix with 2×2 elements generally called an affine matrix, and its elements can be calculated from the relationships between the four pairs of coordinates (in this example, the center coordinates 2201 to 2204 and the center coordinates 2501 to 2504). Strictly, two points suffice to calculate the affine matrix, but by applying the least squares method to the information of all four points, a more accurate affine matrix can be obtained, as sketched below.
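A minimal sketch of such a least-squares fit with numpy; the coordinates and the 5-degree rotation in the usage example are hypothetical.

import numpy as np

def fit_affine_2x2(src_pts, dst_pts):
    """Least-squares estimate of the 2x2 matrix H with dst ~= H @ src, from
    the four pattern-center correspondences (more than the two strictly
    needed, so the fit averages out measurement noise)."""
    src = np.asarray(src_pts, dtype=float)   # shape (N, 2), measured centers
    dst = np.asarray(dst_pts, dtype=float)   # shape (N, 2), ideal centers
    # Stacked per-point relations dst = src @ H.T, solved in least squares.
    ht, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return ht.T

# Hypothetical usage: a pattern mounted rotated by -5 degrees.
theta = np.deg2rad(-5.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
ideal = np.array([[100., 100.], [-100., 100.], [-100., -100.], [100., -100.]])
measured = ideal @ rot.T                 # centers as they appear on the sensor
H = fit_affine_2x2(measured, ideal)      # recovers a rotation by about +5 deg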
Next, another example in which the assembly accuracy of the imaging device is low will be described. For example, consider the case where, as in FIG. 27, the imaging pattern 105 is arranged tilted obliquely in the thickness direction. FIG. 28 shows an example of the sensor image when the point light source 2001 is photographed in this state.
FIG. 28 is a diagram illustrating an example of a sensor image in a state in which the imaging pattern is tilted in the thickness direction. In this case, as shown in FIG. 28, the sensor image has trapezoidal distortion. Trapezoidal distortion cannot be fully corrected by an affine matrix such as that of equation (18), but by using a homography matrix it can be corrected more easily and with higher accuracy.
A method of correcting this will now be described. First, the center coordinates 2801 to 2804 of each pattern are calculated by a cross-correlation operation with a reference pattern (for example, an image of the imaging pattern 105 enlarged by the factor α obtained from the subject distance). Next, the coordinates of the centroid O of the four center coordinates 2801 to 2804 are calculated, and the image is placed so that the centroid O coincides with the origin, as shown in FIG. 29.
FIG. 29 is a diagram illustrating an example of pattern center coordinates in a state in which the imaging pattern is tilted in the thickness direction. In this case, converting the coordinates (x1, y1) of the pattern center 2801 in the first quadrant of FIG. 29 into the coordinates (x0, y0) of the pattern center 2201 of FIG. 22 is a coordinate transformation
using a matrix H that specifies the amount of movement on the sensor surface, in homogeneous coordinates:

  s·(x0, y0, 1)ᵀ = M·H·(x1, y1, 1)ᵀ   (20)

This matrix H is a matrix with 3×3 elements generally called a homography matrix, and its elements can be calculated from the relationships between the four pairs of coordinates (in this example, the center coordinates 2201 to 2204 and the center coordinates 2801 to 2804). The matrix M is used in order to correct the image according to the distance f′ of the point light source 2001′ in use. As a result, the trapezoidal distortion (keystoning) and rotation of the image of FIG. 28 are corrected so that the image always has the same pattern centers; in this example, the image can then simply be divided by quadrant. A sketch of estimating such a homography from four correspondences follows.
The configuration and processing for realizing the above distortion correction will be described with reference to FIGS. 30 to 33. FIG. 30 shows a configuration for realizing the distortion correction.
FIG. 30 is a diagram illustrating a configuration example of an imaging device according to a third embodiment of the present invention. The imaging device shown in FIG. 30 is basically the same as the imaging device according to the second embodiment shown in FIG. 19, but differs in part. The description below focuses on the differences.
The imaging device according to the third embodiment includes a correction value storage unit 3001, a parameter storage unit that measures and stores correction parameters including the distortion-correction matrices H and M, and a fringe scan processing unit 3002 that performs distortion correction using the correction parameters. As shown in FIG. 31, the correction value storage unit 3001 may be included in the imaging module 102. In that case, the correction value storage unit 3001 operates as a parameter output unit that outputs the correction values, that is, the parameters for correction, in response to requests via an API (Application Programming Interface).
FIG. 31 is a diagram illustrating a configuration example of an imaging device according to a modification of the third embodiment of the present invention. Considering that the correction parameters vary between individual units because of mounting variations of the imaging module 102, in a configuration in which the imaging module 102 is separate from the imaging device 101, the form in which the correction value storage unit 3001 accompanies the imaging module 102 as in FIG. 31 is more versatile. In either case, the method of calculating the correction values (the calibration method) is the same. First, the correction value calculation method (calibration method) will be described.
FIG. 32 is a diagram illustrating an example of the processing flow of calibration. First, the person performing the calibration directs a point light source 2001 on the optical axis of the imaging module 102 toward the imaging module 102, as in FIG. 20 (step 3201). This point light source 2001 may be any light source, such as an LED (Light Emitting Diode) light source or the sun, that is at infinity or whose distance to the subject is known.
Then, the person performing the calibration causes the imaging module 102 to acquire a sensor image (step 3202).
Then, the fringe scan processing unit 3002 calculates the center coordinates of the FZA (Fresnel Zone Aperture) (step 3203). Specifically, the fringe scan processing unit 3002 calculates the center coordinates of each pattern (e.g., the center coordinates 2801 to 2804) by a cross-correlation operation with a reference pattern (for example, an image of the imaging pattern 105 enlarged by the factor α obtained from the subject distance).
Then, the fringe scan processing unit 3002 calculates the centroid O from this group of center coordinates (step 3204) and places the centroid O at the origin.
Then, the fringe scan processing unit 3002 calculates the correction matrix H (step 3205). Specifically, the fringe scan processing unit 3002 calculates the 3×3 elements of the homography matrix H serving as the correction matrix from the relationships between the shifts of the four center coordinates (e.g., the rotation and trapezoidal-distortion relationships between the center coordinates 2201 to 2204 and the center coordinates 2801 to 2804).
Then, the fringe scan processing unit 3002 stores the correction matrix H and the position of the centroid O in the correction value storage unit 3001 as the correction parameters (step 3206).
The above is the calibration processing. Through the calibration processing, the matrix H for correcting the sensor image and the position of the centroid O can be identified and stored in the correction value storage unit 3001 as parameters; a code sketch follows.
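A minimal sketch of steps 3203 to 3206, reusing the fit_homography sketch above; it assumes each quadrant sub-image and the reference pattern have the same shape, and that FFT-based cross-correlation is an adequate center detector.

import numpy as np

def pattern_center(quadrant, reference):
    """Step 3203 for one quadrant: locate the pattern center as the peak of
    the FFT-based cross-correlation with the (magnified) reference pattern."""
    spec = np.fft.fft2(quadrant) * np.conj(np.fft.fft2(reference))
    corr = np.fft.fftshift(np.abs(np.fft.ifft2(spec)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = quadrant.shape
    return dx - w / 2, dy - h / 2   # center shift relative to the middle

def calibrate(quadrants, reference, ideal_centers):
    """Steps 3203-3206: centers, centroid O, then homography H."""
    centers = np.array([pattern_center(q, reference) for q in quadrants])
    O = centers.mean(axis=0)                        # step 3204
    H = fit_homography(centers - O, ideal_centers)  # step 3205
    return O, H                                     # step 3206: parameters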
Next, the fringe scan processing unit 3002 performs the distortion correction processing.
FIG. 33 is a diagram illustrating an example of the processing flow of sensor image distortion correction. First, the fringe scan processing unit 3002 reads the position of the centroid O from the correction value storage unit 3001 and moves the centroid O of the sensor image to the coordinates of the origin (step 3301).
Next, the fringe scan processing unit 3002 performs the correction by the matrix M (step 3302). Specifically, the fringe scan processing unit 3002 acquires the information of the focusing distance f′ from the focus setting unit 1901 and performs the correction using the matrix M corresponding to the distance f′.
 Then, the fringe scan processing unit 3002 performs correction using the matrix H (step 3303). Specifically, it applies the matrix H obtained from the correction value storage unit 3001.
 After that, the same processing as steps 1401 to 1407 of the fringe scan processing flow shown in FIG. 14 is performed on the sensor image corrected by the matrix H.
 In this example, the correction by the matrix H and the movement of the center of gravity O were processed separately. However, when a 3 × 3 homography matrix is used as in Expression (20), the correction amount for the movement of the center of gravity O can be included in the 3 × 3 matrix, so the two do not necessarily have to be processed separately.
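 The point that the translation by the centroid O can be folded into the 3 × 3 matrix can be illustrated as follows: steps 3301 to 3303 then reduce to a single homography warp. This is a minimal sketch; the magnification, the centroid offset, and the identity stand-in for H are assumed values, and nearest-neighbour inverse mapping stands in for whatever resampling an implementation actually uses.

```python
import numpy as np

def warp_image(img, H_total):
    # Inverse mapping: each output pixel is pulled from the source image
    # through the inverse of the combined correction matrix.
    h, w = img.shape
    Hi = np.linalg.inv(H_total)
    ys, xs = np.mgrid[0:h, 0:w]
    sx, sy, sw = Hi @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx = np.clip(np.round(sx / sw).astype(int), 0, w - 1)
    sy = np.clip(np.round(sy / sw).astype(int), 0, h - 1)
    return img[sy, sx].reshape(h, w)

sensor_image = np.random.rand(512, 512)  # stand-in for a captured frame
H = np.eye(3)              # calibrated homography of step 3205 (stand-in)
ox, oy = 3.5, -2.1         # assumed centroid O offset
alpha = 1.02               # assumed magnification for the distance f'
T = np.array([[1, 0, -ox], [0, 1, -oy], [0, 0, 1]], dtype=float)  # step 3301
M = np.diag([alpha, alpha, 1.0])                                  # step 3302
corrected = warp_image(sensor_image, H @ M @ T)                   # step 3303
```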
 The description above assumed that the calibration is performed in advance. However, the correction parameters may instead be recalculated during shooting when a point light source or the sun is detected, and the information in the correction value storage unit 3001 updated accordingly. That is, the calibration is not limited to advance execution and may be performed as needed in the operating environment.
 The above is the imaging device according to the third embodiment. With the imaging device according to the third embodiment, a highly accurate space-division fringe scan can be performed by correcting errors made in assembling the imaging device and errors made in manufacturing the photographing pattern, and by correcting the distortion associated with focus adjustment.
 In the configuration of the third embodiment, the distortion was corrected before the fringe scan of the sensor image (in the fringe scan processing unit 3002, the processing of steps 3301 to 3303 was performed before the processing of steps 1401 to 1407). However, when the sensor image is transmitted to another device wirelessly or by wire, it may be desirable to transmit the information unprocessed in order to prevent loss of information. That is, it may be preferable to perform the distortion correction after the fringe scan. A method of correcting distortion after the fringe scan of the sensor image will be described with reference to FIGS. 34 and 35.
 FIG. 34 is a diagram illustrating a configuration example of an imaging device according to a fourth embodiment of the present invention. The imaging device according to the fourth embodiment is substantially the same as that of the third embodiment, with some differences. It differs from the third embodiment in that the image processing unit 3401 reads and uses the information in the correction value storage unit 3001. That is, the imaging device according to the fourth embodiment does not correct the distortion of the sensor image; instead, it adds the corresponding distortion to the development pattern 801. In the third embodiment, the matrix H for converting the coordinates (x1, y1) into the coordinates (x0, y0) was calculated. In this embodiment, the transformation is conceived in reverse: the matrix H′ and the matrix M′ for converting the coordinates (x0, y0) into the coordinates (x1, y1) are calculated, as in Expression (21):
$$\lambda \begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} = H' \begin{pmatrix} x_0 \\ y_0 \\ 1 \end{pmatrix} \tag{21}$$
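 Numerically, the relation between Expressions (20) and (21) is plain matrix inversion, as the following sketch with a hypothetical calibration result shows:

```python
import numpy as np

H = np.array([[1.01, 0.002, 3.0],      # hypothetical result of step 3205
              [-0.001, 0.99, -2.0],
              [1.0e-6, 2.0e-6, 1.0]])
H_prime = np.linalg.inv(H)             # H' of Expression (21)

def apply_homography(A, x, y):
    # Map a point through a 3x3 homography with homogeneous normalization.
    px, py, pw = A @ np.array([x, y, 1.0])
    return px / pw, py / pw

x0, y0 = apply_homography(H, 120.0, 80.0)    # forward: (x1, y1) -> (x0, y0)
x1, y1 = apply_homography(H_prime, x0, y0)   # inverse: back to (120.0, 80.0)
```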
 The flow of the development process in this case is the same as the flow of the development process by the correlation development method in FIG. 15 described above, with one difference: in the fourth embodiment, the generation of the development pattern 801 in step 1502 is performed according to the processing flow shown in FIG. 35.
 FIG. 35 is a diagram showing an example of the distortion correction processing flow for the development pattern. The distortion correction process for the development pattern starts at step 1502 of the development process by the correlation development method.
 Before executing the distortion correction processing flow for the development pattern, the image processing unit 3401 calculates in advance the matrix H′ of Expression (21) as a distortion-adding matrix, in the same manner as in the calibration processing flow illustrated in FIG. 32. Specifically, the image processing unit 3401 calculates in advance the 3 × 3 elements of the homography matrix H′ serving as the correction matrix from the relationship between the four sets of coordinates (e.g., the center coordinates 2201 to 2204 and the center coordinates 2801 to 2804).
 Then, at the time of development, that is, in the distortion correction process for the development pattern, the image processing unit 3401 generates a concentric pattern image for space-division fringe scanning as shown in FIG. 20 (step 3501).
 Then, the image processing unit 3401 moves the center of gravity O (step 3502). Specifically, the image processing unit 3401 reads the position of the center of gravity O from the correction value storage unit 3001 and shifts the center of the concentric pattern image for space-division fringe scanning from the origin by the position of the center of gravity O.
 Then, the image processing unit 3401 performs correction using the matrix H′ obtained from the correction value storage unit 3001 (step 3503). Specifically, it transforms the concentric pattern image for space-division fringe scanning using the matrix H′ read from the correction value storage unit 3001.
 Then, the image processing unit 3401 performs correction using the matrix M′ obtained from the correction value storage unit 3001 (step 3504). Specifically, the image processing unit 3401 obtains the focusing distance f′ from the focus setting unit 1901, obtains from the correction value storage unit 3001 the information of the matrix M′ corresponding to the distance f′, and performs the correction.
 Then, the image processing unit 3401 divides the pattern image into four (step 3505). The subsequent processing is the same as the development processing of FIG. 15, in which the pattern is multiplied by the complex sensor image that has undergone the two-dimensional FFT operation.
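 A compact sketch of steps 3501 to 3504 follows, assuming a Gabor-zone-plate form for the concentric pattern and illustrative values for the matrices and constants. For brevity it warps the four phase patterns one at a time rather than generating a single four-quadrant image and dividing it in step 3505, and it repeats the warp helper from the earlier sketch so the block stands alone.

```python
import numpy as np

def warp_image(img, A):
    # Nearest-neighbour inverse mapping through a 3x3 matrix, as before.
    h, w = img.shape
    Ai = np.linalg.inv(A)
    ys, xs = np.mgrid[0:h, 0:w]
    sx, sy, sw = Ai @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx = np.clip(np.round(sx / sw).astype(int), 0, w - 1)
    sy = np.clip(np.round(sy / sw).astype(int), 0, h - 1)
    return img[sy, sx].reshape(h, w)

def fza_pattern(size, beta, phase):
    # Concentric pattern, assumed here as a Gabor zone plate.
    y, x = np.mgrid[0:size, 0:size] - size / 2.0
    return 0.5 * (1.0 + np.cos(beta * (x * x + y * y) + phase))

H = np.eye(3)                    # calibrated homography (stand-in value)
M = np.diag([1.02, 1.02, 1.0])   # assumed focus-dependent matrix
ox, oy = 3.5, -2.1               # centroid O read from the storage unit 3001
T = np.array([[1, 0, ox], [0, 1, oy], [0, 0, 1]], dtype=float)  # step 3502

H_prime = np.linalg.inv(H)       # matrix of step 3503
M_prime = np.linalg.inv(M)       # matrix of step 3504
patterns = [warp_image(fza_pattern(512, 0.05, phi), M_prime @ H_prime @ T)
            for phi in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]  # step 3501
```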
 Since this method distorts the development pattern 801, with the moiré development method the moiré fringes produced when developing a point light source no longer have a single frequency, and the image becomes blurred. The correlation development method therefore has a higher affinity with this method and is preferable to the moiré development method.
 The above is the imaging device according to the fourth embodiment. With the method and configuration of the fourth embodiment, a highly accurate space-division fringe scan can be performed by correcting errors made in assembling the imaging device and errors made in manufacturing the photographing pattern, and by correcting the distortion associated with focus adjustment.
 In the third and fourth embodiments, methods of correcting errors made in assembling the imaging device and errors made in manufacturing the photographing pattern, and of correcting the distortion associated with focus adjustment, were described. However, correction for defects, dirt, and dust on the photographing pattern and the image sensor was not considered. In the fifth embodiment, this defect protection method is described with reference to FIGS. 36 to 40. First, the problem posed by such defects is clarified with reference to FIG. 37.
 FIG. 37 is a diagram showing an example of an image when dirt or the like adheres to the photographing pattern. In a time-division fringe scan, the defect appears at the same position in all four patterns, so the defect is recorded at a single location in the sensor image. In a space-division fringe scan, a defect such as the defect 3801 exists only in the quadrant of the pattern 3803 and therefore cannot be matched with the other patterns. Mapping the luminance distribution along the dotted line for the quadrant of the pattern 3802 and the quadrant of the pattern 3803 in FIG. 37 yields the graphs shown in FIGS. 38 and 39, respectively.
 FIG. 38 is a diagram showing an example of the luminance distribution of an ideal sensor image. That is, as shown for the pattern 3802, a signal component is obtained in which the luminance decreases at the mask portions of the concentric pattern.
 FIG. 39 is a diagram showing an example of the luminance distribution of a defective sensor image. That is, as shown for the pattern 3803, a signal component in which the luminance decreases at the mask portions of the concentric pattern is obtained, but within part of the defect 3801 the luminance decreases regardless of the mask portions. Naturally, the degree of luminance decrease depends on the severity of the defect 3801: a minor defect causes a small decrease in luminance, while a severe defect causes a large decrease.
 In the state shown in FIG. 37, the fringe scan is not performed correctly at the defective portion, and the resulting error propagates to the entire image, degrading the image quality (a meaningful image can no longer be obtained). This is because the luminance difference in the defective area is generally larger than the luminance difference of the signal component, so its influence is large. Identifying the defect position and excluding it from the development process reduces the influence of the defect, containing it to no more than a degradation of the S/N ratio. The configuration of the fifth embodiment is shown in FIG. 36.
 FIG. 36 is a diagram illustrating a configuration example of an imaging device according to a fifth embodiment of the present invention. The imaging device according to the fifth embodiment is basically the same as that of the fourth embodiment, with some differences.
 The imaging device according to the fifth embodiment differs from that of the fourth embodiment in that it includes a defect detection unit 3601 and in that an image processing unit 3602 performs defect protection processing. The defect detection unit 3601 acquires the sensor image from the fringe scan processing unit 106 and, upon detecting an area whose luminance is lower than a defect detection threshold as shown in FIGS. 38 and 39, outputs a defect signal specifying that area and stores it in the correction value storage unit 3001. That is, information specifying the positions for which the defect signal was output is stored in the correction value storage unit 3001. Such information is not limited to, for example, information designating the coordinates of each defective point on the sensor one by one; it may be information specifying a rectangular area on the sensor, or the center and radius of the smallest circle containing the defective portion.
 The simplest defect detection method used by the defect detection unit 3601 is as described above; however, when the amplitude of the signal component is large, distinguishing a defect from the signal can be difficult. The defect detection unit 3601 may therefore determine defects by adding two sensor images whose phases differ by π, that is, whose concentric circles are in mutually inverted black-and-white states, such as the combination of the Φ = 0 and Φ = π sensor images or the combination of the Φ = π/2 and Φ = 3π/2 sensor images in the fringe scan. This processing is desirably performed between step 1401 and step 1402 of the fringe scan processing of FIG. 14. The result of performing this addition on the signals shown in FIGS. 38 and 39 is shown in FIG. 40.
 FIG. 40 is a diagram showing an example of defect detection by inverted-pattern synthesis. As shown in FIG. 40, the normal signal components cancel each other out, and only the luminance variation component remains. For this response, the defect detection unit 3601 uses defect detection thresholds set at the mean value plus or minus a constant and, for areas where the residual luminance falls below or rises above these thresholds, outputs a defect signal and stores it in the correction value storage unit 3001. With this method, stable defect detection is possible regardless of the subject. The defect detection may be performed on each of the sensor images individually, or on each pair of mutually inverted patterns.
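 The inverted-pair detection can be demonstrated on synthetic data: summing the Φ = 0 and Φ = π images cancels the zone-plate signal to a constant, so any residual deviation outside the mean ± constant band flags a defect. The pattern form, band width, and defect placement below are illustrative assumptions.

```python
import numpy as np

size = 256
y, x = np.mgrid[0:size, 0:size] - size / 2.0
r2 = x * x + y * y
img_phi0  = 0.5 * (1.0 + np.cos(0.05 * r2))            # phi = 0 image
img_phipi = 0.5 * (1.0 + np.cos(0.05 * r2 + np.pi))    # phi = pi image
img_phipi[100:110, 120:130] *= 0.2   # synthetic dirt, like the defect 3801

s = img_phi0 + img_phipi   # signal cancels to a constant, as in FIG. 40
band = 0.05                # defect detection threshold: mean +/- constant
defect_mask = np.abs(s - s.mean()) > band   # True where a defect is flagged
```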
 Then, using the defect signal output from the defect detection unit 3601, the image processing unit 3602 performs processing in which the locations of the development pattern corresponding to the defect signal are treated as NaN values and not used, that is, data mask processing.
 When the defect signal is applied not to the sensor image but to the development pattern of the image processing unit 3602, it is desirable to correct the positions corresponding to the defect signal using the matrix H′ and the matrix M′, as in the fourth embodiment.
 Alternatively, the defect signal may be input not to the image processing unit 3602 but to the fringe scan processing unit 3002 of FIG. 30 of the third embodiment, and processing may be applied so that the same location in every pattern of the sensor image (the locations where the relative position between the pattern center and the defect signal is the same) is treated as a NaN value and not used.
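 A minimal sketch of the NaN-value mask processing follows, with tiny illustrative arrays: samples flagged by the defect signal are marked NaN and skipped by a nan-aware reduction, so they contribute nothing to the correlation, which realizes the exclusion described above.

```python
import numpy as np

pattern = np.array([[0.2, 0.8, 0.5],
                    [0.9, 0.1, 0.6],
                    [0.4, 0.7, 0.3]])          # development-pattern samples
defect_mask = np.zeros_like(pattern, dtype=bool)
defect_mask[1, 1] = True                       # one sample flagged as defective

pattern_masked = pattern.copy()
pattern_masked[defect_mask] = np.nan           # NaN value: data not used
sensor = np.full_like(pattern, 0.5)            # stand-in sensor samples
corr = np.nansum(pattern_masked * sensor)      # defective sample is skipped
```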
 The fifth embodiment has been described above. With the imaging device according to the fifth embodiment, a still more accurate space-division fringe scan can be performed by correcting errors made in assembling the imaging device and errors made in manufacturing the photographing pattern, correcting the distortion associated with focus adjustment, and additionally correcting for defects, dirt, and dust on the photographing pattern and the image sensor.
 The present invention is not limited to the embodiments described above and includes various modifications. For example, the embodiments above have been described in detail for ease of understanding of the present invention, and the invention is not necessarily limited to a configuration including all of the described components.
 It is also possible to replace part of the configuration of one embodiment with the configuration of another embodiment, and to add the configuration of another embodiment to the configuration of one embodiment.
 For part of the configuration of each embodiment, other configurations can be added, deleted, or substituted.
 Each of the configurations, functions, processing units, processing means, and the like described above may be realized partly or entirely in hardware, for example by designing them as integrated circuits. The configurations, functions, and the like described above may also be realized in software, by a processor interpreting and executing programs that realize the respective functions. Information such as the programs, tables, and files that realize each function can be stored in a memory, in a recording device such as a hard disk or an SSD (Solid State Drive), or on a recording medium such as an IC card, an SD card, or a DVD.
 The control lines and information lines shown are those considered necessary for the description; not all the control lines and information lines of a product are necessarily shown. In practice, almost all the components may be considered to be interconnected.
 The present invention has been described above, with a focus on the embodiments.
 101: imaging device, 102: imaging module, 103: image sensor, 103a: pixel, 104: pattern substrate, 105: photographing pattern, 106: fringe scan processing unit, 107: image processing unit, 108: controller, 301: support member, 801: development pattern, 1300: photographing pattern, 1302: first quadrant, 1701, 1801: point, 1702, 1802: projected image, 1901: focus setting unit, 2001, 2001′: light source, 2101: second quadrant, 2102: first quadrant, 2103: third quadrant, 2104: fourth quadrant, 2401: blank area, 2201, 2301, 2501, 2601, 2801: pattern center of the first quadrant, 2502, 2602, 2802: pattern center of the second quadrant, 2503, 2603, 2803: pattern center of the third quadrant, 2504, 2604, 2804: pattern center of the fourth quadrant, 3001: correction value storage unit, 3002: fringe scan processing unit, 3401: image processing unit, 3601: defect detection unit, 3602: image processing unit, 3801: defect, 3802: pattern of the fourth quadrant, 3803: pattern of the first quadrant.

Claims (13)

  1.  An imaging device comprising:
     an image sensor that converts light into an electric signal and generates a sensor image;
     a modulator that modulates, based on a photographing pattern, the intensity of the light detected by the image sensor; and
     a parameter storage unit that stores a parameter used to execute a predetermined correction process on a plurality of the sensor images captured with mutually different photographing patterns among the sensor images.
  2.  The imaging device according to claim 1, further comprising:
     a parameter output unit that outputs the parameter in response to a request.
  3.  The imaging device according to claim 1, wherein
     the parameter includes information specifying, for each photographing pattern, an amount of movement on the sensor surface of the image sensor.
  4.  The imaging device according to claim 1, wherein
     the parameter includes, as information specifying the amount of movement on the image sensor surface for each photographing pattern, information for correcting trapezoidal distortion with reference to the center of the concentric circles of the photographing pattern.
  5.  The imaging device according to claim 1, wherein
     the photographing pattern is provided with concentric patterns, and
     the parameter includes, as information specifying the amount of movement on the sensor surface of the image sensor for each photographing pattern, information specifying the amount of movement of the center of gravity between the centers of the concentric circles of the photographing patterns, and information corresponding to a rotation angle about that center of gravity.
  6.  The imaging device according to claim 1, wherein
     the predetermined correction process is a homography transformation, and
     the parameter is information specifying a 3 × 3 matrix used for the homography transformation.
  7.  The imaging device according to claim 1, wherein
     the predetermined correction process is an affine transformation, and
     the parameter is information specifying a 2 × 2 matrix used for the affine transformation.
  8.  The imaging device according to claim 1, further comprising:
     a defect detection unit that detects a defective portion of the sensor image, wherein
     the predetermined correction process is a process of excluding the defective portion, and
     the parameter is information specifying the defective portion.
  9.  The imaging device according to claim 8, wherein
     the defect detection unit combines the sensor images obtained with photographing patterns having mutually inverted patterns among the photographing patterns, and detects, as the defective portion, a portion having an output equal to or greater than a predetermined level.
  10.  The imaging device according to claim 1, further comprising:
     a fringe scan processing unit that performs the predetermined correction process, wherein
     the fringe scan processing unit performs the predetermined correction process on each of the plurality of sensor images captured with mutually different photographing patterns among the sensor images.
  11.  The imaging device according to claim 1, further comprising:
     an image processing unit that performs the predetermined correction process, wherein
     the image processing unit performs the predetermined correction process on a predetermined development pattern applied to a composite image of the plurality of sensor images captured with mutually different photographing patterns among the sensor images.
  12.  The imaging device according to any one of claims 1 to 11, wherein
     the parameter is provided according to the distance to the subject.
  13.  An imaging method using an imaging device, the imaging device performing:
     a modulation step of modulating the intensity of light transmitted through a photographing pattern;
     an image generation step of converting the modulated light into an electric signal with an image sensor to generate a sensor image; and
     a correction step of reading, with a fringe scan processing unit, from a predetermined storage unit a parameter used to execute a predetermined correction process on a plurality of the sensor images captured with mutually different photographing patterns among the sensor images, and executing the correction process.
PCT/JP2019/010263 2018-09-18 2019-03-13 Imaging device and imaging method WO2020059181A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018173930A JP7097787B2 (en) 2018-09-18 2018-09-18 Imaging device and imaging method
JP2018-173930 2018-09-18

Publications (1)

Publication Number Publication Date
WO2020059181A1

Family

ID=69886883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/010263 WO2020059181A1 (en) 2018-09-18 2019-03-13 Imaging device and imaging method

Country Status (2)

Country Link
JP (1) JP7097787B2 (en)
WO (1) WO2020059181A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06350926A (en) * 1993-06-02 1994-12-22 Hitachi Ltd Video camera
JPH11205652A (en) * 1998-01-19 1999-07-30 Yoshikazu Ichiyama Learning digital image input device
JP2011166255A (en) * 2010-02-05 2011-08-25 Panasonic Corp Image pickup device
JP2018061109A (en) * 2016-10-04 2018-04-12 株式会社日立製作所 Imaging apparatus and imaging method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021005743A (en) * 2019-06-25 2021-01-14 株式会社日立製作所 Imaging apparatus
JP7159118B2 (en) 2019-06-25 2022-10-24 株式会社日立製作所 Imaging device
WO2023127509A1 (en) * 2021-12-27 2023-07-06 ソニーグループ株式会社 Adjustment device and operation method of adjustment device

Also Published As

Publication number Publication date
JP7097787B2 (en) 2022-07-08
JP2020048031A (en) 2020-03-26

Similar Documents

Publication Publication Date Title
JP6820908B2 (en) Imaging device
JP6491332B2 (en) Imaging device
JP6685887B2 (en) Imaging device
JP6721698B2 (en) Imaging device
WO2020059181A1 (en) Imaging device and imaging method
CN110324513B (en) Image pickup apparatus, image pickup module, and image pickup method
JP6646619B2 (en) Imaging device
JP6920974B2 (en) Distance measuring device and distance measuring method
JP6864604B2 (en) Imaging device
JP6807286B2 (en) Imaging device and imaging method
JP2023016864A (en) Imaging apparatus and method
JP6947891B2 (en) Mobile information terminal
JP6770164B2 (en) Imaging device
JP7389195B2 (en) Image generation method
JP6814762B2 (en) Imaging device
JP6636663B2 (en) Imaging device and image generation method
JP7159118B2 (en) Imaging device
JP2021064000A (en) Imaging device
JP2020098963A (en) Imaging apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19862090

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19862090

Country of ref document: EP

Kind code of ref document: A1