WO2020059181A1 - Imaging device and imaging method (Dispositif d'imagerie, et procédé d'imagerie) - Google Patents

Imaging device and imaging method (Dispositif d'imagerie, et procédé d'imagerie)

Info

Publication number
WO2020059181A1
WO2020059181A1 (PCT/JP2019/010263; JP2019010263W)
Authority
WO
WIPO (PCT)
Prior art keywords
pattern
image
imaging device
sensor
imaging
Prior art date
Application number
PCT/JP2019/010263
Other languages
English (en)
Japanese (ja)
Inventor
悠介 中村
啓太 山口
和幸 田島
Original Assignee
株式会社日立製作所 (Hitachi, Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所 (Hitachi, Ltd.)
Publication of WO2020059181A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • the present invention relates to an imaging device and an imaging method.
  • The present invention claims the priority of Japanese Patent Application No. 2018-173930 filed on September 18, 2018, and, for the designated countries in which incorporation by reference of documents is permitted, the contents described in that application are incorporated into this application by reference.
  • Patent Document 1: JP-A-2018-61109.
  • This publication states that the device includes "a modulator having a first pattern and modulating the intensity of light, an image sensor that converts light transmitted through the modulator into image data and outputs the image data, and an image processing unit for restoring an image based on a cross-correlation operation with pattern data indicating a second pattern."
  • An object of the present invention is to provide a technique for obtaining a correct developed image by correcting assembly errors of the imaging device, manufacturing errors of the photographing pattern, and distortion associated with focus adjustment.
  • To this end, an imaging device according to the present invention includes: an image sensor that converts light into an electric signal and generates a sensor image; a modulator that modulates, based on a photographing pattern, the intensity of the light detected by the image sensor; and a parameter storage unit that stores a parameter used to execute a predetermined correction process on a plurality of the sensor images captured with different photographing patterns among the sensor images.
  • FIG. 1 is a diagram illustrating a configuration example of an imaging device according to a first embodiment of the present invention.
  • FIG. 3 is a diagram illustrating a configuration example of an imaging module according to the first embodiment.
  • FIG. 4 is a diagram illustrating a configuration example of another imaging module according to the first embodiment.
  • FIG. 4 is a diagram illustrating an example of a photographing pattern and a developing pattern according to the first embodiment.
  • FIG. 6 is a diagram illustrating another example of the photographing pattern and the developing pattern according to the first embodiment.
  • FIG. 9 is a diagram illustrating an example in which an in-plane shift occurs in a projected image from a pattern substrate surface to an image sensor due to obliquely incident parallel light.
  • FIG. 3 is a diagram illustrating an example of a projected image of a photographing pattern.
  • FIG. 4 is a diagram illustrating an example of a development pattern.
  • FIG. 6 is a diagram illustrating an example of a developed image obtained by the correlation development method; a further figure shows an example of moiré fringes produced by the moiré development method.
  • FIG. 4 is a diagram illustrating an example of a developed image by a moiré developing method.
  • FIG. 9 is a diagram illustrating an example of a combination of imaging patterns of an initial phase in a fringe scan.
  • FIG. 4 is a diagram illustrating an example of a shooting pattern of a space division fringe scan.
  • FIG. 9 is a diagram illustrating an example of a processing flow of fringe scan.
  • FIG. 7 is a diagram illustrating an example of a processing flow of a development process by a correlation development method.
  • FIG. 4 is a diagram illustrating an example of a processing flow of a development process by a moire development method.
  • FIG. 4 is a diagram illustrating an example of projection of a shooting pattern when an object is at an infinite distance.
  • FIG. 9 is a diagram illustrating an example of an enlarged projection of a photographing pattern when an object is at a finite distance.
  • FIG. 6 is a diagram illustrating a configuration example of an imaging device according to a second embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an example of a positional relationship between a point light source and an imaging device.
  • FIG. 9 is a diagram illustrating an example of pattern center coordinates when a point light source is at infinity.
  • FIG. 9 is a diagram illustrating an example of pattern center coordinates when a point light source is at a finite distance.
  • FIG. 9 is a diagram illustrating an example of correcting a sensor image when a point light source is at a finite distance.
  • FIG. 5 is a diagram illustrating an example of a sensor image when the photographing pattern is rotated.
  • FIG. 9 is a diagram illustrating an example of pattern center coordinates when the photographing pattern is rotated; a further figure shows the state in which the photographing pattern is inclined in the thickness direction.
  • FIG. 9 is a diagram illustrating a configuration example of an imaging device according to a third embodiment of the present invention.
  • FIG. 14 is a diagram illustrating a configuration example of an imaging device according to a modification of the third embodiment of the present invention.
  • FIG. 9 is a diagram illustrating an example of a processing flow of calibration.
  • FIG. 11 is a diagram illustrating an example of a sensor image distortion correction processing flow.
  • FIG. 14 is a diagram illustrating a configuration example of an imaging device according to a fourth embodiment of the present invention.
  • FIG. 7 is a diagram illustrating an example of a distortion correction processing flow for the development pattern; a further figure shows a configuration example of an imaging device according to a fifth embodiment of the present invention.
  • FIG. 9 is a diagram illustrating an example of an image when dirt or the like adheres to the photographing pattern; a further figure shows an example of the luminance distribution of an ideal sensor image.
  • FIG. 7 is a diagram illustrating an example of a luminance distribution of a defective sensor image.
  • FIG. 9 is a diagram illustrating an example of defect detection by inversion pattern synthesis.
  • When referring to the number of elements (including numbers, numerical values, amounts, ranges, and the like), the number is not limited to the specific number and may be more or less than the specific number, except where the number is explicitly specified or is clearly limited to the specific number in principle.
  • Likewise, constituent elements are not necessarily essential unless otherwise specified or considered clearly essential in principle.
  • The embodiments relate to an imaging method that obtains an object image without using a lens, thereby realizing a thinner and lower-cost device.
  • In such lensless imaging, however, the operation for solving the inverse problem by signal processing is complicated and the processing load is high, which raises the hardware requirements of the information equipment.
  • FIG. 1 is a diagram showing a configuration example of an imaging apparatus according to a first embodiment of the present invention.
  • the image capturing apparatus 101 acquires an image of an external object without using a lens to form an image.
  • As shown in FIG. 1, the imaging apparatus 101 is composed of an imaging module 102, a fringe scan processing unit 106, an image processing unit 107, and a controller 108. FIG. 2 shows an example of the imaging module 102.
  • FIG. 2 is a diagram illustrating a configuration of the imaging module according to the first embodiment.
  • the imaging module 102 includes an image sensor 103, a pattern substrate 104, and an imaging pattern 105.
  • the pattern substrate 104 is fixed in close contact with the light receiving surface of the image sensor 103, and a pattern 105 for photographing is formed on the pattern substrate 104.
  • the pattern substrate 104 is made of a material that is transparent to visible light, such as glass or plastic.
  • the photographing pattern 105 is a concentric lattice pattern in which the distance between the lattice patterns, that is, the pitch, is reduced in inverse proportion to the radius from the center toward the outside.
  • The imaging pattern 105 is formed by depositing a metal such as aluminum or chromium by, for example, a sputtering method used in semiconductor processes; shading is given by the pattern of areas with and without the deposited metal. The formation of the photographing pattern 105 is not limited to this; for example, it may be formed by printing shading with an inkjet printer or the like. Here, visible light has been described as an example.
  • For far-infrared imaging, for example, the pattern substrate 104 is made of a material that is transparent to far-infrared rays, such as germanium, silicon, or chalcogenide.
  • In general, a material transparent to the wavelength to be imaged may be used for the pattern substrate 104, and a material that blocks that wavelength, such as a metal, may be used for the imaging pattern 105.
  • the pattern substrate 104 and the imaging pattern 105 can also be said to be modulators that modulate the intensity of light incident on the image sensor 103. Note that, here, a method of forming the imaging pattern 105 on the pattern substrate 104 in order to realize the imaging module 102 has been described. However, the imaging module 102 may be realized by a configuration as shown in FIG.
  • FIG. 3 is a diagram illustrating a configuration example of another imaging module according to the first embodiment.
  • the imaging pattern 105 is formed in a thin film and is held by the support member 301.
  • The angle of view can be changed depending on the thickness of the pattern substrate 104. Therefore, for example, if the configuration shown in FIG. 3 is used and the length of the support member 301 can be changed, the angle of view can be changed at the time of shooting.
  • the pixels 103a which are light receiving elements, are regularly arranged in a grid pattern.
  • the image sensor 103 converts a light image received by the pixel 103a into an image signal which is an electric signal.
  • the intensity of light transmitted through the imaging pattern 105 is modulated by the pattern, and the transmitted light is received by the image sensor 103.
  • the image sensor 103 is, for example, a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
  • the image signal output from the image sensor 103 is subjected to processing such as noise removal by the fringe scan processing unit 106, and the data processed by the image processing unit 107 is output to the controller 108.
  • the controller 108 converts the data format so as to conform to an interface such as USB (Universal Serial Bus) and outputs the converted data.
  • The photographing pattern 105 is a concentric pattern in which the pitch becomes finer in inverse proportion to the radius from the center, and is defined by equation (1) using the radius r from the reference coordinates at the center of the concentric circles and a coefficient β.
  • GZP: Gabor Zone Plate; FZP: Fresnel Zone Plate.
  • FIG. 4 is a diagram showing an example of a photographing pattern and a developing pattern according to the first embodiment. Specifically, FIG. 4 is an example of a Gabor zone plate represented by the above equation (1).
  • FIG. 5 is a diagram showing another example of the photographing pattern and the developing pattern according to the first embodiment. Specifically, this is an example of a Fresnel zone plate using a pattern obtained as a result of binarizing Expression (1) with a threshold value of 1.
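  • As a reference sketch (not the patent's implementation), the following Python snippet generates such a concentric pattern. It assumes the common Gabor zone plate transmittance 1/2·(1 + cos βr²) for equation (1), binarizes it to obtain a Fresnel-zone-plate-like pattern, and uses arbitrary values for the coefficient β, pixel pitch, and image size.

```python
import numpy as np

def gabor_zone_plate(n_pixels, pitch, beta, cx=0.0, cy=0.0, phase=0.0):
    """Gray-scale concentric pattern 0.5 * (1 + cos(beta * r^2 + phase)).

    n_pixels : side length of the square pattern in pixels
    pitch    : physical pixel pitch (arbitrary unit)
    beta     : coefficient controlling how quickly the pitch shrinks
    cx, cy   : pattern center offset (same unit as pitch)
    phase    : initial phase, used later for fringe scanning
    """
    axis = (np.arange(n_pixels) - n_pixels / 2) * pitch
    x, y = np.meshgrid(axis, axis)
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return 0.5 * (1.0 + np.cos(beta * r2 + phase))

def fresnel_zone_plate(n_pixels, pitch, beta, **kw):
    """Binarized version of the same pattern (fully transparent / fully opaque)."""
    return (gabor_zone_plate(n_pixels, pitch, beta, **kw) >= 0.5).astype(float)

if __name__ == "__main__":
    gzp = gabor_zone_plate(512, 1.0, 5e-5)   # example values, not from the patent
    fzp = fresnel_zone_plate(512, 1.0, 5e-5)
    print(gzp.shape, float(gzp.min()), float(gzp.max()), float(fzp.mean()))
```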
  • Suppose that, as shown in FIG. 6, a parallel beam is incident at an angle θ0 in the x-axis direction on the pattern substrate 104 of thickness d on which the photographing pattern 105 is formed. If the refraction angle in the pattern substrate 104 is θ, the projection of the photographing pattern 105 onto the image sensor 103 is shifted in the plane by the amount k given by equation (2) (geometrically, k = d·tan θ).
  • FIG. 7 is a view showing an example of a projected image of the photographing pattern 105. When parallel light is incident as shown in FIG. 6, the photographing pattern 105 is projected onto the image sensor 103 shifted by k, as given by equation (2) above. This shifted projection is the output of the imaging module 102.
  • The image processing unit 107 then performs development processing. Here, the development processing by the correlation development method and by the moiré development method will be described.
  • In the correlation development method, the image processing unit 107 calculates the cross-correlation function between the projected image of the photographing pattern 105 shown in FIG. 7 and the development pattern 801 shown in FIG. 8, whereby a bright spot with a shift amount of k can be obtained.
  • If this cross-correlation is computed directly by a two-dimensional convolution, however, the amount of calculation becomes large.
  • FIG. 8 is a diagram showing an example of a development pattern.
  • The development pattern 801 has a pattern similar to the Gabor zone plate shown in FIG. 4 or the FZP (Fresnel zone plate) shown in FIG. 5. That is, in the present embodiment, the development pattern 801 does not need to have a physical entity, and may exist only as information used in the image processing.
  • FIG. 9 is a diagram showing an example of a developed image by the correlation developing method.
  • When the image is developed by the correlation development method, a developed image in which the bright point is shifted by k is obtained, as described above.
  • Since the development pattern 801 uses a Gabor zone plate or a Fresnel zone plate like the imaging pattern 105, it can be expressed using an initial phase Φ as follows.
  • Here, F represents the Fourier transform operation, u is the frequency coordinate in the x direction, and δ(·) is the delta function.
  • the equation after the Fourier transform is also a Fresnel zone plate or a Gabor zone plate. Therefore, the development pattern after Fourier transform may be directly generated based on this equation. As a result, the amount of calculation can be reduced.
  • The term exp(-iku) in this exponential function is the signal component; when this term is Fourier-transformed, it yields a bright point at the position shifted by k.
  • This bright point indicates a light beam at infinity, and is nothing but a captured image obtained by the imaging device 101 in FIG.
  • The pattern is not limited to a Fresnel zone plate or a Gabor zone plate; for example, a random pattern may be used as long as the autocorrelation function of the pattern has a single peak.
  • a moiré fringe as shown in FIG. 10 is generated by multiplying the projected image of the photographing pattern 105 shown in FIG. 7 by the developing pattern 801 shown in FIG. Is subjected to Fourier transform, so that a bright spot having a shift amount of (k ⁇ / ⁇ ) as shown in FIG. 11 can be obtained.
  • FIG. 10 is a diagram showing an example of moire fringes by the moire development method. Specifically, as shown in FIG. 10, the result of multiplication of the projected image of the photographing pattern 105 shown in FIG. 7 and the development pattern 801 shown in FIG. 8 is obtained as moire fringes.
  • The third term of this expansion is the signal component; it forms straight, equally spaced fringes in the direction of the displacement between the two patterns over the entire overlapping region.
  • a fringe generated at a relatively low spatial frequency due to the superposition of such fringes is called a Moire fringe.
  • Here, F represents the Fourier transform operation, u is the frequency coordinate in the x direction, and δ(·) is the delta function.
  • FIG. 11 is a view showing an example of a developed image by the moiré developing method.
  • the moire fringe may be realized by a pattern other than the Fresnel zone plate or the Gabor zone plate, for example, an elliptical pattern.
  • FIG. 12 shows an example of a plurality of patterns.
  • FIG. 12 is a diagram showing an example of a combination of imaging patterns of an initial phase in a fringe scan.
  • By the fringe scan, a complex sensor image is generated from the sensor images captured with the respective initial phases Φ. The initial phases may be set so as to equally divide the angle between 0 and 2π, and are not limited to the phases used here.
  • a method of switching patterns by time division in fringe scan processing and a method of switching patterns by space division can be considered.
  • In the time division method, for example, a liquid crystal display element that can electrically switch between and display the plurality of initial phases shown in FIG. 12 may be used as the photographing pattern 105.
  • the switching timing of the liquid crystal display element and the shutter timing of the image sensor 103 are controlled in synchronization, and after acquiring four images in time series, the fringe scan processing unit 106 performs a fringe scan operation.
  • In the space division method, the fringe scan processing unit 106 divides the acquired image into four images corresponding to the respective initial-phase patterns and performs the fringe scan operation.
  • FIG. 13 is a diagram showing an example of a photographing pattern of the space division fringe scan.
  • FIG. 14 is a diagram showing an example of a processing flow of the fringe scan.
  • the fringe scan processing unit 106 acquires sensor images based on a plurality of photographing patterns output from the image sensor 103.
  • When the space division fringe scan is adopted, the acquired sensor image needs to be divided according to the individual photographing patterns, so a plurality of sensor images are obtained by dividing the image into predetermined areas (step 1401).
  • When the time division fringe scan is adopted, division is not necessary because a plurality of sensor images with different photographing patterns are obtained over time.
  • the fringe scan processing unit 106 initializes a complex sensor image for output (step 1402).
  • the fringe scan processing unit 106 acquires the sensor image of the first initial phase ⁇ (step 1403).
  • the fringe scan processing unit 106 multiplies exp (i ⁇ ) according to the initial phase ⁇ (step 1404).
  • the fringe scan processing unit 106 adds the multiplication result to the complex sensor image (step 1405).
  • the fringe scan processing unit 106 outputs a complex sensor image (step 1407).
  • the processing by the fringe scan processing unit 106 in steps 1401 to 1407 described above corresponds to the above equation (10).
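  • As a rough sketch of steps 1401 to 1407, the loop below combines the per-phase sensor images into one complex sensor image. It assumes the fringe scan operation is the sum of each sensor image weighted by exp(iΦ), as described above for equation (10); the function and variable names are illustrative, not the patent's.

```python
import numpy as np

def fringe_scan(sensor_images, phases):
    """Combine per-phase sensor images into a complex sensor image.

    sensor_images : list of 2-D arrays, one per initial phase (steps 1401/1403)
    phases        : matching list of initial phases, e.g. [0, pi/2, pi, 3*pi/2]
    """
    complex_image = np.zeros_like(np.asarray(sensor_images[0], dtype=float),
                                  dtype=complex)                  # step 1402
    for image, phi in zip(sensor_images, phases):                 # step 1403
        complex_image += image * np.exp(1j * phi)                 # steps 1404-1405
    return complex_image                                          # step 1407

def split_quadrants(sensor_image):
    """Space-division variant of step 1401: cut one frame into four quadrant images."""
    h, w = sensor_image.shape
    return [sensor_image[:h // 2, :w // 2], sensor_image[:h // 2, w // 2:],
            sensor_image[h // 2:, :w // 2], sensor_image[h // 2:, w // 2:]]
```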
  • image processing in the image processing unit 107 will be described.
  • FIG. 15 is a diagram showing an example of a processing flow of the development processing by the correlation development method.
  • The image processing unit 107 obtains the complex sensor image output from the fringe scan processing unit 106 and performs a two-dimensional fast Fourier transform (FFT) on the complex sensor image (step 1501).
  • the image processing unit 107 generates a predetermined development pattern 801 to be used for the development process, and multiplies the complex sensor image subjected to the two-dimensional FFT operation (step 1502).
  • the image processing unit 107 performs an inverse two-dimensional FFT operation (step 1503).
  • the result of this operation is a complex number.
  • The image processing unit 107 takes the absolute value or the real part of the result of the inverse two-dimensional FFT operation to convert the developed image into real values, thereby developing the captured image (step 1504).
  • the image processing unit 107 performs contrast enhancement processing on the obtained developed image (step 1505). Further, the image processing unit 107 performs color balance adjustment (step 1506) and the like, and outputs the captured image.
  • the above is the development processing by the correlation development method.
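  • A minimal sketch of the correlation development flow of FIG. 15 is shown below. It assumes the development is an FFT-based cross-correlation of the complex sensor image with the development pattern (steps 1501 to 1504) and omits the contrast enhancement and color balance steps; names are illustrative.

```python
import numpy as np

def develop_by_correlation(complex_sensor_image, development_pattern):
    """Cross-correlation development, steps 1501-1504 in minimal form."""
    sensor_spectrum = np.fft.fft2(complex_sensor_image)            # step 1501: 2-D FFT
    pattern_spectrum = np.conj(np.fft.fft2(development_pattern))   # step 1502: pattern spectrum (conjugate -> correlation)
    developed = np.fft.ifft2(sensor_spectrum * pattern_spectrum)   # step 1503: inverse 2-D FFT
    return np.abs(np.fft.fftshift(developed))                      # step 1504: realization (absolute value)
```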
  • FIG. 16 is a diagram showing an example of a processing flow of a developing process by the moiré developing method.
  • The image processing unit 107 acquires the complex sensor image output from the fringe scan processing unit 106, generates the predetermined development pattern 801 used for the development process, and multiplies the complex sensor image by the development pattern (step 1601).
  • the image processing unit 107 obtains a frequency spectrum by a two-dimensional FFT operation (step 1602).
  • the image processing unit 107 cuts out data of a necessary frequency region from the frequency spectrum obtained in step 1602 (step 1603).
  • The subsequent realization processing (step 1504), contrast enhancement processing (step 1505), and color balance adjustment (step 1506) are the same as the corresponding processes in steps 1504 to 1506 shown in FIG. 15.
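  • Likewise, the moiré development flow of FIG. 16 can be sketched as follows; it assumes the development pattern is multiplied directly with the complex sensor image and the spectrum is then cropped (steps 1601 to 1603). The size of the cut-out frequency region is an arbitrary choice here.

```python
import numpy as np

def develop_by_moire(complex_sensor_image, development_pattern, crop=None):
    """Moire development, steps 1601-1603 in minimal form."""
    moire = complex_sensor_image * development_pattern            # step 1601: moire fringes
    spectrum = np.fft.fftshift(np.fft.fft2(moire))                # step 1602: frequency spectrum
    if crop is not None:                                          # step 1603: cut out the needed region
        cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
        spectrum = spectrum[cy - crop:cy + crop, cx - crop:cx + crop]
    return np.abs(spectrum)                                       # realization as in step 1504
```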
  • FIG. 17 shows the manner in which the photographic pattern 105 is projected onto the image sensor 103 when the subject described above is sufficiently far (in the case of infinity).
  • FIG. 17 is a diagram illustrating an example of projection of a photographing pattern when an object is at an infinite distance.
  • a spherical wave from a point 1701 constituting a distant object becomes a plane wave while propagating a sufficiently long distance and irradiates the imaging pattern 105.
  • When the projected image 1702 is projected onto the image sensor 103, it has almost the same shape as the photographing pattern 105.
  • a single luminescent spot can be obtained by performing development processing on the projection image 1702 using the development pattern.
  • imaging of an object at a finite distance will be described.
  • FIG. 18 is a diagram showing an example of an enlarged projection of the photographing pattern when the object is at a finite distance.
  • the projection of the imaging pattern 105 onto the image sensor 103 is enlarged more than the imaging pattern 105.
  • When a spherical wave from a point 1801 on the object irradiates the imaging pattern 105 and its projected image 1802 is projected onto the image sensor 103, the projected image is enlarged almost uniformly. Note that this enlargement factor α is calculated using the distance f from the photographing pattern 105 to the point 1801.
  • If the development pattern 801 is enlarged in accordance with the uniformly enlarged projected image of the photographing pattern 105, a single bright point can again be obtained for the enlarged projected image 1802.
  • To this end, it is possible to correct the coefficient β of the development pattern 801 to β/α².
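  • As a small illustration of this coefficient correction, and assuming the simple geometric magnification α = (f + d)/f for a pattern-to-sensor distance d (the patent states only that α is calculated from f, so this formula is an assumption), the correction can be written as:

```python
def corrected_beta(beta, f, d):
    """Correct the development pattern coefficient for a subject at finite distance f.

    beta : coefficient of the photographing pattern (equation (1))
    f    : distance from the photographing pattern to the subject point 1801
    d    : distance from the photographing pattern to the image sensor (assumed geometry)
    """
    alpha = (f + d) / f          # assumed projection enlargement factor
    return beta / alpha ** 2     # beta' = beta / alpha^2 as described above
```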
  • light from point 1801 at a distance that is not necessarily infinity can be selectively reproduced.
  • FIG. 19 shows the configuration in this case.
  • FIG. 19 is a diagram showing a configuration of an imaging device according to the second embodiment of the present invention.
  • the imaging device according to the second embodiment basically has the same configuration as the imaging device according to the first embodiment. However, what differs from the first embodiment is the presence of a focus setting unit 1901.
  • the focus setting unit 1901 receives the setting of the focus distance by using a knob provided on the imaging apparatus 101 or a GUI (Graphical User Interface) of a smartphone, and outputs the focus distance information to the image processing unit 107.
  • The fact that focus adjustment after shooting is possible means that the image processing unit 107 has depth information, and various functions such as autofocus and distance measurement can be realized in the image processing unit 107.
  • the processing using the development pattern 801 can be performed independently, and the processing can be simplified.
  • FIG. 20 is a diagram illustrating an example of a positional relationship between a point light source and an imaging device. As shown in FIG. 20, a case where the object to be imaged is at a far distance f (point light source 2001) and a case where it is at a short distance f '(point light source 2001') are considered.
  • FIG. 21 is a diagram illustrating an example of a sensor image for each distance of a point light source.
  • Light from the object to be imaged passes through the photographing pattern 105 for the space division fringe scan, and the shadow projected onto the image sensor 103 changes depending on whether the object is at the distance f or at the distance f′.
  • The fringe scan processing unit 106 divides the image into four in correspondence with the respective initial-phase patterns, but the position of each pattern changes with the distance. For example, the center of the concentric circles in the first quadrant 1302 shifts to the upper right in the first quadrant 2102 at the distance f′. Thus, if each quadrant is always divided at the same place, a shift occurs and the effect of the fringe scan is reduced.
  • After the correction described below, the data may simply be divided into four images at the quadrants.
  • the center of each pattern when the center of the sensor image in FIG. 21 is arranged at the origin is indicated by x.
  • FIG. 22 is a diagram showing an example of pattern center coordinates when the point light source is at infinity.
  • the coordinates of the pattern center 2201 in the first quadrant are represented by (x0, y0).
  • FIG. 23 is a diagram showing an example of the pattern center coordinates when the point light source is at a finite distance.
  • the coordinates of the pattern center 2301 in the first quadrant are represented by (x1, y1).
  • To convert the coordinates (x1, y1) to the coordinates (x0, y0), the coordinate conversion may be performed using a matrix M that specifies the amount of movement on the sensor surface.
  • FIG. 24 shows a sensor image obtained as a result of the conversion.
  • FIG. 24 is a diagram illustrating an example of correcting the sensor image when the point light source is at a finite distance. As shown in FIG. 24, the image is reduced so that the same positions are always the pattern centers. Therefore, in this example, dividing the image into four at the quadrants yields a sensor image for each quadrant. Since the image is reduced, the obtained sensor image is smaller than the image size before correction. In this case, the margin area 2401 may be filled with a constant such as 0, or may be set to a NAN value and excluded from use.
  • FIG. 25 shows an example of a sensor image in a case where the photographing pattern 105 is attached with a shift.
  • FIG. 25 is a diagram illustrating an example of a sensor image when the photographing pattern is rotated.
  • Specifically, it shows an example of a sensor image in a case where the photographing pattern 105 is attached rotated by an angle θ about the origin.
  • the center coordinates 2501 to 2504 of each pattern are calculated by cross-correlation calculation with a reference pattern (for example, an image obtained by enlarging the photographing pattern 105 at an enlargement ratio ⁇ obtained from the subject distance).
  • Next, the coordinates of the center of gravity O of the four center coordinates 2501 to 2504 are calculated, and the coordinates are arranged so that the center of gravity O overlaps the origin, as shown in FIG. 26.
  • FIG. 26 is a diagram showing an example of the pattern center coordinates when the photographing pattern is rotated. At this time, to convert the coordinates (x1, y1) of the pattern center 2601 in the first quadrant of FIG. 26 to the coordinates (x0, y0) of the pattern center 2201 in FIG. 22, the coordinate transformation may be performed using a matrix H that specifies the amount of rotational movement on the sensor surface, as in equation (18).
  • the reason why the matrix M is used in the equation (18) is to correct an image according to the distance f of the point light source 2001 used.
  • As a result, the image of FIG. 25 is corrected by rotating it by the angle θ, so that it always has the same pattern centers. Therefore, in the example of FIGS. 25 and 26, the image may simply be divided at the quadrants.
  • The matrix H shown in Expressions (18) and (19) is a 2×2 matrix generally called an affine matrix, and its elements can be calculated from the relationship between the four pairs of coordinates (in this example, the center coordinates 2201 to 2204 and the center coordinates 2501 to 2504 of each pattern). Strictly, two point pairs suffice for calculating the affine matrix, but a more accurate affine matrix can be obtained by applying the least squares method using the information of all four points; a sketch is given below.
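  • The following is a minimal sketch (not the patent's code) of fitting such a 2×2 matrix by least squares from the detected pattern centers; it assumes the measured and reference centers are already expressed relative to the center of gravity O, so only the linear rotation/scale part is estimated, without translation.

```python
import numpy as np

def fit_affine_2x2(measured_centers, reference_centers):
    """Least-squares 2x2 matrix H such that reference ~= H @ measured.

    measured_centers, reference_centers : (N, 2) arrays of pattern center
    coordinates (here N = 4 quadrants), already shifted so that the center
    of gravity O lies at the origin.
    """
    X = np.asarray(measured_centers, dtype=float)    # e.g. center coordinates 2501-2504
    Y = np.asarray(reference_centers, dtype=float)   # e.g. center coordinates 2201-2204
    Ht, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)  # solve X @ H.T ~= Y
    return Ht.T

# Usage sketch: H = fit_affine_2x2(rotated_centers, ideal_centers); a pixel at
# coordinates (x1, y1) is then mapped to H @ [x1, y1] when resampling the image.
```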
  • Next, consider the case where the photographing pattern 105 is attached inclined in the thickness direction, and the point light source 2001 is photographed in this state.
  • FIG. 28 is a diagram illustrating an example of a sensor image in a state where the photographing pattern is inclined in the thickness direction.
  • the sensor image has a trapezoidal distortion.
  • the trapezoidal distortion cannot be completely corrected by the affine matrix as in the equation (18), but the trapezoidal distortion can be corrected more easily and accurately by using the homography matrix.
  • the center coordinates 2801 to 2804 of each pattern are calculated by cross-correlation calculation with a reference pattern (for example, an image obtained by enlarging the photographing pattern 105 at an enlargement ratio ⁇ obtained from the subject distance).
  • Next, the coordinates of the center of gravity O of the four center coordinates 2801 to 2804 are calculated, and the coordinates are arranged so that the center of gravity O overlaps the origin, as shown in FIG. 29.
  • FIG. 29 is a diagram illustrating an example of the pattern center coordinates in the state where the photographing pattern is inclined in the thickness direction. At this time, to convert the coordinates (x1, y1) of the pattern center 2801 in the first quadrant in FIG. 29 to the coordinates (x0, y0) of the pattern center 2201 in FIG. 22, the coordinates may be converted using a matrix H that specifies the amount of movement on the sensor surface, as in equation (20).
  • This matrix H is a 3×3 matrix generally called a homography matrix, and its elements can be calculated from the relationship of the four pairs of coordinates (in this example, the center coordinates 2201 to 2204 and the center coordinates 2801 to 2804 of each pattern).
  • As before, the matrix M is used to correct the image according to the distance f′ of the point light source 2001′. As a result, the image in FIG. 28 is corrected for trapezoidal distortion (tilt) and rotation, and always has the same pattern centers. Therefore, in this example as well, the image may simply be divided at the quadrants.
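  • A homography can similarly be estimated from the four pairs of pattern center coordinates. The sketch below uses a standard direct linear transform (DLT); it is an illustration under simplified assumptions (no coordinate normalization), not the patent's implementation.

```python
import numpy as np

def fit_homography(src_points, dst_points):
    """3x3 homography H mapping src (x1, y1) to dst (x0, y0) from >= 4 point pairs (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src_points, dst_points):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)            # null vector of the stacked constraints
    return H / H[2, 2]

def apply_homography(H, points):
    """Map (N, 2) points through the homography (with homogeneous division)."""
    pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```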
  • FIG. 30 shows a configuration for realizing distortion correction.
  • FIG. 30 is a diagram illustrating a configuration example of an imaging device according to a third embodiment of the present invention.
  • the imaging device shown in FIG. 30 is basically the same as the imaging device according to the second embodiment shown in FIG. 19, but is partially different. The following mainly describes the differences.
  • The imaging apparatus includes a correction value storage unit 3001, serving as a parameter storage unit, that stores the measured correction parameters including the matrix H and the matrix M for distortion correction, and a fringe scan processing unit 3002 that performs distortion correction using those correction parameters.
  • the correction value storage unit 3001 may be included in the imaging module 102.
  • The correction value storage unit 3001 operates as a parameter output unit that outputs a correction value, that is, a parameter for correction, in response to a request via an API (Application Programming Interface).
  • FIG. 31 is a diagram illustrating a configuration example of an imaging device according to a modification of the third embodiment of the present invention.
  • The form in which the correction value storage unit 3001 is provided inside the imaging module 102, as shown in FIG. 31, is more versatile.
  • In either form, the method of calculating the correction values (the calibration method) is common. First, the calibration method will be described.
  • FIG. 32 is a diagram illustrating an example of a processing flow of calibration.
  • The operator of the calibration places a point light source 2001 on the optical axis of the imaging module 102 and irradiates it toward the imaging module 102, as shown in FIG. 20 (step 3201).
  • The point light source 2001 may be any light source, such as an LED (Light Emitting Diode) light source or the sun, whose distance to the subject (or the fact that it is at infinity) is known.
  • the operator of the calibration causes the imaging module 102 to acquire a sensor image (Step 3202).
  • Next, the fringe scan processing unit 3002 calculates the center coordinates of the FZA (Fresnel Zone Aperture) (step 3203). More specifically, the fringe scan processing unit 3002 calculates the center coordinates of each pattern (e.g., the center coordinates 2801 to 2804) by a cross-correlation operation with a reference pattern (for example, an image obtained by enlarging the photographing pattern 105 at the enlargement factor α obtained from the subject distance).
  • the fringe scan processing unit 3002 calculates the center of gravity O from the central coordinate group (step 3204). Then, the fringe scan processing unit 3002 arranges the center of gravity O at the origin.
  • Next, the fringe scan processing unit 3002 calculates the correction matrix H (step 3205). Specifically, the fringe scan processing unit 3002 calculates the 3×3 elements of the homography matrix H from the relationship between the displacements of the four center coordinates (e.g., the rotation and trapezoidal distortion between the center coordinates 2201 to 2204 and the center coordinates 2801 to 2804).
  • the fringe scan processing unit 3002 stores the correction matrix H and the position of the center of gravity O as correction parameters in the correction value storage unit 3001 (step 3206).
  • In this way, the matrix H for correcting the sensor image and the position of the center of gravity O can be determined and stored in the correction value storage unit 3001 as parameters.
  • the fringe scan processing unit 3002 performs distortion correction processing.
  • FIG. 33 is a diagram illustrating an example of a sensor image distortion correction processing flow.
  • the fringe scan processing unit 3002 reads the position of the center of gravity O from the correction value storage unit 3001, and moves the center of gravity O of the sensor image to the coordinates serving as the origin (step 3301).
  • the fringe scan processing unit 3002 performs correction using the matrix M (step 3302). Specifically, the fringe scan processing unit 3002 obtains information on the focusing distance f ′ from the focus setting unit 1901 and performs correction using a matrix M corresponding to the distance f ′.
  • the fringe scan processing unit 3002 performs correction using the matrix H (step 3303). Specifically, the fringe scan processing unit 3002 performs correction using the matrix H acquired from the correction value storage unit 3001.
  • In this example, the matrix H and the movement of the position of the center of gravity O are processed separately. However, when a 3×3 homography matrix is used as in Expression (20), the correction amount for the movement of the center of gravity O can be included in the matrix, so the processing does not necessarily have to be performed separately. A sketch of the combined correction is given below.
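  • A rough sketch of steps 3301 to 3303 follows: the sensor image is resampled so that the center of gravity O moves to the origin, and the stored matrices M and H are then applied. It uses inverse mapping with nearest-neighbor sampling and treats both M and H as 3×3 matrices; it is only an illustration of the idea, not the patent's implementation.

```python
import numpy as np

def warp_with_matrix(image, T):
    """Resample `image` under the 3x3 coordinate transform T (output coords = T(input coords)).

    Inverse mapping with nearest-neighbor sampling; coordinates are taken relative
    to the image center, matching the centered pattern coordinates used above.
    """
    h, w = image.shape
    T_inv = np.linalg.inv(T)
    ys, xs = np.mgrid[0:h, 0:w]
    xc, yc = xs - w / 2.0, ys - h / 2.0                       # centered output coordinates
    src = np.stack([xc.ravel(), yc.ravel(), np.ones(h * w)])
    sx, sy, sw = T_inv @ src
    sx = np.clip(np.rint(sx / sw + w / 2.0).astype(int), 0, w - 1)
    sy = np.clip(np.rint(sy / sw + h / 2.0).astype(int), 0, h - 1)
    return image[sy, sx].reshape(h, w)

def correct_sensor_image(sensor_image, centroid_xy, M, H):
    """Steps 3301-3303: shift the center of gravity O to the origin, then apply M and H."""
    ox, oy = centroid_xy
    shift = np.array([[1.0, 0.0, -ox],
                      [0.0, 1.0, -oy],
                      [0.0, 0.0, 1.0]])                       # step 3301: move O to the origin
    return warp_with_matrix(sensor_image, H @ M @ shift)      # steps 3302-3303
```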
  • the above is the imaging apparatus according to the third embodiment.
  • According to the third embodiment, errors in assembling the imaging apparatus and errors in manufacturing the photographing pattern are corrected, and distortion associated with focus adjustment is corrected, so that a high-precision space division fringe scan can be performed.
  • the distortion is corrected before the fringe scan of the sensor image (the processes of steps 3301 to 3303 are performed before the processes of steps 1401 to 1407 in the fringe scan processing unit 3002).
  • When the sensor image is transmitted to another device wirelessly or by wire, it may be desirable that the transmitted information be unprocessed in order to prevent loss of information; that is, it may be preferable to perform the distortion correction processing after the fringe scan.
  • Therefore, a method of correcting the distortion after the fringe scan of the sensor image will be described with reference to FIGS. 34 and 35.
  • FIG. 34 is a diagram illustrating a configuration example of an imaging device according to a fourth embodiment of the present invention.
  • the imaging device according to the fourth embodiment is substantially the same as the imaging device according to the third embodiment, but is partially different.
  • the imaging device according to the fourth embodiment is different from the imaging device according to the third embodiment in that the information in the correction value storage unit 3001 is read and used by the image processing unit 3401. That is, the imaging apparatus according to the fourth embodiment does not correct the distortion of the sensor image, but adds the distortion corresponding to the sensor image to the development pattern 801.
  • In the third embodiment, the matrix H for converting the coordinates (x1, y1) into the coordinates (x0, y0) was calculated. Here, the reverse transformation is used: a matrix H′ and a matrix M′ for transforming the coordinates (x0, y0) into the coordinates (x1, y1) are calculated.
  • the flow of the development process in this case is the same as the flow of the development process by the correlation development method in FIG. 15 described above, but is partially different.
  • the generation of the development pattern 801 in step 1502 of the development processing by the correlation development method is performed according to the processing flow shown in FIG.
  • FIG. 35 is a diagram showing an example of a distortion correction processing flow of a developing pattern.
  • the distortion correction processing for the development pattern is started in step 1502 of the development processing using the correlation development method.
  • As a premise, the matrix H′ of equation (21) used for the distortion adding processing is calculated in advance, in the same manner as the calibration processing flow illustrated in FIG. 32. Specifically, the image processing unit 3401 calculates in advance the 3×3 elements of the homography matrix H′ from the relationship between the four pairs of coordinates (e.g., the center coordinates 2201 to 2204 and the center coordinates 2801 to 2804).
  • First, the image processing unit 3401 generates a concentric pattern image for the space division fringe scan as shown in FIG. 13 (step 3501).
  • Next, the image processing unit 3401 moves the center of gravity O (step 3502). Specifically, the image processing unit 3401 reads the position of the center of gravity O from the correction value storage unit 3001 and moves the center of the concentric pattern image for the space division fringe scan from the origin by the position of the center of gravity O.
  • the image processing unit 3401 performs the correction using the matrix H ′ acquired from the correction value storage unit 3001 (step 3503). Specifically, a concentric pattern image for space division fringe scanning is converted using the matrix H 'read from the correction value storage unit 3001.
  • the image processing unit 3401 performs the correction using the matrix M ′ obtained from the correction value storage unit 3001 (Step 3504). Specifically, the image processing unit 3401 obtains the focusing distance f ′ from the focus setting unit 1901, obtains information of a matrix M ′ corresponding to the distance f ′ from the correction value storage unit 3001, and performs correction. carry out.
  • the image processing unit 3401 divides the pattern image into four (Step 3505).
  • The subsequent processing, in which the resulting development pattern is multiplied by the complex sensor image subjected to the two-dimensional FFT operation, is the same as the development processing of FIG. 15. A sketch of steps 3501 to 3505 is given below.
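  • The following sketch corresponds to steps 3501 to 3505, where the distortion is added to the development pattern rather than removed from the sensor image. It reuses the `gabor_zone_plate` and `warp_with_matrix` helpers sketched earlier, treats M′ and H′ as 3×3 matrices, and chooses the quadrant center positions arbitrarily; it is not the patent's implementation.

```python
import numpy as np

def build_distorted_development_patterns(n_pixels, pitch, beta, phases,
                                         centroid_xy, H_prime, M_prime):
    """Steps 3501-3505: generate, shift, distort, then split the development pattern."""
    # Step 3501: composite concentric pattern with one initial phase per quadrant
    # (quadrant centers placed at +/- n_pixels/4 from the image center; an assumption).
    half = n_pixels // 2
    offset = (n_pixels // 4) * pitch
    quads = [(slice(0, half), slice(0, half), -offset, -offset),
             (slice(0, half), slice(half, None), offset, -offset),
             (slice(half, None), slice(0, half), -offset, offset),
             (slice(half, None), slice(half, None), offset, offset)]
    composite = np.zeros((n_pixels, n_pixels))
    for (ys, xs, cx, cy), phi in zip(quads, phases):
        composite[ys, xs] = gabor_zone_plate(n_pixels, pitch, beta, cx, cy, phi)[ys, xs]
    # Step 3502: move the pattern center by the stored center of gravity O.
    ox, oy = centroid_xy
    shift = np.array([[1.0, 0.0, ox], [0.0, 1.0, oy], [0.0, 0.0, 1.0]])
    # Steps 3503-3504: add the distortion with H' and M'.
    distorted = warp_with_matrix(composite, M_prime @ H_prime @ shift)
    # Step 3505: divide the distorted pattern into the four quadrant patterns.
    return [distorted[ys, xs] for ys, xs, _, _ in quads]
```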
  • In the moiré development method, since the development pattern 801 is distorted, the multiplication no longer produces moiré fringes of a single frequency when a point light source is developed, and the image is blurred. Therefore, the correlation development method has a higher affinity with this correction and is more desirable than the moiré development method.
  • The above is the imaging apparatus according to the fourth embodiment. According to the method and configuration of the fourth embodiment, it is possible to correct errors in assembling the imaging device and errors in manufacturing the photographing pattern, and to correct distortion related to focus adjustment, so that a high-precision space division fringe scan can be performed.
  • the method of correcting an error in assembling the imaging device and an error in manufacturing a photographing pattern and correcting a distortion related to focus adjustment has been described. However, no correction has been taken into account for defects, dirt, and dust in the imaging pattern and the image sensor.
  • In the fifth embodiment, a method of protecting against such defects will be described with reference to FIGS. 36 to 40. First, the problem caused by such a defect will be clarified using FIGS. 37 to 39.
  • FIG. 37 is a diagram showing an example of an image in the case where dirt or the like is attached to the photographing pattern.
  • In the time division fringe scan, a defect appears at the same position in all four patterns, so the defect is recorded at a single position in the sensor images.
  • In the space division fringe scan, however, a defect such as the defect 3801 exists only in the quadrant of the pattern 3803, so consistency with the other patterns cannot be obtained.
  • FIG. 38 is a diagram showing an example of the luminance distribution of an ideal sensor image. That is, as shown in the pattern 3802, a signal component in which the luminance decreases in the mask portion of the concentric pattern can be obtained.
  • FIG. 39 is a diagram illustrating an example of the luminance distribution of a defective sensor image. That is, as shown in a pattern 3803, a signal component whose luminance is reduced in the mask portion of the concentric pattern can be obtained, but the luminance is reduced irrespective of the mask portion due to a part of the defect 3801. Note that, of course, the degree of decrease in luminance differs depending on the degree of the defect 3801. That is, if the defect is minor, the degree of decrease in luminance is small, and if the defect is large, the degree of decrease in luminance is large.
  • FIG. 36 shows the configuration of the fifth embodiment.
  • FIG. 36 is a diagram illustrating a configuration example of an imaging device according to a fifth embodiment of the present invention.
  • the imaging device according to the fifth embodiment is basically the same as the imaging device according to the fourth embodiment, but is partially different.
  • the imaging apparatus differs from the imaging apparatus according to the fourth embodiment in that a defect detection unit 3601 is provided and an image processing unit 3602 performs a defect protection process.
  • The defect detection unit 3601 acquires the sensor image from the fringe scan processing unit 106 and, when it detects an area whose luminance is lower than a defect detection threshold as shown in FIGS. 38 and 39, outputs a defect signal specifying that area, which is stored in the correction value storage unit 3001. That is, information specifying the position for which the defect signal was output is stored in the correction value storage unit 3001.
  • Such information is not limited to information specifying the defective coordinates on the sensor one by one; it may be information specifying a rectangular area on the sensor, or the center and radius of the minimum circle containing the defective part.
  • the simplest defect detection method by the defect detection unit 3601 is as described above.
  • Alternatively, the defect may be determined by adding two sensor images whose patterns are black-and-white inversions of each other. This processing is desirably performed between steps 1401 and 1402 of the fringe scan processing in FIG. 14.
  • FIG. 40 shows the result of performing this addition on the signals shown in FIGS. 38 and 39.
  • FIG. 40 is a diagram showing an example of defect detection by inversion pattern synthesis.
  • The normal signal components cancel each other out, and only the luminance fluctuation component due to the defect remains.
  • Using defect detection thresholds set within a fixed range (±constant) around the average value, the defect detection unit 3601 outputs a defect signal for an area where the remaining luminance falls below or above these thresholds, and stores it in the correction value storage unit 3001. According to this method, stable defect detection can be performed regardless of the subject. This defect detection may be performed for each of all the sensor images, or for each pair of mutually inverted patterns; a sketch is given below.
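  • The inversion-based check can be sketched as follows: two sensor images whose photographing patterns are black-and-white inversions of each other are added so that the normal fringe signal cancels, and pixels whose residual luminance strays outside a band around the average are flagged. The threshold band is an arbitrary illustrative choice.

```python
import numpy as np

def detect_defects(image_phase_0, image_phase_pi, band=0.1):
    """Flag defective pixels from a pair of mutually inverted sensor images.

    image_phase_0, image_phase_pi : sensor images whose initial phases differ
        by pi (i.e. black/white inverted photographing patterns)
    band : allowed relative deviation from the average of the summed image
    """
    summed = np.asarray(image_phase_0, dtype=float) + np.asarray(image_phase_pi, dtype=float)
    mean = summed.mean()
    lower, upper = mean * (1.0 - band), mean * (1.0 + band)   # thresholds around the average
    return (summed < lower) | (summed > upper)                # only defect-induced fluctuation remains

# Usage sketch: mask = detect_defects(img_phi_0, img_phi_pi)
# development_pattern[mask] = np.nan   # data mask (NAN) processing described next
```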
  • Using the defect signal output from the defect detection unit 3601, the image processing unit 3602 performs a data mask process, that is, it sets the portion of the development pattern corresponding to the defect signal to a NAN value so that it is not used.
  • At this time, as in the fourth embodiment, it is desirable to correct the position corresponding to the defect signal using the matrix H′ and the matrix M′.
  • Alternatively, the defect signal may be input not to the image processing unit 3602 but to the fringe scan processing unit 3002 of FIG. 30 of the third embodiment, and the same position in all patterns of the sensor image (the position where the relative position between the pattern center and the defect signal is the same) may be set to a NAN value so that it is not used.
  • The fifth embodiment has been described above. According to the imaging apparatus of the fifth embodiment, errors in assembling the imaging apparatus and errors in manufacturing the photographing pattern are corrected, distortion related to focus adjustment is corrected, and in addition correction is performed for defects, dirt, and dust on the photographing pattern and the image sensor, so that an even more accurate space division fringe scan can be performed.
  • each of the above-described configurations, functions, processing units, processing means, and the like may be partially or entirely realized by hardware, for example, by designing an integrated circuit.
  • the above-described configurations, functions, and the like may be realized by software by a processor interpreting and executing a program that realizes each function.
  • Information such as programs, tables, and files for realizing each function can be stored in a memory, a hard disk, a recording device such as an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.
  • control lines and information lines are shown as necessary for the description, and do not necessarily indicate all control lines and information lines on a product. In fact, it can be considered that almost all components are connected to each other.

Abstract

The invention provides a technique for obtaining accurate developed images by correcting imaging device assembly errors, photographing pattern manufacturing errors, and focus adjustment distortion. An imaging device is characterized in that it comprises: an image sensor for converting light into electric signals and generating sensor images; a modulator for modulating, on the basis of a photographing pattern, the intensity of the light detected by the image sensor; and a parameter storage unit for storing parameters used to execute a prescribed correction process on a plurality of sensor images, among said sensor images, that were photographed using different photographing patterns.
PCT/JP2019/010263 2018-09-18 2019-03-13 Dispositif d'imagerie, et procédé d'imagerie (Imaging device and imaging method) WO2020059181A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018173930A JP7097787B2 (ja) 2018-09-18 2018-09-18 撮像装置および撮像方法 (Imaging device and imaging method)
JP2018-173930 2018-09-18

Publications (1)

Publication Number Publication Date
WO2020059181A1 true WO2020059181A1 (fr) 2020-03-26

Family

ID=69886883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/010263 WO2020059181A1 (fr) 2018-09-18 2019-03-13 Dispositif d'imagerie, et procédé d'imagerie (Imaging device and imaging method)

Country Status (2)

Country Link
JP (1) JP7097787B2 (fr)
WO (1) WO2020059181A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06350926A (ja) * 1993-06-02 1994-12-22 Hitachi Ltd ビデオカメラ
JPH11205652A (ja) * 1998-01-19 1999-07-30 Yoshikazu Ichiyama 学習するディジタル方式画像入力装置
JP2011166255A (ja) * 2010-02-05 2011-08-25 Panasonic Corp 撮像装置
JP2018061109A (ja) * 2016-10-04 2018-04-12 株式会社日立製作所 撮像装置および撮像方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021005743A (ja) * 2019-06-25 2021-01-14 株式会社日立製作所 撮像装置
JP7159118B2 (ja) 2019-06-25 2022-10-24 株式会社日立製作所 撮像装置
WO2023127509A1 (fr) * 2021-12-27 2023-07-06 ソニーグループ株式会社 Dispositif de réglage et procédé de fonctionnement de dispositif de réglage

Also Published As

Publication number Publication date
JP7097787B2 (ja) 2022-07-08
JP2020048031A (ja) 2020-03-26

Similar Documents

Publication Publication Date Title
JP6820908B2 (ja) 撮像装置
JP6491332B2 (ja) 撮像装置
JP6685887B2 (ja) 撮像装置
JP6721698B2 (ja) 撮像装置
WO2020059181A1 (fr) Dispositif d'imagerie, et procédé d'imagerie
CN110324513B (zh) 摄像装置,摄像模块和摄像方法
JP6646619B2 (ja) 撮像装置
JP6920974B2 (ja) 距離計測装置および距離計測方法
JP6864604B2 (ja) 撮像装置
JP6807286B2 (ja) 撮像装置及び撮像方法
JP2023016864A (ja) 撮像装置および方法
JP6947891B2 (ja) 携帯情報端末
JP6770164B2 (ja) 撮像装置
JP7389195B2 (ja) 画像生成方法
JP6814762B2 (ja) 撮像装置
JP6636663B2 (ja) 撮像装置及び画像生成方法
JP7159118B2 (ja) 撮像装置
JP2021064000A (ja) 撮像装置
JP2020098963A (ja) 撮像装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19862090

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19862090

Country of ref document: EP

Kind code of ref document: A1