WO2019131191A1 - Wide conversion lens and image capture device - Google Patents

Wide conversion lens and image capture device Download PDF

Info

Publication number
WO2019131191A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
lens
wide conversion
conversion lens
imaging device
Prior art date
Application number
PCT/JP2018/045990
Other languages
French (fr)
Japanese (ja)
Inventor
大樹 小松
慶延 岸根
Original Assignee
富士フイルム株式会社 (FUJIFILM Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士フイルム株式会社 (FUJIFILM Corporation)
Publication of WO2019131191A1 publication Critical patent/WO2019131191A1/en

Links

Images

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B13/00Optical objectives specially designed for the purposes specified below
    • G02B13/18Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B15/00Optical objectives with means for varying the magnification
    • G02B15/02Optical objectives with means for varying the magnification by changing, adding, or subtracting a part of the objective, e.g. convertible objective
    • G02B15/10Optical objectives with means for varying the magnification by changing, adding, or subtracting a part of the objective, e.g. convertible objective by adding a part, e.g. close-up attachment
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00Optical elements other than lenses
    • G02B5/18Diffraction gratings

Definitions

  • the present invention relates to a wide conversion lens and an imaging device to which the wide conversion lens is attached.
  • In Non-Patent Document 1, a Fresnel zone plate is disposed in proximity to an imaging element, and the moiré fringes produced by superimposing the projected image formed on the imaging element by light from the subject with a projection pattern corresponding to the Fresnel zone plate are Fourier-transformed, making it possible to obtain an image of the subject without a lens.
  • Non-Patent Document 2 is a technical report on the technology of Non-Patent Document 1.
  • In Patent Document 1, an image of a subject is reconstructed by Fourier-transforming moiré fringes formed when light from the subject is incident on two grating patterns (Fresnel zone plates) arranged to face each other.
  • JP 2012-93751 A; JP 2014-56054 A; WO 2016/203573
  • Refocusing means bringing the image into focus at a desired focusing distance after the fact by image reconstruction; the focal length is f = 2 × β × d, where β represents the pitch of the Fresnel zone plate and d represents the distance between the image sensor and the Fresnel zone plate.
  • When the angle of view (focal length) is changed in such a configuration, the pitch and/or the position of the Fresnel zone plate must be changed, which brings about the drawbacks described below.
  • The present invention has been made in view of such circumstances, and an object of the present invention is to provide a wide conversion lens capable of obtaining images with different angles of view with good image quality when mounted on an imaging device. Another object of the present invention is to provide an imaging device capable of obtaining images with different angles of view with good image quality by using a wide conversion lens.
  • A wide conversion lens according to a first aspect of the present invention is a wide conversion lens mounted on the object side of an imaging apparatus and satisfies the following conditional expressions (1) and (2): 1.1 < Ms/Cs (1) and 0.3 < |Ff/DA| < 2.0 (2).
  • Here, Cs is the length in the sagittal direction of the axial light beam incident on the surface of the wide conversion lens closest to the object side.
  • Ms is the length in the sagittal direction of the light beam with the largest angle of view incident on the surface of the wide conversion lens closest to the object side.
  • DA is the maximum air gap in the wide conversion lens.
  • Ff is the focal length of the lens group on the object side of the position where the maximum air gap exists.
  • The wide conversion lens according to the first aspect can thus obtain images with different angles of view with good image quality when mounted on an imaging device.
  • In the first aspect, it is more preferable that Ms/Cs < 2.0 and 0.5 < |Ff/DA| < 1.5.
  • The wide conversion lens according to the second aspect satisfies, in the first aspect, the following conditional expression (3): 2.2 < Mt/Ct (3).
  • Here, Ct is the length in the tangential direction of the axial light beam incident on the surface of the wide conversion lens closest to the object side.
  • Mt is the length in the tangential direction of the light beam with the largest angle of view incident on the surface of the wide conversion lens closest to the object side, projected onto a plane perpendicular to the chief ray. If Mt/Ct ≤ 2.2, it becomes difficult to raise the peripheral light amount ratio, but if the characteristics of the wide conversion lens are within the range specified in the second aspect, the peripheral light amount ratio can be increased and the optical distortion reduced.
  • In the second aspect, Mt/Ct < 3.0 is preferable, because if 3.0 ≤ Mt/Ct the pupil aberration becomes too large and distortion and astigmatism increase.
  • The wide conversion lens according to the third aspect satisfies, in the first or second aspect, the following conditional expression (4).
  • Here, WCTL is the length from the first surface, which is the surface of the wide conversion lens closest to the object side, to the final surface, which is the surface farthest from the object side.
  • WCf is the focal length of the wide conversion lens. If the value is outside the range specified in the third aspect, the afocal arrangement collapses, the deviation from the back focus of the optical system to which the lens is mounted becomes large, and the focus shifts when the user uses it.
  • The wide conversion lens according to the fourth aspect satisfies, in any one of the first to third aspects, the following conditional expression (5).
  • Here, Dist is the optical distortion when the wide conversion lens is attached to an ideal lens. Since objects can still be recognized with about 10% distortion in sensing applications and the like, an optical distortion of up to 10% is acceptable. In the fourth aspect, a still narrower range of the conditional expression is more preferable.
  • The wide conversion lens according to the fifth aspect includes, in order from the object side, a front group having negative refractive power and a rear group having positive refractive power.
  • An imaging device according to another aspect of the present invention includes the wide conversion lens according to any one of the first to fifth aspects and an imaging device main body having a coded aperture, an imaging element that outputs a signal of the projected image formed by the coded aperture, and an image restoration unit that reconstructs an image in the spatial domain based on the signal.
  • With the wide conversion lens according to any one of the first to fifth aspects, images with different angles of view can be obtained with good image quality (an improved peripheral light amount ratio and small distortion).
  • The imaging device main body includes the coded aperture, the imaging element that outputs the signal of the projected image formed by the coded aperture, and the image restoration unit that reconstructs an image in the spatial domain based on the signal.
  • The wide conversion lens is mounted on the object side of the coded aperture, and the projected image is formed on the imaging element by the wide conversion lens and the coded aperture.
  • An imaging apparatus using a coded aperture has a refocusing function, but if it is attempted to change the angle of view (focal length) by changing ⁇ and d, there are disadvantages such as image quality deterioration and manufacturing difficulty increase as described above.
  • The peripheral light amount ratio decreases in accordance with the cosine fourth law (also referred to as the “cos fourth law”).
  • In another aspect, the image restoration unit corrects, in the image in the spatial domain, aberrations caused by the wide conversion lens (such as astigmatism, spherical aberration, coma, distortion, axial chromatic aberration, and lateral chromatic aberration).
  • The aberration correction may be performed before or after the image is reconstructed. When correcting the aberration before reconstruction, this can be done, for example, by changing the shape and/or size of the coded aperture by which the projected image is multiplied, according to the type and/or amount of aberration to be corrected.
  • When correcting the aberration after reconstruction, this can be done by image processing such as applying different filters (such as point-image restoration filters) depending on conditions such as the type and/or degree of aberration to be corrected and the position in the image.
  • However, the correction of aberrations in the present invention is not limited to these aspects.
  • In the eighth aspect, the imaging apparatus is provided with a lens information acquisition unit that acquires information on the wide conversion lens, and the image restoration unit corrects the aberration based on the acquired information.
  • In this way, the aberration can be corrected accurately and easily based on the lens information.
  • the lens information storage unit for storing information of the lens may be provided in the wide conversion lens, and the lens information acquisition unit may acquire information from the lens information storage unit.
  • Such a lens information storage unit can be configured using a non-transitory recording medium.
  • In another aspect, the coded aperture is a Fresnel zone plate, and the image restoration unit multiplies the projected image, which is formed on the imaging element by the wide conversion lens and the Fresnel zone plate, by a Fresnel zone pattern corresponding to the Fresnel zone plate to generate a multiplied image, and Fourier-transforms the multiplied image to reconstruct an image in the spatial domain.
  • The Fresnel zone pattern has a first Fresnel zone pattern and a second Fresnel zone pattern that have the same local spatial frequency in each region but differ in the phase of the local spatial frequency.
  • The image restoration unit multiplies the projected image by the first Fresnel zone pattern and by the second Fresnel zone pattern, respectively, to generate, as the multiplied image, a complex image consisting of a real-part image and an imaginary-part image, and performs a two-dimensional complex Fourier transform on the complex image to reconstruct an image in the spatial domain (a minimal sketch of this reconstruction is given below).
  • In an eleventh aspect, the wide conversion lens in any one of the seventh to tenth aspects is an interchangeable lens that is attached to and removed from the imaging device. According to the eleventh aspect, it is possible to attach a lens with a different angle of view and focal length according to the purpose of imaging and the imaging conditions, and to capture images under the desired conditions.
  • According to the wide conversion lens of the present invention, it is possible to obtain images with different angles of view with good image quality when it is mounted on an imaging device. Further, according to the imaging device of the present invention, images with different angles of view can be obtained with good image quality by means of the wide conversion lens.
  • FIG. 1 is a block diagram showing the configuration of an imaging apparatus according to the first embodiment.
  • FIG. 2 is a view showing another example of the configuration of the lens apparatus.
  • FIG. 3 is a diagram showing an example of a Fresnel zone plate.
  • FIG. 4 is a block diagram showing the configuration of the processing unit.
  • FIG. 5 is a diagram showing information stored in the storage unit.
  • FIG. 6 is a diagram showing lens configurations of Examples 1 to 4.
  • FIG. 7 is a view showing the optical paths of Examples 1 to 4.
  • FIG. 8 is a diagram showing lens data of Examples 1 to 3.
  • FIG. 9 is a diagram showing lens data of Example 4.
  • FIG. 10 is a diagram showing the aspheric coefficients of the first, second, and fourth embodiments.
  • FIG. 11 is a diagram showing the longitudinal aberration of Example 1.
  • FIG. 12 is a diagram showing the longitudinal aberration of Example 2.
  • FIG. 13 is a diagram showing the longitudinal aberration of Example 3.
  • FIG. 14 is a diagram showing the longitudinal aberration of Example 4.
  • FIG. 15 is a table showing the specifications of the first to fourth embodiments.
  • FIG. 16 is a table showing peripheral light amount calculation conditions of the first to fourth embodiments.
  • FIG. 17 is a diagram showing the definition of parameters.
  • FIG. 18 is a table showing peripheral light amounts of Examples 1 to 4.
  • FIG. 19 is a graph showing the peripheral light amount ratio of the ideal lens and Examples 1 to 4.
  • FIG. 20 is a flowchart showing the process of image reconstruction.
  • FIG. 21 is a diagram showing an example of Fresnel zone plates having different phases.
  • FIG. 22 is a view showing Fresnel zone patterns whose enlargement factor differs according to the focusing distance.
  • FIG. 1 is a block diagram showing the configuration of an imaging device 10 (imaging device) according to the first embodiment.
  • the imaging device 10 includes a lens device 100 (wide conversion lens, interchangeable lens) and an imaging device main body 200 (imaging device main body).
  • Although the imaging device 10 can be applied to a smartphone, a tablet terminal, a surveillance camera, an inspection camera, and the like in addition to a digital camera, the application target is not limited to these devices.
  • the lens apparatus 100 includes a lens 300 (wide conversion lens, interchangeable lens).
  • the lens 300 is a lens (wide conversion lens) that is mounted on the imaging device main body 200 (on the object side relative to the Fresnel zone plate 210) to make the photographing angle of view wider than in the non-mounted state.
  • The lens apparatus 100 and the imaging apparatus main body 200 are mounted (attached) to each other via the lens-side mount 130 and the main-body-side mount 270 (lens mounting portion); when mounted, the lens-side terminal 132 and the body-side terminal 272 come into contact with each other, and communication between the lens apparatus 100 and the imaging apparatus main body 200 becomes possible.
  • Information of the lens 300 is stored in the memory 120 (lens information storage unit), and the stored information is acquired by an instruction from the image processing unit 230 (information acquisition unit 230E; see FIG. 4).
  • the lens configuration of the lens 300, the focal length, the shooting angle of view, the F value, the type and value of aberration, and the like can be stored in the memory 120.
  • These pieces of information may be measured after manufacturing of the lens 300 (lens apparatus 100) and stored in the memory 120, or design values may be stored. Further, information input by the operation of the operation unit 260 may be used.
  • The lens apparatus 100 can be attached to and detached from the imaging apparatus main body 200 by the user, whereby a lens with a different angle of view and focal length can be attached according to the photographing purpose, photographing conditions, and the like, and photographing can be performed under the desired conditions.
  • lens devices 102, 104, 106 (see FIG. 2) having lenses 400, 500, 600 can be mounted on the imaging device body 200.
  • the lenses 400, 500, and 600 are also wide conversion lenses similar to the lens 300.
  • In FIGS. 1 and 2, the configurations of the lenses 300, 400, 500, and 600 are shown in simplified form. The specific configurations of the lenses 300, 400, 500, and 600 will be described later.
  • Although the lens devices 100, 102, 104, and 106 are attached to the imaging apparatus main body 200 via the lens-side mount 130 and the main-body-side mount 270 in this example, the attachment to the imaging apparatus main body 200 may instead be carried out using a screw thread (lens attachment portion) provided on one of the lens device 100, 102, 104, or 106 and the imaging apparatus main body 200, and a screw groove (lens attachment portion) provided on the other so as to correspond to the screw thread.
  • the lenses 300, 400, 500, and 600 are attached to the subject side (object side) of the Fresnel zone plate 210.
  • the imaging device body 200 includes a Fresnel zone plate 210 (Fresnel zone plate, coded aperture) and an imaging device 220 (imaging device), and light from an object is a lens 300 (or lenses 400, 500, 600) and the Fresnel zone A projection image formed through the plate 210 is acquired by the imaging device 220.
  • the Fresnel zone plate 210 is disposed on the imaging surface side of the imaging element 220 in a state where the center coincides with the center of the imaging element 220 and is parallel to the imaging surface 220 A (light receiving surface) of the imaging element 220.
  • the Fresnel zone plate 210 may be replaceable with respect to the imaging device body 200.
  • By using Fresnel zone plates having different characteristics (size, pitch, phase, distance to the image sensor, and the like), the characteristics of the acquired projection image (angle of view, depth (distance measurement accuracy), and the like) can be controlled, and an image with the desired characteristics can be reconstructed.
  • a Fresnel zone plate may be described as "FZP".
  • FIG. 3 Part (a) of FIG. 3 is a view showing FZP1 which is an example of the Fresnel zone plate 210.
  • In FZP1, the transmittance of incident light varies continuously according to the distance from the center; regions closer to white (transmission regions) have higher light transmittance, and regions closer to black (light-shielding regions) have lower light transmittance.
  • the transmission areas and the light shielding areas are alternately arranged concentrically, and the transmission areas and the light shielding areas constitute a Fresnel zone plate.
  • the distance between concentric circles narrows from the center of FZP1 toward the periphery.
  • Such a concentric pattern (a local change in spatial frequency) is expressed by equations (2), (3), (7), and so on described later, and the fineness of the concentric circles in these equations is called the "pitch".
  • the pitch is determined by the value of ⁇ described above, and when ⁇ is small, a coarse pattern is obtained, and when ⁇ is large, a fine pattern is obtained.
  • a memory may be provided in the imaging apparatus main body 200 to store pitch information (a value of ⁇ ), and the image processing unit 230 (information acquisition unit 230E) may acquire and use this information.
  • the optical axis L (see FIG. 1) of the Fresnel zone plate 210 is an axis passing through the center of the FZP and the imaging device 220 and perpendicular to the FZP and the imaging surface 220A of the imaging device 220.
  • the FZP is disposed close to the imaging device 220.
  • If the FZP is too far from the imaging element 220, the projected image is blurred due to diffraction of light, so it is preferable not to separate them too much.
  • Part (b) of FIG. 3 is a view showing FZP2 which is another example of the Fresnel zone plate.
  • FZP2 is obtained by setting a threshold for the transmittance of the FZP1 pattern: regions where the transmittance exceeds the threshold become transmission regions (white portions) with 100% transmittance, and regions at or below the threshold become light-shielding regions (black portions) with 0% transmittance, so that the transmittance changes discontinuously (in two steps, 0% or 100%) according to the distance from the center.
  • the transmission areas and the light shielding areas are alternately arranged concentrically, and the transmission areas and the light shielding areas constitute a Fresnel zone plate.
  • The "Fresnel zone plate" in the present invention includes aspects such as FZP1 and aspects such as FZP2; correspondingly, the "Fresnel zone pattern" in the present invention includes patterns in which the transmittance changes continuously and patterns in which it changes discontinuously (a sketch generating both forms is given below).
  • A light-shielding portion (a region that, like the light-shielding regions, does not transmit light) may be provided in the peripheral portion of the Fresnel zone plate as shown in FIG. 3 to prevent unnecessary light from being incident on the peripheral portion of the imaging element 220.
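  • To make the FZP1/FZP2 distinction concrete, the sketch below generates a continuous-transmittance zone plate and a thresholded binary version. The transmittance form (1 + cos(βr²))/2 and the 0.5 threshold are assumptions following the conventional Fresnel zone aperture definition, not values taken from this publication.

```python
import numpy as np

def continuous_fzp(n, beta):
    """FZP1-like plate: transmittance varies continuously with the distance
    from the center, assumed here as (1 + cos(beta * r^2)) / 2 in [0, 1]."""
    y, x = np.indices((n, n)) - n / 2.0
    r2 = x**2 + y**2
    return 0.5 * (1.0 + np.cos(beta * r2))

def binary_fzp(n, beta, threshold=0.5):
    """FZP2-like plate: apply a threshold so the transmittance is only 0% or 100%."""
    return (continuous_fzp(n, beta) > threshold).astype(float)

# Larger beta -> finer concentric rings (smaller pitch); smaller beta -> coarser rings.
fzp1 = continuous_fzp(512, beta=0.02)
fzp2 = binary_fzp(512, beta=0.02)
```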
  • the imaging element 220 is an image sensor having a plurality of pixels formed of photoelectric conversion elements arranged in a two-dimensional direction.
  • a microlens may be provided for each pixel to increase the light collection efficiency.
  • Color filters (for example, red, blue, and green) may be provided for each pixel. In that case, interpolation processing according to the arrangement pattern of the color filters is performed, as in demosaicing (also referred to as synchronization processing) for color image generation in an ordinary digital camera, so that the signal of the color missing at each pixel is generated and signals of all the colors (for example, red, blue, and green) are obtained at every pixel.
  • Such processing can be performed by the image processing unit 230.
  • The imaging apparatus main body 200 includes an image processing unit 230, a storage unit 240, a display unit 250, and an operation unit 260 in addition to the Fresnel zone plate 210 and the imaging element 220 described above, and performs image restoration of the subject and the like based on the projection image acquired by the Fresnel zone plate 210 and the imaging element 220.
  • FIG. 4 is a diagram showing the configuration of the image processing unit 230 (image restoration unit).
  • the image processing unit 230 includes a projection image input unit 230A, a multiplication image generation unit 230B (image restoration unit), a Fourier transform unit 230C (image restoration unit), an aberration correction unit 230D (image restoration unit), and an information acquisition unit 230E (lens information acquisition unit) and a display control unit 230F.
  • the projected image input unit 230A controls the imaging device 220 to acquire, from the imaging device 220, a projected image formed on the imaging device 220 by the light from the subject being incident on the FZP.
  • The multiplication image generation unit 230B multiplies the projected image by a plurality of Fresnel zone patterns (first and second Fresnel zone patterns) that have the same local spatial frequency but differ in the phase of the local spatial frequency, thereby generating a complex image consisting of a real-part image and an imaginary-part image.
  • the Fourier transform unit 230C reconstructs an image in the spatial domain by performing a two-dimensional complex Fourier transform on the complex image.
  • The aberration correction unit 230D corrects, in the image in the spatial domain, aberrations (such as astigmatism, spherical aberration, coma, distortion, axial chromatic aberration, and lateral chromatic aberration) caused by the mounted lens 300, 400, and so on.
  • the information acquisition unit 230E acquires information (for example, lens configuration, focal length, shooting angle of view, F value, type and value of aberration, etc.) of the lens 300 stored in the memory 120 of the lens device 100. Further, the information acquisition unit 230E acquires information (information on pitch) of the Fresnel zone plate 210 used for acquiring the projection image.
  • the display control unit 230F controls display of a projection image, a complex image, a reconstructed image and the like on the display unit 250.
  • In the ROM 230G (a non-transitory recording medium), computer-readable (processor-readable) code of various programs for operating the imaging device 10, such as a program for reconstructing an image, is recorded.
  • the functions of the image processing unit 230 described above can be realized using various processors.
  • the various processors include, for example, a central processing unit (CPU) that is a general-purpose processor that executes software (programs) to realize various functions.
  • The various processors described above also include a programmable logic device (PLD), such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture.
  • Dedicated electric circuits, which are processors having a circuit configuration designed specifically to execute particular processing, such as ASICs (application-specific integrated circuits), are also included in the various processors described above.
  • each unit may be realized by one processor or may be realized by combining a plurality of processors. Also, multiple functions may be realized by one processor.
  • As an example of configuring a plurality of functions with one processor, there is a form in which one processor is configured by a combination of one or more CPUs and software, as typified by computers such as clients and servers, and this processor is realized as the plurality of functions. Another example is a processor, as typified by a system on chip (SoC), that realizes the functions of the whole system with a single IC (integrated circuit) chip. In this way, the various functions are configured as a hardware structure using one or more of the various processors described above, and the hardware structure of these processors is, more specifically, circuitry in which circuit elements such as semiconductor elements are combined.
  • When the processor (or electric circuit) described above executes software (a program), computer-readable code of the software to be executed (for example, software for acquiring a projected image, acquiring lens information, and restoring an image) is stored in a non-transitory recording medium such as the ROM 230G, and the processor refers to that software. During this processing, devices such as a RAM (Random Access Memory) and an EEPROM (Electrically Erasable and Programmable Read Only Memory) are also used; these devices are not shown in FIG. 4.
  • The storage unit 240 is composed of non-transitory recording media such as a CD (Compact Disk), a DVD (Digital Versatile Disk), a hard disk, and various semiconductor memories, and stores the images and information shown in FIG. 5.
  • the lens information 240A is information of the lens 300 acquired from the lens device 100 (for example, lens configuration, focal length, shooting angle of view, F value, type and value of aberration, etc.).
  • the projected image 240 B is a projected image acquired from the imaging element 220.
  • the Fresnel zone plate information 240 C is information on the local spatial frequency of the Fresnel zone plate 210 (including pitch information such as the value of ⁇ ).
  • the Fresnel zone plate information 240 C may be information acquired from the imaging device 220 or information input via the operation unit 260.
  • the Fresnel zone pattern information 240D is information indicating a Fresnel zone pattern, and it is preferable to record a plurality of Fresnel zone patterns having different local spatial frequency phases.
  • The multiplication image 240E is a complex image consisting of a real-part image and an imaginary-part image, obtained by multiplying the projected image by the Fresnel zone patterns (first and second Fresnel zone patterns) indicated by the Fresnel zone pattern information 240D.
  • the reconstructed image 240F is an image of a spatial region obtained by performing two-dimensional complex Fourier transform on the multiplied image 240E.
  • The display unit 250 includes a display device such as a liquid crystal display (not shown), displays the projected image, the multiplied image, the reconstructed image, and the like, and is also used for UI (User Interface) screen display when an instruction is input via the operation unit 260.
  • The operation unit 260 includes devices (not shown) such as a keyboard, a mouse, buttons, and switches, and the user can use these devices to input a projection image acquisition instruction, an image reconstruction instruction, focusing distance conditions, local spatial frequency information (pitch, phase, and the like), and so on.
  • the display device of the display unit 250 may be configured by a touch panel and used as the operation unit 260 in addition to image display.
  • Example 1 Part (a) of FIG. 6 is a view showing the arrangement of the lens 300, the Fresnel zone plate 210, and the imaging surface 220A of the imaging device 220.
  • The lens 300 is a wide conversion lens composed, in order from the object side (subject side), of a front group 310 having negative power (refractive power; the same applies hereinafter) and a rear group 320 having positive power, and includes lenses 302, 304, 306, and 308.
  • the Fresnel zone plate 210 is disposed at the position of the pupil of the lens 300 (the position of the stop).
  • Example 2 Part (b) of FIG. 6 is a view showing the arrangement of the lens 400, the Fresnel zone plate 210, and the imaging surface 220A of the imaging device 220.
  • The lens 400 is a wide conversion lens including, in order from the object side (subject side), a front group 410 having negative power and a rear group 420 having positive power, and includes lenses 402, 404, 406, and 408.
  • the Fresnel zone plate 210 is disposed at the position of the pupil of the lens 400 (the position of the stop).
  • Example 3 Part (c) of FIG. 6 is a view showing the arrangement of the lens 500, the Fresnel zone plate 210, and the imaging surface 220A of the imaging device 220.
  • The lens 500 is a wide conversion lens including, in order from the object side (subject side), a front group 510 having negative power and a rear group 520 having positive power, and includes lenses 502, 504, 506, and 508.
  • the Fresnel zone plate 210 is disposed at the position of the pupil of the lens 500 (the position of the stop).
  • Example 4 Part (d) of FIG. 6 is a view showing the arrangement of the lens 600, the Fresnel zone plate 210, and the imaging surface 220A of the imaging device 220.
  • The lens 600 is a wide conversion lens including, in order from the object side (subject side), a front group 610 having negative power and a rear group 620 having positive power, and includes lenses 602, 604, 606, and 608.
  • the Fresnel zone plate 210 is disposed at the position of the pupil of the lens 600 (the position of the stop).
  • the lens attached to the imaging apparatus main body in the imaging apparatus of the present invention may be a zoom lens having a variable angle of view (focal length).
  • FIG. 7 shows the optical paths of the axial light beam B1 and the light beam B2 of the maximum angle of view with respect to the lens 300 in a state where the ideal lens I1 (lens with zero aberration) is arranged at the pupil position of the lens 300.
  • In practice, the Fresnel zone plate 210 is disposed at the position of the ideal lens I1 and the light travels as a wavefront, so the light after the Fresnel zone plate 210 (on the imaging element 220 side) does not necessarily follow an optical path like the dotted line; the optical path shown is that obtained when the ideal lens I1 is disposed.
  • The lens data of the ideal lens I1 are normalized to the case where the distance between the pupil and the imaging surface 220A is 1 mm (that is, the distance between the ideal lens I1 and the imaging surface 220A is 1 mm).
  • the lens data of the ideal lenses I2 to I4 shown below are the same as that of the ideal lens I1.
  • In the data, mm represents millimeters and deg represents degrees (angle).
  • part (b) of FIG. 7 shows axial luminous flux B3 with respect to lens 400 and luminous flux B4 of the maximum angle of view in a state where ideal lens I2 (lens with zero aberration) is disposed at the pupil position of lens 400.
  • In practice, the Fresnel zone plate 210 is disposed at the position of the ideal lens I2 and the light travels as a wavefront, so the light after the Fresnel zone plate 210 (on the imaging element 220 side) does not necessarily follow an optical path like the dotted line; the optical path shown is that obtained when the ideal lens I2 is disposed.
  • part (c) of FIG. 7 shows axial luminous flux B5 with respect to lens 500 and luminous flux B6 of the maximum angle of view in a state where ideal lens I3 (lens with zero aberration) is disposed at the pupil position of lens 500.
  • In practice, the Fresnel zone plate 210 is disposed at the position of the ideal lens I3 and the light travels as a wavefront, so the light after the Fresnel zone plate 210 (on the imaging element 220 side) does not necessarily follow an optical path like the dotted line; the optical path shown is that obtained when the ideal lens I3 is disposed.
  • part (d) of FIG. 7 shows axial luminous flux B7 with respect to lens 600 and luminous flux B8 of the maximum angle of view in a state where ideal lens I4 (lens with zero aberration) is disposed at the pupil position of lens 600.
  • In practice, the Fresnel zone plate 210 is disposed at the position of the ideal lens I4 and the light travels as a wavefront, so the light after the Fresnel zone plate 210 (on the imaging element 220 side) does not necessarily follow an optical path like the dotted line; the optical path shown is that obtained when the ideal lens I4 is disposed.
  • FIG. 8 is a table showing lens data of the lenses 300, 400, and 500.
  • the surface with “*” attached to the surface number indicates that the surface is aspheric.
  • FIG. 9 is a table showing lens data of the lens 600.
  • The lens data in FIGS. 8 and 9 are normalized to the case where the distance between the pupil and the imaging surface 220A is 1 mm (that is, the distance between the Fresnel zone plate 210 and the imaging surface 220A is 1 mm). If this distance is multiplied by k by changing the focal length, the radii of curvature and the surface spacings are also multiplied by k.
  • FIG. 10 is a table showing the aspheric coefficients for the lens 300.
  • The aspheric shape of the lens surface is represented by equation (1), where c is the reciprocal of the paraxial radius of curvature (paraxial curvature), h is the height from the optical axis, KA is the conic coefficient, and Ai are the aspheric coefficients; a hedged sketch of the conventional form of this equation follows.
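  • The equation itself is not reproduced legibly here; the following is a reconstruction assuming the standard even-asphere sag formula that the listed symbols conventionally enter, with KA as the assumed symbol for the conic coefficient.

```latex
% Assumed standard even-asphere sag formula (Z: sag, h: height from the optical axis,
% c: paraxial curvature, KA: conic coefficient, Ai: aspheric coefficients).
\[
  Z(h) \;=\; \frac{c\,h^{2}}{1 + \sqrt{1 - K\!A\,c^{2}h^{2}}} \;+\; \sum_{i} A_{i}\,h^{i}
\]
```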
  • Portions (a) to (d) of FIG. 11 are diagrams showing the spherical aberration, the astigmatism, the distortion, and the lateral chromatic aberration of the lens 300, respectively.
  • (a) to (d) of FIG. 12 are diagrams showing the spherical aberration, the astigmatism, the distortion, and the lateral chromatic aberration of the lens 400, respectively.
  • (a) to (d) of FIG. 13 are diagrams showing the spherical aberration, the astigmatism, the distortion, and the lateral chromatic aberration of the lens 500, respectively.
  • parts (a) to (d) of FIG. 14 are diagrams showing spherical aberration, astigmatism, distortion, and lateral chromatic aberration of the lens 600, respectively.
  • FIG. 15 is a table showing specifications of Examples 1 to 4.
  • In Example 1, it can be seen that when the lens 300 is mounted, the angle of view changes with respect to the original optical system (angle of view 20.6 degrees) and becomes wider (60.0 degrees).
  • In Example 2, when the lens 400 is mounted, the angle of view changes with respect to the original optical system (angle of view 46.5 degrees) and becomes wider (80.0 degrees).
  • In Example 3, when the lens 500 is mounted, the angle of view changes with respect to the original optical system (angle of view 24.5 degrees) and becomes wider (60.0 degrees).
  • In Example 4, when the lens 600 is mounted, the angle of view changes with respect to the original optical system (angle of view 20.2 degrees) and becomes wider (60.0 degrees).
  • The “original optical system” in Examples 1 to 4 is an optical system using a coded aperture such as the Fresnel zone plate 210 (focal length 1.0 mm).
  • FIG. 16 is a table showing the conditions used to calculate the peripheral light amount in Examples 1 to 4. The meanings of the parameters in the table are as described above. All of the examples satisfy the numerical ranges in the respective aspects of the present invention.
  • the definitions of Cs, Ct, Mt and Ms are shown in parts (a), (b) and (c) of FIG. 17 (shown as a representative example for Example 1).
  • Part (b) of FIG. 17 shows the plane P1, and part (c) of FIG. 17 shows the plane P2.
  • the planes P1 and P2 are planes perpendicular to the chief ray L1 of the axial light beam B1 and the chief ray L2 of the light beam B2 having the largest angle of view.
  • FIG. 19 is a graph showing the peripheral light amount ratio for the ideal lens alone and for (wide conversion lens of Examples 1 to 4 + ideal lens). The peripheral light amount ratio of the ideal lens alone is about 56% at a half angle of view γ of 30 deg and about 34% at 40 deg, whereas in the case of (wide conversion lens of Examples 1 to 4 + ideal lens) the peripheral light amount ratio is about 78% to 99%; it can thus be seen that the peripheral light amount ratio is improved (its decrease is suppressed).
  • FIG. 20 is a flowchart showing image reconstruction processing.
  • step S100 the image processing unit 230 (projected image input unit 230A) acquires a projected image of the subject from the imaging device 220.
  • the projection image to be acquired is a projection image formed on the imaging element 220 by the light from the subject being incident on the Fresnel zone plate 210.
  • In step S110, information on the local spatial frequency (the pitch) of the Fresnel zone plate 210 used for acquiring the projected image is input to the image processing unit 230 (information acquisition unit 230E).
  • This information may be input from a memory (not shown) or may be input according to the user's operation on the operation unit 260.
  • Alternatively, the information acquisition unit 230E may obtain the information by analyzing the projected image acquired in step S100. Since the pitch is determined by the value of β (see equations (2) to (4), (7), and so on described later), specifically the value of β may be input.
  • Alternatively, image reconstruction may be repeated while changing the pitch (the value of β) to find a value at which a clear image is obtained.
  • step S120 the image processing unit 230 (multiplication image generation unit 230B) multiplies the projection image by the first and second Fresnel zone patterns to generate a complex image consisting of a real part image and an imaginary part image.
  • The Fresnel zone pattern to be multiplied in step S120 can be a pattern selected, according to the pitch (value of β) input in step S110, from among the patterns (Fresnel zone pattern information 240D) stored in the storage unit 240. It is also possible to use a pattern obtained by changing (enlarging or reducing as necessary) a pattern stored in the storage unit 240 according to the pitch (value of β).
  • the image processing unit 230 (multiplication image generation unit 230B) stores the generated complex image in the storage unit 240 as a multiplication image 240E.
  • the first Fresnel zone pattern used in image restoration can be, for example, the pattern shown in part (a) of FIG. 21 (the phase at the center is 0 deg), and this is multiplied by the projected image to obtain an image of the real part. Is obtained.
  • the second Fresnel zone pattern can be, for example, the pattern shown in part (b) of FIG. 21 (the pitch is the same as the first Fresnel zone pattern, and the phase is shifted by 90 degrees). An image of the imaginary part is obtained by multiplication.
  • The phase shift between the first and second Fresnel zone patterns is preferably 90 degrees, but a clear image can be reconstructed with a shift in the range of 70 deg to 110 deg (positive or negative).
  • the phase of the local spatial frequency of the first Fresnel zone pattern or the second Fresnel zone pattern may be identical to the phase of the Fresnel zone plate 210.
  • Data of a plurality of Fresnel zone patterns differing in phase can be stored as the Fresnel zone pattern information 240D in the storage unit 240, and a desired pattern can be selected and used. The image processing unit 230 (multiplication image generation unit 230B) may also generate a desired pattern based on the pitch and phase information. Since the Fresnel zone patterns are stored in the storage unit 240 as the Fresnel zone pattern information 240D, which is electronic data, a desired pattern can be selected or generated quickly and easily.
  • Unlike the case of physically holding plates (substrates) corresponding to a plurality of patterns, there are no problems such as an increase in the size of the apparatus, an increase in manufacturing cost, or image quality deterioration due to characteristic variations among the plurality of patterns (manufacturing variations, changes over time, temperature changes).
  • a plurality of patterns corresponding to the subject distance may be stored in the storage unit 240 as the Fresnel zone pattern information 240D, and read and used.
  • one Fresnel zone pattern may be stored as a reference pattern, and may be enlarged at a different enlargement factor according to the subject distance.
  • a pattern corresponding to the subject distance infinity and having the same size as the Fresnel zone plate can be used as the reference pattern.
  • FIG. 22 is a diagram showing how the enlargement ratio of the Fresnel zone pattern differs according to the subject distance.
  • A clear image may be acquired by repeating the complex image generation (step S120) and the image reconstruction (step S130) while changing the enlargement factor, so as to maximize a focus evaluation value of the reconstructed image (for example, the integral value of the luminance signal in a focus evaluation area set in the image); a sketch of this magnification-scanning loop is given below.
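  • A minimal sketch of this magnification-scanning refocusing loop follows, assuming the same pattern conventions as the earlier sketch and a simple variance-based focus evaluation value; the magnification range and the metric are illustrative assumptions, not values from this publication.

```python
import numpy as np

def scaled_pattern(n, beta, phase_deg, magnification):
    """Fresnel zone pattern whose pitch is scaled by the magnification factor,
    emulating enlargement of a reference pattern according to the subject distance."""
    y, x = np.indices((n, n)) - n / 2.0
    r2 = (x**2 + y**2) / magnification**2
    return np.cos(beta * r2 + np.deg2rad(phase_deg))

def refocus(projected, beta, magnifications):
    """Repeat complex-image generation and reconstruction while changing the
    enlargement factor, keeping the result that maximizes a focus metric."""
    best_image, best_score = None, -np.inf
    n = projected.shape[0]
    for m in magnifications:
        complex_image = (projected * scaled_pattern(n, beta, 0.0, m)
                         + 1j * projected * scaled_pattern(n, beta, 90.0, m))
        image = np.abs(np.fft.fftshift(np.fft.fft2(complex_image)))
        score = image.var()  # stand-in for the integrated focus evaluation value
        if score > best_score:
            best_image, best_score = image, score
    return best_image

# Example: scan enlargement factors near 1.0 (1.0 corresponding to a subject at infinity).
# refocused = refocus(projected, beta=0.05, magnifications=np.linspace(1.0, 1.1, 11))
```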
  • the imaging device 10 can perform refocusing using the Fresnel zone plate 210 (coded aperture) and the Fresnel zone pattern.
  • In step S130, the image processing unit 230 (Fourier transform unit 230C) performs a two-dimensional complex Fourier transform on the complex image to reconstruct an image of the subject (an image in the spatial domain) (described later). Furthermore, the image processing unit 230 (aberration correction unit 230D) corrects, in the reconstructed image (the image in the spatial domain), aberrations (astigmatism, spherical aberration, coma, distortion, axial chromatic aberration, lateral chromatic aberration, and the like) caused by the lens (lens 300, lens 400, and so on) attached to the imaging apparatus main body 200. The correction is performed based on the lens information 240A acquired from the memory 120.
  • The aberration correction may be performed before or after the image is reconstructed. When correcting the aberration before reconstruction, this can be done, for example, by changing the shape and/or size of the Fresnel zone pattern (coded aperture) by which the projected image is multiplied, according to the type and/or amount of aberration to be corrected.
  • When correcting the aberration after reconstruction, this can be done by image processing such as applying different filters (such as point-image restoration filters) depending on conditions such as the type and/or degree of aberration to be corrected and the position in the image, as in the sketch below.
  • However, the correction of aberrations in the present invention is not limited to these aspects.
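  • As one hedged illustration of position-dependent correction after reconstruction, the sketch below dispatches a different filter per radial zone of the image. The zone boundaries and the smoothing kernels are placeholders standing in for the point-image restoration filters; an actual implementation would apply deconvolution filters derived from the lens information.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_by_zone(image, zone_radii, kernel_sizes):
    """Apply a different filter depending on the position (radial zone) in the image.
    zone_radii and kernel_sizes are illustrative placeholders."""
    n = image.shape[0]
    y, x = np.indices(image.shape) - n / 2.0
    r = np.hypot(x, y)
    corrected = image.copy()
    inner = 0.0
    for radius, k in zip(zone_radii, kernel_sizes):
        mask = (r >= inner) & (r < radius)
        corrected[mask] = uniform_filter(image, size=k)[mask]
        inner = radius
    return corrected

# e.g. corrected = correct_by_zone(image, zone_radii=[60, 120, 1e9], kernel_sizes=[1, 3, 5])
```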
  • the image processing unit 230 displays the reconstructed image on the display unit 250 (step S140). Further, the image processing unit 230 (Fourier transform unit 230C) stores the reconstructed image in the storage unit 240 as a reconstructed image 240F.
  • The transmittance of the Fresnel zone plate is expressed as a function I(r): the larger the value of I(r), the greater the transmission of light in the predetermined wavelength band. Here, r is the radius (distance from the center) on the Fresnel zone plate, and β (> 0) is a constant that determines the fineness (pitch) of the pattern. A version I2(r), offset so as to lie within the range of 0 to 1 as in equation (3), is then considered.
  • This coded aperture (Fresnel zone plate) is disposed at a distance d from the sensor surface of the imaging element.
  • The shadow, translated according to the incident angle of the light, is expressed as S(r) in equation (4).
  • Although I2(r) and S(r) are originally two-dimensional images and thus functions of two variables, here, for simplicity, only one-dimensional images on a cross section cut by a plane that includes the center and the incident light source are considered. The discussion can easily be extended to the two-dimensional case by calculating as in equation (5).
  • The captured shadow image (projected image) is restored (reconstructed) on a computer and output.
  • For the restoration, the shadow image is multiplied by the non-displaced Fresnel zone aperture image (Fresnel zone pattern).
  • Here, Mr(r) is the same real function as I(r), but with the offset (DC component) removed.
  • That is, image reconstruction in the prior art corresponds to multiplying the projected image of the real aperture by the Fresnel zone pattern represented by the real function Mr(r).
  • The real part (cos βr²) of Mc(r) is the same as Mr(r).
  • In the first embodiment, the projected image is multiplied by the two Fresnel zone patterns (first and second Fresnel zone patterns) that correspond to the real part and the imaginary part of this complex function and have different phases as described above, and a complex image consisting of a real-part image and an imaginary-part image is generated.
  • the image after multiplication (multiplied image) is expressed by the following equations (8) and (9) in each case.
  • For each of the multiplied images Fr(r) and Fc(r) (the cases where Mr(r) and Mc(r), respectively, are used for the multiplication), the first term is a component that can be removed by offset correction or the like.
  • The second term is the moiré interference fringe in which the “difference frequency” between the superimposed Fresnel zone apertures is extracted (corresponding to cos(α − θ) when the two apertures are represented by cos α and cos θ); since this corresponds to a basis of the Fourier transform, applying the Fourier transform converts it into a delta function, that is, a “point”, and it is the component that contributes to image formation.
  • The third term corresponds to the “sum frequency” (corresponding to cos(α + θ)); it is a component that does not contribute to image formation even when Fourier-transformed and acts as noise.
  • The factor appearing in these expressions is a real polynomial in k, β, and the shift δr. If the absolute value of the complex value is taken, a restored image can be obtained; however, in the case of fr(k) (the prior art), the first and second terms produce two points symmetric about the origin, so the restored image is doubled point-symmetrically. In the case of fc(k) (the first embodiment), such a problem does not occur and the image is reconstructed normally.
  • For obliquely incident light, the phase rotates depending on the incident angle, but when the absolute value of the complex value is taken, it can be confirmed that light arriving from an infinite distance is imaged as a delta function (a point). Since all calculations from the angular spectrum of the incident light to the formed image are linear, superposition holds, which explains the formation of the image.
  • In this configuration, the peripheral light amount follows (cos γ)⁴ (the cosine fourth law), and the distortion is 2 × β × d × tan γ, where γ represents the half angle of view (these relations are restated below for reference).
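  • For reference, a restatement of the two relations just quoted, with the worked value at a 30-degree half angle of view that matches the roughly 56% figure cited elsewhere in this description.

```latex
% Peripheral light amount and distortion of the coded-aperture system (gamma: half angle of view).
\[
  \mathrm{RI}(\gamma) = \cos^{4}\gamma,
  \qquad
  \text{distortion} = 2\,\beta\,d\,\tan\gamma
\]
% Worked value: RI(30 deg) = (\sqrt{3}/2)^{4} = 9/16 = 56.25%.
```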
  • As described above, the imaging device 10 includes the wide conversion lens (lens 300, 400, 500, or 600) according to the first embodiment, so that images with different angles of view can be obtained with good image quality (a high peripheral light amount ratio and small optical distortion). Further, the angle of view can be changed while maintaining the refocusing function provided by the coded aperture (Fresnel zone plate 210).
  • In the embodiment described above, the projected image is multiplied by the first and second Fresnel zone patterns to generate a complex image, and the complex image is subjected to a complex Fourier transform to reconstruct an image in the spatial domain.
  • the reconstruction of the image in the imaging device of the present invention is not limited to the aspect using a complex image.
  • For example, the projected image may be multiplied by a real function such as Mr(r) to obtain a multiplied image, and the image may be reconstructed by Fourier-transforming this multiplied image.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Lenses (AREA)
  • Diffracting Gratings Or Hologram Optical Elements (AREA)

Abstract

The purpose of the present invention is to provide: a wide conversion lens with which images having differing angles of view are obtained with good image quality when the wide conversion lens is mounted on an image capture device; and an image capture device comprising such a wide conversion lens. A wide conversion lens according to a first embodiment of the present invention is mounted on the object side of the image capture device, and satisfies conditions (1) and (2). 1.1 < Ms/Cs (1) 0.3 < |Ff/DA| < 2.0 (2) Cs is the length in the sagittal direction of an on-axis luminous flux incident on the surface of the wide conversion lens closest to the object; Ms is the length in the sagittal direction of the luminous flux of the maximum angle of view incident on the surface of the wide conversion lens closest to the object; DA is the maximum air gap of the wide conversion lens; and Ff is the focal length of the lens group on the object side of the position at which the maximum air gap is present.

Description

Wide conversion lens and imaging device
The present invention relates to a wide conversion lens and an imaging device to which the wide conversion lens is attached.
In the field of imaging devices, wide conversion lenses that are attached to a main lens (the lens of the main imaging device) to widen the angle of view (shorten the focal length) are known. For example, Patent Document 1 describes a wide-angle relay lens that includes a first lens having negative refractive power and a second lens having positive refractive power and whose lens characteristics satisfy a conditional expression. Patent Document 2 describes a wide converter lens that includes a negative meniscus first lens with its convex surface directed toward the object side and a positive second lens and whose lens characteristics satisfy a conditional expression.
On the other hand, in the field of imaging devices, technologies for acquiring an image of a subject without a lens have been developed in recent years. For example, in Non-Patent Document 1 below, a Fresnel zone plate is disposed in proximity to an imaging element, and the moiré fringes produced by superimposing the projected image formed on the imaging element by light from the subject with a projection pattern corresponding to the Fresnel zone plate are Fourier-transformed, so that an image of the subject can be obtained without a lens. Non-Patent Document 2 is a technical report on the technology of Non-Patent Document 1. A lensless imaging technique that uses a Fresnel zone plate as a mask pattern, as in Non-Patent Documents 1 and 2, is also known (see Patent Document 3). In Patent Document 1, an image of a subject is reconstructed by Fourier-transforming moiré fringes formed when light from the subject is incident on two grating patterns (Fresnel zone plates) arranged to face each other.
JP 2012-93751 A; JP 2014-56054 A; WO 2016/203573
Conventional wide conversion lenses could not obtain images with different angles of view with good image quality when mounted on an imaging device. For example, in the wide-angle relay lens described in Patent Document 1, according to the study of the present inventors, although the peripheral light amount ratio is improved, the optical distortion is very large, about 44%, so that when image recognition is performed the subject is too distorted to be recognized. In the wide converter lens described in Patent Document 2, although the off-axis light beam incident on the lens is thicker than the on-axis light beam, the peripheral light amount ratio is low. In general, the peripheral light amount ratio of a lens decreases toward the periphery of the image. Theoretically, the peripheral light amount ratio depends on the angle of view, and the relation RI = (cos γ)⁴ holds, where RI (Relative Illumination) is the peripheral light amount ratio and γ is the half angle of view. For example, when the half angle of view is 30 degrees, RI is about 56.25%.
On the other hand, refocusing is possible with the techniques described in Non-Patent Documents 1 and 2 and Patent Document 3 above, and the focal length is f = 2 × β × d. Here, refocusing means bringing the image into focus at a desired focusing distance after the fact by image reconstruction, β represents the pitch of the Fresnel zone plate, and d represents the distance between the image sensor and the Fresnel zone plate. When the angle of view (focal length) is changed in an imaging apparatus having such a configuration, the pitch and/or the position of the Fresnel zone plate must be changed, which brings about the drawbacks described below.
For example, when widening the angle of view (shortening the focal length), it is necessary either to (1) coarsen the pitch of the Fresnel zone plate or to (2) bring the Fresnel zone plate closer to the sensor (reduce the value of d). In case (1), the angular resolution decreases because the number of fringes, that is, the pattern formed by the transmitting regions and the light-shielding regions of the Fresnel zone plate, decreases; in case (2), the closer the Fresnel zone plate is to the image sensor, the more demanding the requirement on the positional accuracy of the Fresnel zone plate and the image sensor becomes. Conversely, for a telephoto setting (lengthening the focal length), it is necessary either to (3) make the pitch of the Fresnel zone plate finer or to (4) move the Fresnel zone plate away from the image sensor (increase the value of d). In case (3), the required narrower pitch raises the manufacturing difficulty, and in case (4), diffraction of light blurs the image of the Fresnel zone plate on the image sensor and the image quality of the reconstructed image deteriorates. Considering these demerits of changing the angle of view by changing β and d, it is conceivable to widen the angle of view by mounting a wide conversion lens in front of (on the object side of) the coded aperture. However, with conventional wide conversion lenses such as those of Patent Documents 1 and 2, it is difficult, as with general imaging apparatuses that use lenses, to obtain images with different angles of view with good image quality when they are mounted on an imaging apparatus having a coded aperture.
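The focal-length relation underlying these trade-offs, restated from the expression quoted above, makes the two options explicit.

```latex
% Focal length of the Fresnel-zone-aperture system, as quoted above.
\[
  f \;=\; 2\,\beta\,d
\]
% Widening the angle of view (reducing f) therefore requires either a smaller beta
% (a coarser pattern, option (1)) or a smaller d (plate closer to the sensor, option (2)).
```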
 As described above, with the conventional techniques it has been difficult to obtain images with different angles of view at good image quality when a wide conversion lens is attached to an imaging device.
 The present invention has been made in view of such circumstances, and it is an object of the present invention to provide a wide conversion lens capable of obtaining images with different angles of view with good image quality when mounted on an imaging device. Another object of the present invention is to provide an imaging device capable of obtaining images with different angles of view with good image quality by using a wide conversion lens.
 In order to achieve the above object, a wide conversion lens according to a first aspect of the present invention is a wide conversion lens mounted on the object side of an imaging apparatus and satisfies the following conditional expressions (1) and (2).
 1.1 < Ms/Cs   (1)
 0.3 < |Ff/DA| < 2.0   (2)
 Here, Cs is the length in the sagittal direction of the axial light beam incident on the surface of the wide conversion lens closest to the object side, Ms is the length in the sagittal direction of the light beam of the maximum angle of view incident on the surface of the wide conversion lens closest to the object side, DA is the maximum air gap in the wide conversion lens, and Ff is the focal length of the lens group on the object side of the position where the maximum air gap exists.
 If Ms/Cs ≤ 1.1, it becomes difficult to raise the peripheral light amount, and if 2.0 ≤ Ms/Cs, the pupil aberration becomes too large and distortion and astigmatism increase. If |Ff/DA| ≤ 0.3, the power of the front group becomes too strong and large negative distortion occurs, or the overall length becomes long and the lens cannot be made compact. In contrast, if Ms/Cs and |Ff/DA| are within the ranges defined in the first aspect, the peripheral light amount ratio can be kept high and the optical distortion can be kept small. Thus, the wide conversion lens according to the first aspect can provide images with different angles of view at good image quality when mounted on an imaging device.
 In the first aspect, it is more preferable that Ms/Cs < 2.0 and 0.5 < |Ff/DA| < 1.5, because satisfying these conditional expressions further improves the peripheral light amount ratio and further reduces the distortion.
 The wide conversion lens according to a second aspect satisfies the following conditional expression (3) in the first aspect.
 2.2 < Mt/Ct   (3)
 Here, Ct is the length in the tangential direction of the axial light beam incident on the surface of the wide conversion lens closest to the object side, and Mt is the length in the tangential direction of the light beam of the maximum angle of view incident on the surface of the wide conversion lens closest to the object side, projected onto a plane perpendicular to its chief ray. If Mt/Ct ≤ 2.2, it becomes difficult to raise the peripheral light amount ratio, but if the characteristics of the wide conversion lens are within the range defined in the second aspect, the peripheral light amount ratio can be kept high and the optical distortion can be kept small. In the second aspect, Mt/Ct < 3.0 is preferable, because if 3.0 ≤ Mt/Ct the pupil aberration becomes too large and distortion and astigmatism increase.
 The wide conversion lens according to a third aspect satisfies the following conditional expression (4) in the first or second aspect.
 |WCTL/WCf| < 0.1   (4)
 Here, WCTL is the length from the first surface, which is the surface of the wide conversion lens closest to the object side, to the final surface, which is the surface farthest from the object side, and WCf is the focal length of the wide conversion lens. If |WCTL/WCf| exceeds the range defined in the third aspect, the afocal arrangement collapses, the deviation from the back focus of the optical system to which the lens is attached becomes large, and the image goes out of focus when the user uses the lens; therefore, |WCTL/WCf| is preferably within the range defined in the third aspect.
 The wide conversion lens according to a fourth aspect satisfies the following conditional expression (5) in any one of the first to third aspects.
 |Dist| < 10%   (5)
 Here, Dist is the optical distortion when the wide conversion lens is attached to an ideal lens. Since objects can still be recognized with about 10% of distortion in object recognition for sensing applications and the like, an optical distortion of up to 10% is acceptable. In the fourth aspect, |Dist| < 5% is more preferable, because the distortion becomes even smaller.
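 For reference, the five conditional expressions above can be checked numerically. The following Python sketch is only an illustration; the parameter values in the example call are hypothetical and are not taken from the embodiments described later.

```python
def check_conditions(Ms, Cs, Mt, Ct, Ff, DA, WCTL, WCf, Dist):
    """Evaluate conditional expressions (1) to (5) for a candidate wide conversion lens.
    Lengths share one (normalized) unit; Dist is the optical distortion in percent."""
    return {
        "(1) 1.1 < Ms/Cs":         Ms / Cs > 1.1,
        "(2) 0.3 < |Ff/DA| < 2.0": 0.3 < abs(Ff / DA) < 2.0,
        "(3) 2.2 < Mt/Ct":         Mt / Ct > 2.2,
        "(4) |WCTL/WCf| < 0.1":    abs(WCTL / WCf) < 0.1,
        "(5) |Dist| < 10%":        abs(Dist) < 10.0,
    }

# Hypothetical example values (for illustration only; not the values of the embodiments)
for name, ok in check_conditions(Ms=1.5, Cs=1.0, Mt=2.5, Ct=1.0, Ff=-1.0, DA=1.2,
                                 WCTL=3.0, WCf=-60.0, Dist=-4.0).items():
    print(name, "satisfied" if ok else "not satisfied")
```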
 The wide conversion lens according to a fifth aspect is, in any one of the first to fourth aspects, composed of, in order from the object side, a front group having negative refractive power and a rear group having positive refractive power. Adopting the configuration of the fifth aspect makes it easy to widen the angle of view.
 In order to achieve the above object, an imaging device according to a sixth aspect includes the wide conversion lens according to any one of the first to fifth aspects, and an imaging device main body to which the wide conversion lens is attached and which has a coded aperture, an imaging element that outputs a signal of a projected image formed by the coded aperture, and an image restoration unit that reconstructs an image in the spatial domain on the basis of the signal. According to the sixth aspect, the wide conversion lens according to any one of the first to fifth aspects makes it possible to obtain images with different angles of view at good image quality (the peripheral light amount ratio is improved and the distortion is small).
 In the imaging device according to a seventh aspect, in the sixth aspect, the imaging device main body has a coded aperture, an imaging element that outputs a signal of a projected image formed by the coded aperture, and an image restoration unit that reconstructs an image in the spatial domain on the basis of the signal; the wide conversion lens is mounted on the object side of the coded aperture, and a projected image is formed on the imaging element by the wide conversion lens and the coded aperture. An imaging apparatus using a coded aperture has a refocusing function, but attempting to change the angle of view (focal length) by changing β and d brings the demerits described above, such as deterioration of image quality and an increase in manufacturing difficulty. In addition, the peripheral light amount ratio decreases in accordance with the cosine fourth-power law (also written as the "cos4 law"). According to the seventh aspect, however, providing the wide conversion lens according to any one of the first to fifth aspects makes it possible to change the angle of view while eliminating these demerits and to obtain images of good image quality.
 In the imaging device according to an eighth aspect, in the seventh aspect, the image restoration unit corrects aberration caused by the wide conversion lens in the image in the spatial domain. According to the eighth aspect, aberrations caused by the lens (astigmatism, spherical aberration, coma, distortion, axial chromatic aberration, lateral chromatic aberration, and the like) are corrected, so that an image of good image quality can be obtained. The aberration correction may be performed before or after the image is reconstructed. When the aberration is corrected before the reconstruction, this can be done, for example, by changing the shape and/or size of the coded-aperture pattern by which the projected image is multiplied, in accordance with the type and/or amount of the aberration to be corrected. When the aberration is corrected after the reconstruction, this can be done by image processing, for example by applying different filters (such as filters for point image restoration) depending on conditions such as the type and/or degree of the aberration to be corrected and the position in the image. However, the aberration correction in the present invention is not limited to these modes.
 The imaging device according to a ninth aspect of the present invention further includes, in the eighth aspect, a lens information acquisition unit that acquires information on the wide conversion lens, and the image restoration unit corrects the aberration on the basis of the acquired information. According to the ninth aspect, the aberration can be corrected accurately and easily on the basis of the lens information. A lens information storage unit that stores the lens information may be provided in the wide conversion lens, and the lens information acquisition unit may acquire the information from the lens information storage unit. Such a lens information storage unit can be configured using a non-transitory recording medium.
 In the imaging device according to a tenth aspect, in any one of the seventh to ninth aspects, the coded aperture is a Fresnel zone plate, and the image restoration unit multiplies the projected image formed on the imaging element by the wide conversion lens and the Fresnel zone plate by a Fresnel zone pattern corresponding to the Fresnel zone plate to generate a multiplied image, and Fourier-transforms the multiplied image to reconstruct an image in the spatial domain. In the tenth aspect, it is preferable to generate the multiplied image using Fresnel zone patterns whose enlargement ratios differ according to the subject distance to be focused on; this makes it possible to obtain an image focused at a desired subject distance.
 In the tenth aspect, it is preferable to reconstruct the image by two-dimensional complex Fourier transform of a complex image, because this yields a high-quality image free from overlapping of subject images and the like. Specifically, it is preferable that the Fresnel zone patterns include a first Fresnel zone pattern and a second Fresnel zone pattern whose local spatial frequency in each region is the same as that of the first Fresnel zone pattern but whose phase of the local spatial frequency is different, and that the image restoration unit multiplies the projected image by the first Fresnel zone pattern and by the second Fresnel zone pattern to generate, as the multiplied image, a complex image consisting of a real-part image and an imaginary-part image, and reconstructs the image in the spatial domain by two-dimensional complex Fourier transform of the complex image.
 In the imaging device according to an eleventh aspect of the present invention, in any one of the seventh to tenth aspects, the wide conversion lens is an interchangeable lens that is attached to and detached from the imaging device. According to the eleventh aspect, a lens with a different angle of view and focal length can be attached according to the imaging purpose and imaging conditions, and imaging can be performed under the desired conditions.
 As described above, according to the wide conversion lens of the present invention, images with different angles of view can be obtained at good image quality when the lens is mounted on an imaging device. According to the imaging device of the present invention, images with different angles of view can be obtained at good image quality by means of the wide conversion lens.
FIG. 1 is a block diagram showing the configuration of an imaging device according to the first embodiment.
FIG. 2 is a view showing another example of the configuration of the lens device.
FIG. 3 is a view showing examples of the Fresnel zone plate.
FIG. 4 is a block diagram showing the configuration of the processing unit.
FIG. 5 is a view showing information stored in the storage unit.
FIG. 6 is a view showing the lens configurations of Examples 1 to 4.
FIG. 7 is a view showing the optical paths of Examples 1 to 4.
FIG. 8 is a view showing the lens data of Examples 1 to 3.
FIG. 9 is a view showing the lens data of Example 4.
FIG. 10 is a view showing the aspheric coefficients of Examples 1, 2, and 4.
FIG. 11 is a view showing the longitudinal aberrations of Example 1.
FIG. 12 is a view showing the longitudinal aberrations of Example 2.
FIG. 13 is a view showing the longitudinal aberrations of Example 3.
FIG. 14 is a view showing the longitudinal aberrations of Example 4.
FIG. 15 is a table showing the specifications of Examples 1 to 4.
FIG. 16 is a table showing the peripheral light amount calculation conditions of Examples 1 to 4.
FIG. 17 is a view showing the definitions of parameters.
FIG. 18 is a table showing the peripheral light amounts of Examples 1 to 4.
FIG. 19 is a graph showing the peripheral light amount ratios of the ideal lens and of Examples 1 to 4.
FIG. 20 is a flowchart showing the image reconstruction processing.
FIG. 21 is a view showing examples of Fresnel zone plates having different phases.
FIG. 22 is a view showing Fresnel zone patterns with enlargement ratios corresponding to the focusing distance.
 Hereinafter, embodiments for implementing a wide conversion lens and an imaging device according to the present invention will be described in detail with reference to the accompanying drawings.
 <First Embodiment>
 <Configuration of Imaging Device>
 FIG. 1 is a block diagram showing the configuration of an imaging device 10 (imaging device) according to the first embodiment. The imaging device 10 includes a lens device 100 (wide conversion lens, interchangeable lens) and an imaging device main body 200 (imaging device main body). The imaging device 10 can be applied not only to digital cameras but also to smartphones, tablet terminals, surveillance cameras, inspection cameras, and the like; however, the application targets are not limited to these devices.
 <Configuration of Lens Device>
 The lens device 100 includes a lens 300 (wide conversion lens, interchangeable lens). The lens 300 is a lens (wide conversion lens) that, when attached to the imaging device main body 200 (on the subject side of the Fresnel zone plate 210), makes the imaging angle of view wider than in the unattached state. The lens device 100 and the imaging device main body 200 are attached to each other via a lens-side mount 130 and a body-side mount 270 (lens attachment portion); when they are attached, a lens-side terminal 132 and a body-side terminal 272 come into contact with each other, and communication between the lens device 100 and the imaging device main body 200 becomes possible. Information on the lens 300 is stored in a memory 120 (lens information storage unit), and the stored information is acquired in response to a command from an image processing unit 230 (information acquisition unit 230E; see FIG. 4). For example, the lens configuration, focal length, imaging angle of view, F-number, and types and values of aberrations of the lens 300 can be stored in the memory 120. These pieces of information may be measured after the lens 300 (lens device 100) is manufactured and then stored in the memory 120, or design values may be stored. Information input by operating the operation unit 260 may also be used.
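 As an illustration of the kind of lens information that could be held in the memory 120 and read out by the information acquisition unit, the following Python sketch shows one possible data structure; the field names and example values are hypothetical and are not defined in this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class WideConversionLensInfo:
    """Hypothetical record of lens information stored in the lens-side memory."""
    model: str                      # identifier of the wide conversion lens
    focal_length_mm: float          # focal length of the converter
    full_angle_of_view_deg: float   # imaging angle of view obtained with the converter attached
    f_number: float                 # F-number of the combined system
    distortion_percent: float       # optical distortion Dist when attached to an ideal lens
    aberration_table: dict = field(default_factory=dict)  # e.g. per-image-height aberration values

# Example instance with made-up values, as it might be read out over the mount terminals
lens_info = WideConversionLensInfo(
    model="WCL-EXAMPLE", focal_length_mm=-60.0,
    full_angle_of_view_deg=60.0, f_number=2.88,
    distortion_percent=-4.0,
    aberration_table={"lateral_chromatic_um": {0.5: 1.2, 1.0: 2.5}},
)
print(lens_info.full_angle_of_view_deg)
```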
 The lens device 100 is attached to and detached from the imaging device main body 200 by a user operation, so that lenses with different angles of view and focal lengths can be attached according to the imaging purpose, imaging conditions, and the like, and imaging can be performed under the desired conditions. For example, instead of the lens device 100, lens devices 102, 104, and 106 having lenses 400, 500, and 600 (see FIG. 2) can be attached to the imaging device main body 200. The lenses 400, 500, and 600 are also wide conversion lenses similar to the lens 300. In FIGS. 1 and 2, the configurations of the lenses 300, 400, 500, and 600 are shown in simplified form; their specific configurations will be described later. Although FIGS. 1 and 2 illustrate the case where the lens devices 100, 102, 104, and 106 are attached to the imaging device main body 200 via the lens-side mount 130 and the body-side mount 270, the attachment to the imaging device main body 200 is not limited to this mode. For example, the attachment may instead be made by a screw thread (lens attachment portion) provided on one of the lens devices 100, 102, 104, and 106 and the imaging device main body 200 and a screw groove (lens attachment portion) provided on the other and corresponding to the screw thread. The lenses 300, 400, 500, and 600 are attached on the subject side (object side) of the Fresnel zone plate 210.
 <Configuration of Imaging Device Body>
 The imaging device main body 200 includes a Fresnel zone plate 210 (Fresnel zone plate, coded aperture) and an imaging element 220 (imaging element), and the imaging element 220 acquires a projected image formed by light from the subject passing through the lens 300 (or the lenses 400, 500, 600) and the Fresnel zone plate 210. The Fresnel zone plate 210 is disposed on the imaging-surface side of the imaging element 220 with its center coinciding with the center of the imaging element 220 and parallel to the imaging surface 220A (light-receiving surface) of the imaging element 220. The Fresnel zone plate 210 may be exchangeable with respect to the imaging device main body 200. By selectively using Fresnel zone plates having different characteristics (size, pitch, phase, distance from the image sensor, and so on), the characteristics of the acquired projected image (angle of view, depth (distance measurement accuracy), and so on) can be controlled, and an image with the desired characteristics can be reconstructed. In the following description, the Fresnel zone plate may be written as "FZP".
 <Configuration of Fresnel Zone Plate>
 Part (a) of FIG. 3 shows FZP1, which is an example of the Fresnel zone plate 210. In FZP1, the transmittance for incident light changes continuously with the distance from the center according to the FZP1 pattern: regions closer to white (transmitting regions) have higher transmittance, and regions closer to black (light-shielding regions) have lower transmittance. As a whole, the transmitting regions and the light-shielding regions are arranged alternately and concentrically, and these transmitting and light-shielding regions constitute the Fresnel zone plate. The spacing of the concentric circles becomes narrower from the center of FZP1 toward the periphery. Such a concentric pattern (a local change of spatial frequency) is expressed by expressions (2), (3), (7), and the like described later, and the fineness of the concentric circles in these expressions is called the "pitch". The pitch is determined by the value of β described above: a small β gives a coarse pattern and a large β gives a fine pattern. A memory may be provided in the imaging device main body 200 to store the pitch information (the value of β), and the image processing unit 230 (information acquisition unit 230E) may acquire and use this information.
 The optical axis L (see FIG. 1) of the Fresnel zone plate 210 is the axis that passes through the centers of the FZP and the imaging element 220 and is perpendicular to the FZP and to the imaging surface 220A of the imaging element 220. The FZP is disposed close to the imaging element 220. Since the projected image becomes blurred by diffraction of light depending on the distance from the imaging element 220, it is preferable not to place the FZP too far away.
 Part (b) of FIG. 3 shows FZP2, which is another example of the Fresnel zone plate. FZP2 is obtained by applying a threshold to the transmittance of the FZP1 pattern: regions whose transmittance exceeds the threshold become transmitting regions with 100% transmittance (white portions), and regions at or below the threshold become light-shielding regions with 0% transmittance (black portions), so that the transmittance changes discontinuously (in two steps of 0% or 100%) with the distance from the center. As a whole, the transmitting regions and the light-shielding regions are arranged alternately and concentrically, and these transmitting and light-shielding regions constitute the Fresnel zone plate. Thus, the "Fresnel zone plate" in the present invention includes a mode such as FZP1 and a mode such as FZP2, and correspondingly the "Fresnel zone pattern" in the present invention includes a pattern whose transmittance changes continuously and a pattern whose transmittance changes discontinuously. A light-shielding portion (a region that does not transmit light, like the light-shielding regions) may be provided in the peripheral portion of the Fresnel zone plate shown in FIG. 3 to prevent unnecessary light from entering the peripheral portion of the imaging element 220.
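 The continuous pattern of FZP1 and the binarized pattern of FZP2 can be illustrated numerically. The following Python sketch assumes a transmittance of the form 0.5·(1 + cos(β·r²)), a form commonly used for Fresnel zone apertures; the grid size, β value, and threshold are hypothetical and are not values from this disclosure.

```python
import numpy as np

def fresnel_zone_pattern(size=512, beta=0.05, binarize=False, threshold=0.5):
    """Return a Fresnel zone pattern whose transmittance varies with distance from the center.

    size     : number of samples per side of the square pattern
    beta     : coefficient controlling the pitch (larger beta gives finer fringes)
    binarize : if True, apply a threshold to obtain a two-level (FZP2-like) pattern
    """
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    r2 = x**2 + y**2
    pattern = 0.5 * (1.0 + np.cos(beta * r2))          # continuous transmittance in [0, 1] (FZP1-like)
    if binarize:
        pattern = (pattern > threshold).astype(float)  # 0% / 100% transmittance (FZP2-like)
    return pattern

fzp1 = fresnel_zone_pattern(binarize=False)
fzp2 = fresnel_zone_pattern(binarize=True)
print(fzp1.shape, fzp2.min(), fzp2.max())
```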
 <Configuration of Imaging Element>
 The imaging element 220 is an image sensor having a plurality of pixels composed of photoelectric conversion elements arranged two-dimensionally. A microlens may be provided for each pixel to increase the light collection efficiency. Color filters (for example red, blue, and green) may be arranged on the pixels so that a color image can be reconstructed. In that case, when the projected image is acquired, interpolation processing corresponding to the arrangement pattern of the color filters is performed in the same way as the demosaicing processing (also called synchronization processing) used for color image generation in an ordinary digital camera. This interpolation processing generates the missing color signals at each pixel (light-receiving element), so that signals of every color (for example red, blue, and green) are obtained at all pixels. Such processing can be performed by the image processing unit 230.
 In addition to the Fresnel zone plate 210 and the imaging element 220 described above, the imaging device main body 200 includes an image processing unit 230, a storage unit 240, a display unit 250, and an operation unit 260, and performs restoration of an image of the subject and the like on the basis of the projected image acquired by the Fresnel zone plate 210 and the imaging element 220.
 <Configuration of Image Processing Unit>
 FIG. 4 shows the configuration of the image processing unit 230 (image restoration unit). The image processing unit 230 has a projected image input unit 230A, a multiplied image generation unit 230B (image restoration unit), a Fourier transform unit 230C (image restoration unit), an aberration correction unit 230D (image restoration unit), an information acquisition unit 230E (lens information acquisition unit), and a display control unit 230F. The projected image input unit 230A controls the imaging element 220 and acquires from it the projected image formed on the imaging element 220 by light from the subject entering the FZP. The multiplied image generation unit 230B multiplies the projected image by a plurality of Fresnel zone patterns (first and second Fresnel zone patterns) having the same local spatial frequency but different phases of the local spatial frequency, and generates a complex image consisting of a real-part image and an imaginary-part image. The Fourier transform unit 230C reconstructs an image in the spatial domain by applying a two-dimensional complex Fourier transform to the complex image. The aberration correction unit 230D corrects, in the image in the spatial domain, aberrations caused by the lens 300 or the lens 400 (astigmatism, spherical aberration, coma, distortion, axial chromatic aberration, lateral chromatic aberration, and the like). The information acquisition unit 230E acquires the information on the lens 300 stored in the memory 120 of the lens device 100 (for example, the lens configuration, focal length, imaging angle of view, F-number, and types and values of aberrations), and also acquires information (pitch information) on the Fresnel zone plate 210 used to acquire the projected image. The display control unit 230F controls display of the projected image, the complex image, the reconstructed image, and the like on the display unit 250. The ROM 230G (non-transitory recording medium) stores computer (processor) readable code of various programs for operating the imaging device 10, such as a program for performing image reconstruction.
 The functions of the image processing unit 230 described above can be realized using various processors. The various processors include a CPU (Central Processing Unit), which is a general-purpose processor that executes software (programs) to realize various functions. They also include a programmable logic device (PLD) such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture, and a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing specific processing, such as an ASIC (Application Specific Integrated Circuit).
 The function of each unit may be realized by one processor or by a combination of a plurality of processors, and a plurality of functions may be realized by one processor. As examples in which a plurality of functions are configured by one processor, first, as typified by computers such as clients and servers, one processor may be configured by a combination of one or more CPUs and software, and this processor may realize a plurality of functions. Second, as typified by a system on chip (SoC), a processor that realizes the functions of the entire system with a single IC (Integrated Circuit) chip may be used. In this way, the various functions are configured, as a hardware structure, using one or more of the various processors described above.
 More specifically, the hardware structure of these various processors is an electric circuit (circuitry) in which circuit elements such as semiconductor elements are combined.
 When the above-described processor (or electric circuit) executes software (a program), computer-readable code of the software to be executed (for example, software for acquiring a projected image, acquiring lens information, restoring an image, and so on) is stored in a non-transitory recording medium such as the ROM 230G, and the processor refers to the software. During processing using the software, for example a RAM (Random Access Memory) is used as a temporary storage area, and data stored in, for example, an EEPROM (Electronically Erasable and Programmable Read Only Memory) is referred to. Devices such as the RAM and the EEPROM are not shown in FIG. 4.
 <Configuration of Storage Unit>
 The storage unit 240 is composed of a non-transitory recording medium such as a CD (Compact Disk), a DVD (Digital Versatile Disk), a hard disk, or various semiconductor memories, and stores the images and information shown in FIG. 5 in association with one another. The lens information 240A is the information on the lens 300 acquired from the lens device 100 (for example, the lens configuration, focal length, imaging angle of view, F-number, and types and values of aberrations). The projected image 240B is the projected image acquired from the imaging element 220. The Fresnel zone plate information 240C is information on the local spatial frequency of the Fresnel zone plate 210 (including pitch information such as the value of β); it may be information acquired from the imaging element 220 or information input via the operation unit 260. The Fresnel zone pattern information 240D is information indicating Fresnel zone patterns, and it is preferable to record a plurality of Fresnel zone patterns having different phases of the local spatial frequency. The multiplied image 240E (multiplied image) is a complex image, consisting of a real-part image and an imaginary-part image, obtained by multiplying the projected image by the Fresnel zone patterns (first and second Fresnel zone patterns) indicated by the Fresnel zone pattern information 240D. The reconstructed image 240F is an image in the spatial domain obtained by two-dimensional complex Fourier transform of the multiplied image 240E.
 <Configuration of Display Unit and Operation Unit>
 The display unit 250 includes a display device such as a liquid crystal display (not shown), displays the projected image, the multiplied image, the reconstructed image, and the like, and is also used for screen display of the UI (User Interface) when instructions are input via the operation unit 260. The operation unit 260 is composed of devices such as a keyboard, a mouse, buttons, and switches (not shown), with which the user can input a projected image acquisition instruction, an image reconstruction instruction, focusing distance conditions, local spatial frequency information (pitch, phase), and the like. The display device of the display unit 250 may be configured as a touch panel and used as the operation unit 260 in addition to displaying images.
 <Specific Configurations of Lenses>
 <Example 1>
 Part (a) of FIG. 6 shows the arrangement of the lens 300, the Fresnel zone plate 210, and the imaging surface 220A of the imaging element 220. The lens 300 is a wide conversion lens composed of, in order from the object side (subject side), a front group 310 having negative power (refractive power; the same applies hereinafter) and a rear group 320 having positive power, and is made up of lenses 302, 304, 306, and 308. The Fresnel zone plate 210 is disposed at the pupil position (stop position) of the lens 300.
 <Example 2>
 Part (b) of FIG. 6 shows the arrangement of the lens 400, the Fresnel zone plate 210, and the imaging surface 220A of the imaging element 220. The lens 400 is a wide conversion lens composed of, in order from the object side (subject side), a front group 410 having negative power and a rear group 420 having positive power, and is made up of lenses 402, 404, 406, and 408. The Fresnel zone plate 210 is disposed at the pupil position (stop position) of the lens 400.
 <Example 3>
 Part (c) of FIG. 6 shows the arrangement of the lens 500, the Fresnel zone plate 210, and the imaging surface 220A of the imaging element 220. The lens 500 is a wide conversion lens composed of, in order from the object side (subject side), a front group 510 having negative power and a rear group 520 having positive power, and is made up of lenses 502, 504, 506, and 508. The Fresnel zone plate 210 is disposed at the pupil position (stop position) of the lens 500.
 <Example 4>
 Part (d) of FIG. 6 shows the arrangement of the lens 600, the Fresnel zone plate 210, and the imaging surface 220A of the imaging element 220. The lens 600 is a wide conversion lens composed of, in order from the object side (subject side), a front group 610 having negative power and a rear group 620 having positive power, and is made up of lenses 602, 604, 606, and 608. The Fresnel zone plate 210 is disposed at the pupil position (stop position) of the lens 600.
 Although the angles of view of the lenses 300, 400, 500, and 600 are fixed, the lens attached to the imaging device main body in the imaging device of the present invention may be a zoom lens with a variable angle of view (focal length).
 <Optical Path Diagrams>
 Part (a) of FIG. 7 shows the optical paths of the axial light beam B1 and the light beam B2 of the maximum angle of view with respect to the lens 300 in a state where an ideal lens I1 (a lens with zero aberration) is placed at the pupil position of the lens 300. In reality, the Fresnel zone plate 210 is placed at the position of the ideal lens I1 and the light propagates as a wavefront, so the optical path beyond the Fresnel zone plate 210 (on the imaging element 220 side) does not follow the dotted-line path shown; the path for the case where the ideal lens I1 is placed there is shown for reference. The characteristics of the ideal lens I1 are f' = 1 mm, IH = 9.407, FNo. = 2.88, and 2ω = 41.2 deg. The lens data of the ideal lens I1 are values normalized so that the distance between the pupil and the imaging surface 220A is 1 mm (that is, the distance between the ideal lens I1 and the imaging surface 220A is 1 mm). The lens data of the ideal lenses I2 to I4 described below are the same as those of the ideal lens I1. Here, mm denotes millimeters and deg denotes degrees.
 Similarly, part (b) of FIG. 7 shows the optical paths of the axial light beam B3 and the light beam B4 of the maximum angle of view with respect to the lens 400 in a state where an ideal lens I2 (a lens with zero aberration) is placed at the pupil position of the lens 400. In reality, the Fresnel zone plate 210 is placed at the position of the ideal lens I2 and the light propagates as a wavefront, so the optical path beyond the Fresnel zone plate 210 (on the imaging element 220 side) does not follow the dotted-line path shown; the path for the case where the ideal lens I2 is placed there is shown for reference.
 Similarly, part (c) of FIG. 7 shows the optical paths of the axial light beam B5 and the light beam B6 of the maximum angle of view with respect to the lens 500 in a state where an ideal lens I3 (a lens with zero aberration) is placed at the pupil position of the lens 500. In reality, the Fresnel zone plate 210 is placed at the position of the ideal lens I3 and the light propagates as a wavefront, so the optical path beyond the Fresnel zone plate 210 (on the imaging element 220 side) does not follow the dotted-line path shown; the path for the case where the ideal lens I3 is placed there is shown for reference.
 Similarly, part (d) of FIG. 7 shows the optical paths of the axial light beam B7 and the light beam B8 of the maximum angle of view with respect to the lens 600 in a state where an ideal lens I4 (a lens with zero aberration) is placed at the pupil position of the lens 600. In reality, the Fresnel zone plate 210 is placed at the position of the ideal lens I4 and the light propagates as a wavefront, so the optical path beyond the Fresnel zone plate 210 (on the imaging element 220 side) does not follow the dotted-line path shown; the path for the case where the ideal lens I4 is placed there is shown for reference.
 <Lens Data>
 Parts (a) to (c) of FIG. 8 are tables showing the lens data of the lenses 300, 400, and 500. In the tables, a surface whose surface number is marked with "*" is an aspheric surface. Similarly, FIG. 9 is a table showing the lens data of the lens 600. The lens data in FIGS. 8 and 9 are values normalized so that the distance between the pupil and the imaging surface 220A is 1 mm (that is, the distance between the Fresnel zone plate 210 and the imaging surface 220A is 1 mm). If this distance becomes k times larger, for example by changing the focal length, the radii of curvature and the surface spacings also become k times larger.
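 As a simple illustration of the scaling rule just described, the following Python sketch rescales normalized lens data by a factor k; the sample radii and spacings are hypothetical values and are not the data of FIG. 8 or FIG. 9.

```python
def scale_lens_data(radii_mm, spacings_mm, k):
    """Scale normalized lens data (radii of curvature and surface spacings) by a factor k."""
    return [r * k for r in radii_mm], [d * k for d in spacings_mm]

# Hypothetical normalized data (pupil-to-image distance = 1 mm) rescaled to a 5 mm distance
radii, spacings = scale_lens_data([10.0, -4.2, 3.1], [0.20, 0.05, 0.30], k=5.0)
print(radii, spacings)
```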
 FIG. 10 is a table showing the aspheric coefficients of the lens 300. When the center of the lens surface is taken as the origin and the optical axis is the x-axis, the aspheric shape of the lens surface is expressed by the following expression (1), where c is the reciprocal of the paraxial radius of curvature (paraxial curvature), hi is the height from the optical axis, κ is the conic constant, and Ai are the aspheric coefficients.
 [Expression (1): aspheric surface shape formula]
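 For readability, the following LaTeX block restates a commonly used even-asphere sag formula consistent with the symbols defined above (c, hi, κ, Ai); the exact sign convention and the set of exponents of expression (1) should be taken from the original figure, so this is only an assumed form.

```latex
x = \frac{c\,h_i^{2}}{1+\sqrt{1-\kappa\,c^{2}h_i^{2}}} \;+\; \sum_{i} A_{i}\,h_i^{\,i}
```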
 <Longitudinal Aberration Diagrams>
 Parts (a) to (d) of FIG. 11 show the spherical aberration, astigmatism, distortion, and lateral chromatic aberration of the lens 300, respectively. Similarly, parts (a) to (d) of FIG. 12 show those of the lens 400, parts (a) to (d) of FIG. 13 show those of the lens 500, and parts (a) to (d) of FIG. 14 show those of the lens 600.
 <Specifications of the Examples>
 FIG. 15 is a table showing the specifications of Examples 1 to 4. With the configurations described above, in Example 1 (when the lens 300 is attached) the angle of view changes with respect to the original optical system (angle of view 20.6 deg) and is widened to 60.0 deg. Similarly, in Example 2 (when the lens 400 is attached) the angle of view is widened from 46.5 deg to 80.0 deg, in Example 3 (when the lens 500 is attached) from 24.5 deg to 60.0 deg, and in Example 4 (when the lens 600 is attached) from 20.2 deg to 60.0 deg. The "original optical system" in Examples 1 to 4 is an optical system using a coded aperture such as the Fresnel zone plate 210 (focal length 1.0 mm).
 <Peripheral Light Amount>
 FIG. 16 is a table showing the conditional expressions for calculating the peripheral light amount for Examples 1 to 4; the parameters in the table have the meanings listed below. All of the Examples satisfy the numerical ranges of the respective aspects of the present invention. The definitions of Cs, Ct, Mt, and Ms are shown in parts (a), (b), and (c) of FIG. 17 (illustrated for Example 1 as a representative example). Part (b) of FIG. 17 shows a plane P1 and part (c) of FIG. 17 shows a plane P2; the planes P1 and P2 are perpendicular to the chief ray L1 of the axial light beam B1 and to the chief ray L2 of the light beam B2 of the maximum angle of view, respectively.
 Cs: length in the sagittal direction of the axial light beam incident on the surface closest to the object side
 Ms: length in the sagittal direction of the light beam of the maximum angle of view incident on the surface closest to the object side
 DA: maximum air gap in the wide conversion lens (lens 300, 400, 500, 600)
 Ff: focal length of the lens group on the object side of DA
 Ct: length in the tangential direction of the axial light beam incident on the surface closest to the object side
 Mt: length in the tangential direction of the light beam of the maximum angle of view incident on the surface closest to the object side, projected onto a plane perpendicular to the chief ray
 WCTL: length from the first surface to the final surface of the wide conversion lens
 WCf: focal length of the wide conversion lens
 Dist: optical distortion when the wide conversion lens is attached to an ideal lens
 FIG. 18 is a table showing the peripheral light amount ratio (with the value on the optical axis taken as 100%, expressed in terms of (cos γ)^4 for the half angle of view γ) for the combinations of the wide conversion lenses of Examples 1 to 4 with the ideal lens, and FIG. 19 is a graph showing the peripheral light amount ratios of the ideal lens alone and of the wide conversion lenses of Examples 1 to 4 combined with the ideal lens. The peripheral light amount ratio of the ideal lens is about 56% at a half angle of view γ of 30 deg and about 34% at 40 deg, whereas for the combinations of the wide conversion lenses of Examples 1 to 4 with the ideal lens it is about 78% to 99%, which shows that the peripheral light amount ratio is improved (does not drop).
 <Image Reconstruction>
 Image reconstruction by the imaging device 10 configured as described above will now be described. FIG. 20 is a flowchart showing the image reconstruction processing.
 <Input of Projected Image>
 In step S100, the image processing unit 230 (projected image input unit 230A) acquires a projected image of the subject from the imaging element 220. The acquired projected image is the image formed on the imaging element 220 by light from the subject entering the Fresnel zone plate 210.
 <Information on Local Spatial Frequency>
 In step S110, the image processing unit 230 (information acquisition unit 230E) inputs information on the local spatial frequency of the Fresnel zone plate 210 used to acquire the projected image (the pitch of the Fresnel zone plate 210). This information may be input from a memory (not shown) or in response to a user operation on the operation unit 260, or the information acquisition unit 230E may obtain it by analyzing the projected image acquired in step S100. Since the pitch is determined by the value of β (see expressions (2) to (4), (7), and the like described later), in practice the value of β may be input. When a known subject (for example, a point light source at infinity) is photographed, the pitch (value of β) can be obtained by analyzing the photographed image. Alternatively, the image reconstruction may be repeated while changing the pitch (value of β) to find a value that gives a sharp image.
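 The last approach, repeating the reconstruction while varying β and keeping the sharpest result, can be sketched as follows. This is only an illustration: reconstruct stands for the reconstruction procedure of step S120 onward, and image variance is used as a hypothetical sharpness measure (the disclosure does not specify one).

```python
import numpy as np

def find_best_beta(projection, reconstruct, beta_candidates):
    """Try a range of beta values and keep the reconstruction with the highest sharpness.

    projection      : projected image acquired from the image sensor (2-D array)
    reconstruct     : function (projection, beta) -> reconstructed spatial-domain image
    beta_candidates : iterable of beta values to try
    """
    best_beta, best_image, best_score = None, None, -np.inf
    for beta in beta_candidates:
        image = reconstruct(projection, beta)
        score = float(np.var(image))   # hypothetical sharpness metric: image variance
        if score > best_score:
            best_beta, best_image, best_score = beta, image, score
    return best_beta, best_image

# Example (dummy) usage with a placeholder reconstruction function:
# beta, img = find_best_beta(np.zeros((64, 64)), lambda p, b: p, [0.05, 0.10, 0.15])
```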
 <複素画像の生成>
 ステップS120では、画像処理部230(乗算画像生成部230B)が投影像に第1,第2のフレネルゾーンパターンをそれぞれ乗算して、実部の画像と虚部の画像とからなる複素画像を生成する。ステップS120で乗算されるフレネルゾーンパターンは、記憶部240に記憶されたパターン(フレネルゾーンパターン情報240D)のうちステップS110で入力したピッチ(βの値)に応じて選択したパターンを用いることができる。また、記憶部240に記憶されたパターンをピッチ(βの値)に応じて変化(必要に応じて拡大または縮小してよい)させたパターンを用いることもできる。画像処理部230(乗算画像生成部230B)は、生成した複素画像を乗算画像240Eとして記憶部240に記憶する。
<Generation of complex image>
In step S120, the image processing unit 230 (multiplied image generation unit 230B) multiplies the projected image by the first and second Fresnel zone patterns to generate a complex image consisting of a real part image and an imaginary part image. The Fresnel zone patterns multiplied in step S120 can be patterns selected, according to the pitch (value of β) input in step S110, from the patterns stored in the storage unit 240 (the Fresnel zone pattern information 240D). It is also possible to use patterns obtained by changing (enlarging or reducing as necessary) the patterns stored in the storage unit 240 according to the pitch (value of β). The image processing unit 230 (multiplied image generation unit 230B) stores the generated complex image in the storage unit 240 as the multiplied image 240E.
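As a concrete illustration of step S120, the following Python sketch (the use of NumPy, the assumption of a square sensor, and the function names, pixel pitch and value of β are assumptions made here for illustration, not values taken from the embodiment) builds the first and second Fresnel zone patterns as cos(βr²) and sin(βr²) and multiplies them with the projected image to form a complex image:

import numpy as np

def fresnel_zone_pattern(n, pixel_pitch, beta, phase=0.0):
    # n x n grid of squared radial distances r^2 measured from the pattern center
    ax = (np.arange(n) - n / 2) * pixel_pitch
    x, y = np.meshgrid(ax, ax)
    r2 = x**2 + y**2
    return np.cos(beta * r2 + phase)

def complex_multiplied_image(projection, pixel_pitch, beta):
    n = projection.shape[0]                                                      # square projected image assumed
    real_pattern = fresnel_zone_pattern(n, pixel_pitch, beta, phase=0.0)         # first pattern, cos(beta * r^2)
    imag_pattern = fresnel_zone_pattern(n, pixel_pitch, beta, phase=-np.pi / 2)  # second pattern, shifted 90 deg -> sin(beta * r^2)
    # real part image + j * imaginary part image = complex image
    return projection * real_pattern + 1j * projection * imag_pattern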
<Phase of Fresnel zone pattern>
The first Fresnel zone pattern used in image restoration can be, for example, the pattern shown in part (a) of FIG. 21 (the phase at the center is 0 deg); multiplying the projected image by this pattern gives the real part image. The second Fresnel zone pattern can be, for example, the pattern shown in part (b) of FIG. 21 (the same pitch as the first Fresnel zone pattern, with the phase shifted by 90 deg); multiplying the projected image by this pattern gives the imaginary part image. As described above for equation (7) etc., the phase shift between the first and second Fresnel zone patterns is preferably 90 deg, but a clear image can be reconstructed as long as the shift is within the range of 70 deg to 110 deg, positive or negative. The phase of the local spatial frequency of the first Fresnel zone pattern or the second Fresnel zone pattern may be made identical to the phase of the Fresnel zone plate 210.
When using Fresnel zone patterns, data of a plurality of Fresnel zone patterns having different phases can be stored in the storage unit 240 as the Fresnel zone pattern information 240D, and a desired pattern can be selected and used. Alternatively, the image processing unit 230 (multiplied image generation unit 230B) may generate a desired pattern based on the pitch and phase information. Since such Fresnel zone patterns are stored in the storage unit 240 as electronic data (the Fresnel zone pattern information 240D), a desired pattern can be selected and generated quickly and easily. In addition, there are none of the problems that arise when plates (substrates) corresponding to a plurality of patterns are held as physical objects, such as an increase in the size of the apparatus, an increase in manufacturing cost, and image quality degradation due to characteristic variations among the plurality of patterns (including manufacturing variations, changes over time, and temperature changes).
<Magnification factor of Fresnel zone pattern>
When the subject (light source) is at infinity, parallel light is incident on the Fresnel zone plate 210, and the projected image formed on the imaging element 220 has the same size as the Fresnel zone plate 210. When the subject is at a finite distance, however, diverging light is incident, and the projected image becomes larger as the distance becomes shorter. Therefore, an image focused at a desired distance can be obtained by using, as the first and second Fresnel zone patterns, patterns whose enlargement ratios differ according to the subject distance to be focused. For example, a plurality of patterns corresponding to subject distances may be stored in the storage unit 240 as the Fresnel zone pattern information 240D and read out for use. Alternatively, one Fresnel zone pattern may be stored as a reference pattern and enlarged at a different enlargement ratio according to the subject distance. In this case, a pattern that corresponds to an infinite subject distance and has the same size as the Fresnel zone plate can be used as the reference pattern. FIG. 22 is a diagram showing how the enlargement ratio of the Fresnel zone pattern differs according to the subject distance.
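As one way to apply such an enlargement ratio in the sketch above, assuming the geometric relation commonly used for this kind of setup — a point source at distance a, with the Fresnel zone plate at distance d from the sensor, casts a shadow enlarged by m = (a + d) / a (this relation, like the function names, is an assumption for illustration, not a value from the embodiment):

def magnification(subject_distance, d):
    # shadow of the zone plate is enlarged by m for a subject at a finite distance
    return (subject_distance + d) / subject_distance

def refocused_complex_image(projection, pixel_pitch, beta, subject_distance, d):
    m = magnification(subject_distance, d)
    # enlarging the reconstruction pattern by m is equivalent to using beta / m^2,
    # because cos(beta * (r / m)^2) = cos((beta / m^2) * r^2)
    return complex_multiplied_image(projection, pixel_pitch, beta / m**2)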
The generation of the complex image (step S120) and the reconstruction of the image (step S130) may also be repeated while changing the enlargement ratio, and a sharp image may be acquired by maximizing the focus evaluation value of the reconstructed image (for example, the integral value of the luminance signal in a focus evaluation area set in the image).
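A sketch of that search loop, assuming the list of candidate enlargement ratios and the focus evaluation area are supplied by the caller, and using the reconstruct() function sketched under step S130 below:

def focus_value(img, region):
    # integral of the luminance signal in the focus evaluation area, as in the example above
    y0, y1, x0, x1 = region
    return np.sum(img[y0:y1, x0:x1])

def refocus_by_search(projection, pixel_pitch, beta, region, magnifications):
    best_img, best_val = None, -np.inf
    for m in magnifications:                      # e.g. np.linspace(1.0, 1.05, 51)
        cplx = complex_multiplied_image(projection, pixel_pitch, beta / m**2)
        img = reconstruct(cplx)                   # 2-D FFT + absolute value (step S130)
        val = focus_value(img, region)
        if val > best_val:
            best_img, best_val = img, val
    return best_img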
Thus, the imaging device 10 according to the first embodiment can perform refocusing using the Fresnel zone plate 210 (coded aperture) and the Fresnel zone pattern.
<Reconstruction of image>
In step S130, the image processing unit 230 (Fourier transform unit 230C) performs a two-dimensional complex Fourier transform on the complex image to reconstruct an image of the subject (an image in the spatial domain) (described later). Furthermore, in the reconstructed image (the image in the spatial domain), the image processing unit 230 (aberration correction unit 230D) corrects aberrations (astigmatism, spherical aberration, coma, distortion, axial chromatic aberration, lateral chromatic aberration, etc.) caused by the lens (lens 300, lens 400, etc.) attached to the imaging device main body 200. The correction is performed based on the lens information 240A acquired from the memory 120. The aberration correction may be performed before or after the image is reconstructed. When the aberration is corrected before reconstruction, this can be done, for example, by changing the shape and/or size of the Fresnel zone pattern (coded aperture) by which the projected image is multiplied, according to the type and/or amount of aberration to be corrected. When the aberration is corrected after reconstruction, this can be done by image processing, for example by applying different filters (such as point-image restoration filters) depending on conditions such as the type and/or degree of aberration to be corrected and the position in the image. However, the aberration correction in the present invention is not limited to these examples. The image processing unit 230 (display control unit 230F) displays the reconstructed image on the display unit 250 (step S140). The image processing unit 230 (Fourier transform unit 230C) also stores the reconstructed image in the storage unit 240 as the reconstructed image 240F.
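A minimal sketch of the Fourier-transform step itself, assuming the reconstruction is a centered two-dimensional FFT of the complex image followed by taking the absolute value (the normalization is an illustrative choice; aberration correction is omitted):

def reconstruct(complex_image):
    # 2-D complex Fourier transform of the multiplied (complex) image
    spectrum = np.fft.fftshift(np.fft.fft2(complex_image))
    # the absolute value of the complex spectrum gives the reconstructed spatial-domain image
    img = np.abs(spectrum)
    return img / img.max()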
<Details of image restoration>
The image restoration in the first embodiment will now be described in more detail. The pattern I(r) of the coded aperture (Fresnel zone plate) is expressed by equation (2).
[Equation (2): rendered as an image in the original publication; not reproduced here]
The larger the value of I(r), the greater the light transmittance in the predetermined wavelength band. r is the radius (radial position) on the Fresnel zone plate, and β (> 0) is a constant that determines the fineness (pitch) of the pattern. Hereinafter, to avoid negative values, I2(r), which is offset so as to fall within the range of 0 to 1 as in equation (3), is considered.
[Equation (3): rendered as an image in the original publication; not reproduced here]
It is assumed that this coded aperture (Fresnel zone plate) is disposed at a distance d from the sensor surface of the imaging element. If light (parallel light) is incident at an incident angle θ from a point light source at infinite distance, the shadow of the coded aperture (Fresnel zone plate) appears on the sensor translated by Δr (= d × tan θ). The translated shadow S(r) is expressed by equation (4).
[Equation (4): rendered as an image in the original publication; not reproduced here]
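For reference, plausible forms of equations (2) to (4), reconstructed here from the surrounding description (these exact expressions are an assumption, since the originals appear only as images), are:

\[ I(r) = \cos\beta r^{2} \qquad (2) \]
\[ I_{2}(r) = \tfrac{1}{2}\left(1 + \cos\beta r^{2}\right) \qquad (3) \]
\[ S(r) = \tfrac{1}{2}\left(1 + \cos\beta (r + \Delta r)^{2}\right) \qquad (4) \]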
I2(r) and S(r) are originally two-dimensional images, that is, functions of two variables, but for simplicity only the one-dimensional image on the cross section cut by the plane containing the center and the incident light source is considered here. The result can easily be extended to the two-dimensional case by calculating as in the following equation (5).
[Equation (5): rendered as an image in the original publication; not reproduced here]
The captured shadow image (projected image) is restored (reconstructed) on a computer and output. In the image restoration process, the shadow image is multiplied by an unshifted Fresnel zone aperture image (Fresnel zone pattern). For this internally multiplied function, the two functions represented by the following equations (6) and (7) are considered here. The imaginary unit is denoted by j.
[Equation (6): rendered as an image in the original publication; not reproduced here]
[Equation (7): rendered as an image in the original publication; not reproduced here]
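Plausible forms of equations (6) and (7), again reconstructed from the description that follows (an assumption, since the originals appear only as images), are:

\[ M_{r}(r) = \cos\beta r^{2} \qquad (6) \]
\[ M_{c}(r) = \exp\!\left(j\beta r^{2}\right) = \cos\beta r^{2} + j\,\sin\beta r^{2} \qquad (7) \]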
Mr(r) is the same real function as I(r), but with the offset (DC component) removed. The image reconstruction in the prior art (Non-Patent Documents 1 and 2 and Patent Document 1 described above) corresponds to the case where the projected image of the real aperture is multiplied by the Fresnel zone pattern represented by the real function Mr(r). Mc(r) is a complex function; since Mc(r) = cos βr² + j × sin βr² = cos βr² − j × cos(βr² + π/2), the real part and the imaginary part correspond to Fresnel zone patterns whose phases differ by (π/2), that is, by 90 deg. The real part of Mc(r), cos βr², is the same as Mr(r). In the first embodiment, the projected image is multiplied in this way by two Fresnel zone patterns (the first and second Fresnel zone patterns) that correspond to the real part and the imaginary part of the complex function and differ in phase, to generate a complex image consisting of a real part image and an imaginary part image.
The images after the internal multiplication (multiplied images) are expressed, for the respective cases, by the following equations (8) and (9).
[Equation (8): rendered as an image in the original publication; not reproduced here]
[Equation (9): rendered as an image in the original publication; not reproduced here]
For the multiplied images Fr(r) and Fc(r), obtained by using Mr(r) and Mc(r), respectively, as the internally multiplied function, the first term is in each case a component that can be removed by offset correction or the like. The second term is the moiré interference fringe in which the "difference frequency" between the superimposed Fresnel zone apertures (corresponding to cos(α − φ) when the two apertures are represented by cos α and cos φ) is extracted; since this coincides with a basis of the Fourier transform, applying the Fourier transform converts it into a delta function, that is, a "point", and it is the component that contributes to imaging. The third term corresponds to the "sum frequency" (corresponding to cos(α + φ)); even after the Fourier transform this component does not contribute to imaging and acts as noise.
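The "difference frequency" and "sum frequency" terms follow directly from the product-to-sum identity; writing the two superimposed zone patterns schematically as cos α and cos φ:

\[ \cos\alpha\,\cos\varphi = \tfrac{1}{2}\cos(\alpha - \varphi) + \tfrac{1}{2}\cos(\alpha + \varphi) \]

With α = β(r + Δr)² and φ = βr² (the forms suggested by the reconstruction of equations (3) and (4) above), the difference term contains β(2rΔr + Δr²), which is linear in r, i.e. a single spatial frequency proportional to Δr that the Fourier transform maps to a point; the sum term retains a quadratic phase in r and therefore spreads out and acts as noise.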
Offset correction is applied appropriately to Fr(r) and Fc(r), and the images from which the first term has been removed are denoted Fr2(r) and Fc2(r). When the Fourier transform is actually applied to these, the results are expressed by equations (10) and (11), with the Fourier transforms of Fr(r) and Fc(r) written as fr(k) and fc(k).
[Equation (10): rendered as an image in the original publication; not reproduced here]
[Equation (11): rendered as an image in the original publication; not reproduced here]
Here, ξ(k, β, Δr) is a real polynomial. Taking the absolute value of these complex quantities yields a restored image; in the case of fr(k) (the prior art), however, the first term and the second term produce two points symmetric about the origin, so the restored image is a point-symmetrically overlapped image. In the case of fc(k) (the first embodiment), this problem does not occur and the image is reconstructed normally. Common to both is that the third term of fr(k) and the second term of fc(k) act as noise; because of this term the modulation transfer function (MTF) cannot reach 100%, which means that the MTF does not become 100% even in the absence of sensor noise. However, since this noise becomes smaller as the value of β is increased, its influence can be reduced by increasing the value of β (making the pattern finer).
Regarding the first term of fc(k) (the case of the first embodiment), the phase rotates depending on the incident angle of the light, but when the absolute value of the complex quantity is taken, it can be confirmed that light arriving from infinity is imaged as a delta function (a point). Since all operations from the angular spectrum of the incident light to the formed image are linear, superposition holds, which explains the formation of the image.
When the two-dimensional case is calculated, the peripheral light amount ratio is the fourth power of (cos γ) and the distortion is 2 × β × d × tan γ, where γ represents the half angle of view.
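As a quick numerical check of this cos⁴ law against the values quoted for the ideal lens in the description of FIG. 19:

\[ (\cos 30^{\circ})^{4} \approx 0.56, \qquad (\cos 40^{\circ})^{4} \approx 0.34, \]

consistent with the approximately 56% and 34% peripheral light amount ratios stated there.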
<Effect of First Embodiment>
As described above, with the wide conversion lenses 300, 400, 500, and 600 (lens devices 100, 102, 104, and 106) according to the first embodiment, images with different angles of view can be obtained with good image quality when the lens is mounted on an imaging device. In addition, by including the wide conversion lens (lens 300, 400, 500, 600) according to the first embodiment, the imaging device 10 according to the first embodiment can obtain images with different angles of view with good image quality (a high peripheral light amount ratio and little optical distortion). Furthermore, the angle of view can be changed while maintaining the refocusing function provided by the coded aperture (Fresnel zone plate 210).
<Other aspects>
In the first embodiment described above, the projected image is multiplied by the first and second Fresnel zone patterns to generate a complex image, and this complex image is subjected to a complex Fourier transform to reconstruct an image in the spatial domain; however, the image reconstruction in the imaging device of the present invention is not limited to the use of a complex image. As described above for equations (6), (8), (10), etc., the projected image may be multiplied by a real function such as Mr(r) to obtain a multiplied image, and this multiplied image may be Fourier transformed to reconstruct the image. In this case, overlapping of subject images occurs in the restored image, but this can be handled, for example, by cutting out half of the reconstructed image for display. Also in the case of multiplication by a real function, an image focused at a desired distance can be obtained by using Fresnel zone patterns whose enlargement ratios differ according to the subject distance to be focused.
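A sketch of this real-function variant, reusing fresnel_zone_pattern() from the earlier sketch; cropping one half of the reconstructed image to deal with the overlapped twin image is an illustrative choice, not the only possible handling:

def reconstruct_real(projection, pixel_pitch, beta):
    pattern = fresnel_zone_pattern(projection.shape[0], pixel_pitch, beta)   # Mr(r) = cos(beta * r^2)
    multiplied = projection * pattern
    img = np.abs(np.fft.fftshift(np.fft.fft2(multiplied)))
    half = img[:, : img.shape[1] // 2]   # keep one half to avoid the point-symmetrically overlapped image
    return half / half.max()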
Although the embodiments of the present invention have been described above, the present invention is not limited to the above-described embodiments, and various modifications can be made without departing from the spirit of the present invention.
DESCRIPTION OF SYMBOLS
10 Imaging device
100, 102, 104, 106 Lens device
120 Memory
130 Lens-side mount
132 Lens-side terminal
200 Imaging device main body
210 Fresnel zone plate
220 Imaging element
220A Imaging surface
230 Image processing unit
230A Projected image input unit
230B Multiplied image generation unit
230C Fourier transform unit
230D Aberration correction unit
230E Information acquisition unit
230F Display control unit
230G ROM
240 Storage unit
240A Lens information
240B Projected image
240C Fresnel zone plate information
240D Fresnel zone pattern information
240E Multiplied image
240F Reconstructed image
250 Display unit
260 Operation unit
270 Body-side mount
272 Body-side terminal
300, 302, 304, 306, 308, 400, 402, 404, 406, 408, 500, 502, 504, 506, 508, 600, 602, 604, 606, 608 Lens
310, 410, 510, 610 Front group
320, 420, 520, 620 Rear group
B1, B3, B5, B7 Axial light flux
B2, B4, B6, B8 Light flux
Fr Multiplied image
FZP1 Fresnel zone plate
FZP2 Fresnel zone plate
I Pattern
I1, I2, I3, I4 Ideal lens
L Optical axis
L1, L2 Chief ray
Mr Real function
P1, P2 Plane
S Shadow
S100 to S140 Steps of the reconstruction processing
d Distance
γ Half angle of view
θ Incident angle

Claims (12)

  1.  A wide conversion lens mounted on the object side of an imaging device, the wide conversion lens satisfying the following conditional expressions (1) and (2):
     1.1 < Ms/Cs   (1)
     0.3 < |Ff/DA| < 2.0   (2)
     where Cs is the length in the sagittal direction of the axial light flux incident on the surface closest to the object side of the wide conversion lens, Ms is the length in the sagittal direction of the light flux at the maximum angle of view incident on the surface closest to the object side of the wide conversion lens, DA is the maximum air gap in the wide conversion lens, and Ff is the focal length of the lens group on the object side of the position where the maximum air gap exists.
  2.  The wide conversion lens according to claim 1, satisfying the following conditional expression (3):
     2.2 < Mt/Ct   (3)
     where Ct is the length in the tangential direction of the axial light flux incident on the surface closest to the object side of the wide conversion lens, and Mt is the length in the tangential direction of the light flux at the maximum angle of view incident on the surface closest to the object side of the wide conversion lens, projected onto a plane perpendicular to the chief ray.
  3.  The wide conversion lens according to claim 1 or 2, satisfying the following conditional expression (4):
     |WCTL/WCf| < 0.1   (4)
     where WCTL is the length from the first surface, which is the surface closest to the object side of the wide conversion lens, to the final surface, which is the surface farthest from the object side, and WCf is the focal length of the wide conversion lens.
  4.  The wide conversion lens according to any one of claims 1 to 3, satisfying the following conditional expression (5):
     |Dist| < 10%   (5)
     where Dist is the optical distortion when the wide conversion lens is mounted on an ideal lens.
  5.  The wide conversion lens according to any one of claims 1 to 4, consisting of, in order from the object side, a front group having a negative refractive power and a rear group having a positive refractive power.
  6.  An imaging device comprising:
     the wide conversion lens according to any one of claims 1 to 5; and
     an imaging device main body that has an imaging element and to which the wide conversion lens is attached.
  7.  The imaging device according to claim 6, wherein the imaging device main body has a coded aperture, an imaging element that outputs a signal of a projected image formed by the coded aperture, and an image restoration unit that reconstructs an image of a spatial domain based on the signal.
  8.  The imaging device according to claim 6 or 7, wherein, in the imaging device main body, the wide conversion lens is mounted on the object side of the coded aperture, and the projected image is formed on the imaging element by the wide conversion lens and the coded aperture.
  9.  The imaging device according to claim 7 or 8, wherein the image restoration unit corrects an aberration caused by the wide conversion lens in the image of the spatial domain.
  10.  The imaging device according to claim 9, further comprising a lens information acquisition unit that acquires information on the wide conversion lens, wherein the image restoration unit corrects the aberration based on the acquired information.
  11.  The imaging device according to any one of claims 7 to 10, wherein the coded aperture is a Fresnel zone plate, and the image restoration unit multiplies the projected image formed on the imaging element by the wide conversion lens and the Fresnel zone plate by a Fresnel zone pattern corresponding to the Fresnel zone plate to generate a multiplied image, and performs a Fourier transform on the multiplied image to reconstruct the image of the spatial domain.
  12.  The imaging device according to any one of claims 7 to 11, wherein the wide conversion lens is an interchangeable lens that is attached to and detached from the imaging device.
PCT/JP2018/045990 2017-12-26 2018-12-13 Wide conversion lens and image capture device WO2019131191A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-249070 2017-12-26
JP2017249070A JP2021043230A (en) 2017-12-26 2017-12-26 Wide-conversion lens and imaging apparatus

Publications (1)

Publication Number Publication Date
WO2019131191A1 true WO2019131191A1 (en) 2019-07-04

Family

ID=67067108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/045990 WO2019131191A1 (en) 2017-12-26 2018-12-13 Wide conversion lens and image capture device

Country Status (2)

Country Link
JP (1) JP2021043230A (en)
WO (1) WO2019131191A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024079998A1 (en) * 2022-10-13 2024-04-18 株式会社ジャパンディスプレイ Image-capturing device, subject depth estimation method, and program

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS555378U (en) * 1978-06-26 1980-01-14
JPS63100414A (en) * 1986-10-17 1988-05-02 Ricoh Co Ltd Small-sized wide converter lens
JPH04191718A (en) * 1990-11-27 1992-07-10 Konica Corp Front wide converter
JPH06130290A (en) * 1992-10-16 1994-05-13 Fuji Photo Optical Co Ltd Compact fixed focus lens
JPH0720367A (en) * 1993-07-06 1995-01-24 Sony Corp Variable power lens system
JPH0843731A (en) * 1994-07-27 1996-02-16 Canon Inc Wide converter lens
JPH11305119A (en) * 1998-04-16 1999-11-05 Nikon Corp Fisheye converter and photographic lens system mounting the same
JP2006106537A (en) * 2004-10-08 2006-04-20 Canon Inc Converter lens
JP2015133594A (en) * 2014-01-10 2015-07-23 株式会社リコー imaging module and imaging device
WO2017149687A1 (en) * 2016-03-02 2017-09-08 株式会社日立製作所 Imaging device


Also Published As

Publication number Publication date
JP2021043230A (en) 2021-03-18

Similar Documents

Publication Publication Date Title
US11290642B2 (en) Image processing apparatus and image processing method that generate a complex image based on a Fresnel Zone pattern
JP6773903B2 (en) Image processing equipment, imaging system, image processing method, and recording medium
JP5435086B2 (en) End-to-end design of super-resolution electro-optic imaging system
JP5681954B2 (en) Imaging apparatus and imaging system
JP5251462B2 (en) Image processing system
US11288780B2 (en) Imaging apparatus
EP1789830A2 (en) Extended depth of field using a multi-focal length lens with a controlled range of spherical aberration and centrally obscured aperture
TWI525346B (en) Optical imaging systems and optical systems with extended depth of focus
KR20090055635A (en) Imaging system with improved image quality and associated methods
US20140111606A1 (en) Camera systems and methods for gigapixel computational imaging
WO2019131191A1 (en) Wide conversion lens and image capture device
Fontbonne et al. Experimental validation of hybrid optical–digital imaging system for extended depth-of-field based on co-optimized binary phase masks
Wu et al. Comparison between the plenoptic sensor and the light field camera in restoring images through turbulence
Xie et al. An extended depth-of-field imaging system with a non-rotationally symmetric phase mask
Scrymgeour et al. Advanced Imaging Optics Utilizing Wavefront Coding.
Chen et al. Optics optimization in high-resolution imaging module with extended depth of field
Jovanov et al. Real-time spatially adaptive deconvolution for CFA images
Wood et al. Computational imaging systems for the infrared: revolution or evolution
Bakin Extended Depth of Field Technology in Camera Systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18893661

Country of ref document: EP

Kind code of ref document: A1

DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18893661

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP