WO2009088068A1 - Imaging Device and Optical Axis Control Method - Google Patents
Imaging Device and Optical Axis Control Method
- Publication number
- WO2009088068A1 (PCT/JP2009/050198, application JP2009050198W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- imaging
- optical axis
- image
- lens
- unit
- Prior art date
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B17/00—Details of cameras or camera bodies; Accessories therefor
- G03B17/02—Bodies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/58—Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
-
- G—PHYSICS
- G02—OPTICS
- G02F—OPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
- G02F1/00—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
- G02F1/29—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
- G02F1/291—Two-dimensional analogue deflection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
Definitions
- the present invention relates to an imaging apparatus and an optical axis control method.
- This application claims priority based on Japanese Patent Application No. 2008-003075 filed in Japan on January 10, 2008, and Japanese Patent Application No. 2008-180689 filed in Japan on July 10, 2008, the contents of which are incorporated herein.
- An image pickup apparatus, typified by a digital camera, is composed of an image pickup element, an imaging optical system (lens optical system), an image processor, a buffer memory, a flash memory (card-type memory), an image monitor, and the electronic circuits and mechanical mechanisms that control them. A solid-state electronic device such as a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor is usually used as the image pickup element.
- The light quantity distribution formed on the image pickup element is photoelectrically converted, and the resulting electrical signal is processed by an image processor such as a DSP (Digital Signal Processor) together with a buffer memory such as a DRAM (Dynamic Random Access Memory).
- The captured image is recorded and accumulated in a card-type flash memory or the like, and the recorded image can be displayed on a monitor.
- An optical system for forming an image on an image sensor is usually composed of several aspheric lenses in order to eliminate aberrations.
- In addition, in order to realize functions such as zoom and autofocus, a driving mechanism (actuator) that changes the focal length of the combined lens and the distance between the lens and the image pickup element is necessary.
- As imaging devices have increased in pixel count and resolution, imaging optical systems have become lower in aberration and higher in accuracy, and advanced functions such as zoom, autofocus, and camera-shake correction have progressed. Along with this, there is a problem that the imaging apparatus becomes large, making it difficult to reduce its size and thickness.
- an imaging lens device that includes a solid lens array, a liquid crystal lens array, and a single imaging device arranged in a planar shape has been proposed (for example, Patent Document 1).
- The same number of images as there are lenses in the lens array 2001 are formed, divided, on a single image sensor 2003.
- a plurality of images obtained from the image sensor 2003 are subjected to image processing by the arithmetic unit 2004 to reconstruct the entire image. Further, focus information is detected from the arithmetic unit 2004, and each liquid crystal lens of the liquid crystal lens array 2002 is driven via the liquid crystal drive unit 2005 to perform auto focus.
- the imaging lens device of Patent Document 1 has an autofocus function and a zoom function and can be downsized by combining a liquid crystal lens and a solid lens.
- an image pickup apparatus including one non-solid lens (liquid lens, liquid crystal lens), a solid lens array, and one image pickup device (for example, Patent Document 2).
- As shown in FIG. 39, it is composed of a liquid crystal lens 2131, a compound-eye optical system 2120, an image synthesizer 2115, and a drive voltage calculation unit 2142.
- the same number of images as the number of lens arrays are formed on a single image sensor 2105, and the image is reconstructed by image processing.
- a focus adjustment function can be realized with a small size and a thin shape by combining one non-solid lens (liquid lens, liquid crystal lens) and a solid lens array.
- the thin camera disclosed in Patent Document 3 achieves high definition of the composite image by combining the imaging lens array and the imaging element having the light shielding unit. Further, by combining a liquid lens with the imaging lens array and the imaging device, it is possible to realize a high definition of the composite image.
- The present invention has been made in view of such circumstances, and an object thereof is to provide an imaging apparatus and an optical axis control method with which, in order to realize a high-quality imaging apparatus, the relative position of the optical system and the imaging element can be adjusted easily and without manual work.
- The present invention is characterized by including a plurality of image sensors, a plurality of solid lenses that form an image on each of the image sensors, and a plurality of optical axis control units that control the directions of the optical axes of light incident on the image sensors.
- In the present invention, the optical axis control unit is configured by a non-solid lens whose refractive index distribution can be changed, and the optical axis of light incident on the imaging element is deflected by changing the refractive index distribution of the non-solid lens.
- In the present invention, the optical axis control unit includes a refracting plate and an inclination angle changing unit that changes the inclination angle of the refracting plate, and the optical axis of light incident on the image sensor is deflected by changing the inclination angle of the refracting plate with the inclination angle changing unit.
- the optical axis control unit includes a variable apex angle prism, and deflects an optical axis of light incident on the image sensor by changing an apex angle of the variable apex angle prism.
- The optical axis control unit includes a moving unit that moves the solid lens, and deflects the optical axis of light incident on the imaging element by moving the solid lens.
- The optical axis control unit includes a moving unit that moves the image sensor, and controls the optical axis of light incident on the image sensor by moving the image sensor.
- the present invention is characterized in that the optical axis control unit controls the direction of the optical axis based on a relative positional relationship with a known imaging target.
- the present invention is characterized in that each of the plurality of imaging elements has a different pixel pitch.
- the present invention is characterized in that each of the plurality of solid lenses has a different focal length.
- the present invention is characterized in that the plurality of imaging devices are arranged by rotating at different angles around the optical axis.
- The present invention also includes a plurality of imaging elements, a plurality of solid lenses that form images on each of the imaging elements, a non-solid lens whose refractive index distribution can be changed, and a focus control unit that changes the focal length of the solid lens by changing the refractive index distribution of the non-solid lens.
- The present invention also provides an optical axis control method in an imaging apparatus comprising a plurality of image sensors, a plurality of solid lenses that form images on each of the image sensors, and a plurality of optical axis control units that control the directions of the optical axes of light incident on the image sensors, wherein the optical axis control unit controls the direction of the optical axis based on a relative positional relationship with a known imaging target.
- The present invention also provides an optical axis control method in an imaging apparatus comprising a plurality of imaging elements, a plurality of solid lenses that form images on each of the imaging elements, a non-solid lens whose refractive index distribution can be changed, and a focus control unit that changes the focal length of the solid lens by changing the refractive index distribution of the non-solid lens, wherein the focus control unit controls the focal length of the solid lens based on a relative positional relationship between a known imaging target and the imaging element.
- Since the direction of the optical axis is controlled based on the relative position between the imaging target and the plurality of optical axis control units, the optical axis can be set at an arbitrary position on the imaging element surface, and an imaging apparatus with a wide focus adjustment range can be realized.
- FIG. 1 is a block diagram illustrating the configuration of an imaging apparatus according to a first embodiment of the present invention. The other drawings referred to in the first embodiment include: a detailed block diagram of the unit imaging unit of the imaging apparatus shown in FIG. 1; a configuration diagram of a liquid crystal lens; a schematic diagram explaining the function of the liquid crystal lens used in the imaging apparatus; a schematic diagram explaining the liquid crystal lens of the imaging apparatus; a schematic diagram explaining the imaging element of the imaging apparatus shown in FIG. 1; a detailed schematic diagram of the imaging element; and a block diagram showing the overall structure of the imaging apparatus 1 shown in FIG. 1.
- Further drawings include: a detailed block diagram of the video processing unit of the imaging apparatus according to the first embodiment; a detailed block diagram of the video composition processing unit; a detailed block diagram of the video processing control unit (control unit); a flowchart explaining an example of the operation; a schematic diagram illustrating the state of imaging by the imaging apparatus 1; schematic diagrams explaining high-definition sub-pixel composition; explanatory drawings showing the relationship between an imaging target (subject) and image formation; and further schematic diagrams illustrating the operation of the imaging apparatus 1.
- FIG. 1 is a functional block diagram showing the overall configuration of the imaging apparatus according to the first embodiment of the present invention.
- The imaging apparatus 1 shown in FIG. 1 includes six systems of unit imaging units 2 to 7.
- The unit imaging unit 2 includes an imaging lens 8 and an imaging element 14, the unit imaging unit 3 an imaging lens 9 and an imaging element 15, the unit imaging unit 4 an imaging lens 10 and an imaging element 16, the unit imaging unit 5 an imaging lens 11 and an imaging element 17, the unit imaging unit 6 an imaging lens 12 and an imaging element 18, and the unit imaging unit 7 an imaging lens 13 and an imaging element 19.
- Each of the imaging lenses 8 to 13 forms an image of the light from the imaging target on the corresponding imaging elements 14 to 19, respectively.
- Reference numerals 20 to 25 shown in FIG. 1 indicate optical axes of light incident on the image sensors 14 to 19, respectively.
- The image formed by the imaging lens 9 is photoelectrically converted by the imaging element 15, converting the optical signal into an electrical signal.
- The electrical signal produced by the image sensor 15 is converted into a video signal by the video processing unit 27 using parameters set in advance.
- the video processing unit 27 outputs the converted video signal to the video composition processing unit 38.
- the video composition processing unit 38 inputs the video signals converted by the video processing units 26 and 28 to 31 corresponding to the electrical signals output from the other unit imaging units 2 and 4 to 7.
- the video composition processing unit 38 synthesizes the six video signals picked up by the unit image pickup units 2 to 7 into one video signal while synchronizing them, and outputs it as a high-definition video.
- When the synthesized high-definition video is degraded relative to a preset determination value, the video composition processing unit 38 generates a control signal based on the determination result and outputs the control signal to the six control units 32 to 37.
- the control units 32 to 37 perform optical axis control of the corresponding imaging lenses 8 to 13 based on the input control signal. Then, the video composition processing unit 38 determines the high-definition video again. If this determination result is good, a high-definition video is output, and if it is bad, the operation of controlling the imaging lens is repeated.
- the unit imaging unit 3 includes a liquid crystal lens (non-solid lens) 301 and an optical lens (solid lens) 302.
- the control unit 33 includes four voltage control units 33a, 33b, 33c, and 33d that control the voltage applied to the liquid crystal lens 301.
- the voltage control units 33a, 33b, 33c, and 33d determine the voltage to be applied to the liquid crystal lens 301 based on the control signal generated by the video composition processing unit 38, and control the liquid crystal lens 301. Since the imaging lenses and control units of the other unit imaging units 2 and 4 to 7 shown in FIG. 1 have the same configuration, detailed description thereof is omitted here.
- The liquid crystal lens 301 includes a transparent first electrode 303, a second electrode 304, a transparent third electrode 305, a liquid crystal layer 306 disposed between the second electrode 304 and the third electrode 305, a first insulating layer 307 disposed between the first electrode 303 and the second electrode 304, a second insulating layer 308 disposed between the second electrode 304 and the third electrode 305, a third insulating layer 311 disposed outside the first electrode 303, and a fourth insulating layer 312 disposed outside the third electrode 305.
- the second electrode 304 has a circular hole, and is composed of four electrodes 304a, 304b, 304c, and 304d divided vertically and horizontally as shown in the front view of FIG.
- the voltage can be applied independently to each electrode.
- In the liquid crystal layer 306, the liquid crystal molecules are aligned in one direction so as to face the third electrode 305, and their alignment is controlled by applying voltages between the electrodes 303, 304, and 305 that sandwich the liquid crystal layer 306.
- The second insulating layer 308 is made of, for example, transparent glass with a thickness of several hundred μm in order to enlarge the effective lens diameter.
- the dimensions of the liquid crystal lens 301 are shown below.
- the size of the circular hole of the second electrode 304 is about ⁇ 2 mm, the distance from the first electrode 303 is 70 ⁇ m, and the thickness of the second insulating layer 308 is 700 ⁇ m.
- the thickness of the liquid crystal layer 306 is 60 ⁇ m.
- Although the first electrode 303 and the second electrode 304 are in different layers in this embodiment, they may be formed on the same surface. In that case, the first electrode 303 is made smaller than the circular hole of the second electrode 304 and is arranged at the hole position of the second electrode 304, and an electrode lead-out portion is provided at the divided portions of the second electrode 304. In this configuration, the first electrode 303 and the electrodes 304a, 304b, 304c, and 304d constituting the second electrode can still be voltage-controlled independently, and the overall thickness can be reduced.
- the operation of the liquid crystal lens 301 shown in FIG. 3 will be described.
- A voltage is applied between the transparent third electrode 305 and the second electrode 304 made of an aluminum thin film or the like, and at the same time a voltage is applied between the first electrode 303 and the second electrode 304.
- In this way, an axially symmetric electric field gradient can be formed about the central axis 309 of the circular hole in the second electrode 304. Around the edge of the circular electrode, the liquid crystal molecules of the liquid crystal layer 306 align themselves along this axially symmetric electric field gradient.
- As a result, the refractive index distribution for extraordinary light changes from the center toward the periphery of the circular electrode owing to the change in the orientation distribution of the liquid crystal layer 306, so the element can function as a lens.
- As described above, the refractive index distribution of the liquid crystal layer 306 can be changed freely by the voltages applied to the first electrode 303 and the second electrode 304, so optical characteristics such as those of a convex lens or a concave lens can be controlled freely.
- an effective voltage of 20 Vrms is applied between the first electrode 303 and the second electrode 304, and an effective voltage of 70 Vrms is applied between the second electrode 304 and the third electrode 305.
- an effective voltage of 90 Vrms is applied between the first electrode 303 and the third electrode 305 to function as a convex lens.
- the liquid crystal driving voltage (voltage applied between the electrodes) is a sine wave or a rectangular wave AC waveform with a duty ratio of 50%.
- the voltage value to be applied is expressed as an effective voltage (rms: root mean square value).
- an AC sine wave of 100 Vrms has a voltage waveform having a peak value of ⁇ 144V.
- 1 kHz is used as the frequency of the AC voltage.
- When the same voltage is applied to all of the divided electrodes, the refractive index distribution is axially symmetric.
- When different voltages are applied to the divided electrodes, however, the distribution becomes asymmetric, with its axis shifted with respect to the central axis 309 of the circular hole in the second electrode, which has the effect of deflecting the incident light away from the straight-through direction.
- The direction of this deflection can be changed by appropriately varying the voltages applied between the divided second electrode 304 and the third electrode 305.
- In this way the optical axis denoted by reference numeral 309 is shifted to the position indicated by reference numeral 310; the shift amount is, for example, 3 μm.
- FIG. 4 is a schematic diagram for explaining the optical axis shift function of the liquid crystal lens 301.
- By controlling, for each of the electrodes 304a, 304b, 304c, and 304d constituting the second electrode, the voltage applied between that electrode and the third electrode 305, the central axis of the liquid crystal lens and the central axis of its refractive index distribution can be shifted relative to each other. Since this corresponds to displacing the lens in the x-y plane with respect to the imaging element surface, the light beam entering the imaging element can be deflected in the u and v directions.
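- The following sketch illustrates this idea of mapping a desired optical-axis shift in the u-v plane to differential drive voltages for the four divided electrodes. It is not the patent's control law: the base voltage, the linear gain `volts_per_um`, and the assignment of electrodes to directions are hypothetical values chosen only for illustration.

```python
import numpy as np

def electrode_voltages(du_um, dv_um, base_v=70.0, volts_per_um=1.5,
                       v_min=0.0, v_max=100.0):
    """Toy mapping from a desired optical-axis shift (du, dv) in micrometres
    to RMS drive voltages for the four divided electrodes 304a-304d.
    Assumption (not from the patent): a shift along +u is produced by raising
    the voltage on one side of the hole and lowering it on the opposite side
    by the same amount, with a linear gain near the origin."""
    dva = volts_per_um * du_um          # differential term for the u-direction pair
    dvb = volts_per_um * dv_um          # differential term for the v-direction pair
    voltages = {
        "304a": base_v + dva,           # +u side (hypothetical assignment)
        "304c": base_v - dva,           # -u side
        "304b": base_v + dvb,           # +v side
        "304d": base_v - dvb,           # -v side
    }
    # keep every electrode within the drivable voltage range
    return {k: float(np.clip(v, v_min, v_max)) for k, v in voltages.items()}

# e.g. request a 3 um shift along u, as in the example above
print(electrode_voltages(du_um=3.0, dv_um=0.0))
```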
- FIG. 5 shows a detailed configuration of the unit imaging unit 3 shown in FIG.
- the optical lens 302 in the unit imaging unit 3 includes two optical lenses 302a and 302b, and the liquid crystal lens 301 is disposed between the optical lenses 302a and 302b.
- Each of the optical lenses 302a and 302b is composed of one or a plurality of lenses.
- Light rays incident from the object plane are collected by an optical lens 302a disposed on the object plane side of the liquid crystal lens 301, and are incident on the liquid crystal lens 301 in a state where the spot is reduced. At this time, the incident angle of the light beam to the liquid crystal lens 301 is almost parallel to the optical axis.
- the light beam emitted from the liquid crystal lens 301 is imaged on the surface of the image sensor 15 by the optical lens 302b disposed on the image sensor 15 side of the liquid crystal lens 301.
- With this arrangement, the diameter of the liquid crystal lens 301 can be reduced, so the voltage applied to the liquid crystal lens 301 can be lowered, the lens effect can be increased, and the thickness of the second insulating layer 308, and hence the lens thickness, can be reduced.
- one image pickup lens is arranged for one image pickup device.
- Alternatively, a configuration may be used in which a plurality of second electrodes 304 are formed on the same substrate so that a plurality of liquid crystal lenses are integrated.
- That is, since in the liquid crystal lens 301 the hole portion of the second electrode 304 corresponds to the lens, arranging a plurality of second-electrode patterns 304 on one substrate gives each hole portion a lens effect. Therefore, by arranging the plurality of second electrodes 304 on the same substrate in accordance with the arrangement of the plurality of imaging elements, all the imaging elements can be handled by a single liquid crystal lens unit.
- In this embodiment the number of liquid crystal layers is one, but a plurality of thinner liquid crystal layers may also be stacked; this improves responsiveness while maintaining the same light-collecting property, because the response speed deteriorates as the thickness of a liquid crystal layer increases.
- Furthermore, by changing the polarization direction between the liquid crystal layers, the lens effect can be obtained for all polarization directions of the light incident on the liquid crystal lens.
- The number of electrode divisions is exemplified here as four, but it can be changed according to the directions in which the optical axis is to be moved.
- a CMOS imaging device can be used as the imaging device of the imaging apparatus according to the present embodiment.
- the image sensor 15 is composed of pixels 501 in a two-dimensional array.
- In this embodiment, the pixel size of the CMOS image sensor is 5.6 μm × 5.6 μm, the pixel pitch is 6 μm × 6 μm, and the effective number of pixels is 640 (horizontal) × 480 (vertical).
- the pixel is a minimum unit of an imaging operation performed by the imaging device.
- one pixel corresponds to one photoelectric conversion element (for example, a photodiode).
- Within the 5.6 μm pixel size there is a light-receiving portion with a certain area (spatial spread), and the pixel averages and integrates the light incident on this light-receiving portion and converts it into an electrical signal.
- the averaging time is controlled by an electronic or mechanical shutter or the like, and its operating frequency generally matches the frame frequency of the video signal output from the imaging apparatus, for example, 60 Hz.
- FIG. 7 shows a detailed configuration of the image sensor 15.
- the pixel 501 of the CMOS image sensor 15 amplifies the signal charge photoelectrically converted by the photodiode 515 by the amplifier 516.
- a signal of each pixel is selected by a vertical scanning method by a vertical scanning circuit 511 and a horizontal scanning circuit 512, and is taken out as a voltage or a current.
- A CDS (Correlated Double Sampling) circuit 518 performs correlated double sampling, and can suppress 1/f noise among the random noise generated by the amplifier 516 and the like.
- the pixels other than the pixel 501 have the same configuration and function.
- the monochrome CMOS image sensor 15 is used, but a color-compatible CMOS image sensor in which R, G, and B color filters are individually attached to each pixel can also be used. Using a Bayer structure in which the repetition of R, G, G, and B is arranged in a checkered pattern, colorization can be easily achieved with a single image sensor.
- P001 is a CPU (Central Processing Unit) that controls the overall processing operation of the imaging apparatus 1; it is sometimes called a microcontroller (microcomputer).
- P002 is a ROM (Read Only Memory) configured by a non-volatile memory, and stores the program for the CPU P001 and the setting values necessary for each processing unit.
- P003 is a RAM (Random Access Memory) that stores temporary data of the CPU.
- P004 is a VideoRAM, which mainly stores video signals and image signals in the middle of calculation, and is composed of SDRAM (Synchronous Dynamic RAM) or the like.
- FIG. 8 shows a configuration in which the RAM P003 is used for storing data for the program of the CPU P001 and the VideoRAM P004 is used for storing images.
- P005 is a system bus to which the CPU P001, the ROM P002, the RAM P003, the VideoRAM P004, the video processing unit 27, the video composition processing unit 38, and the control unit 33 are connected.
- the system bus P005 is also connected to internal blocks of the video processing unit 27, the video composition processing unit 38, and the control unit 33, which will be described later.
- The CPU P001 controls the system bus P005 as the host, and setting data necessary for video processing, image processing, and optical axis control flows over it bidirectionally. The system bus P005 is also used, for example, when an image being processed by the video composition processing unit 38 is stored in the VideoRAM P004. A bus for image signals, which require a high transfer speed, and a low-speed data bus may be implemented as separate bus lines.
- the system bus P005 is connected to an external interface such as a USB or flash memory card (not shown) and a display drive controller of a liquid crystal display as a viewfinder.
- FIG. 9 is a block diagram illustrating a configuration of the video processing unit 27.
- reference numeral 601 denotes a video input processing unit
- 602 denotes a correction processing unit
- 603 denotes a calibration parameter storage unit.
- the video input processing unit 601 inputs the video signal captured from the unit imaging unit 3, performs signal processing such as knee processing and gamma processing, and also performs white balance control.
- the output of the video input processing unit 601 is passed to the correction processing unit 602, and distortion correction processing based on calibration parameters obtained by a calibration procedure described later is performed. For example, distortion caused by the mounting error of the image sensor 15 is calibrated.
- The calibration parameter storage unit 603 is a RAM (Random Access Memory) and stores the calibration values.
- the corrected video signal that is output from the correction processing unit 602 is output to the video composition processing unit 38.
- the data stored in the calibration parameter storage unit 603 is updated by the CPU P001 when the imaging apparatus is powered on, for example.
- the calibration parameter storage unit 603 may be a ROM (Read Only Memory), and the stored data may be determined and stored in the ROM by a calibration procedure at the time of factory shipment.
- the video input processing unit 601, the correction processing unit 602, and the calibration parameter storage unit 603 are each connected to the system bus P005.
- the aforementioned gamma processing characteristics of the video input processing unit 601 are stored in the ROM P002.
- the video input processing unit 601 receives the data stored in the ROM P002 via the system bus P005 by the program of the CPU P001.
- The correction processing unit 602 writes image data in the middle of calculation to the VideoRAM P004 via the system bus P005, or reads it out from the VideoRAM P004.
- In this embodiment a monochrome CMOS image sensor 15 is used; when a color image sensor with a Bayer arrangement is used instead, the video input processing unit 601 additionally performs Bayer interpolation processing.
- FIG. 10 is a block diagram showing a configuration of the video composition processing unit 38.
- the composition processing unit 701 performs composition processing on the imaging results of the plurality of unit imaging units 2 to 7.
- the composition processing can improve the resolution of the image as will be described later.
- the synthesis parameter storage unit 702 stores image shift amount data obtained from, for example, three-dimensional coordinates between unit imaging units derived by calibration described later.
- the composition processing unit 701 shifts the image based on this shift amount and composes it.
- the determination unit 703 detects the power of the high-band component of the video signal by, for example, Fourier transforming the result of the synthesis process.
- For example, assume that the image sensor is wide VGA (854 × 480 pixels) and that the video output of the video composition processing unit 38 is a Hi-Vision signal (1920 × 1080 pixels). In this case, the frequency band evaluated by the determination unit 703 is approximately 20 MHz to 30 MHz, whereas the upper limit of the video frequency band that a wide-VGA video signal itself can reproduce is approximately 10 MHz to 15 MHz.
- the synthesis processing unit 701 performs synthesis processing to restore a component of 20 MHz to 30 MHz.
- the image pickup device is a wide VGA, but the image pickup optical system mainly including the image pickup lenses 8 to 13 is required to have a characteristic that does not deteriorate the band of the high vision signal.
- the control units 32 to 37 are controlled so that the power of the frequency band (20 MHz to 30 MHz component in the above example) of the combined video signal is maximized.
- the determination unit 703 performs a Fourier transform process, and determines the magnitude of energy above a specific frequency (for example, 20 MHz).
- the effect of restoring the video signal band that exceeds the band of the image sensor changes depending on the phase when the image formed on the image sensor is sampled within a range determined by the size of the pixel.
- For this purpose, the control units 32 to 37 are used to control the imaging lenses 8 to 13. Specifically, the control unit 33 controls the liquid crystal lens 301 in the imaging lens 9.
- The ideal state of the control result is a state in which the sampling phases of the imaging results of the individual unit imaging units are shifted from one another in the horizontal, vertical, and diagonal directions by 1/2 of the pixel size.
- the energy of the high band component as a result of the Fourier transform is maximized. That is, control is performed so that the energy of the result of the Fourier transform is maximized by a feedback loop that controls the liquid crystal lens and determines the resultant synthesis process.
- This control method controls, with the video signal from the video processing unit 27 as a reference, the imaging lenses of the other unit imaging units 2 and 4 to 7 via the control units 32 and 34 to 37 other than the control unit 33.
- For example, the optical axis phase of the imaging lens 8 of the unit imaging unit 2 is controlled by the control unit 32, and the optical axis phase is controlled similarly for the imaging lenses of the other unit imaging units 4 to 7.
- the phase offset averaged by the image sensor is optimized. In other words, when sampling an image formed on the image sensor with pixels, the sampling phase is controlled to an ideal state for high definition by controlling the optical axis phase. As a result, it becomes possible to synthesize high-definition and high-quality video signals.
- the determination unit 703 determines the synthesis processing result, and if a high-definition and high-quality video signal is synthesized, maintains the control value and outputs the high-definition and high-quality video signal as video. On the other hand, if a high-definition and high-quality video signal cannot be synthesized, the imaging lens is controlled again.
- the output of the video composition processing unit 38 is, for example, a video signal, which is output to a display (not shown), or passed to an image recording unit (not shown) and recorded on a magnetic tape or an IC card.
- the synthesis processing unit 701, the synthesis parameter storage unit 702, and the determination unit 703 are each connected to the system bus P005.
- The synthesis parameter storage unit 702 is composed of a RAM; for example, it is updated via the system bus P005 by the CPU P001 when the imaging apparatus is powered on. Further, the composition processing unit 701 writes image data in the middle of calculation to the VideoRAM P004 via the system bus P005, or reads it from the VideoRAM P004.
- In FIG. 11, reference numeral 801 denotes a voltage control unit, and reference numeral 802 denotes a liquid crystal lens parameter storage unit.
- the voltage control unit 801 controls the voltage of each electrode of the liquid crystal lens 301 included in the imaging lens 9 in accordance with a control signal from the determination unit 703 of the video composition processing unit 38.
- the voltage to be controlled is determined based on the parameter value read from the liquid crystal lens parameter storage unit 802.
- By this control, the electric field distribution of the liquid crystal lens 301 is controlled ideally, and the optical axis is controlled as described above.
- Such control ideally controls the pixel phase, resulting in improved resolution of the video output signal.
- If the control result of the control unit 33 is in the ideal state, the energy detected from the Fourier transform result in the determination unit 703 is maximized. To achieve this state, the control unit 33 forms a feedback loop with the imaging lens 9, the video processing unit 27, and the video composition processing unit 38, and performs control so that a large amount of high-frequency energy is obtained.
- the voltage control unit 801 and the liquid crystal lens parameter storage unit 802 are each connected to the system bus P005.
- the liquid crystal lens parameter storage unit 802 is constituted by a RAM, for example, and is updated by the CPU P001 via the system bus P005 when the image pickup apparatus 1 is turned on.
- The calibration parameter storage unit 603, the synthesis parameter storage unit 702, and the liquid crystal lens parameter storage unit 802 shown in FIGS. 9 to 11 may be implemented in the same RAM or ROM and selected by their stored addresses. A configuration may also be used in which some addresses of the ROM P002 and the RAM P003 are used.
- FIG. 12 is a flowchart showing the operation of the imaging apparatus 1.
- the correction processing unit 602 reads calibration parameters from the calibration parameter storage unit 603 (step S901).
- the correction processing unit 602 performs correction for each of the unit imaging units 2 to 7 based on the read calibration parameters (step S902). This correction is to remove distortion for each of the unit imaging units 2 to 7 described later.
- the synthesis processing unit 701 reads a synthesis parameter from the synthesis parameter storage unit 702 (step S903).
- the composition processing unit 701 executes sub-pixel video composition high-definition processing based on the read composition parameters (step S904).
- a high-definition image is constructed based on information having different phases in units of subpixels.
- the determination unit 703 executes high-definition determination (step S905), and determines whether it is high-definition (step S906).
- the determination unit 703 holds a determination threshold value inside, determines the high-definition degree, and passes the determination result information to each of the control units 32 to 37.
- Each control unit 32 to 37 maintains the same value of the liquid crystal lens parameter without changing the control voltage when high definition is achieved (step S907).
- If high definition is not achieved, the control units 32 to 37 change the control voltage of the liquid crystal lens 301 (step S908).
- the CPU P001 manages the control end condition, for example, determines whether or not the device power-off condition is satisfied (step S909).
- If the control end condition is not satisfied, the process returns to step S903 and is repeated; if the control end condition is satisfied, the process ends.
- Alternatively, the control end condition may specify in advance, for example at power-on, a number of high-definition determinations such as 10, and the processes in steps S903 to S909 may be repeated that number of times.
- the image size, the magnification, the rotation amount, and the shift amount are synthesis parameters, and are parameters read from the synthesis parameter storage unit 702 in the synthesis parameter reading process (step S903).
- For example, one high-definition image is obtained from four unit imaging units. The four images picked up by the individual unit imaging units are superimposed on one coordinate system using the rotation-amount and shift-amount parameters, and a filter calculation is then performed on the four images using weighting coefficients based on distance; the filter uses, for example, a cubic (third-order approximation) kernel.
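- As an illustrative sketch of this kind of shift-and-weight composition (the rotation and shift parameters in the patent come from calibration, whereas the fixed offsets, the nearest-sample weighting, and the Catmull-Rom cubic kernel below are simplifying assumptions):

```python
import numpy as np

def cubic_kernel(d, a=-0.5):
    """Catmull-Rom style cubic weighting by distance d (in low-res pixels)."""
    d = np.abs(d)
    return np.where(d < 1, (a + 2) * d**3 - (a + 3) * d**2 + 1,
           np.where(d < 2, a * d**3 - 5 * a * d**2 + 8 * a * d - 4 * a, 0.0))

def compose_subpixel(images, shifts, scale=2):
    """Compose one high-definition image from several low-resolution images.

    images: list of HxW arrays; shifts: per-image (dy, dx) sub-pixel offsets of
    the sampling grid, in low-res pixel units (obtained by calibration in the
    patent, fixed here). Each high-res sample is a distance-weighted (cubic)
    average of the nearest sample from every input image."""
    h, w = images[0].shape
    out = np.zeros((h * scale, w * scale))
    wsum = np.zeros_like(out)
    ys, xs = np.mgrid[0:h * scale, 0:w * scale]
    for img, (dy, dx) in zip(images, shifts):
        gy = ys / scale - dy            # high-res sample position in this image's grid
        gx = xs / scale - dx
        iy = np.clip(np.round(gy).astype(int), 0, h - 1)
        ix = np.clip(np.round(gx).astype(int), 0, w - 1)
        wgt = cubic_kernel(gy - iy) * cubic_kernel(gx - ix) + 1e-6
        out += wgt * img[iy, ix]
        wsum += wgt
    return out / wsum

# usage: four unit images whose sampling grids are offset by half a pixel
imgs = [np.random.rand(8, 8) for _ in range(4)]
offs = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
hires = compose_subpixel(imgs, offs)
```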
- First, the determination unit 703 extracts the signal within the defined determination range (step S1001). For example, when one frame (one screen) is defined as the determination range, a frame memory block (not shown) is provided separately and the signal for one screen is stored in advance; in the case of VGA resolution, one screen is two-dimensional information of 640 × 480 pixels. The determination unit 703 performs a Fourier transform on this two-dimensional information to convert time-axis information into frequency-axis information (step S1002). Next, a high-frequency signal is extracted by an HPF (high-pass filter) (step S1003).
- For example, assume that the imaging element outputs a VGA signal (640 × 480 pixels) with an aspect ratio of 4:3 at 60 fps (frames per second, progressive), and that the video output signal of the video composition processing unit is Quad-VGA. Assume further that the limit resolution of the VGA signal is about 8 MHz and that signals of 10 to 16 MHz are reproduced by the synthesis process. In this case, the HPF has a characteristic of passing components of, for example, 10 MHz or more.
- The determination unit 703 performs the determination by comparing the signal of 10 MHz or higher with a threshold value (step S1004). For example, when the DC (direct current) component resulting from the Fourier transform is normalized to 1, the threshold for the energy of 10 MHz or higher is set to 0.5, and the measured energy is compared with this threshold.
- Alternatively, if the determination range is defined in line units (the unit of horizontal synchronization repetition; 1920 effective pixels per line in the case of a Hi-Vision signal), the frame memory block becomes unnecessary and the circuit scale can be reduced.
- In that case, the Fourier transform is executed repeatedly, for example once per line over the 1080 lines, and the 1080 per-line threshold comparisons are integrated to determine the high-definition degree of one screen. The determination may also be made using threshold-comparison results over several frames in units of screens.
- the threshold determination may use a fixed threshold, but may adaptively change the threshold.
- the feature of the image being judged may be extracted separately, and the threshold value may be switched based on the result. For example, image features may be extracted by histogram detection. Further, the current threshold value may be changed in conjunction with the past determination result.
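- A minimal sketch of the determination in steps S1001 to S1004 under the assumptions above (one frame as the determination range, the DC component normalized to 1, threshold 0.5); the radial cutoff is expressed here as a fraction of the Nyquist frequency, as a stand-in for the "10 MHz or more" criterion:

```python
import numpy as np

def high_definition_ok(frame, cutoff_ratio=0.6, threshold=0.5):
    """Steps S1001-S1004: Fourier-transform one frame, extract the high-band
    energy with a high-pass mask, normalize it by the DC component, and
    compare it with a threshold."""
    spec = np.fft.fftshift(np.fft.fft2(frame))            # S1002
    power = np.abs(spec) ** 2
    h, w = frame.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - cy) / (h / 2), (xx - cx) / (w / 2))  # radius, 1.0 = Nyquist
    high_energy = power[r > cutoff_ratio].sum()           # S1003 (HPF)
    dc = power[cy, cx]                                     # DC component set to 1
    return (high_energy / dc) >= threshold                 # S1004

frame = np.random.rand(480, 640)   # one VGA screen (640 x 480 pixels)
print(high_definition_ok(frame))
```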
- step S908 executed by the control units 32 to 37 shown in FIG. 12 will be described with reference to FIG.
- the processing operation of the control unit 33 will be described as an example, but the processing operations of the control units 32 and 34 to 37 are the same.
- the voltage control unit 801 reads out the current parameter value of the liquid crystal lens from the liquid crystal lens parameter storage unit 802 (step S1101).
- the voltage control unit 801 updates the parameter value of the liquid crystal lens (step S1102).
- The liquid crystal lens parameters include a past history.
- For example, the past history of the voltage of the voltage control unit 33a is 40 V, 45 V, and 50 V, changed in 5 V steps, and the voltage values of the voltage control units 33b, 33c, and 33d are determined in the same manner.
- In this example, the voltage of the voltage control unit 33a is updated to 55 V. In this manner, the voltages applied to the four divided electrodes 304a, 304b, 304c, and 304d of the liquid crystal lens are updated sequentially, and each updated value is added to the liquid crystal lens parameters as history.
- As described above, the captured images of the plurality of unit imaging units 2 to 7 are synthesized in sub-pixel units, the degree of high definition is determined, and the control voltages are changed so as to maintain high-definition performance; in this way, an imaging apparatus with high image quality can be realized.
- By applying different voltages to the divided electrodes 304a, 304b, 304c, and 304d, the sampling phase at which the image formed on the image sensor by the imaging lenses 8 to 13 is sampled by the pixels of the image sensor is changed.
- The ideal state of this control is a state in which the sampling phases of the imaging results of the individual unit imaging units are shifted from one another in the horizontal, vertical, and diagonal directions by 1/2 of the pixel size.
- the determination unit 703 determines whether the state is ideal.
- This processing operation is, for example, processing performed at the time of factory production of the imaging apparatus 1, and this camera calibration is performed by performing a specific operation such as simultaneously pressing a plurality of operation buttons when the imaging apparatus is powered on.
- This camera calibration process is executed by the CPU P001.
- First, an operator adjusting the imaging apparatus 1 prepares a checker-pattern (checkered) test chart with a known pattern pitch, and acquires images by capturing the chart in, for example, 30 different postures while changing its position and angle (step S1201).
- the CPU P001 analyzes the captured image for each of the unit imaging units 2 to 7, and derives an external parameter value and an internal parameter value for each of the unit imaging units 2 to 7 (step S1202).
- The six external parameter values are external information including three-dimensional rotation information and translation information of the camera posture.
- In a general camera model there are a total of six external parameters: the three-axis rotation (yaw, pitch, and roll) indicating the camera attitude with respect to world coordinates, and the three components of a translation vector indicating the translation.
- The internal parameters are the image center (u0, v0), where the optical axis of the camera intersects the image sensor, the angle (skew) and aspect ratio of the coordinate axes assumed on the image sensor, and the focal length.
- the CPU P001 stores the obtained parameters in the calibration parameter storage unit 603 (step S1203).
- With these values, the individual camera distortion of each of the unit imaging units 2 to 7 is corrected. That is, since a checker pattern that is originally made of straight lines may be imaged as curves due to camera distortion, parameters for returning the checker pattern to straight lines are derived by this camera calibration process, and the distortion of each of the unit imaging units 2 to 7 is corrected.
- Next, the CPU P001 derives the inter-unit parameters, that is, the external parameters between the unit imaging units 2 to 7 (step S1204), and stores them in the synthesis parameter storage unit 702 and the liquid crystal lens parameter storage unit 802, updating the stored parameters (steps S1205 and S1206). These values are used in the sub-pixel video composition high-definition processing (step S904) and in the control voltage change (step S908).
- In this example the CPU or microcomputer in the imaging apparatus has the camera calibration function, but a separate personal computer may instead be prepared, the same processing executed on that personal computer, and only the resulting parameters downloaded to the imaging apparatus.
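- For reference, the per-unit calibration of steps S1201 to S1203 corresponds closely to the standard checkerboard calibration procedure available in OpenCV. The sketch below is not the patent's implementation, and the board geometry (9 × 6 inner corners, 25 mm pitch) and file-based input are assumed examples.

```python
import cv2
import numpy as np

def calibrate_unit(image_files, pattern=(9, 6), pitch_mm=25.0):
    """Derive internal parameters (camera matrix A, distortion coefficients) and
    per-view external parameters (R, t) for one unit imaging unit from
    checker-pattern images taken in many different postures (cf. S1201-S1202)."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * pitch_mm
    obj_pts, img_pts, size = [], [], None
    for f in image_files:
        gray = cv2.imread(f, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]
    # K plays the role of the internal parameter matrix A; rvecs/tvecs are the
    # per-posture external parameters
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist, rvecs, tvecs
```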
- the state of projection by the camera is considered using a pinhole camera model as shown in FIG.
- In the pinhole camera model, all light reaching the image plane passes through a single point at the center of the lens, the pinhole, and forms an image at the position where it intersects the image plane.
- the coordinate system that takes the intersection of the optical axis and the image plane as the origin and takes the x-axis and y-axis to match the camera element placement axis is called the image coordinate system.
- A coordinate system whose origin is the camera lens center, whose Z-axis is the optical axis, and whose X-axis and Y-axis are parallel to the x-axis and y-axis of the image coordinate system is called the camera coordinate system.
- The three-dimensional coordinates M = [X, Y, Z]^T of a point in the world coordinate system (Xw, Yw, Zw), which represents the space, and its projection, the point m = [u, v]^T in the image coordinate system (x, y), are related by equation (1).
- A is called an internal parameter matrix, and is a matrix like the following equation (2).
- ⁇ and ⁇ are scale factors formed by the product of the pixel size and the focal length, (u 0 , v 0 ) is the image center, and ⁇ is a parameter representing the distortion of the coordinate axes of the image.
- [R t] is the external parameter matrix, a 3 × 4 matrix in which the 3 × 3 rotation matrix R and the translation vector t are arranged side by side.
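- Equations (1) and (2) themselves are not reproduced in this text; based on the definitions above, they presumably take the standard pinhole-projection form (a reconstruction, not a quotation of the patent):
\[
s\,\tilde{m} = A\,[R\ t]\,\tilde{M},
\qquad
A =
\begin{pmatrix}
\alpha & \gamma & u_0 \\
0 & \beta & v_0 \\
0 & 0 & 1
\end{pmatrix},
\]
where \(\tilde{m} = [u, v, 1]^T\) and \(\tilde{M} = [X, Y, Z, 1]^T\) are homogeneous coordinates and \(s\) is an arbitrary scale factor.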
- Since A^{-T} A^{-1} is a symmetric 3 × 3 matrix, it contains 6 unknowns, as shown in equation (8), and two equations can be established for each homography H; therefore, if three or more homographies are obtained, the internal parameter matrix A can be determined.
- Stacking equations (6) and (7) for n images gives a homogeneous system V b = 0, where V is a 2n × 6 matrix and b is the 6-element vector of unknowns; b is obtained as the eigenvector corresponding to the minimum eigenvalue of V^T V.
- When n = 2, the additional constraint γ = 0, expressed as [0 1 0 0 0 0] b = 0, is added to equation (13).
- Optimum external parameters can be obtained by optimizing the parameters by the non-linear least square method using the parameters obtained so far as initial values.
- That is, camera calibration can be performed using three or more images taken from different viewpoints with the internal parameters fixed. In general, the larger the number of images, the higher the parameter estimation accuracy; conversely, the error increases when the rotation between the images used for calibration is small.
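- A small sketch of the linear step just described; assembling V from the homographies is omitted and V is assumed to be given, so this only shows how b is recovered from V b = 0 and how the γ = 0 constraint is added when only n = 2 views are available.

```python
import numpy as np

def solve_b(V):
    """Solve V b = 0 in the least-squares sense: b is the eigenvector of V^T V
    with the smallest eigenvalue (equivalently, the last right singular vector of V)."""
    _, _, vt = np.linalg.svd(V)
    return vt[-1]                      # 6-element vector parameterizing A^{-T} A^{-1}

def solve_b_two_views(V):
    """For n = 2 views, append the skew-free constraint gamma = 0,
    i.e. [0 1 0 0 0 0] b = 0, as an extra row before solving."""
    row = np.array([[0.0, 1.0, 0.0, 0.0, 0.0, 0.0]])
    return solve_b(np.vstack([V, row]))
```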
- FIG. 18 shows how a point M on the target object plane is projected (photographed), through the liquid crystal lenses described above, onto a point m1 on the basic image sensor 15 (referred to as the basic camera) and onto a point m2 on the adjacent image sensor 16 (referred to as the adjacent camera).
- FIG. 19 shows FIG. 18 using the pinhole camera model shown in FIG.
- the relationship between the point M on the world coordinate system and the point m on the image coordinate system can be expressed by using the central projection matrix P from the viewpoint of the mobility of the camera, etc.
- In this way, the corresponding point m2 between the basic image and the adjacent image is calculated in units of sub-pixels.
- Corresponding point matching using camera parameters has an advantage that the corresponding points can be instantaneously calculated only by matrix calculation because the camera parameters have already been obtained.
- Here, (xu, yu) are the image coordinates of the imaging result of an ideal lens having no distortion, and (xd, yd) are the image coordinates of a real lens having distortion; the coordinate system of both is the image coordinate system (x-axis and y-axis) described above. R is the distance from the image center to (xu, yu).
- the image center is determined by the aforementioned internal parameters u0 and v0. Assuming the above model, if the coefficients k1 to k5 and internal parameters are derived by calibration, the difference in imaging coordinates depending on the presence or absence of distortion can be obtained, and distortion caused by the real lens can be corrected.
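- The distortion polynomial referred to above is not reproduced in this text. As a hedged sketch, the following assumes a purely radial model with coefficients k1 to k5 (the exact form of the patent's equations may differ) and corrects a distorted coordinate by numerically inverting that model:

```python
def distort(xu, yu, k, u0=0.0, v0=0.0):
    """Map ideal image coordinates (xu, yu) to distorted ones (xd, yd) with a
    purely radial polynomial in the distance from the image center (u0, v0).
    k = (k1, ..., k5). This particular form is an assumption, not the patent's."""
    x, y = xu - u0, yu - v0
    r2 = x * x + y * y
    factor = 1.0 + sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
    return u0 + x * factor, v0 + y * factor

def undistort(xd, yd, k, u0=0.0, v0=0.0, iters=10):
    """Correct a distorted coordinate by fixed-point iteration of the model above."""
    xu, yu = xd, yd
    for _ in range(iters):
        xdd, ydd = distort(xu, yu, k, u0, v0)
        xu, yu = xu + (xd - xdd), yu + (yd - ydd)
    return xu, yu
```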
- FIG. 20 is a schematic diagram illustrating an imaging state of the imaging apparatus 1.
- the unit imaging unit 3 including the imaging element 15 and the imaging lens 9 images the imaging range a.
- the unit imaging unit 4 including the imaging element 16 and the imaging lens 10 images the imaging range b.
- In this way, the two unit imaging units 3 and 4 image substantially the same imaging range. For example, when the arrangement interval of the imaging elements 15 and 16 is 12 mm, the focal length of the unit imaging units 3 and 4 is 5 mm, the distance to the imaging range is 600 mm, and the optical axes of the unit imaging units 3 and 4 are parallel, the area in which the imaging ranges a and b differ is about 3%. Because substantially the same region is imaged in this way, the video composition processing unit 38 can perform the high-definition processing on it.
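- The roughly 3% figure can be checked from these numbers, assuming the 640-pixel, 6 μm-pitch sensor described earlier (a rough estimate that ignores the small difference between pixel size and pitch): the width of the imaged area at 600 mm is
\[
W \approx \frac{640 \times 6\,\mu\mathrm{m}}{5\,\mathrm{mm}} \times 600\,\mathrm{mm} \approx 461\,\mathrm{mm},
\qquad
\frac{12\,\mathrm{mm}}{461\,\mathrm{mm}} \approx 2.6\% \approx 3\%,
\]
where 12 mm is the offset (baseline) between the two imaging ranges.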
- the horizontal axis of FIG. 21 shows the expansion of the space.
- The expansion of the space refers both to the actual space and to the expansion of the virtual space on the image sensor; these are synonymous because they can be mutually converted using the external and internal parameters.
- The horizontal axis in FIG. 21 may also be regarded as the time axis of the video signal: when the signal is shown on a display, the observer's eyes perceive it as spatial extent, so the time axis of the video signal is synonymous with the expansion of space.
- the vertical axis in FIG. 21 represents amplitude and intensity. Since the intensity of the object reflected light is photoelectrically converted by a pixel of the image sensor and output as a voltage level, it may be regarded as an amplitude.
- the unit imaging units 2 to 7 capture the image as shown in the graph G2 in FIG.
- the integration is performed using an LPF (Low Pass Filter).
- An arrow A2 in FIG. 21 is the spread of the pixels of the image sensor.
- a graph G3 in FIG. 21 is a result of imaging with different unit imaging units 2 to 7, and the light is integrated by the spread of the pixel indicated by the arrow A3 in FIG.
- the contour (profile) of reflected light that is less than the spread determined by the resolution (pixel size) of the image sensor cannot be reproduced by the image sensor.
- A feature of the present invention is that the sampling phases of the graphs G2 and G3 in FIG. 21 have an offset relative to each other.
- By capturing light with such an offset and optimally combining the results in the composition processing unit, the contour shown in the graph G4 of FIG. 21 can be reproduced.
- The graph G4 in FIG. 21 is the reconstruction that most closely reproduces the contour of the graph G1 in FIG. 21, and it is equivalent to the performance of an imaging element whose pixel size corresponds to the width of the arrow in the graph G4.
- In this way, by using a non-solid lens typified by a liquid crystal lens together with a plurality of unit imaging units including imaging elements, it is possible to obtain a video output exceeding the resolution limit imposed by the above-described averaging (integration by the LPF).
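- A toy one-dimensional illustration of the principle in FIG. 21 (not the patent's composition algorithm): two samplings of the same profile, each averaging over a pixel width but offset from each other by half a pixel, are interleaved to recover detail finer than either sampling alone.

```python
import numpy as np

pixel = 8                                               # fine samples per sensor pixel
fine = np.sin(np.linspace(0, 6 * np.pi, 32 * pixel))    # "true" reflected-light profile (G1)

def sample(profile, offset):
    """Average the profile over pixel-sized windows starting at 'offset' fine samples;
    this averaging plays the role of the LPF integration described above."""
    n = (len(profile) - offset) // pixel
    return profile[offset:offset + n * pixel].reshape(n, pixel).mean(axis=1)

g2 = sample(fine, 0)                      # one unit imaging unit (G2)
g3 = sample(fine, pixel // 2)             # another unit, phase-shifted by half a pixel (G3)

g4 = np.empty(len(g2) + len(g3))          # interleaved, double-rate reconstruction (G4)
g4[0::2] = g2[:len(g4[0::2])]
g4[1::2] = g3[:len(g4[1::2])]
```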
- FIG. 22 is a schematic diagram showing a relative phase relationship between two unit imaging units.
- Sampling here refers to the process of extracting an analog signal at discrete positions. Since FIG. 22 assumes the case where two unit imaging units are used, a phase relationship of 0.5 pixel size is ideal, as indicated by reference numeral C1.
- the one-dimensional phase relationship has been described with reference to FIG.
- the phase control of the two-dimensional space can be performed by the operation shown in FIG.
- Two-dimensional phase control may be realized by controlling the phase of the unit imaging unit on one side, relative to the reference one, in two dimensions (horizontal, vertical, and horizontal + vertical).
- the above-described interpolation filtering process uses, for example, a cubic (third order approximation) method. This is a weighting process based on the distance to the interpolation point.
- For example, when the resolution limit of the image sensor is VGA while the imaging lens can pass the Quad-VGA band, the Quad-VGA band components at or above the VGA limit are imaged at the VGA resolution as aliasing. Using this aliasing distortion, the high-band component of Quad-VGA is restored by the video composition process.
- FIGS. 23A to 23C are diagrams showing the relationship between the imaging target (subject) and the imaging. This figure is based on a pinhole model that ignores lens distortion. An image pickup apparatus with a small lens distortion can be explained by this model, and can be explained only by geometric optics.
- P1 is an imaging target and is separated by an imaging distance H.
- the pinholes O and O ′ correspond to the imaging lenses of the two unit imaging units, and are schematic diagrams in which one image is captured by the two unit imaging units of the imaging elements M and N.
- FIG. 23B shows a state where an image of P1 is formed on the pixels of the image sensor. In this way, the phase with which the image is formed on a pixel is determined. This phase is determined by the positional relationship of the imaging elements (baseline length B), the focal length f, and the imaging distance H.
- the design value may differ depending on the mounting accuracy of the image sensor, and the relationship varies depending on the imaging distance.
- the phases are matched as shown in FIG. 23C.
- The light intensity distribution in FIG. 23B schematically shows the light intensity over a certain spread. For such light input, the image sensor averages within the spatial extent of each pixel. As shown in FIG. 23B, when the two unit imaging units sample at different phases, the same light intensity distribution is averaged at different phases, so a band higher than the sensor's own resolution (higher than VGA resolution in this example) can be reproduced. Since there are two unit imaging units, a phase shift of 0.5 pixels is ideal.
- FIGS. 24A and 24B are schematic diagrams for explaining the operation of the imaging apparatus 1.
- They illustrate a state in which an image is captured by an imaging apparatus comprising two unit imaging units; each image sensor is shown enlarged to pixel scale for convenience of explanation.
- The plane of each image sensor is defined by two dimensions u and v, and FIG. 24A corresponds to a cross section along the u-axis.
- The distances of the images of P1 from the respective optical axes are u1 and u′1.
- The relative phase, with respect to the pixels of image sensors M and N, of the positions at which P0 and P1 are imaged determines the image shift performance. This relationship is determined by the imaging distance H, the focal length f, and the baseline length B, which is the distance between the optical axes of the imaging systems.
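- A minimal numeric sketch of this relationship (the disparity form u = B × f / H is the standard pinhole/stereo approximation; the baseline, focal length, and pixel-pitch values below are assumptions, not values taken from the embodiments):

```python
def image_offset_mm(B_mm: float, f_mm: float, H_mm: float) -> float:
    """Offset between the image positions of the same point in the two unit imaging units (pinhole model)."""
    return B_mm * f_mm / H_mm

def relative_phase_pixels(offset_mm: float, pixel_pitch_mm: float) -> float:
    """Relative phase of the two images with respect to the pixel grid, in pixels (0 .. 1)."""
    return (offset_mm / pixel_pitch_mm) % 1.0

B, f, p = 5.0, 5.0, 0.006            # assumed: 5 mm baseline, 5 mm focal length, 6 um pixel pitch
for H in (500.0, 600.0, 2000.0):     # imaging distances in mm
    u = image_offset_mm(B, f, H)
    print(H, u, relative_phase_pixels(u, p))   # the ideal target is a phase near 0.5 pixels
```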
- FIG. 24B is a schematic diagram of the operation of restoring and generating one image by performing calculations on the images captured by the two units.
- Pu indicates the pixel size in the u direction and Pv indicates the pixel size in the v direction.
- FIG. 24B shows a relationship in which the pixels of the two sensors are shifted from each other by half a pixel, which is the ideal state for performing the image shift and generating a high-definition image.
- FIGS. 25A and 25B are schematic diagrams of the case where, relative to FIGS. 24A and 24B, image sensor N is attached with a deviation of half the pixel size from the design value due to, for example, a mounting error.
- In this case u1 and u′1 have the same phase with respect to the pixels of the respective image sensors; both images are formed at positions on the left side of a pixel.
- FIGS. 26A and 26B are schematic diagrams of the case where the optical axis shift of the present invention is applied to the state of FIGS. 25A and 25B.
- The rightward movement labeled as the optical axis shift in FIG. 26A illustrates this operation.
- By this operation, the position at which the imaging target forms an image can be controlled relative to the pixels of the image sensor, and an ideal phase relationship can be achieved as shown in FIG. 26B.
- FIG. 27A is a schematic diagram illustrating a case where the subject switches from P0, captured at the imaging distance H0, to the object P1 at the distance H1.
- P0 forms an image at the center of a pixel of image sensor M, and on image sensor N it forms an image near the edge of a pixel.
- FIG. 27B is a schematic diagram illustrating the phase relationship between the image sensors when the subject is P1. After the subject changes to P1, as shown in FIG. 27B, the phases of the two sensors substantially coincide.
- As shown in FIG. 28A, by moving the optical axis with the optical axis shift means when the subject P1 is imaged, the ideal phase relationship shown in FIG. 28B can be achieved.
- For this purpose a distance measuring unit for measuring the distance to the subject may be provided, or the distance may be measured with the imaging apparatus of the present invention itself.
- Measuring distance using a plurality of cameras (unit imaging units) is common in surveying and similar fields.
- The distance measurement performance is proportional to the baseline length, which is the distance between the cameras, and to the focal length of the cameras, and is inversely proportional to the distance to the object being measured.
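- To make this proportionality concrete, a small sketch (the triangulation error form below is the usual stereo relation; all numeric values are assumptions, not values from the embodiments):

```python
def depth_error_mm(H_mm: float, B_mm: float, f_mm: float, disparity_error_mm: float) -> float:
    """Approximate stereo depth error: dH ~ H^2 * dd / (B * f).

    The error grows with the square of the object distance H and shrinks with a longer
    baseline B and a longer focal length f, matching the proportionality stated above.
    """
    return (H_mm ** 2) * disparity_error_mm / (B_mm * f_mm)

dd = 0.006 / 4                                  # assumed quarter-pixel disparity error with 6 um pixels
print(depth_error_mm(500.0, 5.0, 5.0, dd))      # near subject, short baseline
print(depth_error_mm(2000.0, 30.0, 5.0, dd))    # far subject, longer baseline
```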
- Suppose the imaging apparatus of the present invention has, for example, an eight-eye configuration, that is, a configuration including eight unit imaging units.
- When the measurement distance, that is, the distance to the subject, is 500 mm, four of the eight cameras with short distances between their optical axes (baseline lengths) are assigned to imaging and image shift processing, and the distance to the subject is measured with the remaining four cameras, whose mutual baselines are long.
- When the distance to the subject is as long as 2000 mm, high-resolution image shift processing is performed using all eight eyes, and ranging may be performed by a process that estimates the distance, for example by analyzing the resolution of the captured image to determine the amount of blur.
- The accuracy of the distance measurement may also be improved by additionally using other distance measuring means such as TOF (Time of Flight).
- FIG. 29A is a schematic diagram of imaging P1 and P2 in consideration of the depth Δr.
- The difference (u1 - u2) in the distances from the respective optical axes is expressed by equation (22):
- (u1 - u2) = Δr × u1 / H … (22)
- Here, u1 - u2 is a value determined by the baseline length B, the imaging distance H, and the focal length f. In the following, these conditions B, H, and f are fixed and regarded as constants, and it is assumed that an ideal optical axis relationship has been obtained by the optical axis shift means. The relationship between Δr and the pixel position (the distance from the optical axis of the image formed on the image sensor) is then given by equation (23):
- Δr = (u1 - u2) × H / u1 … (23)
- FIG. 29B shows, as an example assuming a pixel size of 6 μm, an imaging distance of 600 mm, and a focal length of 5 mm, the condition under which the influence of depth falls within one pixel. Since the image shift effect is sufficient as long as the influence of depth stays within one pixel, deterioration of the image shift performance due to depth can be avoided by using the apparatus appropriately for the application, for example by narrowing the angle of view.
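- Rearranged, equation (22) gives the depth range Δr over which the image position moves by less than one pixel. A minimal numeric sketch under the conditions of the figure (the image heights u1 below are assumed illustration values):

```python
def depth_tolerance_mm(pixel_mm: float, H_mm: float, u1_mm: float) -> float:
    """Depth range dr for which the shift (u1 - u2) = dr * u1 / H (eq. 22) stays below one pixel."""
    return pixel_mm * H_mm / u1_mm

p, H = 0.006, 600.0                # pixel size 6 um, imaging distance 600 mm (the figure's example)
for u1 in (0.25, 0.5, 1.0, 2.0):   # assumed image heights in mm; smaller u1 means a narrower field angle
    print(u1, depth_tolerance_mm(p, H, u1))
```

- As the sketch suggests, points imaged closer to the optical axis (smaller u1) tolerate a larger depth Δr, which is why narrowing the angle of view helps avoid image shift performance deterioration.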
- Optical axis shift control, that is, control of the optical axis of the light incident on the image sensor, can also be achieved by means other than a non-solid lens such as a liquid crystal lens; such other embodiments are described below.
- In this embodiment, the imaging apparatus has refracting plates disposed between the plurality of solid lenses and the plurality of image sensors.
- Instead of the liquid crystal lens 301 of the unit imaging unit 3 shown in FIG. 2, a refracting plate 3001 whose incident surface and exit surface are parallel is arranged between the solid lens 302 and the image sensor 15 and is tilted by means such as an actuator using a piezoelectric element.
- As shown in the principle diagram of optical axis shift by tilting of the refracting plate, the optical axis shift amount produced by the refracting plate 3001 varies with its tilt angle, so the optical axis can be shifted by controlling the tilt. However, since the direction of the optical axis shift caused by tilting the refracting plate 3001 is only one axial direction (the Y direction in the drawing), a refracting plate that shifts the optical axis in the X direction is arranged separately. As a result, the optical axis can be shifted along two axes, that is, in any direction within the image sensor plane.
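- The tilt-to-shift relation is not spelled out in this passage; the following minimal sketch uses the standard lateral-displacement formula for a tilted plane-parallel plate (the plate thickness and refractive index are assumptions):

```python
import math

def plate_shift_mm(thickness_mm: float, n: float, tilt_deg: float) -> float:
    """Lateral displacement of a ray passing through a tilted plane-parallel plate:
       d = t * sin(theta) * (1 - cos(theta) / sqrt(n^2 - sin^2(theta)))"""
    th = math.radians(tilt_deg)
    return thickness_mm * math.sin(th) * (1.0 - math.cos(th) / math.sqrt(n * n - math.sin(th) ** 2))

# Assumed plate: 0.5 mm thick, n = 1.5; tilts of about one degree give shifts of a few
# micrometres, i.e. on the order of a fraction of a 6 um pixel.
for tilt in (0.5, 1.0, 2.0):
    print(tilt, plate_shift_mm(0.5, 1.5, tilt) * 1000.0, "um")
```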
- In this embodiment, the imaging apparatus uses a plurality of variable apex angle prisms, a plurality of solid lenses, and a plurality of image sensors.
- When the apex angle of the variable apex angle prism 3101 is changed by means such as an actuator using the piezoelectric element 3102, the incident light is deflected, so the position of the image formed on the image sensor 15 by the solid lens 302 changes and the optical axis can be shifted.
- However, since the optical axis shift direction is only one axial direction (the Y direction in FIG. 31), a variable apex angle prism that shifts the optical axis in the X direction is arranged separately, so that the optical axis can be shifted in an arbitrary direction.
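- The apex-angle-to-shift relation can likewise be sketched with the thin-prism approximation (deviation ≈ (n - 1) × apex angle); the refractive index and focal length below are assumptions:

```python
import math

def prism_image_shift_um(apex_deg: float, n: float, focal_mm: float) -> float:
    """Image shift on the sensor for a thin prism of apex angle 'apex_deg':
       shift = f * tan((n - 1) * apex)."""
    deviation_rad = math.radians((n - 1.0) * apex_deg)
    return focal_mm * math.tan(deviation_rad) * 1000.0

# Assumed n = 1.5 and f = 5 mm: apex-angle changes of a few hundredths of a degree
# move the image by a few micrometres, comparable to a fraction of a pixel.
for apex in (0.01, 0.03, 0.07):
    print(apex, prism_image_shift_um(apex, 1.5, 5.0))
```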
- In this embodiment, the imaging apparatus uses a plurality of solid lenses and a plurality of image sensors, and the whole or a part of each solid lens is moved in a direction substantially perpendicular to the optical axis by moving means such as actuators using piezoelectric elements.
- The imaging lens 3209 includes three solid lenses 3201, 3202, and 3203, and the solid lens 3202, which is one part of the imaging lens 3209, is moved by an actuator 3204 using a piezoelectric element that can move it in the X direction.
- When the solid lens 3202, which is a part of the imaging lens 3209, moves, the incident light is deflected, so the position at which the imaging lens 3209 forms the image on the image sensor 15 changes and the optical axis can be shifted.
- In this way, the optical axis can be shifted independently in the X and Y directions on the image sensor surface.
- In this embodiment, the imaging apparatus uses a plurality of solid lenses and a plurality of image sensors, and the image sensors themselves are moved directly by moving means such as actuators using piezoelectric elements.
- That is, the control of the optical axis shift amount is applied not to the imaging lens but to the image sensor 15.
- FIG. 33 shows a configuration in which each of the image sensors of the six unit imaging units 2 to 7 shown in FIG. 1 is directly moved by moving means such as an actuator using a piezoelectric element.
- In the above embodiments, a piezoelectric element is used as the actuator serving as the moving means, but the present invention is not limited to this; it is also possible to use means such as a solenoid actuator using electromagnetic force, an actuator using a motor and a gear mechanism, or an actuator using pressure.
- Likewise, the control means is not limited to voltage control. Regarding the optical axis shift direction, the uniaxial and biaxial cases have been described, but the present invention is not limited to these.
- When the optical axis shift control is performed by a method other than a non-solid lens such as a liquid crystal lens, the degree of freedom of the configuration that realizes the optical axis control is increased.
- For example, when the lens diameter cannot be increased or a sufficient focal length cannot be secured with a non-solid lens such as a liquid crystal lens, an appropriate method can be chosen according to the layout of the components that make up a mobile phone terminal or the like.
- Next, a sixth embodiment will be described with reference to FIGS. 34A and 34B.
- As described above, the resolution can be increased by performing the optical axis shift.
- In this embodiment, at an imaging distance where the resolution cannot be increased because of the correspondence between the imaging distance and the pixel pitch of the image sensor, the resolution is increased by performing a focus shift in the same manner as the optical axis shift.
- FIG. 34A is a schematic diagram for explaining a case where the subject changes from the point P0 at the imaging distance H0 to the point P1 at the distance H1.
- The point P0 forms an image at the central portion of the pixel Mn of image sensor M and at the end portion of the pixel Nn of image sensor N.
- That is, the two images are mutually shifted by a fraction of a pixel (u′0 shown in FIG. 34A).
- When the subject changes to P1, the imaging position shifts by one pixel (u′1 shown in FIG. 34A), and therefore the phases of the pixels of image sensor M and image sensor N substantially match.
- In this case the state shown in the upper diagram of FIG. 34B is obtained: the phases of the pixels substantially coincide, and high resolution cannot be achieved.
- Therefore, the focus of the pinhole O′ is shifted toward the image sensor N (the focal length f is shortened), so that the state shown in the upper diagram of FIG. 34B changes to that shown in the lower diagram. The pixels of image sensor M and image sensor N are then shifted from each other by half a pixel, and an optimal phase relationship can be obtained. However, if the focal length f of the pinhole O′ is shortened all the way to the optimum phase relationship, the image on the image sensor N may become blurred.
- Since the purpose of shortening the focal length f of the pinhole O′ is to avoid the state in which the phases of the pixels of image sensor M and image sensor N substantially coincide, the focal length only needs to be moved within a range in which such blurring does not become a problem.
- The change of the focal length can be realized by the liquid crystal lens 301 shown in FIG. 3: by applying voltages to the first electrode 303 and the second electrode 304, the refractive index distribution of the liquid crystal layer 306 can be changed freely. In this case the electrodes 304a, 304b, 304c, and 304d are driven so that the liquid crystal lens 301 functions as a convex lens, and the focal length of the optical lens 302 shown in FIG. 2 can thereby be changed.
- Each liquid crystal lens of each liquid crystal lens array may also be driven via the respective control units 32 to 37 to operate an autofocus function.
- In this way, a high-resolution image of a moving subject can be taken.
- The intersections of the solid lines represent the pixels of image sensor M, and the intersections of the dotted lines represent the pixels of image sensor N.
- The value of a pixel Puv required for high definition is calculated from the pixels located around Puv, namely (Mi,j, Ni,j), (Mi,j+1, Ni,j+1), (Mi+1,j, Ni+1,j), and (Mi+1,j+1, Ni+1,j+1), using filter coefficients determined from the distances between these pixels and the pixel Puv.
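- A minimal sketch of this synthesis step (the half-pixel offsets and the reuse of the cubic weight mentioned earlier are illustrative assumptions; the actual filter coefficients are determined from the measured phase relationship between the two sensors):

```python
import math
import numpy as np

def cubic_weight(d: float) -> float:
    """Cubic-convolution weight as a function of the distance d to the interpolation point."""
    d = abs(d)
    if d < 1.0:
        return 1.0 - 2.0 * d**2 + d**3
    if d < 2.0:
        return 4.0 - 8.0 * d + 5.0 * d**2 - d**3
    return 0.0

def synthesize_pixel(M: np.ndarray, N: np.ndarray, i: int, j: int,
                     frac_M=(0.25, 0.25), frac_N=(0.75, 0.75)) -> float:
    """Estimate the high-definition pixel Puv from pixels (i..i+1, j..j+1) of sensors M and N.

    frac_M / frac_N give the position of Puv inside pixel (i, j) of each sensor, in
    fractions of a pixel; the values here assume the ideal mutual half-pixel shift.
    """
    acc = norm = 0.0
    for img, (au, av) in ((M, frac_M), (N, frac_N)):
        for di in (0, 1):
            for dj in (0, 1):
                w = cubic_weight(math.hypot(di - au, dj - av))  # distance-based weight
                acc += w * float(img[i + di, j + dj])
                norm += w
    return acc / norm
```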
- In the seventh embodiment described above, an image sensor having a different pixel pitch is used for each of the six image sensors.
- Alternatively, the imaging magnification may be changed optically while using six image sensors having the same pixel pitch; this is equivalent to changing the pixel pitch of the image sensor. That is, when the pixel pitches of image sensor M and image sensor N are the same, changing (enlarging or reducing) the magnification of the imaging lens for image sensor M makes the image formed on image sensor M larger or smaller, so the phases of the respective pixels can be prevented from substantially coinciding.
- In the arrangement shown in FIG. 37, one image sensor (image sensor N) is rotated relative to the other image sensor (image sensor M) about an axis parallel to the optical axis, with a fulcrum at, for example, the upper left corner of the sensor, and is fixed in that orientation. In this case too, as in the case shown in FIG. 36, the pixel positions in the u-axis and v-axis directions differ between the sensors, so the phases of the pixel pitches can be prevented from coinciding.
- The imaging apparatus may also be arranged so that the pixel pitches of the image sensors 14 to 19 differ from each other, and this may be combined with the above-described operation of controlling the direction of the optical axis to increase the resolution. In this way, a high-resolution image can be obtained easily.
- In conventional configurations, image quality deteriorates because of crosstalk on the image sensor, and it is difficult to improve the image quality.
- In the present invention, the optical axis of the light incident on each image sensor is controlled; accordingly, it is possible to realize an imaging apparatus that can eliminate crosstalk and obtain high image quality.
- In a method in which the image formed on the image sensor is corrected by image processing, the resolution of the image sensor needs to be larger than the required imaging resolution.
- In contrast, according to the present invention, not only the optical axis direction of the liquid crystal lens but also the optical axis of the light incident on the image sensor can be controlled to an arbitrary position, so the size of the image sensor can be reduced and the apparatus can be mounted on a portable terminal or the like for which light weight, thinness, and small size are required.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Studio Devices (AREA)
Abstract
Description
This application claims priority based on Japanese Patent Application No. 2008-003075 filed in Japan on January 10, 2008 and Japanese Patent Application No. 2008-180689 filed in Japan on July 10, 2008, the contents of which are incorporated herein.
In response to demands for higher image quality and more functionality in imaging apparatuses, image sensors have gained more pixels and higher definition, imaging optical systems have been made lower in aberration and higher in precision, and functions such as zoom, autofocus, and camera-shake correction have advanced. As a result, there is a problem that the imaging apparatus becomes larger, making miniaturization and thinning difficult.
2 to 7: unit imaging units
8 to 13: imaging lenses
14 to 19: image sensors
20 to 25: optical axes
26 to 31: video processing units
32 to 37: control units
38: video composition processing unit
This system bus P005 is also used, for example, when intermediate images computed by the video composition processing unit 38 are stored in the VideoRAM P004. The bus for image signals, which requires a high transfer rate, and a low-speed data bus may be provided as separate bus lines. External interfaces such as USB and a flash memory card (not shown) and a display drive controller for a liquid crystal display serving as a viewfinder are connected to the system bus P005.
Assume also that the video output of the video composition processing unit 38 is a high-definition (HD) signal (1920 × 1080 pixels). In this case, the frequency band judged by the determination unit 703 is approximately 20 MHz to 30 MHz. The upper limit of the video frequency band that a wide-VGA video signal can reproduce is approximately 10 MHz to 15 MHz. By performing composition processing on the wide-VGA signals in the composition processing unit 701, the 20 MHz to 30 MHz components are restored. Here, although the image sensors are wide VGA, it is a precondition that the imaging optical system, consisting mainly of the imaging lenses 8 to 13, has characteristics that do not degrade the band of the HD signal.
w = 1 - 2 × d² + d³  (0 ≤ d < 1)
w = 4 - 8 × d + 5 × d² - d³  (1 ≤ d < 2)
w = 0  (2 ≤ d)
Accordingly, equations (6) and (7) become the following equations.
Using the calculated P, the correspondence between a point in three-dimensional space and a point on the two-dimensional image plane can be described. In the configuration shown in FIG. 19, let P1 be the central projection matrix of the basic camera and P2 be the central projection matrix of the adjacent camera. To obtain, from a point m1 on image plane 1, the corresponding point m2 on image plane 2, the following method is used.
(1) The point M in three-dimensional space is obtained from m1 using equation (16). Since the central projection matrix P is a 3 × 4 matrix, M is obtained using the pseudo-inverse of P.
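A minimal numpy sketch of this procedure (the camera matrices in the example are placeholders; equation (16) itself is not reproduced here, and back-projection with the pseudo-inverse picks one point on the viewing ray through m1):

```python
import numpy as np

def corresponding_point(P1: np.ndarray, P2: np.ndarray, m1_px: np.ndarray) -> np.ndarray:
    """Map a point m1 on image plane 1 to the corresponding point m2 on image plane 2.

    P1, P2 : 3x4 central projection matrices of the basic and adjacent cameras.
    m1_px  : pixel coordinates (u, v) on image plane 1.
    """
    m1 = np.array([m1_px[0], m1_px[1], 1.0])   # homogeneous image point
    M = np.linalg.pinv(P1) @ m1                # back-project with the pseudo-inverse of the 3x4 matrix
    m2 = P2 @ M                                # project the 3-D point onto image plane 2
    return m2[:2] / m2[2]                      # back to pixel coordinates

# Example with assumed matrices: both cameras 10 units in front of the world origin,
# the adjacent camera displaced by 5 units along X.
P1 = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 10.0]])
P2 = np.array([[1.0, 0, 0, -5.0], [0, 1.0, 0, 0], [0, 0, 1.0, 10.0]])
print(corresponding_point(P1, P2, np.array([100.0, 50.0])))   # approximately [99.5, 50.0]
```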
When the measurement distance, that is, the distance to the subject, is 500 mm, four of the eight cameras with short distances between their optical axes (baseline lengths) are assigned to imaging and image shift processing, and the distance to the subject is measured with the remaining four cameras, whose mutual baselines are long. When the distance to the subject is as long as 2000 mm, high-resolution image shift processing is performed using all eight eyes, and ranging may be performed by a process that estimates the distance, for example by analyzing the resolution of the captured image to determine the amount of blur. In the aforementioned 500 mm case as well, the accuracy of the distance measurement may be improved by additionally using other distance measuring means such as TOF (Time of Flight).
(u1 - u2) = Δr × u1 / H … (22)
Here, these conditions B, H, and f are fixed and regarded as constants. It is also assumed that an ideal optical axis relationship has been obtained by the optical axis shift means. The relationship between Δr and the pixel position (the distance from the optical axis of the image formed on the image sensor) is given by equation (23).
Δr = (u1 - u2) × H / u1 … (23)
As shown in FIG. 31, changing the apex angle of the variable apex angle prism 3101 by means such as an actuator using the piezoelectric element 3102 deflects the incident light, so the position of the image formed by the solid lens 302 on the image sensor 15 changes and the optical axis can be shifted. In this case too, however, the direction of the optical axis shift is only one axial direction (the Y direction in FIG. 31), so by separately arranging a variable apex angle prism that shifts the optical axis in the X direction, the optical axis can be shifted in an arbitrary direction.
Pinhole O′ movement amount (when the focal length is shortened) < front depth of field / magnification … (24)
Pinhole O′ movement amount (when the focal length is lengthened) < rear depth of field / magnification … (25)
where
front depth of field = permissible circle of confusion × F-number × imaging distance² / (focal length² + permissible circle of confusion × F-number × imaging distance)
rear depth of field = permissible circle of confusion × F-number × imaging distance² / (focal length² - permissible circle of confusion × F-number × imaging distance)
magnification = imaging distance / image distance ≈ imaging distance / focal length
depth of field = front depth of field + rear depth of field.
In this example,
front depth of field = 132.2 (mm)
rear depth of field = 242.7 (mm)
magnification = 120, so equations (24) and (25) become
pinhole O′ movement amount (when the focal length is shortened) < 1.1 (mm)
pinhole O′ movement amount (when the focal length is lengthened) < 2.0 (mm).
By controlling the focal length of the pinhole O′ within this range of movement, the resolution can be increased in the same way as with the optical axis shift.
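A sketch of the bounds in equations (24) and (25) (the permissible circle of confusion and the F-number are not stated here, so the values below are assumptions chosen only for illustration; with them the bounds come out close to the figures quoted above):

```python
def depth_of_field_mm(coc_mm: float, f_number: float, H_mm: float, f_mm: float):
    """Front and rear depth of field, using the formulas given above:
       front = c*N*H^2 / (f^2 + c*N*H),   rear = c*N*H^2 / (f^2 - c*N*H)."""
    cn = coc_mm * f_number
    front = cn * H_mm ** 2 / (f_mm ** 2 + cn * H_mm)
    rear = cn * H_mm ** 2 / (f_mm ** 2 - cn * H_mm)
    return front, rear

# Assumed: circle of confusion 6 um (one pixel), F-number 2.0; H = 600 mm, f = 5 mm.
front, rear = depth_of_field_mm(0.006, 2.0, 600.0, 5.0)
magnification = 600.0 / 5.0                          # imaging distance / focal length
print(front / magnification, rear / magnification)   # movement limits of eqs. (24) and (25), in mm
```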
In the seventh embodiment, image sensors with mutually different pixel pitches are used for the six image sensors 14 to 19 shown in FIG. 1.
w = 1 - 2 × d² + d³  (0 ≤ d < 1)
w = 4 - 8 × d + 5 × d² - d³  (1 ≤ d < 2)
w = 0  (2 ≤ d)
p′ = p / cos θ
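A short numeric sketch of this relation (the rotation angles are illustration values): rotating one image sensor by an angle θ makes its apparent pixel pitch along the original u-axis p′ = p / cos θ, so the two pixel grids no longer stay in phase across the sensor.

```python
import math

def effective_pitch_um(pitch_um: float, theta_deg: float) -> float:
    """Apparent pixel pitch along the u-axis after rotating the sensor by theta: p' = p / cos(theta)."""
    return pitch_um / math.cos(math.radians(theta_deg))

# Assumed 6 um pitch: even a few degrees of rotation changes the effective pitch by a
# fraction of a percent, which accumulates across the sensor width and prevents the
# pixel phases of the two sensors from coinciding everywhere.
for theta in (1.0, 3.0, 5.0):
    print(theta, effective_pitch_um(6.0, theta))
```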
Claims (13)
- An imaging apparatus comprising: a plurality of image sensors; a plurality of solid lenses that each form an image on a respective one of the image sensors; and a plurality of optical axis control units that control the direction of the optical axis of the light incident on each of the image sensors.
- The imaging apparatus according to claim 1, wherein each optical axis control unit comprises a non-solid lens whose refractive index distribution can be changed, and deflects the optical axis of the light incident on the image sensor by changing the refractive index distribution of the non-solid lens.
- The imaging apparatus according to claim 1, wherein each optical axis control unit comprises a refracting plate and tilt-angle changing means for changing the tilt angle of the refracting plate, and deflects the optical axis of the light incident on the image sensor by changing the tilt angle of the refracting plate with the tilt-angle changing means.
- The imaging apparatus according to claim 1, wherein each optical axis control unit comprises a variable apex angle prism, and deflects the optical axis of the light incident on the image sensor by changing the apex angle of the variable apex angle prism.
- The imaging apparatus according to claim 1, wherein each optical axis control unit comprises moving means for moving the solid lens, and deflects the optical axis of the light incident on the image sensor by moving the solid lens.
- The imaging apparatus according to claim 1, wherein each optical axis control unit comprises moving means for moving the image sensor, and controls the optical axis of the light incident on the image sensor by moving the image sensor.
- The imaging apparatus according to claim 1, wherein the optical axis control unit controls the direction of the optical axis based on a relative positional relationship with a known imaging target.
- The imaging apparatus according to claim 1, wherein the plurality of image sensors have mutually different pixel pitches.
- The imaging apparatus according to claim 1, wherein the plurality of solid lenses have mutually different focal lengths.
- The imaging apparatus according to claim 1, wherein the plurality of imaging devices are arranged rotated about the optical axis at mutually different angles.
- An imaging apparatus comprising: a plurality of image sensors; a plurality of solid lenses that each form an image on a respective one of the image sensors; and a focus control unit that comprises a non-solid lens whose refractive index distribution can be changed and that changes the focal length of the solid lens by changing the refractive index distribution of the non-solid lens.
- An optical axis control method for an imaging apparatus comprising a plurality of image sensors, a plurality of solid lenses that each form an image on a respective one of the image sensors, and a plurality of optical axis control units that control the direction of the optical axis of the light incident on each of the image sensors, wherein the optical axis control unit controls the direction of the optical axis based on a relative positional relationship between a known imaging target and the optical axis control unit.
- An optical axis control method for an imaging apparatus comprising a plurality of image sensors, a plurality of solid lenses that each form an image on a respective one of the image sensors, and a focus control unit that comprises a non-solid lens whose refractive index distribution can be changed and that changes the focal length of the solid lens by changing the refractive index distribution of the non-solid lens, wherein the focus control unit controls the focal length of the solid lens based on a relative positional relationship between a known imaging target and the image sensor.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009801018029A CN101911671B (zh) | 2008-01-10 | 2009-01-09 | 摄像装置和光轴控制方法 |
US12/812,142 US8619183B2 (en) | 2008-01-10 | 2009-01-09 | Image pickup apparatus and optical-axis control method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008003075 | 2008-01-10 | ||
JP2008-003075 | 2008-01-10 | ||
JP2008-180689 | 2008-07-10 | ||
JP2008180689A JP4413261B2 (ja) | 2008-01-10 | 2008-07-10 | 撮像装置及び光軸制御方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009088068A1 true WO2009088068A1 (ja) | 2009-07-16 |
Family
ID=40853177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/050198 WO2009088068A1 (ja) | 2008-01-10 | 2009-01-09 | 撮像装置及び光軸制御方法 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2009088068A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011039650A (ja) * | 2009-08-07 | 2011-02-24 | Jm:Kk | 室内改装費用見積システム |
CN114422708A (zh) * | 2022-03-15 | 2022-04-29 | 深圳市海清视讯科技有限公司 | 图像获取方法、装置、设备及存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08265804A (ja) * | 1995-03-20 | 1996-10-11 | Canon Inc | 撮像装置 |
JPH11252467A (ja) * | 1998-03-02 | 1999-09-17 | Sharp Corp | 撮像装置 |
JP2000193925A (ja) * | 1998-12-24 | 2000-07-14 | Olympus Optical Co Ltd | 光偏向器及びそれを備えたブレ防止装置 |
JP2005181720A (ja) * | 2003-12-19 | 2005-07-07 | Nippon Hoso Kyokai <Nhk> | 立体画像撮像装置および立体画像表示装置 |
JP2005277606A (ja) * | 2004-03-23 | 2005-10-06 | Fuji Photo Film Co Ltd | 撮影装置 |
JP2006251613A (ja) * | 2005-03-14 | 2006-09-21 | Citizen Watch Co Ltd | 撮像レンズ装置 |
- 2009-01-09 WO PCT/JP2009/050198 patent/WO2009088068A1/ja active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08265804A (ja) * | 1995-03-20 | 1996-10-11 | Canon Inc | 撮像装置 |
JPH11252467A (ja) * | 1998-03-02 | 1999-09-17 | Sharp Corp | 撮像装置 |
JP2000193925A (ja) * | 1998-12-24 | 2000-07-14 | Olympus Optical Co Ltd | 光偏向器及びそれを備えたブレ防止装置 |
JP2005181720A (ja) * | 2003-12-19 | 2005-07-07 | Nippon Hoso Kyokai <Nhk> | 立体画像撮像装置および立体画像表示装置 |
JP2005277606A (ja) * | 2004-03-23 | 2005-10-06 | Fuji Photo Film Co Ltd | 撮影装置 |
JP2006251613A (ja) * | 2005-03-14 | 2006-09-21 | Citizen Watch Co Ltd | 撮像レンズ装置 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011039650A (ja) * | 2009-08-07 | 2011-02-24 | Jm:Kk | 室内改装費用見積システム |
CN114422708A (zh) * | 2022-03-15 | 2022-04-29 | 深圳市海清视讯科技有限公司 | 图像获取方法、装置、设备及存储介质 |
CN114422708B (zh) * | 2022-03-15 | 2022-06-24 | 深圳市海清视讯科技有限公司 | 图像获取方法、装置、设备及存储介质 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4413261B2 (ja) | 撮像装置及び光軸制御方法 | |
JP4529010B1 (ja) | 撮像装置 | |
US20200252597A1 (en) | System and Methods for Calibration of an Array Camera | |
JP6702323B2 (ja) | カメラモジュール、固体撮像素子、電子機器、および撮像方法 | |
Venkataraman et al. | Picam: An ultra-thin high performance monolithic camera array | |
JP4699995B2 (ja) | 複眼撮像装置及び撮像方法 | |
US8885067B2 (en) | Multocular image pickup apparatus and multocular image pickup method | |
CN102037717B (zh) | 使用具有异构成像器的单片相机阵列的图像拍摄和图像处理 | |
JP4322921B2 (ja) | カメラモジュールおよびそれを備えた電子機器 | |
US20100097491A1 (en) | Compound camera sensor and related method of processing digital images | |
JP2012123296A (ja) | 電子機器 | |
JP5677366B2 (ja) | 撮像装置 | |
US10594932B2 (en) | Camera module performing a resolution correction and electronic device including the same | |
JP7312185B2 (ja) | カメラモジュール及びその超解像度映像処理方法 | |
JP2010130628A (ja) | 撮像装置、映像合成装置および映像合成方法 | |
WO2009088068A1 (ja) | 撮像装置及び光軸制御方法 | |
JP5393877B2 (ja) | 撮像装置および集積回路 | |
US11948316B2 (en) | Camera module, imaging device, and image processing method using fixed geometric characteristics | |
JP6056160B2 (ja) | 自動合焦装置、自動合焦方法及びプログラム | |
US7898591B2 (en) | Method and apparatus for imaging using sensitivity coefficients | |
JP4558781B2 (ja) | カメラの焦点検出装置 | |
US20220232166A1 (en) | Range measurement apparatus, storage medium and range measurement method | |
JP7289996B2 (ja) | 検出装置、撮像装置、検出方法、及びプログラム | |
KR20210114846A (ko) | 고정된 기하학적 특성을 이용한 카메라 모듈, 촬상 장치 및 이미지 처리 방법 | |
JP2005134617A (ja) | 焦点検出装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200980101802.9 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09700479 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12812142 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 4461/CHENP/2010 Country of ref document: IN |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 09700479 Country of ref document: EP Kind code of ref document: A1 |