WO2019026600A1 - Camera module and image capture device - Google Patents

Camera module and image capture device

Info

Publication number
WO2019026600A1
WO2019026600A1 (PCT/JP2018/026658)
Authority
WO
WIPO (PCT)
Prior art keywords
sensing
area
imaging
optical system
region
Prior art date
Application number
PCT/JP2018/026658
Other languages
French (fr)
Japanese (ja)
Inventor
典宏 田部
宜邦 野村
龍平 秦
Original Assignee
ソニーセミコンダクタソリューションズ株式会社 (Sony Semiconductor Solutions Corporation)
Priority date
Filing date
Publication date
Application filed by ソニーセミコンダクタソリューションズ株式会社 (Sony Semiconductor Solutions Corporation)
Publication of WO2019026600A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 - Investigating the spectrum
    • G01J3/30 - Measuring the intensity of spectral lines directly on the spectrum itself
    • G01J3/36 - Investigating two or more bands of a spectrum by separate detectors
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/46 - Measurement of colour; Colour measuring devices, e.g. colorimeters
    • G01J3/50 - Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors
    • G01J3/51 - Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors using colour filters
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00 - Optical elements other than lenses
    • G02B5/20 - Filters
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/12 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 - SSIS architectures; Circuits associated therewith

Definitions

  • the present technology relates to a camera module and an imaging device, and more particularly to a camera module and an imaging device capable of obtaining more appropriate spectral information.
  • There is known a camera module in which pixels provided with color filters of R (red), G (green), and B (blue) are disposed at the central portion of the light receiving surface, and pixels provided with color filters of colors other than at least R, G, and B are disposed at the peripheral portion of the light receiving surface (see, for example, Patent Document 1).
  • In such a camera module, RGB information, that is, color image information of a target subject, is acquired by the pixels of the respective colors R, G, and B at the central portion of the angle of view. In the peripheral portion of the angle of view, that is, in the vicinity of the edge of the angle of view, spectral information of the light source is acquired by the pixels of colors other than R, G, and B. Therefore, not only image information of the target but also spectral information can be obtained using a single solid-state imaging device.
  • In such a camera module, one optical system is used, and light from the subject is guided either to the pixels for acquiring image information or to the pixels for acquiring spectral information.
  • In such a camera module, information on an object at the center of the angle of view of the camera module is obtained as image information, and information on an object at the peripheral portion of the angle of view is obtained as spectral information. Therefore, when the object of interest for spectral detection is at the center of the angle of view, appropriate spectral information cannot be obtained.
  • the present technology has been made in view of such a situation, and makes it possible to obtain more appropriate spectral information.
  • The camera module according to the first aspect of the present technology includes a pixel array unit having an imaging area provided with pixels for capturing an image and a sensing area provided with pixels capable of acquiring spectral information different from at least that of the pixels of the imaging area, an imaging optical system that guides light from the outside to the imaging area, and a sensing optical system that guides light from the outside to the sensing area.
  • The imaging device according to the second aspect of the present technology is configured similarly to the camera module according to the first aspect.
  • The present technology arranges pixels for obtaining image information of a target and pixels for obtaining spectral information on the light receiving surface of an imaging unit, and provides an optical system for obtaining the image information and a separate optical system for obtaining the spectral information, so that appropriate spectral information can be obtained.
  • The present technology can be applied to a camera module having an image sensor and an optical system, and to various electronic devices including such a camera module, such as digital still cameras, digital video cameras, and mobile phones.
  • Note that the information acquired together with the image information is not limited to spectral information; it may be any information on the subject captured in the image information.
  • the information on the subject in the image information may be, for example, distance information to the subject, parallax information or shape information on the subject, information on heat of the subject, information on HDR (High Dynamic Range), or the like.
  • FIG. 1 shows a cross section of the camera module 11 as viewed from a direction perpendicular to the optical axis direction of the optical system of the camera module 11.
  • the camera module 11 includes an imaging unit 21, an imaging optical system 22, a sensing optical system 23-1, and a sensing optical system 23-2.
  • The imaging unit 21 is formed of, for example, a solid-state imaging device such as a CMOS (complementary metal oxide semiconductor) image sensor, and the light receiving surface of the imaging unit 21 is provided with a plurality of pixels that receive light incident from an external subject and photoelectrically convert it.
  • In the imaging region R11, which is the central portion of the light receiving surface of the imaging unit 21, pixels for obtaining image information of the subject, that is, pixels for capturing an image of the subject, are provided.
  • More specifically, the region surrounded by the light shielding plate 24-1 and the light shielding plate 24-2, which are disposed substantially perpendicular to the light receiving surface of the imaging unit 21, is the imaging region R11.
  • the pixels provided in the imaging region R11 are also referred to as imaging pixels, and an image signal of an image composed of pixel signals obtained by the imaging pixels, that is, an image of a subject is also referred to as a captured image.
  • Pixels for obtaining spectral information of the subject are provided in the regions near the ends of the light receiving surface of the imaging unit 21, that is, in the sensing region R12-1 and the sensing region R12-2, which are peripheral regions of the light receiving surface.
  • The region surrounded by the light shielding plate 24-1 and the light shielding plate 24-3, which are disposed substantially perpendicular to the light receiving surface of the imaging unit 21, is the sensing region R12-1, and the region surrounded by the light shielding plate 24-2 and the light shielding plate 24-4, likewise disposed substantially perpendicular to the light receiving surface, is the sensing region R12-2.
  • Hereinafter, when it is not necessary to distinguish them, the light shielding plates 24-1 to 24-4 are also simply referred to as the light shielding plate 24, and the sensing regions R12-1 and R12-2 are also simply referred to as the sensing region R12.
  • the pixels provided in the sensing region R12 are also referred to as sensing pixels, and an image formed of pixel signals obtained by the sensing pixels is also referred to as a sensing image.
  • The pixels provided in the sensing area R12 include at least pixels from which spectral information different from that obtained by the pixels provided in the imaging area R11 can be obtained.
  • Assume here, for example, that the captured image is an RGB color image.
  • In the imaging region R11, pixels provided with R (red) color filters (hereinafter also referred to as R pixels), pixels provided with G (green) color filters (hereinafter also referred to as G pixels), and pixels provided with B (blue) color filters (hereinafter also referred to as B pixels) are arranged as imaging pixels.
  • R pixels, G pixels, and B pixels are arranged in a Bayer arrangement.
  • Spectral information on each of the R, G, and B color components can be obtained from the pixels provided in such an imaging region R11.
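As a concrete illustration of the Bayer arrangement mentioned above, the short sketch below builds a patch of the imaging region from a 2x2 repeating unit. The code and the particular unit convention (G/R on the first row, B/G on the second) are added here for clarity only and are not part of the patent text.

```python
# Illustrative sketch (not from the patent): a Bayer color-filter array is
# built from a 2x2 repeating unit of color filters.
BAYER_UNIT = [["G", "R"],
              ["B", "G"]]

def bayer_color(row: int, col: int) -> str:
    """Return the color filter of the imaging pixel at (row, col)."""
    return BAYER_UNIT[row % 2][col % 2]

# A small patch of the imaging region (R11 in the text); half of the
# pixels are green, a quarter red, and a quarter blue.
patch = [[bayer_color(r, c) for c in range(4)] for r in range(4)]
for line in patch:
    print(" ".join(line))
```

Running this prints four rows alternating `G R G R` and `B G B G`, the familiar Bayer mosaic.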
  • In the sensing region R12, pixels provided with color filters of colors other than at least R, G, and B are disposed as sensing pixels.
  • Examples of colors different from R, G, and B include W (white), C (cyan), M (magenta), and Y (yellow). Hereinafter, a pixel provided with a W color filter is also referred to as a W pixel, a pixel provided with a C color filter as a C pixel, a pixel provided with an M color filter as an M pixel, and a pixel provided with a Y color filter as a Y pixel.
  • Pixels provided with IR (infrared) filters (hereinafter also referred to as IR pixels) and pixels provided with E (emerald) color filters (hereinafter also referred to as E pixels) may also be provided in the sensing region R12.
  • In the sensing region R12, a plurality of pixels of different colors are arranged in a predetermined pattern.
  • a predetermined arrangement pattern in which pixels of a plurality of different colors, that is, pixels provided with a plurality of different color filters are arranged adjacent to each other is referred to as an array unit.
  • The array unit includes pixels of all the color components provided in the sensing area R12. That is, the array unit is the minimum unit of the repeating pattern consisting of the combination of pixels of the plurality of color components, in other words, the smallest unit of the array pattern in which the pixels are repeatedly and periodically arranged. For example, if C pixels, M pixels, and Y pixels are provided in the sensing region R12, the array unit includes at least one pixel of each of those color components.
  • the pixel arrangement of the sensing region R12 is a pixel arrangement in which such arrangement units are repeatedly arranged in a vertical direction and a horizontal direction, that is, in a matrix form (periodically arranged).
  • the pixel array of the sensing region R12 is a pixel array in which a plurality of pixels of each color component are periodically and repeatedly arranged.
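To make the notion of an array unit tiled periodically in the vertical and horizontal directions concrete, the sketch below tiles one hypothetical 2x2 unit of C, M, Y, and W sensing pixels over a sensing region and checks the periodicity described above. The unit choice and dimensions are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch (not from the patent): tile one array unit, here a
# hypothetical 2x2 combination of C, M, Y, and W sensing pixels, periodically
# over the sensing region, as the text describes.
ARRAY_UNIT = [["C", "M"],
              ["Y", "W"]]

def tile(unit, rows, cols):
    """Repeat the array unit in the vertical and horizontal directions."""
    h, w = len(unit), len(unit[0])
    return [[unit[r % h][c % w] for c in range(cols)] for r in range(rows)]

region = tile(ARRAY_UNIT, 6, 8)

# The resulting pixel array is periodic with the array unit as its period:
h, w = len(ARRAY_UNIT), len(ARRAY_UNIT[0])
assert all(region[r][c] == region[r + h][c] for r in range(6 - h) for c in range(8))
assert all(region[r][c] == region[r][c + w] for r in range(6) for c in range(8 - w))
```

The two assertions verify that shifting the layout by one array unit vertically or horizontally leaves it unchanged, which is exactly the "minimum repeating unit" property.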
  • The W pixels, C pixels, M pixels, Y pixels, IR pixels, and E pixels are pixels provided with color filters that pass wavelength components different from the color components of the R pixels, G pixels, and B pixels. Therefore, from the W, C, M, Y, IR, and E pixels, it is possible to obtain information on colors (wavelengths) different from those of the R, G, and B pixels, that is, different spectral information.
  • Hereinafter, the description will be continued assuming that R pixels, G pixels, and B pixels are provided in the imaging region R11, and that at least one of W pixels, C pixels, M pixels, Y pixels, IR pixels, and E pixels is provided in the sensing region R12.
  • Note that R pixels, G pixels, and B pixels may also be provided in the sensing region R12.
  • the color filters provided to the pixels in the imaging area R11 and the color filters provided to the pixels in the sensing area R12 may be formed in any manner.
  • the color filter of each pixel of the imaging area R11 and the sensing area R12 may be, for example, a general absorption type organic material, a dielectric multilayer film, or the like, or may be formed of a plasmon resonance body.
  • the technique for forming the color filter of the pixel by a plasmon resonator is described in, for example, Japanese Patent Application Laid-Open No. 2012-59865.
  • A pixel signal obtained when a sensing pixel arranged in the sensing region R12 receives and photoelectrically converts light from the outside, that is, the spectral information obtained from the sensing image, is information indicating the amount of received light of each color component (wavelength component). In other words, it is the spectral information of the light from the light source within the angle of view of the camera module 11.
  • Such spectral information is used, for example, in various image processing such as white balance adjustment for a captured image, sensing processing such as detection of an activity state of a subject in the captured image, and the like.
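As an illustration of one such use, the sketch below applies a simple white-balance gain to an RGB pixel using a light-source estimate of the kind the sensing pixels could provide. Normalizing the gains to the green channel is a common convention and an assumption made here for illustration; the patent does not specify a particular white-balance method.

```python
# Hypothetical use of light-source spectral information: white-balance a
# captured RGB pixel so that the estimated illuminant appears neutral.
# Normalizing gains to the green channel is an assumption, not the patent's method.
def white_balance(pixel, light_source_rgb):
    """Scale an (R, G, B) pixel by per-channel gains derived from the illuminant."""
    lr, lg, lb = light_source_rgb
    gains = (lg / lr, 1.0, lg / lb)          # green-normalized channel gains
    return tuple(min(255, round(v * g)) for v, g in zip(pixel, gains))

# Warm (reddish) illuminant as it might be estimated from the sensing pixels:
light = (200.0, 160.0, 100.0)
print(white_balance((180, 150, 90), light))  # -> (144, 150, 144)
```

The red channel is attenuated and the blue channel boosted, pulling the warm cast toward neutral gray.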
  • the imaging optical system 22 is an imaging lens composed of one or a plurality of optical lenses, and guides light incident from the outside to the imaging area R11 of the imaging unit 21. That is, the imaging optical system 22 condenses light incident from the outside to form an image on the imaging region R11.
  • the sensing optical system 23-1 is made of, for example, a diffusion plate (diffuser), a cylindrical lens, or the like, and guides light incident from the outside to the sensing region R12-1 of the imaging unit 21.
  • the sensing optical system 23-2 includes, for example, a diffusion plate or a cylindrical lens, and guides light incident from the outside to the sensing area R12-2 of the imaging unit 21.
  • sensing optical system 23-1 and sensing optical system 23-2 will be simply referred to as sensing optical system 23 unless it is necessary to distinguish them.
  • In the camera module 11 configured as described above, different optical systems are used to obtain the captured image and the sensing image.
  • By contrast, in the conventional camera module described above, one imaging lens is used to obtain both the captured image and the sensing image; that is, light collected by one imaging lens is incident on the imaging region and the sensing region.
  • the captured image is an image of the subject at the center of the angle of view of the imaging lens, that is, an image of the subject at the center of the field of view of the imaging lens.
  • the sensing image is an image of a subject in the peripheral portion (portion near the end) of the angle of view of the imaging lens.
  • Therefore, spectral information on the subject appearing in the captured image cannot be obtained. In particular, when the object of interest for spectral detection, that is, the subject whose spectral information is to be acquired, is at the center of the angle of view of the imaging lens, spectral information on that subject cannot be obtained; in other words, appropriate spectral information cannot be obtained.
  • Therefore, in the camera module 11, the imaging optical system 22 is provided as an optical system for obtaining the captured image, and the sensing optical system 23 is provided separately from the imaging optical system 22 as an optical system for obtaining spectral information.
  • By arranging the sensing optical system 23 according to the object of interest for spectral detection, it is possible to obtain spectral information of that object, that is, appropriate spectral information.
  • For example, if the sensing optical system 23 is appropriately disposed, spectral information can be obtained for the same angle of view as that of the imaging optical system 22, that is, the angle of view of the captured image, or for an angle of view corresponding to the central portion of the angle of view of the captured image.
  • In this way, spectral information can be obtained for an object at least at the central portion of the captured image; that is, appropriate spectral information can be acquired with a single sensor, namely the imaging unit 21.
  • Moreover, since a single sensor suffices, the number of parts of the camera module 11 can be reduced, and cost reduction and miniaturization can be realized.
  • As another conceivable configuration, a lens and a color filter may be disposed as a module for each area including a plurality of pixels of an imaging unit to obtain a multi-wavelength image. In such a case, however, a lens array and a color filter array must be provided separately from the imaging unit, so the number of parts increases, which makes it difficult to miniaturize the camera module and also increases the cost.
  • In the camera module 11, by contrast, it is not necessary to provide a color filter array separately from the imaging unit, nor to provide a lens for each color filter, so the number of parts can be kept small and cost reduction can be realized.
  • Note that the sensing optical system 23 does not necessarily need to include an optical element (optical member) such as an optical lens; it may be of any type as long as it can guide light from the outside to the sensing region R12.
  • For example, the sensing optical system 23 may be a pinhole array formed of pinholes of a cylindrical or polygonal prism structure formed by the light shielding plate 24 or the like, such a pinhole array combined with a diffusion plate, a lens array, or a lens array combined with a diffusion plate.
  • FIG. 2 is a diagram showing a configuration example of a camera module to which the present technology is applied.
  • In FIG. 2, portions corresponding to those in FIG. 1 are denoted by the same reference numerals, and their description will be omitted as appropriate.
  • FIG. 2 is a view of the camera module 11 as viewed from the optical axis direction of the imaging optical system 22.
  • Hereinafter, the optical axis direction of the imaging optical system 22 is also referred to as the Z direction, the left-right direction in the drawing as the X direction, and the up-down direction in the drawing as the Y direction.
  • These X direction, Y direction, and Z direction are directions orthogonal to each other.
  • In FIG. 2, the area at the center of the light receiving surface of the imaging unit 21 is the imaging area R11. The area at the left end of the light receiving surface in the drawing is the sensing area R12-1, and similarly, the area at the right end of the light receiving surface of the imaging unit 21 is the sensing area R12-2.
  • an imaging optical system 22 is disposed on the front side in the drawing of the imaging region R11.
  • The area of the cross section of the imaging optical system 22 is smaller than the area of the imaging area R11. More specifically, when viewed from the Z direction, the imaging optical system 22 is contained within the imaging region R11.
  • a sensing optical system 23-1 is disposed on the near side in the drawing of the sensing area R12-1, and a sensing optical system 23-2 is disposed on the near side in the drawing of the sensing area R12-2.
  • In the camera module 11, since the area of the cross section of the imaging optical system 22 is smaller than the imaging area R11, the sensing optical system 23 can be arranged immediately above the sensing area R12, adjacent to the imaging optical system 22. This makes it possible to prevent deviation between the angle of view of the imaging optical system 22 and that of the sensing optical system 23, and to obtain spectral information with substantially the same angle of view as the imaging optical system 22.
  • the imaging unit 21 is configured, for example, as shown in FIG.
  • The imaging unit 21 illustrated in FIG. 3 includes a pixel array 51, a row scanning circuit 52, a PLL (Phase Locked Loop) 53, a DAC (Digital Analog Converter) 54, a column ADC (Analog Digital Converter) circuit 55, a column scanning circuit 56, and a sense amplifier 57.
  • the pixel array 51 has a plurality of pixels 61 arranged in a two-dimensional manner.
  • The pixels 61 are respectively disposed at the intersections of horizontal signal lines H connected to the row scanning circuit 52 and vertical signal lines V connected to the column ADC circuit 55, and each pixel is composed of a photodiode that performs photoelectric conversion and several types of transistors for reading out the accumulated signal.
  • As shown enlarged on the right side of FIG. 3, the pixel 61 includes a photodiode 71, a transfer transistor 72, a floating diffusion 73, an amplification transistor 74, a selection transistor 75, and a reset transistor 76.
  • the photodiode 71 is a photoelectric conversion element that receives light incident from the outside and performs photoelectric conversion, and accumulates the charge obtained by photoelectric conversion.
  • the charge stored in the photodiode 71 is transferred to the floating diffusion 73 via the transfer transistor 72.
  • the floating diffusion 73 is connected to the gate of the amplification transistor 74.
  • The row scanning circuit 52 controls the selection transistor 75 via the horizontal signal line H; when the selection transistor 75 is turned on, the pixel 61 enters the selected state.
  • The signal of the selected pixel 61 is read out to the vertical signal line V as a pixel signal corresponding to the amount of charge accumulated in the photodiode 71, by driving the amplification transistor 74 as a source follower. That is, a voltage signal corresponding to the charge transferred from the photodiode 71 and accumulated in the floating diffusion 73 is output from the selection transistor 75 to the vertical signal line V as a pixel signal.
  • the row scanning circuit 52 sequentially outputs driving signals for driving (transferring, selecting, resetting, and the like) the pixels 61 of the pixel array 51 for each row.
  • the PLL 53 generates and outputs a clock signal of a predetermined frequency required to drive each block in the imaging unit 21 based on a clock signal supplied from the outside.
  • The DAC 54 generates and outputs a ramp signal having a generally sawtooth shape, in which the voltage drops at a predetermined slope from a predetermined value and then returns to that value.
  • The column ADC circuit 55 includes comparators 81 and counters 82, one pair for each column of pixels 61 in the pixel array 51; it performs a CDS (Correlated Double Sampling) operation on the pixel signals output from the pixels 61 to extract the signal level, and outputs digital pixel signals.
  • the comparator 81 compares the ramp signal supplied from the DAC 54 with the pixel signal (luminance value) output from the pixel 61, and supplies the resultant comparison result signal to the counter 82. Then, the counter 82 A / D converts the pixel signal by counting a counter clock signal of a predetermined frequency in accordance with the comparison result signal output from the comparator 81.
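The comparator-plus-counter conversion just described can be sketched as a single-slope A/D converter. The model below, with made-up voltage values and step sizes, is an illustrative simplification added here, not the patent's circuit.

```python
# Simplified single-slope A/D conversion (illustrative model with made-up
# values): the counter counts clock cycles until the falling ramp from the
# DAC crosses the pixel signal voltage, at which point the comparator output
# flips and the current count is taken as the digital value.
def single_slope_adc(pixel_voltage, v_start=1.0, v_step=0.001, max_count=1023):
    """Count ramp steps until the ramp falls to the pixel voltage."""
    ramp = v_start
    for count in range(max_count + 1):
        if ramp <= pixel_voltage:   # comparator output flips here
            return count
        ramp -= v_step              # ramp falls by a fixed step per clock
    return max_count

# A lower pixel voltage takes longer to reach, so it yields a larger count.
dark, bright = single_slope_adc(0.9), single_slope_adc(0.3)
assert dark < bright
```

The conversion time, and hence the count, is proportional to the voltage difference between the ramp start and the pixel signal, which is what makes the counter value a digital measure of the analog level.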
  • the column scanning circuit 56 sequentially supplies a signal for outputting a pixel signal to the counter 82 of the column ADC circuit 55 at a predetermined timing.
  • the sense amplifier 57 amplifies the pixel signal supplied from the column ADC circuit 55 and outputs the amplified signal to the outside of the imaging unit 21.
  • the area portion of the pixel array 51 is a light receiving surface, and the imaging region R11 and the sensing region R12 described above are provided on the light receiving surface.
  • Among the pixels 61 constituting the pixel array 51, those provided in the imaging region R11 function as imaging pixels, and those provided in the sensing region R12 function as sensing pixels. In addition, color filters of the respective color components such as R, G, B, C, M, and Y are formed in the opening portions of the pixels 61.
  • the portion shown by the arrow Q11 is a view of the camera module 11 as viewed in the Z direction, as in the case of FIG.
  • In FIG. 4, the portion shown by the arrow Q12 is an enlarged view of the sensing area R12-2 of the camera module 11, and the portion shown by the arrow Q13 is an enlarged view of a part of the sensing area R12-2 of the camera module 11 and the sensing optical system 23-2.
  • the portion shown by the arrow Q12 is a view of the sensing region R12-2 as viewed from the Z direction, and each square represents a sensing pixel corresponding to the pixel 61 shown in FIG.
  • The hatching (pattern) applied to each square representing a sensing pixel indicates the color of the color filter provided on that sensing pixel. That is, sensing pixels with the same hatching are pixels of the same color component provided with the same color filter, and pixels with different hatching are provided with different color filters, that is, they are pixels of different color components.
  • sensing pixels of four types of color components are formed in the sensing region R12, and the above-described array unit is formed by four sensing pixels (2 pixels ⁇ 2 pixels) adjacent to each other.
  • the array unit includes sensing pixels of four types of color components.
  • In this example, each region of 2 pixels × 2 pixels, which is an array unit, is surrounded by a light shielding plate 121 formed immediately above the sensing region R12.
  • the light shielding plate 121 corresponds to the light shielding plate 24 shown in FIG.
  • each rectangular area surrounded by the light shielding plate 121 in the sensing area R12 is also referred to as a light shielding area.
  • For example, a region R41 of 2 pixels × 2 pixels is one light shielding region.
  • Note that the light shielding area may be any area including sensing pixels of all the color components provided in the sensing area R12, that is, an area including at least one array unit. Therefore, for example, an area of 4 pixels × 4 pixels, that is, an area consisting of four array units adjacent to one another, may be regarded as one light shielding area.
  • In this example, each array unit is surrounded by the light shielding plate 121 to form a light shielding region, and for the four light shielding regions, the pinholes 122-1 to 122-4 are respectively formed by the light shielding plate 121.
  • pinholes 122-1 to 122-4 will be simply referred to as pinholes 122 unless it is necessary to distinguish them.
  • Immediately above the sensing region R12, light shielding plates 121 that are long in the direction (Z direction) perpendicular to the light receiving surface of the imaging unit 21 are provided, and each quadrangular-prism-shaped space formed by the light shielding plates 121 constitutes one pinhole 122.
  • each pinhole 122 has a quadrangular prism shape, but the pinhole 122 may have any shape such as a cylindrical shape or a polygonal prism shape.
  • That is, the pinhole 122 is a pinhole with a high aspect ratio, whose length in the Z direction is longer than its length in the X direction or Y direction, in other words, whose depth in the Z direction is deep.
  • Moreover, the opening of each pinhole 122, that is, its diameter (length) in the X direction or Y direction, is made longer than the length in the X direction or Y direction of the light shielding region, that is, the region of the array unit. In other words, the area of the opening of the pinhole 122 is made larger than the area of the light shielding region, that is, the array unit.
  • In the camera module 11, a pinhole array composed of a plurality of pinholes 122 arranged in the X direction and the Y direction, disposed immediately above the sensing region R12 portion of the light receiving surface, is used as the sensing optical system 23.
  • Light incident from an external subject is guided by each of the pinholes 122 constituting the sensing optical system 23 to the sensing pixels in the corresponding light shielding area of the sensing area R12.
  • the subject is on the left side in the figure, and light from the subject passes through the pinhole 122 from the left side in the figure and enters the sensing pixel.
  • The sensing region R12 is provided adjacent to the imaging region R11, and the depth of the pinhole 122 in the Z direction is sufficiently deep relative to the width (length in the X direction or Y direction) of the light shielding region. Therefore, only light incident from substantially directly in front of the camera module 11 passes through the pinhole 122 and reaches the light shielding area of the sensing area R12. In other words, light incident on the pinhole 122 at an angle (incident angle) of a certain magnitude or more is blocked by the light shielding plate 121 or the like and does not enter the light shielding area.
  • the sensing image includes information of the subject substantially directly in front of the imaging unit 21, that is, information of the subject substantially at the center of the captured image.
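The directional selectivity of a deep pinhole follows from simple geometry: a ray entering at incident angle θ shifts laterally by depth × tan(θ) while traversing the hole, so it is blocked once that shift exceeds the opening width. The sketch below illustrates this with made-up dimensions; the actual sizes are not given in the patent text.

```python
import math

# Illustrative geometry (dimensions are made up, not from the patent):
# a ray entering a pinhole of depth d at incident angle theta shifts
# laterally by d * tan(theta), so it is blocked by the light shielding
# plate once that shift exceeds the opening width w.
def passes_pinhole(theta_deg, depth_um, width_um):
    lateral_shift = depth_um * math.tan(math.radians(theta_deg))
    return lateral_shift <= width_um

# Hypothetical pinhole: 20 um deep, 4 um wide.
assert passes_pinhole(5, depth_um=20.0, width_um=4.0)        # near-front light passes
assert not passes_pinhole(30, depth_um=20.0, width_um=4.0)   # oblique light is blocked
print(round(math.degrees(math.atan(4.0 / 20.0)), 1))  # acceptance half-angle: 11.3 deg
```

Deepening the pinhole relative to its width narrows the acceptance angle, which is why only light from substantially directly in front of the imaging unit reaches the sensing pixels.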
  • In the camera module 11, spectral information is generated based on the pixel signals read from the sensing pixels in each light shielding area, that is, based on the sensing image.
  • In the camera module 11, spectral information indicating the intensity (incident light amount) of the incident light of each of the four color components is obtained for each light shielding region. Then, an average value, a weighted average value, a sum, a weighted addition value, or the like is determined for each color component from the spectral information obtained in all the light shielding regions, and is used as the single final piece of spectral information.
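  • As a rough sketch of this combining step (illustrative only; the numeric readings, the four-component layout, and the weights below are assumptions, not values from the specification):

```python
import numpy as np

# Hypothetical per-region spectral readings: one row per light shielding
# region, one column per color component (four components here).
region_spectra = np.array([
    [0.80, 0.55, 0.30, 0.10],   # region 1: intensities of components 1-4
    [0.82, 0.53, 0.28, 0.12],   # region 2
    [0.78, 0.57, 0.32, 0.08],   # region 3
])

# Simple average across all light shielding regions -> one final
# 0-dimensional spectrum (one value per color component).
final_mean = region_spectra.mean(axis=0)

# A weighted average is equally possible, e.g. trusting some regions more.
weights = np.array([0.5, 0.3, 0.2])
final_weighted = np.average(region_spectra, axis=0, weights=weights)

print(final_mean, final_weighted)
```

Either result is a single spectrum, consistent with the 0-dimensional character of the information noted below.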
  • Note that the camera module 11 cannot obtain spectral information for each area within the central portion of the captured image; that is, two-dimensional spectral information for each region cannot be obtained. Therefore, the spectral information obtained by the camera module 11 is information on each color component for the entire central portion of the angle of view of the captured image, that is, 0-dimensional spectral information.
  • Further, in the camera module 11, the pixel arrangement of the sensing region R12 is one in which arrangement units are repeated periodically (regularly). That is, the color arrangement of the sensing pixels in the light shielding areas is the same in all the light shielding areas.
  • In this way, signal processing in the stage subsequent to the sensing pixels, such as the signal processing for obtaining the single final piece of spectral information from the spectral information of each light shielding region, can be prevented from becoming complicated. That is, for example, the subsequent signal processing can be performed more easily than in the case where the sensing pixels of each color component are randomly arranged over the entire sensing region R12.
  • As described above, by providing the sensing optical system 23 as an optical system separate from the imaging optical system 22, and further by configuring the sensing optical system 23 with a pinhole array, spectral information of the central portion of the angle of view of the captured image can be obtained. Since the subject in the central portion of the captured image is highly likely to be the object of interest for spectral detection, more appropriate spectral information can be obtained.
  • In addition, since the pinhole array serving as the sensing optical system 23 is configured by the pinholes 122 with an aspect, accurate spectral information can be obtained without being affected by the incident angle dependency of the color filters.
  • That is, the color filter provided in a sensing pixel has an incident angle dependency, and the spectral characteristics of the color filter change according to the incident angle of light on the color filter. In other words, the wavelength distribution of the light reception intensity of the light received by the sensing pixel changes according to the incident angle of the light on the sensing pixel (color filter).
  • If the peripheral area of the light receiving surface, that is, the area near the edge, were used as a sensing area and light were guided to that sensing area by an imaging optical system shared with the imaging area, light would be incident on the sensing pixels at oblique angles. The spectrum would then change depending on the position of the sensing pixel because of the incident angle dependency of the color filter described above, and accurate spectral information could not be detected.
  • In contrast, in the camera module 11, the sensing optical system 23 is provided separately from the imaging optical system 22, and the sensing optical system 23 is configured by the pinholes 122 with an aspect. Light is therefore incident on each sensing pixel from a direction substantially perpendicular to the light receiving surface, so accurate spectral information can be detected without being affected by the incident angle dependency of the color filters. Moreover, regardless of the position of a sensing pixel, the incident angle of light is substantially the same for all sensing pixels, that is, light is incident under the same optical conditions, and thus no variation in spectral information occurs depending on the position of the sensing pixel.
  • Furthermore, in the camera module 11, it is sufficient to provide a pinhole array as the sensing optical system 23; the number of parts can thus be reduced, so the cost and size of the camera module 11 can also be reduced.
  • the configurations of the sensing region R12-1 and the sensing optical system 23-1 are the same as the configurations of the sensing region R12-2 and the sensing optical system 23-2.
  • Although an example in which the pinhole 122 is formed by the light shielding plate 121 has been described, the pinhole 122 may instead be formed by a reflective member such as a mirror, a metal, or a reflecting plate. That is, in place of the light shielding plate 121, the pinhole 122 may be formed by surrounding the light shielding region with a reflecting member such as a mirror, a metal, or a reflecting plate.
  • By adjusting the aspect of the pinhole 122, which is a pinhole with an aspect, that is, the ratio of the length of the pinhole 122 in the X direction or Y direction to its length (depth) in the Z direction, spectral information can be detected for an area of a desired angle of view.
  • For example, by forming the pinhole 122 with an appropriate aspect, it is also possible to detect spectral information at an angle of view substantially the same as the angle of view of the imaging optical system 22.
  • For example, suppose that the length in the X direction and the length in the Y direction of the imaging region R11 are X (for example, X mm) and Y (for example, Y mm), respectively, and that the focal length of the imaging optical system 22 is f (for example, f mm).
  • Also suppose that the length in the X direction and the length in the Y direction of the pinhole 122 are X' (for example, X' mm) and Y' (for example, Y' mm), respectively, and that the length (depth) of the pinhole 122 in the Z direction is Z' (for example, Z' mm).
  • the length of the pinhole 122 in the Z direction is the length of the light shielding plate 121 in the Z direction.
  • the Z direction is a direction perpendicular to the imaging area R11 and the sensing area R12, and can also be referred to as a depth direction (longitudinal direction) of the pinhole 122, that is, an optical axis direction of the sensing optical system 23.
  • the pinhole 122 has a quadrangular prism shape, and the opening of the pinhole 122 has a rectangular shape.
  • In this case, the aspect of the pinhole 122 in the X direction with respect to the Z direction (depth) is X'/Z', and the aspect of the pinhole 122 in the Y direction with respect to the Z direction is Y'/Z'.
  • In this case, the angle of view of the imaging optical system 22 and the angle of view of the pinhole 122 (the sensing optical system 23), that is, the angle of view of the spectral information, are substantially the same.
  • That is, if the pinhole 122 is formed such that the ratio of the length Z' in the depth direction of the pinhole 122 to the length X' or Y' in the X direction or Y direction perpendicular to the depth direction is approximately equal to the ratio of the focal length f of the imaging optical system 22 to the length X or Y of the imaging region R11 in the X direction or Y direction, spectral information of substantially the same angle of view as that of the imaging optical system 22 can be obtained.
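  • This angle-of-view matching condition can be checked numerically. The following sketch assumes illustrative dimensions (X = 6 mm, f = 4 mm, X' = 0.2 mm; none of these values come from the specification) and derives the pinhole depth Z' from the condition Z'/X' = f/X:

```python
import math

# Assumed example values (not from the specification).
X_img = 6.0   # imaging region R11 length in the X direction, mm
f = 4.0       # focal length of the imaging optical system 22, mm

# Half angle of view of the imaging optical system in the X direction.
half_fov_imaging = math.degrees(math.atan((X_img / 2) / f))

# Pinhole opening length in X; choose the depth Z' so that Z'/X' = f/X,
# which makes the pinhole's acceptance angle match the imaging system.
X_pin = 0.2                    # pinhole opening, mm
Z_pin = X_pin * f / X_img      # required pinhole depth, mm

half_fov_pinhole = math.degrees(math.atan((X_pin / 2) / Z_pin))

print(Z_pin)                               # depth giving the matching aspect
print(half_fov_imaging, half_fov_pinhole)  # substantially the same angle
```

With these numbers the two half angles coincide exactly, since both reduce to atan(0.75).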
  • Second Embodiment <Example of configuration of sensing optical system>
  • a pinhole array and a diffusion plate may be combined to form the sensing optical system 23.
  • the sensing optical system 23 is configured, for example, as shown in FIG. In FIG. 5, parts corresponding to those in FIG. 4 are given the same reference numerals, and the description thereof will be omitted as appropriate.
  • In FIG. 5, the portion shown by the arrow Q21 shows the camera module 11 viewed from the Z direction, and the portion shown by the arrow Q22 shows an enlarged view of the sensing region R12-2 portion. The portion shown by the arrow Q21 and the portion shown by the arrow Q22 are the same as the portion shown by the arrow Q11 and the portion shown by the arrow Q12 of FIG. 4, so the description thereof is omitted.
  • a portion indicated by an arrow Q23 is a view of a sensing region R12-2 of the camera module 11 and a part of the sensing optical system 23-2 as viewed from the X direction.
  • In this example, the pinhole 122 is formed by the light shielding plate 121, as in the case of FIG. 4, but this example differs from FIG. 4 in that a diffusion plate 151 is provided on the side of the pinhole array composed of the pinholes 122 opposite to the imaging unit 21 side, that is, on the subject side.
  • the sensing optical system 23 is configured of a pinhole array consisting of pinholes 122 and a diffusion plate 151 disposed on the object side of the pinhole array.
  • In this configuration, light from the subject first enters the diffusion plate 151 and is diffused there. The light diffused by the diffusion plate 151 then enters the sensing pixels in the light shielding areas through the respective pinholes 122.
  • Even when the sensing optical system 23 is configured by the pinhole array including the pinholes 122 and the diffusion plate 151 in this manner, appropriate spectral information can be obtained without being affected by the incident angle dependency of the color filters.
  • the spectral information obtained in this embodiment is 0-dimensional spectral information as in the example shown in FIG.
  • the configurations of the sensing region R12-1 and the sensing optical system 23-1 are the same as the configurations of the sensing region R12-2 and the sensing optical system 23-2. Also, the sensing optical system 23 may be configured by one pinhole 122 and the diffusion plate 151.
  • the wavelength resolution of spectral information may be improved by substantially increasing the types of color filters of the sensing pixel by utilizing the incident angle dependency of the color filter provided in the sensing pixel.
  • the sensing optical system 23 is configured, for example, as shown in FIG. In FIG. 6, parts corresponding to those in FIG. 4 are given the same reference numerals, and the description thereof will be omitted as appropriate.
  • a portion indicated by an arrow Q31 is a view of the camera module 11 as viewed from the Z direction.
  • the portion shown by the arrow Q31 is the same as the portion shown by the arrow Q11 in FIG.
  • The portion shown by the arrow Q32 in FIG. 6 is an enlarged view of the sensing area R12-2 portion of the camera module 11, and the portion shown by the arrow Q33 is an enlarged view of the sensing area R12-2 of the camera module 11 and the sensing optical system 23-2 portion.
  • the portion shown by the arrow Q32 is a view of the sensing region R12-2 as viewed from the Z direction, and each square represents a sensing pixel corresponding to the pixel 61 shown in FIG.
  • The hatching on each square representing a sensing pixel represents the color of the color filter provided in that sensing pixel; here, sensing pixels of seven color components are provided in the sensing region R12.
  • the entire sensing region R12-2 is surrounded by the light shielding plate 181 formed immediately above the sensing region R12-2, and the entire sensing region R12-2 is one light shielding region.
  • the light blocking plate 181 corresponds to the light blocking plate 24 shown in FIG.
  • In this example, the entire sensing region R12-2 is surrounded by the light shielding plate 181 so as to form one light shielding region, and a diffusion plate 182 and a cylindrical lens 183, which is an optical lens (imaging lens), are provided for that light shielding region.
  • the diffusion plate 182 and the cylindrical lens 183 constitute a sensing optical system 23-2.
  • That is, a cylindrical lens 183 having a cross section of substantially the same size as the sensing area R12-2 is disposed on the subject side of the imaging unit 21, and a diffusion plate 182 is further arranged on the subject side of the cylindrical lens 183, that is, on the side opposite to the sensing area R12-2 side.
  • In this case, the spectral information obtained by the camera module 11 is information on each color component for the entire area including the central portion of the angle of view of the captured image and its peripheral area. That is, the spectral information obtained in this embodiment is 0-dimensional spectral information similar to the example shown in FIG.
  • However, since the light diffused by the diffusion plate 182 is condensed by the cylindrical lens 183 and forms an image in each area of the light shielding area (sensing area R12), even light from the same subject enters the sensing pixels at different incident angles depending on the positions of the sensing pixels.
  • the cylindrical lens 183 causes the light from the diffusion plate 182 to be incident on sensing pixels at different positions on the light shielding region at different incident angles.
  • For example, for a sensing pixel near the center of the light shielding region, light from near the center position of the diffusion plate 182 is incident from a direction substantially perpendicular to the light receiving surface, as shown by the arrow AR11. That is, the incident angle of the light is approximately 0 degrees.
  • In contrast, for a sensing pixel near the end of the light shielding region, the light from near the central position of the diffusion plate 182 is incident obliquely to the light receiving surface, as shown by the arrow AR12.
  • the incident angle of light when entering the sensing pixels differs depending on the position on the light shielding area (sensing area R12) where the sensing pixels are present.
  • As a result, even for sensing pixels of the same color component, the spectral characteristics change depending on the positions of the sensing pixels, so sensing pixels of the same color component arranged at different positions substantially function as sensing pixels of different color components.
  • That is, spectral information of mutually different color components can be obtained from the pixel signals of two sensing pixels that have the same color filter but are arranged at different positions.
  • the types of color components (color filters) of the sensing pixels in the sensing region R12 can be substantially increased, and the wavelength resolution of spectral information can be improved. That is, the number of wavelength channels of spectral information can be increased.
  • Note that which color component's information is obtained by a given sensing pixel can be identified from the color filter provided in that sensing pixel and the arrangement position of the sensing pixel in the sensing region R12.
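  • The idea that the effective spectral channel of a sensing pixel is determined jointly by its color filter and its position can be sketched as follows (a hypothetical illustration; the geometry, the `lens_to_surface` distance, and the `channel_id` labeling scheme are all assumptions for the example, not part of the specification):

```python
import math

# Assumed geometry (illustrative only): distance from the cylindrical lens
# 183 to the light receiving surface, and sensing pixel positions measured
# from the center of the light shielding region, in mm.
lens_to_surface = 1.0
pixel_positions = [-0.4, -0.2, 0.0, 0.2, 0.4]

# Light from near the center of the diffusion plate 182 reaches a pixel at
# offset d at roughly atan(|d| / lens_to_surface) from the surface normal,
# so the incident angle grows with the pixel's distance from the center.
for d in pixel_positions:
    angle = math.degrees(math.atan(abs(d) / lens_to_surface))
    print(f"offset {d:+.1f} mm -> incident angle {angle:.1f} deg")

# Because the color filter's spectrum shifts with incident angle, the same
# physical filter at different offsets acts as a different channel:
def channel_id(filter_name: str, offset_mm: float) -> str:
    """Label a sensing pixel's effective spectral channel (hypothetical)."""
    return f"{filter_name}@{offset_mm:+.1f}mm"

# Two pixels with identical filters but different positions are treated as
# distinct color components when assembling spectral information.
assert channel_id("green", 0.0) != channel_id("green", 0.4)
```
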
  • the configurations of the sensing region R12-1 and the sensing optical system 23-1 are the same as the configurations of the sensing region R12-2 and the sensing optical system 23-2.
  • the entire sensing area R12-2 is one light shielding area, and the diffusion plate 182 and the cylindrical lens 183 are provided for the light shielding area.
  • a plurality of light shielding regions may be provided in the sensing region R12-1 and the sensing region R12-2, and the diffusion plate 182 and the cylindrical lens 183 may be provided for each of the plurality of light shielding regions.
  • the sensing optical system 23 may be configured by one or more diffusion plates 182 and one or more cylindrical lenses 183.
  • the sensing optical system 23 may be configured of another optical lens different from the cylindrical lens 183 and the diffusion plate 182.
  • Note that it is desirable that the color filter of each pixel, in particular the color filter of each sensing pixel, be a color filter, such as a dielectric multilayer film or a plasmon resonator, in which the difference in spectral characteristics at each incident angle is remarkable.
  • Further, the sensing area R12 may be divided into a plurality of light shielding areas, and an imaging lens may be provided as the sensing optical system 23 for each of the light shielding areas, so that two-dimensional spectral information with substantially the same angle of view as that of the imaging optical system 22 (the captured image) can be obtained.
  • the sensing optical system 23 is configured, for example, as shown in FIG. In FIG. 7, portions corresponding to the case in FIG. 4 are denoted with the same reference numerals, and the description thereof will be appropriately omitted.
  • the portion indicated by the arrow Q41 is a view of the camera module 11 as viewed from the Z direction.
  • the portion shown by the arrow Q41 is the same as the portion shown by the arrow Q11 in FIG.
  • The portion shown by the arrow Q42 in FIG. 7 is an enlarged view of the sensing area R12-2 portion of the camera module 11, and the portion shown by the arrow Q43 is an enlarged view of the sensing area R12-2 of the camera module 11 and the sensing optical system 23-2 portion.
  • the portion shown by the arrow Q42 shows the sensing region R12-2 as viewed from the Z direction, and each square represents a sensing pixel corresponding to the pixel 61 shown in FIG.
  • The hatching on each square representing a sensing pixel represents the color of the color filter provided in that sensing pixel; here, sensing pixels of seven color components are provided in the sensing region R12.
  • the sensing region R12-2 is divided into three light shielding regions by the light shielding plate 211 formed immediately above the sensing region R12-2.
  • the light shielding plate 211 corresponds to the light shielding plate 24 shown in FIG.
  • That is, each of the three regions on the sensing region R12-2 is surrounded by the light shielding plate 211 to form a light shielding region, and imaging lenses 212-1 to 212-3 are provided on the subject side with respect to those light shielding regions.
  • a sensing optical system 23-2 is configured of a lens array including the imaging lenses 212-1 to 212-3.
  • Specifically, the imaging lens 212-1 is disposed on the subject side of the uppermost light shielding area in the drawing, the imaging lens 212-2 is disposed on the subject side of the light shielding area at the center of the sensing area R12-2, and the imaging lens 212-3 is disposed on the subject side of the lowermost light shielding area in the drawing.
  • the imaging lenses 212-1 to 212-3 will be simply referred to as imaging lenses 212 unless it is necessary to distinguish them.
  • light from the subject is condensed by the imaging lens 212 and is guided to each sensing pixel of the light shielding area disposed immediately below the imaging lens 212. That is, the light from the subject is imaged on the light shielding area by the imaging lens 212.
  • Light incident on a sensing pixel from the subject through the imaging lens 212 is photoelectrically converted in the sensing pixel, and a pixel signal corresponding to the amount of charge obtained as a result is read out from each sensing pixel so that spectral information is generated.
  • Since the sensing optical system 23 is configured by the imaging lenses 212 in this way, an image of the subject is formed by the imaging lens 212 in each light shielding area.
  • the angle of view of each imaging lens 212 is substantially the same as the angle of view of the imaging optical system 22 (captured image).
  • Accordingly, an image composed of the pixel signals read out from the sensing pixels of each light shielding area is substantially the same as the captured image, more specifically, an image that differs from the captured image only in its color components. Therefore, the spectral information obtained by the camera module 11 is two-dimensional spectral information indicating the information (spectrum) of each color component for each region, for the entire area (the entire angle of view of the captured image) including the central portion of the angle of view of the captured image and its peripheral area.
  • In this example, spectral information indicating the intensity (incident light quantity) of the incident light of each color component in each area within the angle of view (observation field) of the imaging lens 212 is obtained for each light shielding area.
  • Then, an average value, a weighted average value, a sum, a weighted addition value, or the like is obtained for each color component and each region from the spectral information obtained in all the light shielding regions, and is used as the final spectral information.
  • two-dimensional spectral information can be obtained for the entire angle of view of the captured image.
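  • The per-region combining into final two-dimensional spectral information can be sketched as follows (illustrative only; the array sizes, the random data standing in for sensor readings, and the weights are assumptions):

```python
import numpy as np

# Hypothetical per-region results: each of the three light shielding
# regions (imaging lenses 212) yields a small H x W x C spectral image
# covering substantially the same angle of view (H=4, W=4, C=7 here).
rng = np.random.default_rng(0)
region_images = rng.random((3, 4, 4, 7))

# Average the three observations region by region and component by
# component to obtain the final two-dimensional spectral information.
final_2d = region_images.mean(axis=0)                # shape (4, 4, 7)

# Weighted addition is equally possible:
w = np.array([0.5, 0.25, 0.25])
final_2d_w = np.tensordot(w, region_images, axes=1)  # shape (4, 4, 7)

print(final_2d.shape, final_2d_w.shape)
```
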
  • the configurations of the sensing region R12-1 and the sensing optical system 23-1 are the same as the configurations of the sensing region R12-2 and the sensing optical system 23-2 shown in FIG.
  • In this example, since the sensing area R12 has a shape elongated in the vertical direction in the figure, the sensing area R12 is divided into a plurality of light shielding areas, and an imaging lens 212 capable of forming an image of the subject is disposed for each of the light shielding areas. However, the entire sensing area R12 may be one light shielding area, and one imaging lens capable of forming an image of the subject may be disposed as the sensing optical system 23.
  • Further, when the sensing area R12 is divided into a plurality of light shielding areas and an imaging lens is provided as the sensing optical system 23 for each of the light shielding areas, spectral information may be detected at a different angle of view (observation field) for each light shielding area (imaging lens).
  • the sensing optical system 23 is configured, for example, as shown in FIG. In FIG. 8, parts corresponding to the case in FIG. 4 are given the same reference numerals, and the description thereof will be omitted as appropriate.
  • the portion shown by the arrow Q51 is a view of the camera module 11 as viewed from the Z direction.
  • the part shown by the arrow Q51 is the same as the part shown by the arrow Q11 in FIG.
  • The portion shown by the arrow Q52 in FIG. 8 is an enlarged view of the sensing area R12-2 portion of the camera module 11, and the portion shown by the arrow Q53 is an enlarged view of the sensing area R12-1 portion of the camera module 11.
  • the portion shown by the arrow Q52 is a view of the sensing region R12-2 as viewed from the Z direction, and each square represents a sensing pixel corresponding to the pixel 61 shown in FIG.
  • The hatching on each square representing a sensing pixel represents the color of the color filter provided in that sensing pixel; here, sensing pixels of seven color components are provided in the sensing region R12.
  • the sensing region R12-2 is divided into three light shielding regions by the light shielding plate 241 formed immediately above the sensing region R12-2.
  • the light shielding plate 241 corresponds to the light shielding plate 24 shown in FIG.
  • Here, the dotted line surrounding each light shielding area represents the angle of view of that light shielding area within the angle of view of the imaging optical system 22, that is, the captured image. In this example, the angle of view (observation field) of each light shielding area of the sensing region R12-2 is the left half, in the figure, of the angle-of-view area of the imaging optical system 22.
  • Similarly, the sensing region R12-1 is also divided into three light shielding regions by a light shielding plate, and the angle of view of each of those light shielding regions, represented by the dotted lines surrounding them, is the right half, in the figure, of the angle-of-view area of the imaging optical system 22.
  • In other words, each of the sensing optical systems 23 guides light from the external subject at a different angle of view to the sensing region R12 located immediately below that sensing optical system 23.
  • In addition, the angle of view of each light shielding area is a partial area of the angle of view of the imaging optical system 22, and each area within the angle of view of the imaging optical system 22 is necessarily within the angle of view of at least one light shielding area of the sensing area R12. Accordingly, two-dimensional spectral information of the same angle of view as that of the imaging optical system 22 can be obtained as a whole.
  • In this example, spectral information for the right half of the angle-of-view area of the imaging optical system 22 is obtained in the sensing area R12-1, and spectral information for the left half of the angle-of-view area of the imaging optical system 22 is obtained in the sensing area R12-2.
  • which region in the angle of view of the imaging optical system 22 corresponds to the angle of view of each light shielding region can be determined by appropriately selecting the sensing optical system 23.
  • That is, the angle of view of each light shielding region can be adjusted by using, as the imaging lens constituting the sensing optical system 23, a lens with an appropriate focal length, lens diameter, or the like; for example, the sensing optical system 23-2 may be configured by an imaging lens with an appropriate focal length.
  • the part shown by arrow Q54 is a view of the sensing area R12-2 of the camera module 11 and the part of the sensing optical system 23-2 as viewed from the X direction.
  • each of the three regions on the sensing region R12-2 is surrounded by the light shielding plate 241 to be a light shielding region, and the imaging lens 242-1 to the imaging lens 242 are provided on the object side with respect to those light shielding regions. -3 is provided.
  • a sensing optical system 23-2 is configured by a lens array including the imaging lenses 242-1 to 242-3 capable of forming an image of a subject on the sensing region R12-2.
  • Specifically, the imaging lens 242-1 is disposed on the subject side of the uppermost light shielding area in the figure, the imaging lens 242-2 is disposed on the subject side of the light shielding area at the center of the sensing area R12-2, and the imaging lens 242-3 is disposed on the subject side of the lowermost light shielding area in the drawing.
  • the imaging lenses 242-1 to 242-3 will be simply referred to as an imaging lens 242 unless it is necessary to distinguish them.
  • the light from the subject is condensed by the imaging lens 242, and is guided to each sensing pixel of the light shielding area disposed immediately below the imaging lens 242. That is, light from the subject is imaged on the light shielding area by the imaging lens 242.
  • Since the sensing optical system 23 is configured by the imaging lenses 242 in this way, an image of the subject is formed by the imaging lens 242 in each light shielding area.
  • the angle of view of each imaging lens 242 is substantially the same as the area of the left half of the angle of view of the imaging optical system 22 (captured image).
  • Accordingly, an image composed of the pixel signals read out from the sensing pixels of each light shielding area is an image of the left half of the captured image, more specifically, an image that differs from the left half of the captured image only in its color components. Therefore, the spectral information obtained in the sensing region R12-2 is two-dimensional spectral information indicating the information (spectrum) of each color component for each region, for the left half of the angle of view of the captured image.
  • The configurations of the sensing area R12-1 and the sensing optical system 23-1 are also the same as those of the sensing area R12-2 and the sensing optical system 23-2 shown in FIG. 8, except that the angle of view (observation field) of the sensing optical system 23-1 is different.
  • An image composed of the pixel signals read out from the sensing pixels of each light shielding area of the sensing area R12-1 is an image of the right half of the captured image, more specifically, an image that differs from the right half of the captured image only in its color components. Therefore, the spectral information obtained in the sensing region R12-1 is two-dimensional spectral information indicating the information (spectrum) of each color component for each region, for the right half of the angle of view of the captured image.
  • As a result, two-dimensional spectral information indicating the information (spectrum) of each color component can finally be obtained for each region, for the entire area (the entire angle of view of the captured image) including the central portion of the angle of view of the captured image and its peripheral area.
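  • Assembling the full angle of view from the two half-view observations can be sketched as follows (illustrative only; the array shapes and the simple side-by-side concatenation are assumptions, and overlapping angles of view would instead require blending in the overlap region):

```python
import numpy as np

# Hypothetical half-view spectral images: sensing region R12-2 observes
# the left half of the captured image's angle of view and R12-1 the right
# half. Shapes: H x (W/2) x C with H=4, W=8, C=7 assumed for illustration
# (constant dummy values stand in for real sensor readings).
left_half = np.zeros((4, 4, 7))    # from sensing region R12-2
right_half = np.ones((4, 4, 7))    # from sensing region R12-1

# Concatenating along the width axis yields two-dimensional spectral
# information covering the full angle of view of the imaging optical
# system 22.
full_view = np.concatenate([left_half, right_half], axis=1)
print(full_view.shape)   # one H x W x C spectral image
```
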
  • the camera module 11 having the configuration shown in FIG. 8 can basically obtain spectral information similar to that of the camera module 11 having the configuration shown in FIG. 7.
  • However, in the example shown in FIG. 7, the angle of view of each light shielding area is substantially the same as the angle of view of the imaging optical system 22, whereas in the example shown in FIG. 8, the angle of view of each light shielding area is a partial area within the angle of view of the imaging optical system 22.
  • Also in this example, since the sensing area R12 has a shape elongated in the vertical direction in the figure, one sensing area R12 is divided into a plurality of light shielding areas, and the imaging lens 242 is disposed for each of the light shielding areas. However, one entire sensing area R12 may be one light shielding area, and one imaging lens may be disposed as the sensing optical system 23.
  • Furthermore, although the optical member that guides light to each light shielding area is here the imaging lens 242 capable of forming an image of the subject on the sensing area R12, it does not necessarily have to be an imaging lens, and another optical lens may be used.
  • In this case, at least two or more of the plurality of sensing optical systems 23 need to be able to guide light from the outside to the sensing area R12 (light shielding area) at mutually different angles of view.
  • Incidentally, it is known that the spectral characteristics of a pixel also change depending on the presence or absence of an IR (Infrared) cut filter (infrared light blocking filter) that blocks incident infrared light.
  • Therefore, by utilizing this property, the types of color filters of the sensing pixels may be substantially increased, and the wavelength resolution of the spectral information may thereby be improved.
  • the sensing optical system 23 is configured, for example, as shown in FIG. In FIG. 9, parts corresponding to those in FIG. 7 are assigned the same reference numerals, and the description thereof will be omitted as appropriate.
  • the portion shown by the arrow Q61 is a view of the camera module 11 as viewed from the Z direction.
  • the portion shown by the arrow Q61 is the same as the portion shown by the arrow Q41 of FIG. 7, so the description thereof is omitted.
  • The portion shown by the arrow Q62 in FIG. 9 is an enlarged view of the imaging area R11 and sensing area R12 portions of the camera module 11, and the portion shown by the arrow Q63 is an enlarged view of the sensing area R12-2 of the camera module 11 and the sensing optical system 23-2 portion.
  • the part shown by the arrow Q62 shows the view when the imaging area R11 and the sensing area R12 are viewed from the Z direction, and each square represents a sensing pixel corresponding to the pixel 61 shown in FIG.
  • The hatching on each square representing a sensing pixel represents the color of the color filter provided in that sensing pixel; here, sensing pixels of seven color components are provided in the sensing region R12.
  • Although imaging pixels corresponding to the pixels 61 shown in FIG. 3 are also provided in the imaging region R11, illustration of the squares representing the respective imaging pixels is omitted here.
  • In general, in imaging pixels, an IR cut filter is provided so that unnecessary infrared light is not received. Such an IR cut filter is formed, for example, immediately above or below the color filter provided in the imaging pixel.
  • In the sensing pixels, an IR cut filter may or may not be provided; in this example, however, an IR cut filter is provided in some of the sensing pixels in order to substantially increase the types of color filters.
  • an IR cut filter is provided for the pixels in the area R61 formed of the imaging area R11 and the sensing area R12-2.
  • both the color filter and the IR cut filter are provided in the sensing pixel in the sensing region R12-2.
  • Here, the arrangement of the color filters of the sensing pixels in the sensing region R12-1 and the arrangement of the color filters of the sensing pixels in the sensing region R12-2 are the same. That is, the color filter provided in a sensing pixel at a predetermined position in the sensing region R12-1 is the same as the color filter provided in the sensing pixel at the position in the sensing region R12-2 corresponding to that predetermined position.
  • In the case of providing an IR cut filter in the sensing pixels as well, the IR cut filter need only be formed in the area R61, which is slightly wider than the imaging area R11, so the cost can be reduced.
  • The sensing region R12-1 is divided into three light-shielded regions by a light shielding plate 271 formed immediately above the sensing region R12-1.
  • Similarly, the sensing region R12-2 is divided into three light-shielded regions by a light shielding plate 211 formed immediately above the sensing region R12-2.
  • The light shielding plate 271 and the light shielding plate 211 correspond to the light shielding plate 24 described above.
  • Each of the three regions on the sensing region R12-2 is surrounded by the light shielding plate 211 to form a light-shielded region, and three imaging lenses, the imaging lenses 212-1 to 212-3, are provided for those regions.
  • The sensing optical system 23-2 is configured as a lens array including the imaging lenses 212-1 to 212-3.
  • The sensing optical system 23-2 shown in FIG. 9 has the same configuration as the sensing optical system 23-2 shown in FIG. 7. Accordingly, in the example of FIG. 9 as well, two-dimensional spectral information indicating the information (spectrum) of each color component for each region is obtained in the sensing region R12-2 for the entire region including the central portion of the angle of view of the captured image and its peripheral portion.
  • The configurations of the sensing region R12-1 and the sensing optical system 23-1 are the same as the configurations of the sensing region R12-2 and the sensing optical system 23-2 shown in FIG. 9.
  • However, the IR cut filter is not provided in the sensing pixels in the sensing region R12-1, whereas the IR cut filter is provided in the sensing pixels in the sensing region R12-2.
  • Therefore, the spectral characteristics differ between a sensing pixel in the sensing region R12-1 and a sensing pixel in the sensing region R12-2. As a result, spectral information of mutually different color components is obtained from the pixel signal of a sensing pixel in the sensing region R12-1 and the pixel signal of the sensing pixel in the sensing region R12-2 that is provided with the same color filter.
  • In the sensing region R12-1, spectral information that is similar to the spectral information obtained in the sensing region R12-2 but differs in color components is obtained.
  • Therefore, final two-dimensional spectral information including information of more color components can be obtained from the spectral information obtained in the sensing region R12-1 and the spectral information obtained in the sensing region R12-2.
  • By providing the IR cut filter in some of the sensing pixels in the sensing region R12 in this way, the types of color components (color filters) of the sensing pixels in the sensing region R12 are substantially increased, and the wavelength resolution of the spectral information can be improved. That is, the number of wavelength channels of the spectral information can be increased.
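The increase in effective wavelength channels can be sketched numerically (an illustrative example only; the filter responses, cut-off wavelength, and array shapes below are hypothetical and not taken from the disclosure):

```python
import numpy as np

# Hypothetical spectral responses of 7 color filters (rows) sampled
# over wavelengths from 400 nm to 1000 nm (columns).
wavelengths = np.linspace(400, 1000, 61)
centers = np.linspace(450, 950, 7)
filters = np.exp(-(((wavelengths - centers[:, None]) / 60.0) ** 2))

# An IR cut filter passes only light below roughly 700 nm (assumed cut-off).
ir_cut = (wavelengths <= 700).astype(float)

# Sensing region R12-1: color filters only (7 channels).
responses_r12_1 = filters
# Sensing region R12-2: the same color filters combined with the IR cut
# filter, which changes the response of filters sensitive above 700 nm.
responses_r12_2 = filters * ir_cut

# Stacking both regions yields up to 14 effective wavelength channels,
# i.e. the types of color components are substantially increased.
all_responses = np.vstack([responses_r12_1, responses_r12_2])
print(all_responses.shape)  # → (14, 61)
```

In this toy model the same color filter yields a different spectral channel with and without the IR cut filter, which is the mechanism by which the wavelength resolution improves.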
  • In FIG. 9, the case where an IR cut filter is provided in some of the sensing pixels in the configuration shown in FIG. 7 has been described as an example, but the same applies to the examples shown in FIG. 4, FIG. 5, FIG. 6, and FIG. 8.
  • For example, when the camera module 11 is configured as shown in FIG. 4, FIG. 5, FIG. 6, or FIG. 8, an IR cut filter may be provided in the sensing pixels in one of the sensing region R12-1 and the sensing region R12-2, and no IR cut filter may be provided in the sensing pixels in the other sensing region R12. This also substantially increases the types of color filters and can improve the wavelength resolution.
  • Furthermore, the sensing region R12 may be provided at any position, and any number of sensing regions R12 may be provided.
  • For example, sensing regions R12 may be provided adjacent to the upper, lower, left, and right end portions of the imaging region R11.
  • In FIG. 10, parts corresponding to those in FIG. 2 are given the same reference numerals, and the description thereof will be omitted as appropriate.
  • FIG. 10 is a view of the camera module 11 as viewed from the optical axis direction (Z direction) of the imaging optical system 22.
  • In this example, the region at the central portion of the light receiving surface of the imaging unit 21, that is, of the pixel array 51, is used as the imaging region R11. The region adjacent to the left of the imaging region R11 in the figure is the sensing region R12-1, the region adjacent to the right is the sensing region R12-2, the region adjacent above is the sensing region R12-3, and the region adjacent below is the sensing region R12-4.
  • That is, the sensing regions R12-1 to R12-4 are provided at the left end, the right end, the upper end, and the lower end of the light receiving surface of the imaging unit 21, respectively.
  • The sensing regions R12-3 and R12-4 are regions in which a plurality of sensing pixels are formed, as in the sensing regions R12-1 and R12-2.
  • In addition, the sensing optical systems 23-1 to 23-4 are provided on the near side of the sensing regions R12-1 to R12-4 in the figure, that is, on the subject side, respectively.
  • The sensing optical system 23-3 is an optical system that guides light from the subject to the sensing region R12-3, and the sensing optical system 23-4 is an optical system that guides light from the subject to the sensing region R12-4.
  • These sensing optical systems 23-3 and 23-4 have the same configuration as the sensing optical systems 23-1 and 23-2.
  • In the example shown in FIG. 10, the sensing regions R12-1 to R12-4 are provided for the imaging region R11. However, the configuration is not limited to this, and it is sufficient that at least one of the sensing regions R12-1 to R12-4 is provided.
  • For example, only the sensing regions R12-1 and R12-2 may be provided for the imaging region R11, or only the sensing regions R12-3 and R12-4 may be provided.
  • Alternatively, only the sensing region R12-1 may be provided for the imaging region R11, only the sensing region R12-2 may be provided, only the sensing region R12-3 may be provided, or only the sensing region R12-4 may be provided.
  • Furthermore, any of the four corners of the light receiving surface of the pixel array 51 may be used as a sensing region.
  • In such a case, the camera module 11 is configured, for example, as shown in FIG. 11. In FIG. 11, parts corresponding to those in FIG. 2 are given the same reference numerals, and the description thereof will be omitted as appropriate.
  • FIG. 11 is a view of the camera module 11 as viewed from the optical axis direction (Z direction) of the imaging optical system 22.
  • In this example, either the region R31-1 or the region R31-2 at the central portion of the light receiving surface of the imaging unit 21, that is, of the pixel array 51, is used as an imaging region. That is, the regions R31-1 and R31-2 correspond to the imaging region R11 shown in FIG. 1, and the imaging pixels described above are formed in the regions R31-1 and R31-2.
  • For example, when an image is captured at an angle of view of 16:9, the region R31-1 is set as the imaging region, and when an image is captured at an angle of view of 4:3, the region R31-2 is set as the imaging region.
  • Hereinafter, the regions R31-1 and R31-2 will be simply referred to as the region R31 unless it is necessary to distinguish them.
  • That is, the camera module 11 may be configured to be able to switch between the 16:9 angle of view and the 4:3 angle of view.
  • In this example, the four corner regions of the light receiving surface of the imaging unit 21, that is, the region R32-1 at the upper left corner of the light receiving surface in the figure, the region R32-2 at the upper right corner, the region R32-3 at the lower right corner, and the region R32-4 at the lower left corner, are not used as imaging regions.
  • At least one of the regions R32-1 to R32-4 is used as a sensing region.
  • the regions R32-1 to R32-4 correspond to the sensing region R12 shown in FIG. 1, and the sensing pixels described above are formed in the regions R32-1 to R32-4.
  • Hereinafter, the regions R32-1 to R32-4 are simply referred to as the region R32 unless there is a need to distinguish them.
  • An imaging optical system 22 is provided on the near side in the figure, that is, the subject side, of the regions R31-1 and R31-2 used as imaging regions.
  • the imaging optical system 22 guides light from an external subject to a region R31-1 or a region R31-2 as an imaging region.
  • A sensing optical system 301-1 is provided on the near side in the figure, that is, the subject side, of the region R32-1 used as a sensing region, and a sensing optical system 301-2 is provided on the near side of the region R32-2. Similarly, a sensing optical system 301-3 is provided on the near side of the region R32-3, and a sensing optical system 301-4 is provided on the near side of the region R32-4.
  • These sensing optical systems 301-1 to 301-4 correspond to the sensing optical system 23 shown in FIG. 1, and guide light from an external subject to the regions R32-1 to R32-4, respectively.
  • Hereinafter, the sensing optical systems 301-1 to 301-4 will be simply referred to as the sensing optical system 301 unless it is necessary to distinguish them.
  • Each sensing optical system 301 has a configuration similar to that of the sensing optical system 23 described above.
  • In this way, a region at a corner of the light receiving surface, such as any one or more of the four regions R32, may be used as a sensing region. Also in this case, appropriate spectral information can be obtained.
  • The camera module 11 described above can be applied to various electronic devices, such as imaging devices including digital still cameras and digital video cameras, mobile phones having an imaging function, and other devices having an imaging function.
  • FIG. 12 is a block diagram illustrating a configuration example of an imaging device as an electronic device to which the present technology is applied.
  • An imaging device 501 illustrated in FIG. 12 includes an optical system 511, a shutter device 512, a solid-state imaging device 513, a control circuit 514, a signal processing circuit 515, a monitor 516, and a memory 517, and is capable of capturing still images and moving images.
  • the optical system 511 includes one or a plurality of lenses, guides light from a subject (incident light) to the solid-state imaging device 513, and forms an image on the light receiving surface of the solid-state imaging device 513.
  • the shutter device 512 is disposed between the optical system 511 and the solid-state imaging device 513, and controls the light irradiation period and the light shielding period to the solid-state imaging device 513 according to the control of the control circuit 514.
  • the solid-state imaging device 513 accumulates signal charges for a certain period in accordance with the light imaged on the light receiving surface via the optical system 511 and the shutter device 512.
  • the signal charge stored in the solid-state imaging device 513 is transferred in accordance with a drive signal (timing signal) supplied from the control circuit 514.
  • the control circuit 514 outputs a drive signal for controlling the transfer operation of the solid-state imaging device 513 and the shutter operation of the shutter device 512 to drive the solid-state imaging device 513 and the shutter device 512.
  • the signal processing circuit 515 performs various types of signal processing on the signal charge output from the solid-state imaging device 513.
  • An image (image data) obtained by the signal processing circuit 515 performing signal processing is supplied to a monitor 516 for display, or supplied to a memory 517 for recording.
  • the present technology can also be applied to the imaging device 501 configured as described above. That is, for example, the optical system 511 corresponds to the imaging optical system 22 and the sensing optical system 23, and the solid-state imaging device 513 corresponds to the imaging unit 21. In other words, the optical system 511 to the solid-state imaging device 513 correspond to the camera module 11. Further, spectral information is generated by the signal processing circuit 515 based on the pixel signal output from the sensing pixel of the solid-state imaging device 513.
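The signal flow through the imaging device 501 can be sketched structurally as follows (a minimal illustrative model; the class and method names are hypothetical, and each hardware block is reduced to a placeholder stage):

```python
from dataclasses import dataclass, field

@dataclass
class ImagingDevice501:
    """Toy model of FIG. 12: optical system 511 -> shutter device 512 ->
    solid-state imaging device 513 -> signal processing circuit 515,
    with results stored in place of monitor 516 / memory 517."""
    frames: list = field(default_factory=list)

    def optical_system(self, light):       # optical system 511 (placeholder)
        return light

    def shutter(self, light):              # shutter device 512 (placeholder)
        return light

    def solid_state_sensor(self, light):   # photoelectric conversion (toy gain)
        return [v * 0.9 for v in light]

    def signal_process(self, charge):      # signal processing circuit 515
        return [round(v, 2) for v in charge]

    def capture(self, incident_light):
        focused = self.optical_system(incident_light)
        exposed = self.shutter(focused)
        charge = self.solid_state_sensor(exposed)
        image = self.signal_process(charge)
        self.frames.append(image)          # supply to monitor 516 / memory 517
        return image

device = ImagingDevice501()
print(device.capture([1.0, 0.5]))  # → [0.9, 0.45]
```

The ordering of the stages mirrors the description above; the numeric values are arbitrary stand-ins for light intensities and signal charges.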
  • FIG. 13 is a view showing a use example using the camera module 11 described above.
  • the camera module 11 described above can be used, for example, in various cases for sensing light such as visible light, infrared light, ultraviolet light, and X-rays as described below.
  • Devices that capture images for viewing, such as digital cameras and portable devices with a camera function
  • Devices used for traffic, such as on-vehicle sensors that capture images of the rear, the surroundings, and the interior of a car for safe driving including automatic stop and recognition of the driver's condition, monitoring cameras for monitoring traveling vehicles and roads, and distance measuring sensors for measuring the distance between vehicles
  • Devices used for home appliances such as TVs, refrigerators, and air conditioners, to capture a user's gesture and operate the device according to the gesture
  • Devices used for medical care and healthcare, such as endoscopes and devices that perform blood vessel imaging by receiving infrared light
  • Devices used for security, such as surveillance cameras for crime prevention and cameras for personal identification
  • Devices used for beauty care, such as skin measuring instruments for photographing the skin and microscopes for photographing the scalp
  • Devices used for sports, such as action cameras and wearable cameras for sports applications
  • Devices used for agriculture, such as cameras for monitoring the condition of fields and crops
  • In addition, the present technology can also be configured as follows.
  • (1) A camera module including: a pixel array unit having an imaging region provided with pixels for capturing an image, and a sensing region provided with pixels capable of acquiring spectral information different from at least that of the pixels of the imaging region; an imaging optical system that guides light from the outside to the imaging region; and a sensing optical system that guides light from the outside to the sensing region.
  • (2) The camera module according to (1), wherein the pixel array in the sensing region is a pixel array in which an array unit including pixels each provided with one of a plurality of different color filters is repeatedly arranged.
  • (4) The camera module according to any one of (1) to (3), wherein the imaging region is provided at the central portion of the pixel array unit, and the sensing region is provided in at least one of the upper end, the lower end, the left end, and the right end of the pixel array unit.
  • (5) The camera module according to any one of (1) to (3), wherein the imaging region is provided at the central portion of the pixel array unit, and the sensing region is provided in at least one of the four corner regions of the pixel array unit.
  • (6) The camera module in which the sensing optical system is configured by a pinhole or a pinhole array provided immediately above the sensing region.
  • (7) The camera module according to (6), wherein the pixel array in the sensing region is a pixel array in which an array unit including pixels each provided with one of a plurality of different color filters is repeatedly arranged, and the area of the opening of the pinhole, or the area of the opening of each pinhole constituting the pinhole array, is larger than the area of the array unit.
  • (8) The camera module according to (6) or (7), wherein the ratio of the length in the depth direction to the length in the direction perpendicular to the depth direction in the pinhole, or in each pinhole constituting the pinhole array, is substantially the same as the ratio of the focal length of the imaging optical system to the length of the imaging region in the direction perpendicular to the depth direction.
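The ratio condition of (8) can be checked with a small numerical sketch (the dimensions below are hypothetical and do not come from the disclosure): if the pinhole's depth-to-opening ratio d/w equals the imaging system's focal-length-to-region-width ratio f/W, the two apertures cover substantially the same angle of view.

```python
import math

f = 4.0   # focal length of the imaging optical system, mm (assumed)
W = 5.0   # length of the imaging region perpendicular to the depth direction, mm (assumed)
w = 0.1   # pinhole opening width, mm (assumed)

# Choose the pinhole depth so that d / w == f / W, as in (8).
d = w * f / W

# Half-angle geometry: both apertures then see the same field of view.
fov_imaging = 2 * math.degrees(math.atan(W / (2 * f)))
fov_pinhole = 2 * math.degrees(math.atan(w / (2 * d)))
print(round(d, 3), round(fov_imaging, 1), round(fov_pinhole, 1))  # → 0.08 64.0 64.0
```

This is why matching the two ratios lets the sensing region observe roughly the same angular range as the captured image.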
  • (9) The camera module according to any one of (6) to (8), wherein the sensing optical system includes the pinhole or the pinhole array, and a diffusion plate disposed on the side of the pinhole or the pinhole array opposite to the sensing region.
  • (10) The camera module according to any one of (1) to (5), wherein the sensing optical system includes one or more optical lenses and one or more diffusion plates disposed on the side of the optical lenses opposite to the sensing region.
  • (11) The camera module according to (10), wherein the optical lens is a cylindrical lens.
  • (12) The camera module according to any one of (1) to (5), wherein the sensing optical system includes one or more imaging lenses.
  • (13) The camera module according to any one of (1) to (5), including a plurality of the sensing optical systems, which guide light from the outside at different angles of view for each of a plurality of the sensing regions.
  • (14) The camera module according to any one of (1) to (13), wherein the pixel array unit is provided with a sensing region consisting of pixels provided with an infrared light shielding filter and a sensing region consisting of pixels not provided with the infrared light shielding filter.
  • (15) An imaging device including: a pixel array unit having an imaging region provided with pixels for capturing an image, and a sensing region provided with pixels capable of acquiring spectral information different from at least that of the pixels of the imaging region; an imaging optical system that guides light from the outside to the imaging region; and a sensing optical system that guides light from the outside to the sensing region.

Abstract

This technology relates to a camera module and an image capture device which make it possible to obtain more appropriate spectral information. This camera module is provided with: a pixel array unit having an imaging region in which pixels for capturing an image are provided, and a sensing region in which pixels different from at least the pixels in the imaging region and capable of acquiring spectral information are provided; an imaging optical system that guides light from outside to the imaging region; and a sensing optical system that guides the light from outside to the sensing region. This technology is applicable to an image capture device.

Description

Camera module and imaging device
The present technology relates to a camera module and an imaging device, and more particularly to a camera module and an imaging device capable of obtaining more appropriate spectral information.
Conventionally, there has been proposed a camera module in which pixels provided with color filters of R (red), G (green), and B (blue) are disposed at the central portion of the light receiving surface, and pixels provided with color filters of other colors different from at least R, G, and B are disposed at the peripheral portion of the light receiving surface (see, for example, Patent Document 1).
In such a camera module, RGB information, that is, color image information of a target subject, is acquired by the R, G, and B pixels at the central portion of the angle of view. In the peripheral portion of the angle of view, that is, near the edge of the angle of view, spectral information of the light source is acquired by pixels of other colors different from R, G, and B. Therefore, not only the target image information but also spectral information can be obtained using a single solid-state imaging device.
JP 2012-59865 A
However, with the above-described technology, there are cases where appropriate spectral information cannot be obtained.
For example, in the above-described camera module, a single optical system is used to guide light from a subject to the pixels for acquiring image information and to the pixels for acquiring spectral information.
Therefore, information on a subject at the center of the angle of view of the camera module is obtained as image information, and information on a subject in the peripheral portion of the angle of view is obtained as spectral information. Consequently, when the object of interest for spectral detection is at the center of the angle of view, appropriate spectral information cannot be obtained.
The present technology has been made in view of such a situation, and makes it possible to obtain more appropriate spectral information.
The camera module according to the first aspect of the present technology includes: a pixel array unit having an imaging region provided with pixels for capturing an image, and a sensing region provided with pixels capable of acquiring spectral information different from at least that of the pixels of the imaging region; an imaging optical system that guides light from the outside to the imaging region; and a sensing optical system that guides light from the outside to the sensing region.
In the first aspect of the present technology, a pixel array unit having an imaging region provided with pixels for capturing an image and a sensing region provided with pixels capable of acquiring spectral information different from at least that of the pixels of the imaging region, an imaging optical system that guides light from the outside to the imaging region, and a sensing optical system that guides light from the outside to the sensing region are provided in the camera module.
The imaging device according to the second aspect of the present technology is an imaging device similar to the camera module according to the first aspect.
According to the first and second aspects of the present technology, more appropriate spectral information can be obtained.
Note that the effects described here are not necessarily limited, and may be any of the effects described in the present disclosure.
FIG. 1 is a diagram for explaining the present technology.
FIG. 2 is a diagram showing a configuration example of a camera module.
FIG. 3 is a diagram showing a configuration example of an imaging unit.
FIGS. 4 to 9 are diagrams showing configuration examples of a sensing optical system.
FIGS. 10 and 11 are diagrams showing other arrangement examples of sensing regions.
FIG. 12 is a diagram showing a configuration example of an imaging device.
FIG. 13 is a diagram showing a usage example of a camera module.
Hereinafter, embodiments to which the present technology is applied will be described with reference to the drawings.
<Overview of the present technology>
The present technology arranges, on the light receiving surface of an imaging unit, pixels for obtaining target image information and pixels for obtaining spectral information, and provides an optical system for obtaining the image information and a separate optical system for obtaining the spectral information, so that appropriate spectral information can be obtained.
For example, the present technology can be applied to a camera module having an image sensor and an optical system, and to various electronic devices including such a camera module, such as digital still cameras, digital video cameras, and mobile phones.
The information acquired together with the image information is not limited to spectral information, and may be any information relating to the subject of the image information.
That is, the information relating to the subject of the image information may be, for example, distance information to the subject, parallax information or shape information of the subject, information on the heat of the subject, or information for HDR (High Dynamic Range).
In the following, the description will be continued taking as an example a case where the present technology is applied to a camera module capable of acquiring image information and spectral information.
First, an overview of the present technology will be described.
A camera module to which the present technology is applied is configured, for example, as shown in FIG. 1. FIG. 1 shows a cross section of the camera module 11 as viewed from a direction perpendicular to the optical axis direction of the optical system of the camera module 11.
The camera module 11 includes an imaging unit 21, an imaging optical system 22, a sensing optical system 23-1, and a sensing optical system 23-2.
The imaging unit 21 is formed of a solid-state imaging device such as a CMOS (Complementary Metal Oxide Semiconductor) image sensor, for example, and a plurality of pixels that receive and photoelectrically convert light incident from an external subject are arranged on the light receiving surface of the imaging unit 21.
Here, pixels for capturing image information of a subject, that is, an image of the subject, are provided in the imaging region R11, which is the region at the central portion of the light receiving surface of the imaging unit 21. In particular, in this example, the region of the light receiving surface surrounded by the light shielding plate 24-1 and the light shielding plate 24-2, which are disposed substantially perpendicular to the light receiving surface, is the imaging region R11.
Hereinafter, the pixels provided in the imaging region R11 are also referred to as imaging pixels, and the image formed from the pixel signals obtained by those imaging pixels, that is, the image of the subject, is also referred to as a captured image.
On the other hand, pixels for obtaining spectral information of the subject are provided in the regions near the edges of the light receiving surface of the imaging unit 21, that is, in the sensing region R12-1 and the sensing region R12-2, which are peripheral regions of the light receiving surface.
In particular, in this example, the region of the light receiving surface of the imaging unit 21 surrounded by the light shielding plate 24-1 and the light shielding plate 24-3, which are disposed substantially perpendicular to the light receiving surface, is the sensing region R12-1. Similarly, the region of the light receiving surface surrounded by the light shielding plate 24-2 and the light shielding plate 24-4 is the sensing region R12-2.
Hereinafter, when it is not necessary to distinguish the light shielding plates 24-1 to 24-4, they are simply referred to as the light shielding plate 24, and when it is not necessary to distinguish the sensing region R12-1 and the sensing region R12-2, they are simply referred to as the sensing region R12.
Furthermore, hereinafter, the pixels provided in the sensing region R12 are also referred to as sensing pixels, and an image formed from the pixel signals obtained by those sensing pixels is also referred to as a sensing image.
The pixels provided in the sensing region R12 include at least pixels from which spectral information different from the spectral information obtained by the pixels provided in the imaging region R11 can be obtained.
 例えば撮像画像がRGBのカラー画像であるとする。この場合、イメージング領域R11には、R(赤)の色のカラーフィルタが設けられた画素(以下、R画素とも称する)、G(緑)の色のカラーフィルタが設けられた画素(以下、G画素とも称する)、およびB(青)の色のカラーフィルタが設けられた画素(以下、B画素とも称する)がイメージング画素として配置されている。例えばイメージング領域R11では、R画素、G画素、およびB画素がベイヤー配列で配置されている。 For example, it is assumed that the captured image is a RGB color image. In this case, in the imaging region R11, a pixel provided with a color filter of R (red) (hereinafter also referred to as R pixel) and a pixel provided with a G (green) color filter (hereinafter referred to as G) Pixels (also referred to as pixels) and pixels provided with color filters of B (blue) (hereinafter also referred to as B pixels) are arranged as imaging pixels. For example, in the imaging region R11, R pixels, G pixels, and B pixels are arranged in a Bayer arrangement.
 このようなイメージング領域R11に設けられた画素からは、R、G、およびBの各色成分についての分光情報が得られることになる。 Spectral information on each of the R, G, and B color components can be obtained from the pixels provided in such an imaging region R11.
 これに対して、センシング領域R12には、少なくともR、G、およびBの各色とは異なる他の色のカラーフィルタが設けられた画素がセンシング画素として配置されている。 On the other hand, in the sensing area R12, pixels provided with color filters of other colors different from at least the colors R, G, and B are disposed as sensing pixels.
 例えばR、G、およびBの各色とは異なる他の色は、W(ホワイト)、C(シアン)、M(マゼンダ)、Y(イエロー)などとされる。 For example, other colors different from the colors R, G, and B are W (white), C (cyan), M (magenta), Y (yellow), and the like.
 以下では、特にWのカラーフィルタが設けられた画素をW画素とも称し、Cのカラーフィルタが設けられた画素をC画素とも称し、Mのカラーフィルタが設けられた画素をM画素とも称し、Yのカラーフィルタが設けられた画素をY画素とも称する。 Hereinafter, in particular, a pixel provided with a W color filter is also referred to as a W pixel, a pixel provided with a C color filter is also referred to as a C pixel, and a pixel provided with an M color filter is also referred to as an M pixel. The pixel provided with the color filter of is also referred to as a Y pixel.
 その他、例えばセンシング領域R12には、IR(赤外)のカラーフィルタが設けられた画素(以下、IR画素とも称する)や、E(エメラルド)のカラーフィルタが設けられた画素(以下、E画素とも称する)などがセンシング画素として設けられても勿論よい。 In addition, pixels provided with an IR (infrared) filter (hereinafter also referred to as IR pixels), pixels provided with an E (emerald) color filter (hereinafter also referred to as E pixels), and the like may of course also be provided in the sensing region R12 as sensing pixels.
 センシング領域R12では、複数の異なる色の画素が所定パターンで配列されている。例えば、複数の異なる色の画素、つまり複数の異なるカラーフィルタが設けられた画素が互いに隣接するように並べられた所定の配列パターンを配列単位と称することとする。 In the sensing region R12, a plurality of pixels of different colors are arranged in a predetermined pattern. For example, a predetermined arrangement pattern in which pixels of a plurality of different colors, that is, pixels provided with a plurality of different color filters are arranged adjacent to each other is referred to as an array unit.
 配列単位には、センシング領域R12内にある全色成分の画素が含まれており、配列単位は、複数の色成分の画素の組み合わせからなる繰り返しパターンの最小単位、つまり周期的に繰り返し配置される画素の配列パターンの最小単位となる画素配列の領域である。例えばセンシング領域R12にC画素、M画素、およびY画素が設けられているものとすると、配列単位には、それらのC画素、M画素、およびY画素の各色成分の画素が少なくとも1つは含まれている。 An array unit includes pixels of all the color components present in the sensing region R12, and is the minimum unit of the repeating pattern formed by the combination of pixels of the plurality of color components, that is, the region of the pixel array that serves as the minimum unit of the periodically repeated pixel arrangement pattern. For example, assuming that C pixels, M pixels, and Y pixels are provided in the sensing region R12, the array unit includes at least one pixel of each of the C, M, and Y color components.
 センシング領域R12の画素配列は、このような配列単位が縦方向および横方向に、つまり行列状に繰り返し配置された(周期的に配置された)画素配列となっている。 The pixel arrangement of the sensing region R12 is a pixel arrangement in which such arrangement units are repeatedly arranged in a vertical direction and a horizontal direction, that is, in a matrix form (periodically arranged).
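The periodicity described above can be sketched as follows. This is illustrative only: the particular 2×2 unit of C, M, Y, and W sensing pixels is an assumption (the patent allows other colors and unit sizes), and the helper names are hypothetical.

```python
# Illustrative only: a 2x2 array unit of C, M, Y, W sensing pixels is
# assumed; the patent permits other color sets and unit sizes.
ARRAY_UNIT = [["C", "M"],
              ["Y", "W"]]

def tile_unit(unit, rows, cols):
    """Repeat the array unit periodically in the row and column directions."""
    ur, uc = len(unit), len(unit[0])
    return [[unit[r % ur][c % uc] for c in range(cols)] for r in range(rows)]

def is_period(region, pr, pc):
    """True if the region repeats with period (pr, pc) in rows/columns."""
    return all(region[r][c] == region[r % pr][c % pc]
               for r in range(len(region)) for c in range(len(region[0])))

sensing = tile_unit(ARRAY_UNIT, 6, 8)
# (2, 2) is a valid period, while (1, 1) is not, so the 2x2 unit is indeed
# the minimal repeating unit for this four-color arrangement.
```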
 換言すれば、センシング領域R12の画素配列は、複数の各色成分の画素が周期的に繰り返し配置された画素配列となっている。 In other words, the pixel array of the sensing region R12 is a pixel array in which a plurality of pixels of each color component are periodically and repeatedly arranged.
 W画素やC画素、M画素、Y画素、IR画素、E画素は、R画素、G画素、およびB画素の各色成分とは異なる色成分(波長成分)のカラーフィルタが設けられている画素である。したがって、W画素やC画素、M画素、Y画素、IR画素、E画素からは、R画素、G画素、およびB画素の各色(波長)とは異なる色の情報、つまり異なる分光情報を取得可能である。 The W, C, M, Y, IR, and E pixels are pixels provided with color filters of color components (wavelength components) different from those of the R, G, and B pixels. Therefore, from the W, C, M, Y, IR, and E pixels, it is possible to obtain information on colors different from the colors (wavelengths) of the R, G, and B pixels, that is, different spectral information.
 以下では、イメージング領域R11にはR画素、G画素、およびB画素が設けられており、センシング領域R12には少なくともW画素、C画素、M画素、Y画素、IR画素、およびE画素のうちの何れかの画素が設けられているものとして説明を続ける。なお、センシング領域R12にR画素やG画素、B画素が設けられていても勿論よい。また、イメージング領域R11内の画素に設けられるカラーフィルタや、センシング領域R12内の画素に設けられるカラーフィルタは、どのようにして形成されてもよい。すなわち、イメージング領域R11やセンシング領域R12の各画素のカラーフィルタは、例えば一般的な吸収型有機材料や誘電体多層膜などでもよいし、プラズモン共鳴体により構成されたものであってもよい。画素のカラーフィルタをプラズモン共鳴体により構成する技術については、例えば特開2012-59865号公報等に記載されている。 In the following, the description continues on the assumption that the imaging region R11 is provided with R pixels, G pixels, and B pixels, and that the sensing region R12 is provided with at least one of W pixels, C pixels, M pixels, Y pixels, IR pixels, and E pixels. Of course, R pixels, G pixels, and B pixels may also be provided in the sensing region R12. Furthermore, the color filters provided on the pixels in the imaging region R11 and the color filters provided on the pixels in the sensing region R12 may be formed in any manner. That is, the color filter of each pixel in the imaging region R11 and the sensing region R12 may be, for example, a general absorption-type organic material or a dielectric multilayer film, or may be formed of a plasmon resonator. A technique for forming pixel color filters from plasmon resonators is described in, for example, Japanese Patent Application Laid-Open No. 2012-59865.
 センシング領域R12に配置されたセンシング画素で外部からの光を受光して光電変換することで得られた画素信号、つまりセンシング画像から得られる分光情報は、各色成分(波長成分)の光の受光量を示す情報である。すなわち、カメラモジュール11の画角内にある光源からの光のスペクトル情報である。 A pixel signal obtained when a sensing pixel arranged in the sensing region R12 receives light from the outside and photoelectrically converts it, that is, the spectral information obtained from the sensing image, is information indicating the amount of received light of each color component (wavelength component). In other words, it is spectral information on the light from the light source within the angle of view of the camera module 11.
 このような分光情報は、例えば撮像画像に対するホワイトバランス調整等の各種の画像処理や、撮像画像内の被写体の活性状況の検出等のセンシング処理などに利用される。 Such spectral information is used, for example, in various image processing such as white balance adjustment for a captured image, sensing processing such as detection of an activity state of a subject in the captured image, and the like.
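One common way such illuminant spectral information can feed white balance adjustment is a per-channel gain computation. The sketch below is an assumption about one typical gray-world-style use, not the patent's prescribed processing; the function names are hypothetical.

```python
def white_balance_gains(illuminant_rgb):
    """Per-channel gains that map the measured illuminant to neutral gray.

    illuminant_rgb: average (R, G, B) response to the scene illuminant,
    e.g. derived from the sensing pixels' spectral information.
    """
    r, g, b = illuminant_rgb
    return (g / r, 1.0, g / b)  # normalize to the green channel

def apply_gains(pixel, gains):
    """Apply per-channel white-balance gains to one RGB pixel."""
    return tuple(v * k for v, k in zip(pixel, gains))

# A warm (reddish) illuminant measurement of (200, 100, 50) yields gains
# (0.5, 1.0, 2.0), which map the illuminant itself to neutral (100, 100, 100).
gains = white_balance_gains((200.0, 100.0, 50.0))
```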
 イメージング光学系22は、1または複数の光学レンズからなるイメージングレンズであり、外部から入射した光を撮像部21のイメージング領域R11へと導く。すなわち、イメージング光学系22は、外部から入射した光を集光してイメージング領域R11上に結像させる。 The imaging optical system 22 is an imaging lens composed of one or a plurality of optical lenses, and guides light incident from the outside to the imaging area R11 of the imaging unit 21. That is, the imaging optical system 22 condenses light incident from the outside to form an image on the imaging region R11.
 センシング光学系23-1は、例えば拡散板(ディフューザ)やシリンドリカルレンズなどからなり、外部から入射した光を撮像部21のセンシング領域R12-1へと導く。同様に、センシング光学系23-2は、例えば拡散板やシリンドリカルレンズなどからなり、外部から入射した光を撮像部21のセンシング領域R12-2へと導く。 The sensing optical system 23-1 is made of, for example, a diffusion plate (diffuser), a cylindrical lens, or the like, and guides light incident from the outside to the sensing region R12-1 of the imaging unit 21. Similarly, the sensing optical system 23-2 includes, for example, a diffusion plate or a cylindrical lens, and guides light incident from the outside to the sensing area R12-2 of the imaging unit 21.
 なお、以下、センシング光学系23-1およびセンシング光学系23-2を特に区別する必要のない場合、単にセンシング光学系23とも称することとする。 Hereinafter, sensing optical system 23-1 and sensing optical system 23-2 will be simply referred to as sensing optical system 23 unless it is necessary to distinguish them.
 以上のような構成とされたカメラモジュール11では、撮像画像とセンシング画像を得るために、それぞれ異なる光学系が用いられている。 In the camera module 11 configured as described above, different optical systems are used to obtain a captured image and a sensing image.
 例えば撮像画像とセンシング画像を得るために、1つのイメージングレンズが用いられるとする。つまり、イメージング領域R11にもセンシング領域R12にも、1つのイメージングレンズにより集光された光が入射するものとする。 For example, it is assumed that one imaging lens is used to obtain a captured image and a sensing image. That is, light collected by one imaging lens is incident on the imaging region R11 and the sensing region R12.
 この場合、撮像画像はイメージングレンズの画角の中心部分にある被写体の画像、つまりイメージングレンズの視野の中心部分にある被写体の画像となる。また、センシング画像はイメージングレンズの画角の周辺部分(端近傍部分)にある被写体の画像となる。 In this case, the captured image is an image of the subject at the center of the angle of view of the imaging lens, that is, an image of the subject at the center of the field of view of the imaging lens. In addition, the sensing image is an image of a subject in the peripheral portion (portion near the end) of the angle of view of the imaging lens.
 したがって、この例では撮像画像上の被写体に関する分光情報は得られないことになる。換言すれば、分光検出に関する興味物体、つまり分光情報を取得したい被写体がイメージングレンズの画角中心にあるときには、その被写体を対象とした分光情報を得ることができない。すなわち、適切な分光情報を得ることができない。 Therefore, in this example, spectral information on the subject on the captured image can not be obtained. In other words, when an object of interest related to spectral detection, that is, a subject whose spectral information is to be acquired is at the center of the angle of view of the imaging lens, spectral information on the subject can not be obtained. That is, appropriate spectral information can not be obtained.
 一方、図1に示すカメラモジュール11では、撮像画像を得るための光学系としてイメージング光学系22が設けられており、そのイメージング光学系22とは別に、分光情報を得るための光学系としてセンシング光学系23が設けられている。 On the other hand, in the camera module 11 shown in FIG. 1, the imaging optical system 22 is provided as an optical system for obtaining a captured image, and separately from the imaging optical system 22, the sensing optical system 23 is provided as an optical system for obtaining spectral information.
 そのため、カメラモジュール11では、分光検出の興味物体に応じたセンシング光学系23を配置することで、その興味物体の分光情報、つまり適切な分光情報を得ることができる。例えば、適切にセンシング光学系23を配置すれば、イメージング光学系22の画角、つまり撮像画像の画角と同一の画角の分光情報、または撮像画像の画角の中心部分に相当する画角の分光情報を得ることができる。 Therefore, in the camera module 11, by arranging the sensing optical system 23 in accordance with the object of interest for spectral detection, spectral information on that object of interest, that is, appropriate spectral information, can be obtained. For example, if the sensing optical system 23 is appropriately arranged, it is possible to obtain spectral information for the angle of view of the imaging optical system 22, that is, the same angle of view as that of the captured image, or for an angle of view corresponding to the central portion of the angle of view of the captured image.
 この場合、少なくとも撮像画像の中心部分にある被写体について分光情報を得ることができる。すなわち、撮像部21という単一のセンサにより、適切な分光情報を取得することができる。 In this case, spectral information can be obtained at least for the subject at the central portion of the captured image. That is, appropriate spectral information can be acquired with a single sensor, namely the imaging unit 21.
 しかも、この場合、1つの撮像部21に対してイメージング光学系22とセンシング光学系23を設けるだけでよいため、カメラモジュール11の部品数を少なくすることができ、低コスト化および小型化を実現することができる。 Moreover, in this case, since only the imaging optical system 22 and the sensing optical system 23 need to be provided for the single imaging unit 21, the number of parts of the camera module 11 can be reduced, and lower cost and a smaller size can be realized.
 例えば撮像画像の画角と同等の画角の分光情報を得るために、撮像部の複数画素からなる領域ごとに、モジュールとしてレンズとカラーフィルタを配置し、多波長画像を得るようにすることも考えられる。しかし、そのような場合、撮像部とは別にレンズアレイやカラーフィルタアレイを設けなければならないので部品数が多くなり、カメラモジュールを小型化することが困難であり、コストも増加してしまう。 For example, in order to obtain spectral information for an angle of view equivalent to that of the captured image, it is also conceivable to arrange a lens and a color filter as a module for each region made up of a plurality of pixels of the imaging unit, thereby obtaining a multi-wavelength image. In such a case, however, a lens array and a color filter array must be provided separately from the imaging unit, so the number of parts increases, making it difficult to miniaturize the camera module and also increasing the cost.
 これに対して、カメラモジュール11では、撮像部とは別にカラーフィルタアレイを設ける必要はなく、カラーフィルタごとにレンズを設ける必要もないため、部品数が少なくて済み、カメラモジュール11の小型化と低コスト化を実現することができる。 In contrast, in the camera module 11, there is no need to provide a color filter array separately from the imaging unit, nor to provide a lens for each color filter, so the number of parts can be kept small, and both miniaturization and cost reduction of the camera module 11 can be realized.
 なお、センシング光学系23は、必ずしも光学レンズ等の光学素子(光学部材)を有している必要はなく、センシング領域R12に外部からの光を導くことができるものであれば、どのようなものであってもよい。 Note that the sensing optical system 23 does not necessarily have to include an optical element (optical member) such as an optical lens, and may be anything capable of guiding light from the outside to the sensing region R12.
 すなわち、例えばセンシング光学系23は、遮光板24等により形成される円柱構造や多角柱構造のピンホールからなるピンホールアレイや、そのようなピンホールアレイと拡散板、レンズアレイ、レンズアレイと拡散板などとされてもよい。 That is, for example, the sensing optical system 23 may be a pinhole array made up of pinholes of a cylindrical or polygonal-prism structure formed by the light shielding plate 24 or the like, such a pinhole array combined with a diffusion plate, a lens array, or a lens array combined with a diffusion plate.
〈第1の実施の形態〉 <First Embodiment>
〈カメラモジュールの構成例〉 <Configuration Example of Camera Module>
 それでは、以下、本技術を適用した、さらに具体的な実施の形態について説明する。 Now, more specific embodiments to which the present technology is applied will be described below.
 図2は、本技術を適用したカメラモジュールの構成例を示す図である。なお、図2において図1における場合と対応する部分には同一の符号を付してあり、その説明は適宜省略する。 FIG. 2 is a diagram showing a configuration example of a camera module to which the present technology is applied. In FIG. 2, portions corresponding to the case in FIG. 1 are denoted with the same reference numerals, and the description thereof will be appropriately omitted.
 図2は、カメラモジュール11をイメージング光学系22の光軸方向から見た図となっている。以下では、イメージング光学系22の光軸方向をZ方向とも称し、図中、左右方向をX方向とも称し、図中、上下方向をY方向とも称することとする。これらのX方向、Y方向、およびZ方向は、互いに直交する方向である。 FIG. 2 is a view of the camera module 11 as seen from the optical axis direction of the imaging optical system 22. Hereinafter, the optical axis direction of the imaging optical system 22 is also referred to as the Z direction, the left-right direction in the figure is also referred to as the X direction, and the up-down direction in the figure is also referred to as the Y direction. The X, Y, and Z directions are mutually orthogonal.
 カメラモジュール11では、撮像部21の受光面の中心部分の領域がイメージング領域R11となっている。また、撮像部21の受光面の図中、左端部分の領域がセンシング領域R12-1となっており、同様に撮像部21の受光面の図中、右端部分の領域がセンシング領域R12-2となっている。 In the camera module 11, the region at the center of the light receiving surface of the imaging unit 21 is the imaging region R11. In addition, the region at the left end of the light receiving surface of the imaging unit 21 in the figure is the sensing region R12-1, and similarly, the region at the right end of the light receiving surface of the imaging unit 21 in the figure is the sensing region R12-2.
 さらに、イメージング領域R11の図中、手前側にはイメージング光学系22が配置されている。特に、ここではZ方向から見たときに、つまりXY平面において、イメージング領域R11の面積よりもイメージング光学系22の断面の領域の面積が小さくなるようになされている。より詳細には、Z方向から見たときに、イメージング光学系22がイメージング領域R11内に含まれるようになされている。 Further, an imaging optical system 22 is disposed on the front side in the drawing of the imaging region R11. In particular, here, when viewed from the Z direction, that is, in the XY plane, the area of the area of the cross section of the imaging optical system 22 is smaller than the area of the imaging area R11. More specifically, when viewed from the Z direction, the imaging optical system 22 is included in the imaging region R11.
 センシング領域R12-1の図中、手前側にはセンシング光学系23-1が配置されており、センシング領域R12-2の図中、手前側にはセンシング光学系23-2が配置されている。 A sensing optical system 23-1 is disposed on the near side in the drawing of the sensing area R12-1, and a sensing optical system 23-2 is disposed on the near side in the drawing of the sensing area R12-2.
 カメラモジュール11では、Z方向から見たときに、イメージング光学系22の断面の領域がイメージング領域R11よりも小さくなるようにすることで、イメージング光学系22に隣接してセンシング領域R12の直上にセンシング光学系23を配置することができる。これにより、イメージング光学系22の画角とセンシング光学系23の画角とにずれが生じてしまうことを防止し、イメージング光学系22の画角と略同じ画角の分光情報を得ることができるようになる。 In the camera module 11, by making the cross-sectional region of the imaging optical system 22 smaller than the imaging region R11 as viewed from the Z direction, the sensing optical system 23 can be arranged adjacent to the imaging optical system 22, directly above the sensing region R12. This prevents a shift from occurring between the angle of view of the imaging optical system 22 and the angle of view of the sensing optical system 23, making it possible to obtain spectral information for substantially the same angle of view as that of the imaging optical system 22.
〈撮像部の構成例〉 <Configuration Example of Imaging Unit>
 また、撮像部21は、例えば図3に示すように構成される。 The imaging unit 21 is configured, for example, as shown in FIG. 3.
 図3に示す撮像部21は、画素アレイ51、行走査回路52、PLL(Phase Locked Loop)53、DAC(Digital Analog Converter)54、カラムADC(Analog Digital Converter)回路55、列走査回路56、およびセンスアンプ57を有している。 The imaging unit 21 illustrated in FIG. 3 includes a pixel array 51, a row scanning circuit 52, a PLL (Phase Locked Loop) 53, a DAC (Digital Analog Converter) 54, a column ADC (Analog Digital Converter) circuit 55, a column scanning circuit 56, and a sense amplifier 57.
 画素アレイ51は、2次元に配列された複数の画素61を有している。画素61は、行走査回路52に接続される水平信号線Hと、カラムADC回路55に接続される垂直信号線Vとが交差する点にそれぞれ配置されており、光電変換を行うフォトダイオードと、蓄積された信号を読み出すための数種類のトランジスタで構成される。 The pixel array 51 has a plurality of pixels 61 arranged two-dimensionally. Each pixel 61 is disposed at a point where a horizontal signal line H connected to the row scanning circuit 52 intersects a vertical signal line V connected to the column ADC circuit 55, and is composed of a photodiode that performs photoelectric conversion and several types of transistors for reading out the accumulated signal.
 すなわち、画素61は、図3の右側に拡大して示されているように、フォトダイオード71、転送トランジスタ72、フローティングディフュージョン73、増幅トランジスタ74、選択トランジスタ75、およびリセットトランジスタ76を有している。 That is, as shown enlarged on the right side of FIG. 3, the pixel 61 includes a photodiode 71, a transfer transistor 72, a floating diffusion 73, an amplification transistor 74, a selection transistor 75, and a reset transistor 76.
 フォトダイオード71は、外部から入射した光を受光して光電変換する光電変換素子であり、光電変換により得られた電荷を蓄積する。フォトダイオード71に蓄積された電荷は、転送トランジスタ72を介してフローティングディフュージョン73に転送される。 The photodiode 71 is a photoelectric conversion element that receives light incident from the outside and performs photoelectric conversion, and accumulates the charge obtained by photoelectric conversion. The charge stored in the photodiode 71 is transferred to the floating diffusion 73 via the transfer transistor 72.
 フローティングディフュージョン73は、増幅トランジスタ74のゲートに接続されている。画素61が信号の読み出しの対象となると、行走査回路52は水平信号線Hを介して選択トランジスタ75を制御し、選択トランジスタ75をオンさせる。 The floating diffusion 73 is connected to the gate of the amplification transistor 74. When the pixel 61 is a target of signal readout, the row scanning circuit 52 controls the selection transistor 75 via the horizontal signal line H to turn on the selection transistor 75.
 このようにして選択トランジスタ75がオンされると、画素61が選択状態となる。選択された画素61の信号は、増幅トランジスタ74をソースフォロワ(Source Follower)駆動することで、フォトダイオード71に蓄積された電荷の蓄積電荷量に対応する画素信号として、垂直信号線Vに読み出される。すなわち、フォトダイオード71から転送され、フローティングディフュージョン73に蓄積された電荷に応じた電圧信号が画素信号として、選択トランジスタ75から垂直信号線Vに出力される。 When the selection transistor 75 is turned on in this manner, the pixel 61 enters the selected state. By driving the amplification transistor 74 as a source follower, the signal of the selected pixel 61 is read out to the vertical signal line V as a pixel signal corresponding to the amount of charge accumulated in the photodiode 71. That is, a voltage signal corresponding to the charge transferred from the photodiode 71 and accumulated in the floating diffusion 73 is output from the selection transistor 75 to the vertical signal line V as the pixel signal.
 また、行走査回路52によりリセットトランジスタ76がオンされると、フローティングディフュージョン73に蓄積された電荷が排出され、フローティングディフュージョン73がリセットされる。 When the reset transistor 76 is turned on by the row scanning circuit 52, the charge stored in the floating diffusion 73 is discharged, and the floating diffusion 73 is reset.
 行走査回路52は、画素アレイ51の画素61を駆動(転送や、選択、リセットなど)するための駆動信号を、行ごとに順次、出力する。PLL53は、外部から供給されるクロック信号に基づいて、撮像部21内部の各ブロックの駆動に必要な所定の周波数のクロック信号を生成して出力する。 The row scanning circuit 52 sequentially outputs driving signals for driving (transferring, selecting, resetting, and the like) the pixels 61 of the pixel array 51 for each row. The PLL 53 generates and outputs a clock signal of a predetermined frequency required to drive each block in the imaging unit 21 based on a clock signal supplied from the outside.
 DAC54は、所定の電圧値から一定の傾きで電圧が降下した後に所定の電圧値に戻る形状(略鋸形状)のランプ信号を生成して出力する。 The DAC 54 generates and outputs a ramp signal having a shape (generally sawtooth shape) that returns to a predetermined voltage value after the voltage drops at a predetermined inclination from a predetermined voltage value.
 カラムADC回路55は、比較器81およびカウンタ82を、画素アレイ51の画素61の列に対応する個数だけ有しており、画素61から出力される画素信号からCDS(Correlated Double Sampling:相関2重サンプリング)動作により信号レベルを抽出して、デジタルの画素信号を出力する。 The column ADC circuit 55 has comparators 81 and counters 82 in a number corresponding to the columns of the pixels 61 of the pixel array 51, extracts the signal level from the pixel signal output from each pixel 61 by a CDS (Correlated Double Sampling) operation, and outputs a digital pixel signal.
 すなわち、比較器81が、DAC54から供給されるランプ信号と、画素61から出力される画素信号(輝度値)とを比較し、その結果得られる比較結果信号をカウンタ82に供給する。そして、カウンタ82が比較器81から出力される比較結果信号に応じて、所定の周波数のカウンタクロック信号をカウントすることで、画素信号をA/D変換する。 That is, the comparator 81 compares the ramp signal supplied from the DAC 54 with the pixel signal (luminance value) output from the pixel 61, and supplies the resultant comparison result signal to the counter 82. Then, the counter 82 A / D converts the pixel signal by counting a counter clock signal of a predetermined frequency in accordance with the comparison result signal output from the comparator 81.
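The ramp-compare conversion described above can be sketched as a simple behavioral model (illustrative only, not the actual circuit): the counter counts clock cycles until the falling ramp crosses the pixel level, and CDS takes the difference between the signal-level and reset-level codes, cancelling the per-pixel reset offset. Integer millivolt levels and all numeric values are assumptions for illustration.

```python
def single_slope_adc(v_pixel_mv, v_start_mv=1000, lsb_mv=1, max_count=1023):
    """Digital code: clock cycles for a falling ramp (1 LSB per clock)
    to reach the sampled pixel level (all levels in millivolts)."""
    count = 0
    while v_start_mv - count * lsb_mv > v_pixel_mv and count < max_count:
        count += 1
    return count

def cds_convert(v_reset_mv, v_signal_mv):
    """Correlated double sampling: signal-level code minus reset-level code."""
    return single_slope_adc(v_signal_mv) - single_slope_adc(v_reset_mv)

# A pixel whose signal sits 200 mV below its reset level converts to a
# code of 200, regardless of where the reset level itself sits.
code = cds_convert(900, 700)
```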
 列走査回路56は、カラムADC回路55のカウンタ82に、順次、所定のタイミングで、画素信号を出力させる信号を供給する。センスアンプ57は、カラムADC回路55から供給される画素信号を増幅し、撮像部21の外部に出力する。 The column scanning circuit 56 sequentially supplies a signal for outputting a pixel signal to the counter 82 of the column ADC circuit 55 at a predetermined timing. The sense amplifier 57 amplifies the pixel signal supplied from the column ADC circuit 55 and outputs the amplified signal to the outside of the imaging unit 21.
 撮像部21では、画素アレイ51の領域部分が受光面となっており、その受光面上に上述したイメージング領域R11とセンシング領域R12とが設けられている。 In the imaging unit 21, the area portion of the pixel array 51 is a light receiving surface, and the imaging region R11 and the sensing region R12 described above are provided on the light receiving surface.
 したがって、画素アレイ51を構成する画素61のうち、イメージング領域R11に設けられた画素61がイメージング画素として機能する。また、画素アレイ51を構成する画素61のうち、センシング領域R12に設けられた画素61がセンシング画素として機能する。また、画素61の開口部分には、RやG、B、C、M、Yなどの各色成分のカラーフィルタが形成されている。 Therefore, among the pixels 61 constituting the pixel array 51, the pixels 61 provided in the imaging region R11 function as imaging pixels. Further, among the pixels 61 constituting the pixel array 51, the pixels 61 provided in the sensing region R12 function as sensing pixels. In addition, color filters of respective color components such as R, G, B, C, M, and Y are formed in the opening portion of the pixel 61.
〈センシング光学系の構成例〉 <Configuration Example of Sensing Optical System>
 続いて、図4を参照して、センシング光学系23のより具体的な構成例について説明する。なお、図4において図2における場合と対応する部分には同一の符号を付してあり、その説明は適宜省略する。 Next, a more specific configuration example of the sensing optical system 23 will be described with reference to FIG. 4. In FIG. 4, portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and their description will be omitted as appropriate.
 図4では、矢印Q11に示す部分は、図2における場合と同様に、カメラモジュール11をZ方向から見た図を示している。 In FIG. 4, the portion shown by the arrow Q11 is a view of the camera module 11 as viewed in the Z direction, as in the case of FIG.
 また、図4において矢印Q12に示す部分は、カメラモジュール11のセンシング領域R12-2の部分を拡大した図であり、矢印Q13に示す部分は、カメラモジュール11のセンシング領域R12-2およびセンシング光学系23-2の一部分を拡大した図である。 The portion indicated by the arrow Q12 in FIG. 4 is an enlarged view of the sensing region R12-2 of the camera module 11, and the portion indicated by the arrow Q13 is an enlarged view of part of the sensing region R12-2 and the sensing optical system 23-2 of the camera module 11.
 矢印Q12に示す部分は、センシング領域R12-2をZ方向から見たときの図を示しており、各四角形は図3に示した画素61に対応するセンシング画素を表している。 The portion shown by the arrow Q12 is a view of the sensing region R12-2 as viewed from the Z direction, and each square represents a sensing pixel corresponding to the pixel 61 shown in FIG.
 特に、ここではセンシング画素を表す四角形に施されたハッチ(模様)は、そのセンシング画素に設けられたカラーフィルタの色を表している。つまり、同じハッチが施されたセンシング画素は、同じカラーフィルタが設けられた、同じ色成分の画素であり、互いに異なるハッチが施された画素は、互いに異なるカラーフィルタが設けられた、互いに異なる色成分の画素である。 In particular, the hatching (pattern) applied to each square representing a sensing pixel here represents the color of the color filter provided on that sensing pixel. That is, sensing pixels with the same hatching are pixels of the same color component provided with the same color filter, and pixels with mutually different hatching are pixels of mutually different color components provided with mutually different color filters.
 この例では、センシング領域R12には、4種類の色成分のセンシング画素が形成されており、互いに隣接する4つのセンシング画素(2画素×2画素)により、上述した配列単位が形成されている。 In this example, sensing pixels of four types of color components are formed in the sensing region R12, and the above-described array unit is formed by four sensing pixels (2 pixels × 2 pixels) adjacent to each other.
 配列単位を構成する2画素×2画素の矩形の領域には、互いに異なる色成分の4つのセンシング画素が設けられており、また配列単位にはセンシング領域R12に設けられた全ての色成分のセンシング画素が含まれている。すなわち、ここでは配列単位には4種類の色成分のセンシング画素が含まれている。 The rectangular region of 2 pixels × 2 pixels constituting the array unit contains four sensing pixels of mutually different color components, and the array unit includes sensing pixels of all the color components provided in the sensing region R12. That is, here the array unit includes sensing pixels of four color components.
 また、センシング領域R12では、配列単位となる2画素×2画素の各領域がセンシング領域R12直上に形成された遮光板121によって囲まれている。この遮光板121は、図1に示した遮光板24に対応する。 In addition, in the sensing region R12, each region of 2 pixels × 2 pixels, which is an arrangement unit, is surrounded by a light shielding plate 121 formed immediately above the sensing region R12. The light shielding plate 121 corresponds to the light shielding plate 24 shown in FIG.
 なお、以下、センシング領域R12における遮光板121により囲まれる矩形の各領域を遮光領域とも称することとする。例えば図4では、2画素×2画素の領域R41が1つの遮光領域となっている。 Hereinafter, each rectangular region surrounded by the light shielding plate 121 in the sensing region R12 is also referred to as a light-shielded region. For example, in FIG. 4, the region R41 of 2 pixels × 2 pixels is one light-shielded region.
 また、ここでは1つの配列単位の領域が1つの遮光領域とされる例について説明したが、遮光領域はセンシング領域R12上に設けられた全ての色成分のセンシング画素が含まれている領域、すなわち少なくとも1つの配列単位の領域を含む領域であればよい。したがって、例えば4画素×4画素の領域、つまり互いに隣接する4つの配列単位からなる領域が1つの遮光領域とされてもよい。 Although an example in which the region of one array unit forms one light-shielded region has been described here, a light-shielded region may be any region containing sensing pixels of all the color components provided in the sensing region R12, that is, any region containing at least one array unit. Therefore, for example, a region of 4 pixels × 4 pixels, that is, a region made up of four mutually adjacent array units, may form one light-shielded region.
 カメラモジュール11のセンシング領域R12-2およびセンシング光学系23-2の部分をX方向から見ると、矢印Q13に示すように、各遮光領域に外部からの光が入射することが分かる。 When the sensing area R12-2 of the camera module 11 and the sensing optical system 23-2 are viewed in the X direction, it can be seen that light from the outside is incident on each light blocking area as shown by the arrow Q13.
 この例では、各配列単位の領域が遮光板121によって囲まれて遮光領域とされており、4つの遮光領域のそれぞれに対して、遮光板121によりピンホール122-1乃至ピンホール122-4のそれぞれが形成されている。 In this example, the region of each array unit is surrounded by the light shielding plate 121 to form a light-shielded region, and the light shielding plate 121 forms pinholes 122-1 to 122-4, one for each of the four light-shielded regions.
 なお、以下、ピンホール122-1乃至ピンホール122-4を特に区別する必要のない場合、単にピンホール122とも称することとする。 Hereinafter, the pinholes 122-1 to 122-4 will be simply referred to as pinholes 122 unless it is necessary to distinguish them.
 図4では、撮像部21の受光面に対して垂直な方向(Z方向)に長い遮光板121が設けられており、遮光板121により形成されたZ方向に長い四角柱形状の空間部分が1つのピンホール122となっている。 In FIG. 4, the light shielding plate 121 is elongated in the direction perpendicular to the light receiving surface of the imaging unit 21 (the Z direction), and each rectangular-prism-shaped space elongated in the Z direction formed by the light shielding plate 121 constitutes one pinhole 122.
 ここでは、各ピンホール122は四角柱形状となっているが、ピンホール122は円柱形状や多角柱形状など、どのような形状であってもよい。 Here, each pinhole 122 has a quadrangular prism shape, but the pinhole 122 may have any shape such as a cylindrical shape or a polygonal prism shape.
 また、1つのピンホール122に注目すると、そのピンホール122に隣接する遮光領域のX方向の長さやY方向の長さに対して、ピンホール122を形成する遮光板121のZ方向の長さがより長くなっている。換言すれば、ピンホール122は、X方向やY方向の長さよりもZ方向の長さがより長い、つまりZ方向の深さが深いアスペクト付きピンホールとなっている。 Focusing on one pinhole 122, the length in the Z direction of the light shielding plate 121 forming that pinhole 122 is longer than the length in the X direction and the length in the Y direction of the light-shielded region adjacent to the pinhole 122. In other words, each pinhole 122 is a high-aspect-ratio pinhole whose length in the Z direction is longer than its length in the X direction or Y direction, that is, whose depth in the Z direction is deep.
 さらに、各ピンホール122の径、すなわちX方向やY方向の直径(長さ)は、遮光領域、つまり配列単位の領域のX方向の長さやY方向の長さよりも長くなるようになされている。換言すれば、ピンホール122の開口部分の面積は、遮光領域、すなわち配列単位の面積よりも大きくなるようになされている。これにより、1つのピンホール122により、外部からの光を少なくとも配列単位を構成するセンシング画素からなる領域へと導くことができる。 Furthermore, the diameter of each pinhole 122, that is, its length in the X direction and Y direction, is made longer than the length in the X direction and Y direction of the light-shielded region, that is, of the region of the array unit. In other words, the area of the opening of the pinhole 122 is made larger than the area of the light-shielded region, that is, of the array unit. Thus, one pinhole 122 can guide light from the outside to at least the region made up of the sensing pixels constituting the array unit.
 この実施の形態では、受光面におけるセンシング領域R12部分の直上に配置された(設けられた)、X方向およびY方向に並ぶ複数のピンホール122からなるピンホールアレイがセンシング光学系23とされている。なお、ここでは複数のピンホール122が設けられているが、センシング領域R12部分の直上に1つのピンホール122のみが設けられて、そのピンホール122がセンシング光学系23とされてもよい。 In this embodiment, a pinhole array made up of a plurality of pinholes 122 arranged in the X direction and the Y direction, disposed (provided) directly above the sensing region R12 portion of the light receiving surface, serves as the sensing optical system 23. Although a plurality of pinholes 122 are provided here, only one pinhole 122 may be provided directly above the sensing region R12, and that single pinhole 122 may serve as the sensing optical system 23.
 カメラモジュール11では、このようなセンシング光学系23を構成する各ピンホール122によって、外部の被写体から入射してくる光がセンシング領域R12の遮光領域内のセンシング画素へと導かれることになる。矢印Q13に示す部分では、図中、左側に被写体があり、被写体からの光が図中、左側からピンホール122を通り、センシング画素へと入射する。 In the camera module 11, the light incident from the external subject is guided to the sensing pixels in the light shielding area of the sensing area R <b> 12 by the respective pinholes 122 constituting the sensing optical system 23. In the portion shown by the arrow Q13, the subject is on the left side in the figure, and light from the subject passes through the pinhole 122 from the left side in the figure and enters the sensing pixel.
 特に、ここではセンシング領域R12がイメージング領域R11に隣接して設けられており、かつピンホール122のZ方向の深さが遮光領域の幅(X方向やY方向の長さ)に対して十分に深くなっている。 In particular, here the sensing region R12 is provided adjacent to the imaging region R11, and the depth of the pinhole 122 in the Z direction is sufficiently large relative to the width (the length in the X direction or Y direction) of the light-shielded region.
 そのため、外部から入射した光のうち、図中、矢印により示されるように、イメージング光学系22の光軸方向、つまり撮像部21の受光面と略垂直な方向から入射してくる光のみがピンホール122によりセンシング領域R12の遮光領域へと導かれることになる。換言すれば、ある程度以上の角度(入射角)でピンホール122へと入射してくる光は、遮光板121等により遮光され、遮光領域へは入射しない。 Therefore, of the light incident from the outside, only the light arriving from the optical axis direction of the imaging optical system 22, that is, from the direction substantially perpendicular to the light receiving surface of the imaging unit 21, as indicated by the arrows in the figure, is guided by the pinholes 122 to the light-shielded regions of the sensing region R12. In other words, light entering a pinhole 122 at an angle of incidence above a certain degree is blocked by the light shielding plate 121 and the like and does not enter the light-shielded region.
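The way a deep pinhole restricts incidence to near-normal rays can be sketched geometrically: a ray is displaced sideways by (depth × tan θ) while traversing the pinhole, so the largest angle at which a ray entering one edge of the aperture can still land inside the shielded region is roughly atan(width / depth). The following is an illustrative sketch with hypothetical dimensions; the patent specifies no numbers.

```python
import math

def acceptance_half_angle_deg(aperture_width_um, pinhole_depth_um):
    """Approximate largest incidence angle (from the surface normal, in
    degrees) at which a ray can traverse a pinhole of the given aperture
    width and depth without being blocked by the shielding walls."""
    return math.degrees(math.atan(aperture_width_um / pinhole_depth_um))

# The deeper the pinhole relative to its width, the narrower the cone of
# accepted light (dimensions here are purely hypothetical):
shallow = acceptance_half_angle_deg(10.0, 10.0)   # around 45 degrees
deep = acceptance_half_angle_deg(10.0, 100.0)     # under 6 degrees
```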
 したがって、各遮光領域のセンシング画素では、撮像部21の略真正面にある被写体から入射した光のみ、つまりイメージング光学系22(撮像画像)の画角の略中心部分にある被写体から入射した光のみが光電変換されることになる。 Therefore, in the sensing pixels of each light-shielded region, only the light incident from the subject substantially directly in front of the imaging unit 21, that is, only the light incident from the subject substantially at the central portion of the angle of view of the imaging optical system 22 (the captured image), is photoelectrically converted.
 その結果、センシング画像には、撮像部21の略真正面にある被写体の情報、つまり撮像画像の略中心部分にある被写体の情報が含まれることになる。 As a result, the sensing image includes information of the subject substantially directly in front of the imaging unit 21, that is, information of the subject substantially at the center of the captured image.
 In a block downstream of the imaging unit 21 in the camera module 11, spectral information is generated on the basis of the pixel signals read from the sensing pixels in the light-shielded regions, that is, on the basis of the sensing image.
 Specifically, since each light-shielded region contains, for example, four sensing pixels of mutually different color components, spectral information indicating the intensity (amount) of incident light for each of the four color components is obtained for each light-shielded region. Then, for each color component, an average, weighted average, sum, weighted sum, or the like of the spectral information obtained from all the light-shielded regions is computed to yield a single final piece of spectral information.
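 A minimal sketch of this aggregation step, assuming hypothetical per-region intensities and color-component names (the values below are illustrative, not from this description):

```python
# Each light-shielded region yields one intensity per color component;
# the final spectral information is their (optionally weighted) average
# across regions. The sample intensities and weights are illustrative.

def aggregate_spectral_info(per_region, weights=None):
    """per_region: list of dicts mapping color component -> intensity."""
    if weights is None:
        weights = [1.0] * len(per_region)
    total_weight = sum(weights)
    components = per_region[0].keys()
    return {
        c: sum(w * r[c] for w, r in zip(weights, per_region)) / total_weight
        for c in components
    }

regions = [
    {"R": 0.80, "G": 0.60, "B": 0.20, "IR": 0.10},
    {"R": 0.84, "G": 0.58, "B": 0.22, "IR": 0.12},
]
final = aggregate_spectral_info(regions)
print(round(final["R"], 2))  # simple average of 0.80 and 0.84
```

Passing explicit `weights` gives the weighted-average or weighted-sum variants mentioned above.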
 In this example, the sensing optical system 23 has no imaging optics such as an imaging lens, so the camera module 11 cannot obtain spectral information for individual areas within the central portion of the captured image; that is, two-dimensional, per-area spectral information cannot be obtained. The spectral information obtained by the camera module 11 is therefore information on each color component over the entire central region of the angle of view of the captured image, that is, zero-dimensional spectral information.
 Note that the pixel arrangement of the sensing region R12 repeats an arrangement unit periodically (regularly); that is, the color arrangement of the sensing pixels is the same in every light-shielded region. Making the color arrangement identical across all light-shielded regions keeps the signal processing downstream of the sensing pixels, such as the processing that derives the single final piece of spectral information from the spectral information of the individual light-shielded regions, from becoming complicated. In other words, the downstream signal processing is simpler than it would be if, for example, the sensing pixels of the various color components were arranged randomly over the entire sensing region R12.
 As described above, by making the sensing optical system 23 separate from the imaging optical system 22 and configuring it as a pinhole array, spectral information can be obtained for the central region of the angle of view of the captured image. Since the subject at the center of the captured image is likely to be the object of interest for spectral detection, more appropriate spectral information can be obtained.
 Moreover, in the camera module 11, because the pinhole array serving as the sensing optical system 23 is formed of pinholes 122 with an aspect, accurate spectral information can be obtained without the influence of the incident-angle dependence of the color filters.
 The color filters provided on the sensing pixels have an incident-angle dependence: the spectral characteristics of a color filter change with the angle at which light strikes it. That is, the wavelength distribution of the intensity of the light received by a sensing pixel changes with the angle of incidence of the light on the sensing pixel (color filter).
 In an ordinary camera module, the peripheral area of the light-receiving surface, that is, the area near its edge, serves as the sensing area, and light is guided to it by the same imaging optical system used for the imaging area, so light strikes the sensing pixels at appreciable angles of incidence. The detected spectrum therefore varies with the position of the sensing pixel because of the incident-angle dependence of the color filters described above, and accurate spectral information cannot be detected.
 In the camera module 11, by contrast, the sensing optical system 23 is provided separately from the imaging optical system 22 and is formed of the pinholes 122 with an aspect. Light therefore enters each sensing pixel from a direction substantially perpendicular to the light-receiving surface, so accurate spectral information can be detected without the influence of the incident-angle dependence of the color filters. Moreover, the angle of incidence is substantially the same for all sensing pixels regardless of their positions, that is, light enters under identical optical conditions, so the spectral information does not vary with the position of the sensing pixel.
 Furthermore, the camera module 11 needs only a pinhole array as the sensing optical system 23 and therefore requires few parts, so the camera module 11 can also be made smaller and less expensive.
 Note that the sensing region R12-1 and the sensing optical system 23-1 are configured in the same way as the sensing region R12-2 and the sensing optical system 23-2.
 Although the example of FIG. 4 forms the pinholes 122 with the light-shielding plate 121, the pinholes 122 may instead be formed with members such as mirrors or metal, or with reflecting plates. That is, instead of the light-shielding plate 121, the pinholes 122 may be formed by surrounding the light-shielded regions with reflecting members such as mirrors, metal, or reflecting plates.
 In that case, light entering a pinhole 122 at a slight angle is also reflected by the reflecting members, guided to the sensing pixels, and received there. As a result, spectral information covering a wider area can be obtained than when the pinholes 122 are formed with the light-shielding plate 121.
 Furthermore, by adjusting the aspect ratio of the pinhole 122, that is, the ratio of its length in the X or Y direction to its length (depth) in the Z direction, spectral information can be detected for a region of any desired extent.
 For example, by providing pinholes 122 with an appropriate aspect ratio, spectral information can be detected at substantially the same angle of view as that of the imaging optical system 22.
 Specifically, suppose that the lengths of the imaging region R11 in the X and Y directions are X (for example, X mm) and Y (for example, Y mm), respectively, and that the focal length of the imaging optical system 22 is f (for example, f mm).
 Also suppose that the lengths of the pinhole 122 in the X and Y directions are X' (for example, X' mm) and Y' (for example, Y' mm), respectively, and that its length (depth) in the Z direction is Z' (for example, Z' mm).
 Here, the length of the pinhole 122 in the Z direction is the length of the light-shielding plate 121 in the Z direction. The Z direction is perpendicular to the imaging region R11 and the sensing region R12 and can also be described as the depth (longitudinal) direction of the pinhole 122, that is, the direction of the optical axis of the sensing optical system 23. In this example the pinhole 122 has the shape of a rectangular prism, so its opening is rectangular.
 Furthermore, let X'/Z' be the aspect ratio of the pinhole 122 in the X direction with respect to the Z direction (depth), and Y'/Z' its aspect ratio in the Y direction with respect to the Z direction.
 If the pinholes 122 are then formed so that X'/Z' = X/f and Y'/Z' = Y/f are approximately satisfied, the angle of view of the imaging optical system 22 and the angle of view of the pinhole 122 (the sensing optical system 23), that is, the angle of view of the spectral information, become substantially the same.
 For example, focusing on the Y direction, if θy is half the Y-direction angle of view of the imaging optical system 22, then tan(θy) = Y/2f holds. Therefore, if the Z-direction length (depth) Z' and the Y-direction length Y' of the pinhole 122 are chosen so that tan(θy) = Y'/2Z' holds, half the Y-direction angle of view of the pinhole 122 is likewise θy.
 As described above, if the pinholes 122 are formed so that the ratio of the depth-direction length Z' of the pinhole 122 to its length X' or Y' in the X or Y direction perpendicular to the depth direction is approximately equal to the ratio of the focal length f of the imaging optical system 22 to the corresponding length X or Y of the imaging region R11, spectral information with substantially the same angle of view as the imaging optical system 22 can be obtained.
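 These relations can be checked numerically; a small sketch, assuming an illustrative imaging-region size, focal length, and wall depth (none of these values come from this description):

```python
import math

# Sketch of the angle-of-view matching relations X'/Z' = X/f and
# Y'/Z' = Y/f. The imaging-region size, focal length, and pinhole
# depth below are illustrative values only.

def matched_pinhole_opening(sensor_x_mm, sensor_y_mm, focal_mm, depth_mm):
    """Return (X', Y') so the pinhole's angle of view matches the lens's."""
    return (sensor_x_mm / focal_mm * depth_mm,
            sensor_y_mm / focal_mm * depth_mm)

# Example: a 6.4 mm x 4.8 mm imaging region behind a lens of 4.8 mm
# focal length, with a 0.3 mm deep light-shielding wall.
xp, yp = matched_pinhole_opening(6.4, 4.8, 4.8, 0.3)
print(round(xp, 3), round(yp, 3))  # prints: 0.4 0.3

# Half-angle check: tan(theta_y) = Y / (2 f) = Y' / (2 Z').
assert math.isclose(4.8 / (2 * 4.8), yp / (2 * 0.3))
```

Scaling the wall depth up or down scales the matched opening proportionally, which is the aspect-ratio freedom noted above.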
〈Second Embodiment〉
〈Configuration Example of the Sensing Optical System〉
 The sensing optical system 23 may also be formed by combining a pinhole array with a diffusion plate so that spectral information covering a wider area can be obtained.
 In that case, the sensing optical system 23 is configured, for example, as shown in FIG. 5. In FIG. 5, parts corresponding to those in FIG. 4 bear the same reference numerals, and their description is omitted as appropriate.
 In FIG. 5, the portion indicated by arrow Q21 shows the camera module 11 viewed from the Z direction, and the portion indicated by arrow Q22 shows an enlarged view of the sensing region R12-2. Since these portions are the same as the portions indicated by arrows Q11 and Q12 in FIG. 4, their description is omitted.
 The portion indicated by arrow Q23 shows part of the sensing region R12-2 and the sensing optical system 23-2 of the camera module 11 viewed from the X direction.
 In this example, the pinholes 122 are formed by the light-shielding plate 121 as in FIG. 4, but the example differs from FIG. 4 in that a diffusion plate 151 is provided on the side of the pinhole array formed of the pinholes 122 opposite the imaging unit 21, that is, on the subject side.
 In the example shown in FIG. 5, the sensing optical system 23 is thus formed of the pinhole array consisting of the pinholes 122 and the diffusion plate 151 arranged on the subject side of that pinhole array.
 In this case, light from the subject first enters the diffusion plate 151 and is diffused by it. The light from the subject, diffused by the diffusion plate 151, then passes through the pinholes 122 and enters the sensing pixels in the light-shielded regions.
 Compared with the example shown in FIG. 4, therefore, even light entering the diffusion plate 151 at an appreciable angle is diffused by the diffusion plate 151 and received by the sensing pixels, so spectral information covering a wider area can be obtained. In particular, depending on the degree of diffusion of the diffusion plate 151, spectral information with an angle of view equivalent to that of the imaging optical system 22 can also be obtained.
 Even when the sensing optical system 23 is formed of the pinhole array consisting of the pinholes 122 and the diffusion plate 151 in this way, appropriate spectral information can be obtained without the influence of the incident-angle dependence of the color filters. As in the example shown in FIG. 4, the spectral information obtained in this embodiment is zero-dimensional spectral information.
 Note that the sensing region R12-1 and the sensing optical system 23-1 are configured in the same way as the sensing region R12-2 and the sensing optical system 23-2. The sensing optical system 23 may also be formed of a single pinhole 122 and the diffusion plate 151.
〈Third Embodiment〉
〈Configuration Example of the Sensing Optical System〉
 Furthermore, the incident-angle dependence of the color filters provided on the sensing pixels may be exploited to effectively increase the number of color-filter types among the sensing pixels and thereby improve the wavelength resolution of the spectral information.
 In that case, the sensing optical system 23 is configured, for example, as shown in FIG. 6. In FIG. 6, parts corresponding to those in FIG. 4 bear the same reference numerals, and their description is omitted as appropriate.
 In FIG. 6, the portion indicated by arrow Q31 shows the camera module 11 viewed from the Z direction. Since it is the same as the portion indicated by arrow Q11 in FIG. 4, its description is omitted.
 Also in FIG. 6, the portion indicated by arrow Q32 is an enlarged view of the sensing region R12-2 of the camera module 11, and the portion indicated by arrow Q33 is an enlarged view of the sensing region R12-2 and the sensing optical system 23-2 of the camera module 11.
 The portion indicated by arrow Q32 shows the sensing region R12-2 viewed from the Z direction, and each square represents a sensing pixel corresponding to the pixel 61 shown in FIG. 3.
 The hatching applied to each square representing a sensing pixel indicates the color of the color filter provided on that sensing pixel; here, the sensing region R12 contains sensing pixels of seven color components.
 Here, the entire sensing region R12-2 is surrounded by a light-shielding plate 181 formed directly above it, so the entire sensing region R12-2 forms a single light-shielded region. The light-shielding plate 181 corresponds to the light-shielding plate 24 shown in FIG. 1.
 When the sensing region R12-2 and the sensing optical system 23-2 of the camera module 11 are viewed from the X direction, light from an external subject enters the sensing pixels through a diffusion plate 182 and a cylindrical lens 183, as indicated by arrow Q33.
 In this example, the entire sensing region R12-2 is surrounded by the light-shielding plate 181 to form a single light-shielded region, and the diffusion plate 182 and the cylindrical lens 183, an optical lens (imaging lens), are provided for that light-shielded region. The diffusion plate 182 and the cylindrical lens 183 constitute the sensing optical system 23-2.
 In FIG. 6, the cylindrical lens 183, which has a cross section of substantially the same size as the sensing region R12-2, is arranged on the subject side of the imaging unit 21, and the diffusion plate 182 is arranged further on the subject side of the cylindrical lens 183, that is, on the side opposite the sensing region R12-2.
 In this case, light from the subject enters the diffusion plate 182, is diffused by it, and then enters the cylindrical lens 183. The light entering the cylindrical lens 183 from the diffusion plate 182 is condensed by the cylindrical lens 183 and guided to the sensing pixels in the light-shielded region.
 The light that enters the sensing pixels from the subject through the diffusion plate 182 and the cylindrical lens 183 is photoelectrically converted in the sensing pixels, a pixel signal corresponding to the resulting amount of charge is read from each sensing pixel, and spectral information is generated.
 Because of the diffusion plate 182 in this example, the spectral information obtained by the camera module 11 is information on each color component over the entire region consisting of the central portion of the angle of view of the captured image and its surroundings. That is, the spectral information obtained in this embodiment is zero-dimensional spectral information, as in the example shown in FIG. 5.
 In this case as well, as in the first and second embodiments, appropriate spectral information can be obtained for the central portion of the angle of view of the captured image and the area around it.
 However, because the light diffused by the diffusion plate 182 is condensed by the cylindrical lens 183 and imaged onto the individual parts of the light-shielded region (sensing region R12), even light from the same subject strikes the sensing pixels at angles of incidence that differ with the positions of those pixels.
 That is, the cylindrical lens 183 causes the light from the diffusion plate 182 to strike sensing pixels at different positions on the light-shielded region at different angles of incidence.
 Specifically, at the sensing pixel at the position indicated by arrow W11 in the center of the sensing region R12, for example, light from near the center of the diffusion plate 182 arrives from a direction substantially perpendicular to the light-receiving surface, as indicated by arrow AR11; that is, its angle of incidence is approximately 0 degrees.
 By contrast, at the sensing pixel at the position indicated by arrow W12 near the edge of the sensing region R12, for example, light from near the center of the diffusion plate 182 arrives obliquely to the light-receiving surface, as indicated by arrow AR12.
 Thus, even light emitted from the same position on the diffusion plate 182 strikes the sensing pixels at angles of incidence that differ with the pixels' positions on the light-shielded region (sensing region R12).
 Accordingly, even sensing pixels of the same color component have spectral characteristics that vary with their positions, so sensing pixels of the same color component placed at different positions effectively function as sensing pixels of different color components.
 Specifically, suppose that sensing pixels of a given color component are located at the position indicated by arrow W11 and at the position indicated by arrow W12. In this case, spectral information of mutually different color components can be obtained from the pixel signals of those two sensing pixels.
 This effectively increases the number of color components (color-filter types) among the sensing pixels in the sensing region R12 and improves the wavelength resolution of the spectral information; that is, the number of wavelength channels of the spectral information can be increased.
 Note that which color component's information is obtained from which sensing pixel can be determined in advance from the color filter provided on that sensing pixel and the pixel's position within the sensing region R12.
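 The position-dependent channel idea can be sketched numerically. The geometry and the blue-shift model below (a simple first-order model often quoted for interference-type filters) are illustrative assumptions, not taken from this description:

```python
import math

# Illustrative geometry: chief ray from a lens 2.0 mm above the sensor.
def incidence_angle_deg(pixel_offset_mm, lens_height_mm):
    """Angle (degrees) at which the chief ray strikes a pixel."""
    return math.degrees(math.atan2(abs(pixel_offset_mm), lens_height_mm))

# Assumed first-order blue-shift of an interference-type filter's center
# wavelength with angle of incidence; n_eff is a made-up effective index.
def effective_center_wavelength(nominal_nm, angle_deg, n_eff=1.7):
    s = math.sin(math.radians(angle_deg)) / n_eff
    return nominal_nm * math.sqrt(1.0 - s * s)

# The same nominal 550 nm filter at the center (arrow W11) and near the
# edge (arrow W12) then responds as two distinct wavelength channels:
for offset in (0.0, 1.5):  # pixel offsets in mm
    angle = incidence_angle_deg(offset, 2.0)
    print(round(effective_center_wavelength(550.0, angle), 1))
```

The two printed center wavelengths differ, illustrating how one physical filter type contributes more than one effective wavelength channel.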
 The sensing region R12-1 and the sensing optical system 23-1 are likewise configured in the same way as the sensing region R12-2 and the sensing optical system 23-2.
 Furthermore, the example shown in FIG. 6 treats the entire sensing region R12-2 as a single light-shielded region provided with the diffusion plate 182 and the cylindrical lens 183. Alternatively, for example, a plurality of light-shielded regions may be provided in the sensing region R12-1 or the sensing region R12-2, with a diffusion plate 182 and a cylindrical lens 183 provided for each of them. That is, the sensing optical system 23 may be formed of one or more diffusion plates 182 and one or more cylindrical lenses 183. The sensing optical system 23 may also be formed of the diffusion plate 182 and an optical lens other than the cylindrical lens 183. In this embodiment, the color filter of each pixel, and of the sensing pixels in particular, is desirably a filter whose spectral characteristics vary markedly with the angle of incidence, such as a dielectric multilayer film or a plasmon resonator.
〈Fourth Embodiment〉
〈Configuration Example of the Sensing Optical System〉
 Furthermore, by dividing the sensing region R12 into a plurality of light-shielded regions and providing an imaging lens as the sensing optical system 23 for each of them, two-dimensional spectral information with substantially the same angle of view as the imaging optical system 22 (the captured image) can be obtained.
 In that case, the sensing optical system 23 is configured, for example, as shown in FIG. 7. In FIG. 7, parts corresponding to those in FIG. 4 bear the same reference numerals, and their description is omitted as appropriate.
 In FIG. 7, the portion indicated by arrow Q41 shows the camera module 11 viewed from the Z direction. Since it is the same as the portion indicated by arrow Q11 in FIG. 4, its description is omitted.
 Also in FIG. 7, the portion indicated by arrow Q42 is an enlarged view of the sensing region R12-2 of the camera module 11, and the portion indicated by arrow Q43 is an enlarged view of the sensing region R12-2 and the sensing optical system 23-2 of the camera module 11.
 The portion indicated by arrow Q42 shows the sensing region R12-2 viewed from the Z direction, and each square represents a sensing pixel corresponding to the pixel 61 shown in FIG. 3.
 The hatching applied to each square representing a sensing pixel indicates the color of the color filter provided on that sensing pixel; here, the sensing region R12 contains sensing pixels of seven color components.
 Here, the sensing region R12-2 is divided into three light-shielded regions by a light-shielding plate 211 formed directly above it. The light-shielding plate 211 corresponds to the light-shielding plate 24 shown in FIG. 1.
 When the sensing region R12-2 and the sensing optical system 23-2 of the camera module 11 are viewed from the X direction, light from an external subject enters each of the three light-shielded regions through a respective one of imaging lenses 212-1 to 212-3, as indicated by arrow Q43.
 In this example, each of three areas on the sensing region R12-2 is surrounded by the light-shielding plate 211 to form a light-shielded region, and the imaging lenses 212-1 to 212-3 are provided for those light-shielded regions. The lens array consisting of the imaging lenses 212-1 to 212-3 constitutes the sensing optical system 23-2.
 In FIG. 7, the imaging lens 212-1 is arranged on the subject side of the uppermost light-shielded region in the figure, the imaging lens 212-2 on the subject side of the central light-shielded region of the sensing region R12-2, and the imaging lens 212-3 on the subject side of the lowermost light-shielded region in the figure. Hereinafter, when the imaging lenses 212-1 to 212-3 need not be distinguished from one another, they are simply referred to as the imaging lenses 212.
 In this case, light from the subject is condensed by each imaging lens 212 and guided to the sensing pixels of the light-shielded region located directly below that lens; that is, the light from the subject is imaged onto the light-shielded region by the imaging lens 212.
 The light that enters the sensing pixels from the subject through the imaging lenses 212 is photoelectrically converted in the sensing pixels, a pixel signal corresponding to the resulting amount of charge is read from each sensing pixel, and spectral information is generated.
 この例では結像レンズ212によりセンシング光学系23が構成されるので、各遮光領域では結像レンズ212により被写体の像が結像されることになる。また、各結像レンズ212の画角は、イメージング光学系22(撮像画像)の画角と略同じ画角となっている。 In this example, since the sensing optical system 23 is configured by the imaging lens 212, the image of the subject is formed by the imaging lens 212 in each light-shielded area. Further, the angle of view of each imaging lens 212 is substantially the same as the angle of view of the imaging optical system 22 (captured image).
 そのため、遮光領域の各センシング画素から読み出される画素信号からなる画像は、撮像画像と略同じ画像、より詳細には撮像画像と色成分のみが異なる画像となる。したがって、カメラモジュール11で得られる分光情報は、撮像画像の画角の中心部分とその周辺領域からなる領域全体(撮像画像の画角全体)を対象とした、領域ごとの各色成分の情報(分光)を示す2次元の分光情報となる。 Therefore, an image composed of pixel signals read out from the respective sensing pixels in the light shielding area is an image substantially the same as the captured image, and more specifically, an image which differs from the captured image only in color components. Therefore, the spectral information obtained by the camera module 11 is information (spectrum of each color component for each area) for the entire area (entire angle of view of the imaged image) including the central portion of the angle of view of the imaged image and its peripheral area. It becomes two-dimensional spectral information which shows).
 すなわち、カメラモジュール11では各遮光領域について、結像レンズ212の画角(観察視野)内の領域ごとに、それらの領域における各色成分の入射光の強度(入射光量)を示す分光情報が得られる。そして、全ての遮光領域で得られた分光情報の各領域について色成分ごとの平均値や加重平均値、和、重み付き加算値などが求められ、最終的な1つの分光情報とされる。この場合、撮像画像の画角全体を対象とした2次元の分光情報を得ることができる。 That is, in the camera module 11, spectral information indicating the intensity (incident light quantity) of the incident light of each color component in each area in each area in the angle of view (observation visual field) of the imaging lens 212 is obtained for each light shielding area. . Then, an average value, a weighted average value, a sum, a weighted addition value, and the like for each color component are obtained for each region of spectral information obtained in all the light-shielded regions, and the final spectral information is obtained. In this case, two-dimensional spectral information can be obtained for the entire angle of view of the captured image.
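The combination step described above can be sketched as follows. This is an illustrative sketch only, not part of the disclosure: the function name, the dictionary-based data layout, and the choice of weights are all assumptions made here for clarity.

```python
def combine_spectral_info(region_maps, weights=None):
    """Combine per-light-shielded-region spectral maps into one final map.

    region_maps: a list with one entry per light-shielded region; each
    entry is a dict mapping a position within the shared angle of view
    to a list of per-color-component intensities measured there.
    weights: optional per-region weights for a weighted average; when
    omitted, a plain average over the regions is taken.
    """
    n = len(region_maps)
    if weights is None:
        weights = [1.0 / n] * n          # plain average
    else:
        total = float(sum(weights))
        weights = [w / total for w in weights]  # normalized weighted average
    combined = {}
    for pos in region_maps[0]:
        channels = len(region_maps[0][pos])
        # weighted sum over regions, independently per color component
        combined[pos] = [
            sum(w * m[pos][c] for w, m in zip(weights, region_maps))
            for c in range(channels)
        ]
    return combined
```

With equal weights this reduces to the per-region average of each color component; supplying weights realizes the weighted average or weighted sum mentioned above.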
 Note that the sensing region R12-1 and the sensing optical system 23-1 also have the same configuration as the sensing region R12-2 and the sensing optical system 23-2 shown in FIG. 7.
 Further, since the sensing region R12 here has a vertically elongated shape in the figure, the sensing region R12 is divided into a plurality of light-shielded regions, and an imaging lens 212 capable of forming an image of the subject is disposed for each of those light-shielded regions. However, the entire sensing region R12 may instead be a single light-shielded region, with a single imaging lens capable of forming an image of the subject disposed as the sensing optical system 23.
<Fifth Embodiment>
<Configuration Example of Sensing Optical System>
 Further, when the sensing region R12 is divided into a plurality of light-shielded regions and an imaging lens is provided as the sensing optical system 23 for each of those light-shielded regions, spectral information may be detected at a different angle of view (observation field) for each light-shielded region (imaging lens).
 In such a case, the sensing optical system 23 is configured, for example, as shown in FIG. 8. Note that in FIG. 8, parts corresponding to those in FIG. 4 are denoted by the same reference numerals, and their description is omitted as appropriate.
 In FIG. 8, the portion indicated by the arrow Q51 shows the camera module 11 as viewed from the Z direction. Since this portion indicated by the arrow Q51 is the same as the portion indicated by the arrow Q11 in FIG. 4, its description is omitted.
 Further, in FIG. 8, the portion indicated by the arrow Q52 is an enlarged view of the sensing region R12-2 of the camera module 11, and the portion indicated by the arrow Q53 is an enlarged view of the sensing region R12-1 of the camera module 11. The portion indicated by the arrow Q54 is an enlarged view of the sensing region R12-2 and the sensing optical system 23-2 of the camera module 11.
 The portion indicated by the arrow Q52 shows the sensing region R12-2 as viewed from the Z direction, and each square represents a sensing pixel corresponding to the pixel 61 shown in FIG. 3.
 The hatching applied to a square representing a sensing pixel indicates the color of the color filter provided in that sensing pixel; here, sensing pixels of seven color components are provided in the sensing region R12.
 Further, here the sensing region R12-2 is divided into three light-shielded regions by a light-shielding plate 241 formed immediately above the sensing region R12-2. This light-shielding plate 241 corresponds to the light-shielding plate 24 shown in FIG. 1.
 Furthermore, in the portion indicated by the arrow Q52, the dotted line surrounding each light-shielded region represents the angle of view of the imaging optical system 22, that is, of the captured image. Relative to this angle of view of the imaging optical system 22, the angle of view (observation field) of each light-shielded region is the left half, in the figure, of the region covered by the angle of view of the imaging optical system 22.
 In contrast, as indicated by the arrow Q53, the sensing region R12-1 is also divided into three light-shielded regions by a light-shielding plate, and the angle of view of each of those light-shielded regions is the right half, in the figure, of the region covered by the angle of view of the imaging optical system 22, represented by the dotted line surrounding those light-shielded regions.
 In this way, in the camera module 11, spectral information can be obtained at a different angle of view (observation field) for each sensing region R12, that is, for each light-shielded region of the sensing region R12. In other words, each of the sensing optical systems 23 can guide light from an external subject, at a mutually different angle of view, to the sensing region R12 located immediately below it.
 Moreover, by making the angle of view of each light-shielded region a partial region of the angle of view of the imaging optical system 22, and by ensuring that every region within the angle of view of the imaging optical system 22 is always included within the angle of view of at least one light-shielded region of a sensing region R12, two-dimensional spectral information with the same angle of view as the imaging optical system 22 can be obtained.
 In the example of FIG. 8, spectral information covering the right half of the region of the angle of view of the imaging optical system 22 is obtained in the sensing region R12-1, and spectral information covering the left half of that region is obtained in the sensing region R12-2.
 Then, from the spectral information obtained in the sensing region R12-1 and the spectral information obtained in the sensing region R12-2, two-dimensional spectral information covering the entire angle of view of the imaging optical system 22 is obtained.
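The assembly of a full-field spectral map from the two half-field maps can be sketched as follows. This is an illustrative sketch, not part of the disclosure: the function name, the position-keyed data layout, and the averaging of any overlapping positions are assumptions made here.

```python
def merge_half_views(left_map, right_map):
    """Merge spectral maps covering the left and right halves of the
    imaging angle of view into one full-field spectral map.

    left_map / right_map: dicts mapping positions, expressed in the
    coordinates of the full angle of view, to lists of per-color-
    component intensities measured at that position.
    """
    merged = dict(left_map)
    for pos, spectrum in right_map.items():
        if pos in merged:
            # if the two half views overlap, average the measurements
            merged[pos] = [(a + b) / 2.0 for a, b in zip(merged[pos], spectrum)]
        else:
            merged[pos] = list(spectrum)
    return merged
```

Positions seen by only one half view are taken as-is; positions seen by both are averaged per color component.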
 Here, which region within the angle of view of the imaging optical system 22 the angle of view of each light-shielded region corresponds to can be determined by appropriately selecting the sensing optical system 23.
 That is, for example, by selecting a lens with an appropriate focal length, lens diameter, and the like as the imaging lens constituting the sensing optical system 23, spectral information of a desired angle of view (observation field) can be obtained.
 As indicated by the arrow Q52, when spectral information covering the left half of the region of the angle of view of the imaging optical system 22 is to be obtained in the sensing region R12-2, the sensing optical system 23-2 may be configured with imaging lenses of an appropriate focal length and the like, for example, as indicated by the arrow Q54.
 The portion indicated by the arrow Q54 shows the sensing region R12-2 and the sensing optical system 23-2 of the camera module 11 as viewed from the X direction.
 In this example, light from an external subject enters each of the three light-shielded regions of the sensing region R12-2 through the imaging lenses 242-1 to 242-3, respectively.
 That is, each of the three regions on the sensing region R12-2 is surrounded by the light-shielding plate 241 to form a light-shielded region, and the imaging lenses 242-1 to 242-3 are provided on the subject side of those light-shielded regions.
 Then, the sensing optical system 23-2 is formed by a lens array consisting of these imaging lenses 242-1 to 242-3, which are capable of forming an image of the subject on the sensing region R12-2.
 In FIG. 8, the imaging lens 242-1 is disposed on the subject side of the uppermost light-shielded region in the figure, the imaging lens 242-2 is disposed on the subject side of the central light-shielded region of the sensing region R12-2, and the imaging lens 242-3 is disposed on the subject side of the lowermost light-shielded region in the figure. Hereinafter, the imaging lenses 242-1 to 242-3 are simply referred to as the imaging lenses 242 when there is no particular need to distinguish them.
 In this case, light from the subject is condensed by the imaging lens 242 and guided to each sensing pixel of the light-shielded region disposed immediately below that imaging lens 242. That is, the light from the subject is imaged onto the light-shielded region by the imaging lens 242.
 Light that enters a sensing pixel from the subject through the imaging lens 242 is photoelectrically converted in the sensing pixel, and a pixel signal corresponding to the amount of charge obtained as a result is read out from each sensing pixel to generate spectral information.
 In this example, the sensing optical system 23 is formed by the imaging lenses 242, so an image of the subject is formed by the imaging lens 242 in each light-shielded region. Further, the angle of view of each imaging lens 242 is substantially the same as the left half of the region of the angle of view of the imaging optical system 22 (the captured image).
 Therefore, the image formed from the pixel signals read out from the sensing pixels of each light-shielded region is an image of the left half of the captured image; more precisely, it differs from the image of the left half of the captured image only in its color components. Accordingly, the spectral information obtained in the sensing region R12-2 is two-dimensional spectral information indicating, for each region, the information (spectrum) of each color component, covering the left half of the angle of view of the captured image.
 The sensing region R12-1 and the sensing optical system 23-1 also have the same configuration as the sensing region R12-2 and the sensing optical system 23-2 shown in FIG. 8, except that the angle of view (observation field) of the sensing optical system 23-1 is different.
 The image formed from the pixel signals read out from the sensing pixels of each light-shielded region of the sensing region R12-1 is an image of the right half of the captured image; more precisely, it differs from the image of the right half of the captured image only in its color components. Accordingly, the spectral information obtained in the sensing region R12-1 is two-dimensional spectral information indicating, for each region, the information (spectrum) of each color component, covering the right half of the angle of view of the captured image.
 Therefore, from the spectral information of each of these sensing regions R12, two-dimensional spectral information indicating, for each region, the information (spectrum) of each color component can ultimately be obtained for the entire region consisting of the central portion of the angle of view of the captured image and its surrounding region (the entire angle of view of the captured image).
 The camera module 11 with the configuration shown in FIG. 8 can basically obtain spectral information similar to that obtained with the camera module 11 in the configuration shown in FIG. 7.
 However, in the example shown in FIG. 7, the angle of view of each light-shielded region (imaging lens 212) was substantially the same as the angle of view of the imaging optical system 22, whereas in the example shown in FIG. 8, the angle of view of each light-shielded region (imaging lens 242) is a partial region within the angle of view of the imaging optical system 22.
 Therefore, in the example shown in FIG. 8, when attention is given to a predetermined region within the angle of view of the imaging optical system 22, the number of sensing pixels used to obtain the spectral information of that predetermined region is larger than in the example shown in FIG. 7, so the spatial resolution of the spectral information can be improved.
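The resolution gain can be made concrete with a back-of-envelope calculation. This is an illustrative sketch under an assumed equal pixel budget per light-shielded region; the function name and the numbers are not part of the disclosure.

```python
def pixels_per_field_region(sensing_pixels, field_fraction):
    """Sensing pixels available per unit region of the imaging field.

    sensing_pixels: number of sensing pixels in one light-shielded region.
    field_fraction: fraction of the imaging angle of view that the
    region's lens covers (1.0 in the FIG. 7 layout, 0.5 in FIG. 8).
    """
    return sensing_pixels / field_fraction
```

With the same pixel budget, halving the field fraction covered by each lens doubles the pixel density available for any given region of the imaging field, which is the spatial-resolution improvement described above.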
 Furthermore, since the sensing region R12 here has a vertically elongated shape in the figure, one sensing region R12 is divided into a plurality of light-shielded regions, and an imaging lens 242 is disposed for each of those light-shielded regions. However, one entire sensing region R12 may instead be a single light-shielded region, with a single imaging lens disposed as the sensing optical system 23.
 Further, although an example has been described here in which the optical member that guides light to each light-shielded region is an imaging lens 242 capable of forming an image of the subject on the sensing region R12, it does not necessarily have to be an imaging lens and may be another optical lens or the like. However, at least two of the plurality of sensing optical systems 23 must be able to guide light from the outside to the sensing regions R12 (light-shielded regions) at mutually different angles of view.
<Sixth Embodiment>
<Configuration Example of Sensing Optical System>
 Incidentally, as with the incident-angle dependence of the color filters described above, it is known that the spectral characteristics of a pixel also change depending on the presence or absence of an IR (Infrared) cut filter (infrared blocking filter), which blocks (cuts) incoming infrared light.
 Therefore, by arranging sensing pixels provided with an IR cut filter together with sensing pixels not provided with one, the types of color filters of the sensing pixels can be substantially increased, and the wavelength resolution of the spectral information may thereby be improved.
 In such a case, the sensing optical system 23 is configured, for example, as shown in FIG. 9. Note that in FIG. 9, parts corresponding to those in FIG. 7 are denoted by the same reference numerals, and their description is omitted as appropriate.
 In FIG. 9, the portion indicated by the arrow Q61 shows the camera module 11 as viewed from the Z direction. Since this portion indicated by the arrow Q61 is the same as the portion indicated by the arrow Q41 in FIG. 7, its description is omitted.
 Further, in FIG. 9, the portion indicated by the arrow Q62 is an enlarged view of the imaging region R11 and the sensing regions R12 of the camera module 11, and the portion indicated by the arrow Q63 is an enlarged view of the sensing region R12-2 and the sensing optical system 23-2 of the camera module 11.
 The portion indicated by the arrow Q62 shows the imaging region R11 and the sensing regions R12 as viewed from the Z direction, and each square represents a sensing pixel corresponding to the pixel 61 shown in FIG. 3.
 The hatching applied to a square representing a sensing pixel indicates the color of the color filter provided in that sensing pixel; here, sensing pixels of seven color components are provided in the sensing region R12.
 Further, imaging pixels corresponding to the pixel 61 shown in FIG. 3 are also provided in the imaging region R11, but the squares representing the individual imaging pixels are omitted here.
 In general, an imaging pixel is provided with an IR cut filter so that it does not receive unnecessary infrared light. Such an IR cut filter is formed, for example, immediately above or immediately below the color filter provided in the imaging pixel.
 On the other hand, the sensing pixels formed in the sensing regions R12 may or may not be provided with an IR cut filter; in this example, in order to substantially increase the types of color filters, some of the sensing pixels are provided with an IR cut filter.
 Specifically, in the example of FIG. 9, an IR cut filter is provided for the pixels within a region R61 consisting of the imaging region R11 and the sensing region R12-2.
 That is, the sensing pixels within the sensing region R12-1 are provided with only a color filter and no IR cut filter. In contrast, the sensing pixels within the sensing region R12-2 are provided with both a color filter and an IR cut filter.
 Note that here, for example, the arrangement of the color filters of the sensing pixels in the sensing region R12-1 is the same as the arrangement of the color filters of the sensing pixels in the sensing region R12-2. That is, the color filter provided in a sensing pixel at a predetermined position within the sensing region R12-1 is the same as the color filter provided in the sensing pixel at the corresponding position within the sensing region R12-2.
 In the example shown in FIG. 9, an IR cut filter must be newly provided for the sensing pixels within the sensing region R12-2. However, whereas the IR cut filter was originally formed only over the imaging region R11, where the imaging pixels are located, it now suffices to form the IR cut filter over the region R61, which is only slightly wider than the imaging region R11, so this can be realized at low cost.
 Further, here the sensing region R12-1 is divided into three light-shielded regions by a light-shielding plate 271 formed immediately above the sensing region R12-1. Similarly, the sensing region R12-2 is divided into three light-shielded regions by the light-shielding plate 211 formed immediately above the sensing region R12-2. The light-shielding plate 271 and the light-shielding plate 211 correspond to the light-shielding plate 24 shown in FIG. 1.
 Furthermore, when the sensing region R12-2 and the sensing optical system 23-2 of the camera module 11 are viewed from the X direction, light from an external subject enters the light-shielded regions through the imaging lenses 212, as indicated by the arrow Q63.
 In this example, each of the three regions on the sensing region R12-2 is surrounded by the light-shielding plate 211 to form a light-shielded region, and the imaging lenses 212-1 to 212-3 are provided for those light-shielded regions. The sensing optical system 23-2 is formed by a lens array consisting of these imaging lenses 212-1 to 212-3.
 The sensing optical system 23-2 shown in FIG. 9 has the same configuration as the sensing optical system 23-2 shown in FIG. 7. Accordingly, in the example of FIG. 9 as well, two-dimensional spectral information indicating, for each region, the information (spectrum) of each color component is obtained in the sensing region R12-2, covering the entire region consisting of the central portion of the angle of view of the captured image and its surrounding region.
 Further, the sensing region R12-1 and the sensing optical system 23-1 also have the same configuration as the sensing region R12-2 and the sensing optical system 23-2 shown in FIG. 9.
 Therefore, in the sensing region R12-1 as well, spectral information similar to that obtained in the sensing region R12-2 is obtained.
 However, as described above, the sensing pixels within the sensing region R12-1 are not provided with an IR cut filter, whereas the sensing pixels within the sensing region R12-2 are.
 Therefore, even for sensing pixels provided with the same color filter, the spectral characteristics differ between a sensing pixel in the sensing region R12-1 and one in the sensing region R12-2. Accordingly, the pixel signal of a sensing pixel in the sensing region R12-1 and the pixel signal of the sensing pixel in the sensing region R12-2 provided with the same color filter yield spectral information of mutually different color components.
 For this reason, spectral information similar to that obtained in the sensing region R12-2, but with different color components, is obtained in the sensing region R12-1. As a result, from the spectral information obtained in the sensing region R12-1 and the spectral information obtained in the sensing region R12-2, a single final piece of two-dimensional spectral information containing the information of a larger number of color components can be obtained.
 In this way, by providing an IR cut filter in some of the sensing pixels within the sensing regions R12, the types of color components (color filters) of the sensing pixels within the sensing regions R12 can be substantially increased, and the wavelength resolution of the spectral information can be improved. That is, the number of wavelength channels of the spectral information can be increased.
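The increase in wavelength channels can be sketched as follows. This is an illustrative sketch, not part of the disclosure: the function name and the filter labels C1 to C7 are placeholders for the seven color components mentioned above.

```python
from itertools import product

def effective_channels(color_filters, ir_cut_options=(False, True)):
    """Enumerate the effective spectral channels of the sensing pixels.

    Each (color filter, IR-cut presence) pair has a distinct spectral
    response, so it counts as a separate wavelength channel.
    """
    return [(cf, ir) for cf, ir in product(color_filters, ir_cut_options)]

# Seven color filters, measured both with and without an IR cut filter,
# give 7 x 2 = 14 distinct wavelength channels.
channels = effective_channels(["C1", "C2", "C3", "C4", "C5", "C6", "C7"])
```

Under this model, duplicating the seven-filter set across an IR-cut region and a non-IR-cut region doubles the number of distinguishable wavelength channels without adding any new color filter materials.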
 Note that which color component's information is obtained from which sensing pixel can be known in advance from the color filter provided in that sensing pixel and from the presence or absence of an IR cut filter.
 Further, although FIG. 9 has been described taking as an example the case where an IR cut filter is provided in some of the sensing pixels in the configuration shown in FIG. 7, this can also be applied to the examples shown in FIGS. 4, 5, 6, and 8.
 That is, when the camera module 11 has the configuration shown in FIG. 4, 5, 6, or 8, for example, the sensing pixels in one of the sensing regions R12-1 and R12-2 may be provided with an IR cut filter while the sensing pixels in the other sensing region R12 are not. This also makes it possible to substantially increase the types of color filters and improve the wavelength resolution.
<Modified Example 1>
<Configuration Example of Camera Module>
 Incidentally, in the above, an example has been described in which, as shown in FIG. 2, the sensing regions R12 are provided to the left and right of the imaging region R11 in the imaging unit 21.
 However, as long as the imaging region R11 and the sensing regions R12 are provided within the region of the pixel array 51, the sensing regions R12 may be provided at any positions, and any number of sensing regions R12 may be provided.
 Therefore, for example, as shown in FIG. 10, sensing regions R12 may be provided adjacent to the upper, lower, left, and right end portions of the imaging region R11. Note that in FIG. 10, parts corresponding to those in FIG. 2 are denoted by the same reference numerals, and their description is omitted as appropriate.
 FIG. 10 is a view of the camera module 11 as viewed from the optical axis direction (Z direction) of the imaging optical system 22.
 In this example, the central portion of the light-receiving surface of the imaging unit 21, that is, of the pixel array 51, serves as the imaging region R11, and the region adjacent to the left of the imaging region R11 in the figure serves as the sensing region R12-1.
 Further, the region adjacent to the right of the imaging region R11 in the figure serves as the sensing region R12-2, the region adjacent above the imaging region R11 in the figure serves as the sensing region R12-3, and the region adjacent below the imaging region R11 in the figure serves as the sensing region R12-4.
 In other words, the sensing regions R12-1 to R12-4 are provided at the left end, right end, upper end, and lower end of the light-receiving surface of the imaging unit 21, respectively.
 なお、センシング領域R12-3およびセンシング領域R12-4は、センシング領域R12-1やセンシング領域R12-2と同様に、複数のセンシング画素が形成された領域である。 The sensing area R12-3 and the sensing area R12-4 are areas in which a plurality of sensing pixels are formed as in the sensing area R12-1 and the sensing area R12-2.
 さらに、センシング領域R12-1乃至センシング領域R12-4のそれぞれの図中、手前側、つまり被写体側には、センシング光学系23-1乃至センシング光学系23-4のそれぞれが設けられている。 Furthermore, in each of the sensing area R12-1 to the sensing area R12-4, the sensing optical system 23-1 to the sensing optical system 23-4 are respectively provided on the front side, that is, the object side.
 ここで、センシング光学系23-3は、被写体からの光をセンシング領域R12-3へと導く光学系であり、センシング光学系23-4は、被写体からの光をセンシング領域R12-4へと導く光学系である。これらのセンシング光学系23-3やセンシング光学系23-4は、センシング光学系23-1やセンシング光学系23-2と同様の構成とされる。 Here, the sensing optical system 23-3 is an optical system for guiding the light from the subject to the sensing area R12-3, and the sensing optical system 23-4 is for guiding the light from the subject to the sensing area R12-4. It is an optical system. The sensing optical system 23-3 and the sensing optical system 23-4 have the same configuration as the sensing optical system 23-1 and the sensing optical system 23-2.
 An example in which the sensing regions R12-1 through R12-4 are provided in addition to the imaging region R11 has been described here. However, the present technology is not limited to this; it suffices that at least one of the sensing regions R12-1 through R12-4 is provided.
 For example, only the sensing regions R12-1 and R12-2 may be provided in addition to the imaging region R11, or only the sensing regions R12-3 and R12-4 may be provided.
 Alternatively, only one of the sensing regions R12-1, R12-2, R12-3, or R12-4 may be provided in addition to the imaging region R11.
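The edge layout described above, with the imaging region centered in the pixel array and sensing regions filling the strips adjacent to its four sides, can be sketched as follows. The pixel-array and region dimensions are illustrative assumptions, not values from this publication.

```python
# Illustrative sketch of the layout above: an imaging region R11 in the
# center of the pixel array, with sensing regions R12-1..R12-4 occupying
# the strips to its left, right, top, and bottom. Dimensions are made up.

def layout_regions(array_w, array_h, imaging_w, imaging_h):
    """Return (x, y, w, h) rectangles for the centered imaging region and
    the four edge sensing regions, in pixel-array coordinates."""
    ix = (array_w - imaging_w) // 2
    iy = (array_h - imaging_h) // 2
    imaging = (ix, iy, imaging_w, imaging_h)
    sensing = {
        "R12-1": (0, iy, ix, imaging_h),                                      # left strip
        "R12-2": (ix + imaging_w, iy, array_w - ix - imaging_w, imaging_h),   # right strip
        "R12-3": (ix, 0, imaging_w, iy),                                      # top strip
        "R12-4": (ix, iy + imaging_h, imaging_w, array_h - iy - imaging_h),   # bottom strip
    }
    return imaging, sensing

# Hypothetical 4000x3000 array with a 3600x2700 imaging region.
imaging, sensing = layout_regions(4000, 3000, 3600, 2700)
```

Because each sensing strip shares a full edge with the imaging region, any subset of the four strips can be populated with sensing pixels, matching the variations described above.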
<Modification 2>
<Configuration Example of Camera Module>
 Furthermore, any one of the four corner regions of the light-receiving surface of the pixel array 51 may be used as a sensing region.
 In such a case, the camera module 11 is configured, for example, as shown in FIG. 11. In FIG. 11, parts corresponding to those in FIG. 2 are denoted by the same reference numerals, and their description will be omitted as appropriate.
 FIG. 11 is a view of the camera module 11 as seen from the optical axis direction (Z direction) of the imaging optical system 22.
 In this example, either the region R31-1 or the region R31-2 in the central portion of the light-receiving surface of the imaging unit 21, that is, of the pixel array 51, is used as the imaging region. That is, the regions R31-1 and R31-2 correspond to the imaging region R11 shown in FIG. 1, and the imaging pixels described above are formed in these regions.
 Specifically, in the camera module 11, for example, the region R31-1 serves as the imaging region when a captured image is taken at a 16:9 angle of view, and the region R31-2 serves as the imaging region when a captured image is taken at a 4:3 angle of view. Hereinafter, the regions R31-1 and R31-2 will be referred to simply as the region R31 when they need not be distinguished from each other.
 The camera module 11 may be configured so that it can switch between the 16:9 angle of view and the 4:3 angle of view.
 In this case, regardless of which angle of view is used for imaging, the four corner regions of the light-receiving surface of the imaging unit 21, that is, the upper-left corner region R32-1, the upper-right corner region R32-2, the lower-right corner region R32-3, and the lower-left corner region R32-4 in the figure, are not used as imaging regions.
 Therefore, in the camera module 11, at least one of the regions R32-1 through R32-4 is used as a sensing region. The regions R32-1 through R32-4 correspond to the sensing region R12 shown in FIG. 1, and the sensing pixels described above are formed in these regions.
 By using the four corner regions of the light-receiving surface as sensing regions in this manner, the light-receiving surface of the imaging unit 21 can be used effectively. Hereinafter, the regions R32-1 through R32-4 will be referred to simply as the region R32 when they need not be distinguished from each other.
 Furthermore, in the example shown in FIG. 11, the imaging optical system 22 is provided on the near side in the figure, that is, the subject side, of the regions R31-1 and R31-2 used as imaging regions. The imaging optical system 22 guides light from an external subject to the region R31-1 or R31-2 serving as the imaging region.
 The sensing optical system 301-1 is provided on the near side in the figure, that is, the subject side, of the region R32-1 used as a sensing region, and the sensing optical system 301-2 is provided on the near side of the region R32-2 used as a sensing region.
 Similarly, the sensing optical system 301-3 is provided on the near side of the region R32-3 used as a sensing region, and the sensing optical system 301-4 is provided on the near side of the region R32-4 used as a sensing region.
 Each of the sensing optical systems 301-1 through 301-4 corresponds to the sensing optical system 23 shown in FIG. 1 and guides light from an external subject to the corresponding one of the regions R32-1 through R32-4. Hereinafter, the sensing optical systems 301-1 through 301-4 will be referred to simply as the sensing optical system 301 when they need not be distinguished from each other. Each sensing optical system 301 has the same configuration as the sensing optical system 23 described above.
 As described above, in the example shown in FIG. 11, regions at the corners of the light-receiving surface, such as all four regions R32 or any one of them, serve as sensing regions. Appropriate spectral information can be obtained in this case as well.
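The geometric point above, namely that centered 16:9 and 4:3 crops of the same pixel array both leave the extreme corners uncovered, can be checked with a small sketch. The sensor dimensions below are hypothetical and chosen only so that neither aspect ratio fills the array.

```python
# Sketch: on a sensor whose aspect ratio lies between 16:9 and 4:3, the
# largest centered 16:9 crop leaves top/bottom bands free and the largest
# centered 4:3 crop leaves left/right bands free, so the four corners are
# used by neither and remain available as sensing regions.

def centered_crop(sensor_w, sensor_h, aspect_w, aspect_h):
    """Largest centered rectangle with the given aspect ratio that fits
    in the sensor; returns (x0, y0, x1, y1)."""
    scale = min(sensor_w / aspect_w, sensor_h / aspect_h)
    w, h = aspect_w * scale, aspect_h * scale
    x0, y0 = (sensor_w - w) / 2, (sensor_h - h) / 2
    return (x0, y0, x0 + w, y0 + h)

def inside(rect, x, y):
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

wide = centered_crop(4000, 2800, 16, 9)   # 16:9 imaging region (R31-1 analogue)
tall = centered_crop(4000, 2800, 4, 3)    # 4:3 imaging region (R31-2 analogue)
corners = [(0, 0), (3999, 0), (0, 2799), (3999, 2799)]
```

Every corner point lies outside both crops, while the sensor center lies inside both, which is the situation that frees the corner regions R32-1 through R32-4 for sensing pixels.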
<Example of Application to Electronic Devices>
 Furthermore, the camera module 11 described above can be applied to various electronic devices, for example, imaging devices such as digital still cameras and digital video cameras, mobile phones with an imaging function, and other devices with an imaging function.
 FIG. 12 is a block diagram showing a configuration example of an imaging device as an electronic device to which the present technology is applied.
 The imaging device 501 shown in FIG. 12 includes an optical system 511, a shutter device 512, a solid-state imaging device 513, a control circuit 514, a signal processing circuit 515, a monitor 516, and a memory 517, and can capture still images and moving images.
 The optical system 511 includes one or more lenses, guides light (incident light) from the subject to the solid-state imaging device 513, and forms an image on the light-receiving surface of the solid-state imaging device 513.
 The shutter device 512 is disposed between the optical system 511 and the solid-state imaging device 513 and, under the control of the control circuit 514, controls the light-exposure period and the light-shielding period of the solid-state imaging device 513.
 The solid-state imaging device 513 accumulates signal charge for a certain period in accordance with the light imaged on its light-receiving surface via the optical system 511 and the shutter device 512. The signal charge accumulated in the solid-state imaging device 513 is transferred in accordance with a drive signal (timing signal) supplied from the control circuit 514.
 The control circuit 514 outputs drive signals that control the transfer operation of the solid-state imaging device 513 and the shutter operation of the shutter device 512, thereby driving the solid-state imaging device 513 and the shutter device 512.
 The signal processing circuit 515 performs various kinds of signal processing on the signal charge output from the solid-state imaging device 513. An image (image data) obtained through the signal processing performed by the signal processing circuit 515 is supplied to the monitor 516 for display, or supplied to the memory 517 for recording.
 The present technology can also be applied to the imaging device 501 configured as described above. That is, for example, the optical system 511 corresponds to the imaging optical system 22 and the sensing optical system 23, and the solid-state imaging device 513 corresponds to the imaging unit 21. In other words, the optical system 511 through the solid-state imaging device 513 correspond to the camera module 11. Furthermore, the signal processing circuit 515 generates spectral information on the basis of the pixel signals output from the sensing pixels of the solid-state imaging device 513.
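As a rough illustration of this signal flow, the sketch below separates the pixel signals of a frame into those from an imaging region and those from a sensing region, building an image from the former and per-filter spectral values from the latter. The frame layout, filter names, and simple averaging are assumptions for illustration only, not the processing specified in this publication.

```python
# Schematic of the split performed by the signal-processing stage:
# imaging pixels contribute to the image, sensing pixels are reduced to
# one averaged value per color filter (hypothetical "spectral information").

def process_frame(frame, sensing_cols):
    """frame: dict mapping (x, y) -> (filter_name, value).
    Pixels whose x coordinate is in sensing_cols are treated as sensing
    pixels; all others as imaging pixels."""
    image = {}
    sums, counts = {}, {}
    for (x, y), (filt, value) in frame.items():
        if x in sensing_cols:
            sums[filt] = sums.get(filt, 0) + value
            counts[filt] = counts.get(filt, 0) + 1
        else:
            image[(x, y)] = value
    spectral = {f: sums[f] / counts[f] for f in sums}
    return image, spectral

# Tiny hypothetical frame: column 0 is the sensing region.
frame = {(0, 0): ("NB1", 10), (0, 1): ("NB1", 30), (1, 0): ("R", 100)}
image, spectral = process_frame(frame, sensing_cols={0})
```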
<Usage Examples of the Camera Module>
 FIG. 13 is a diagram showing usage examples of the camera module 11 described above.
 The camera module 11 described above can be used, for example, in the following various cases of sensing light such as visible light, infrared light, ultraviolet light, or X-rays.
 - Devices that capture images for viewing, such as digital cameras and mobile devices with a camera function
 - Devices for traffic use, such as in-vehicle sensors that capture images of the front, rear, surroundings, and interior of an automobile for safe driving including automatic stopping and for recognizing the driver's condition, surveillance cameras that monitor traveling vehicles and roads, and ranging sensors that measure the distance between vehicles
 - Devices for home appliances such as TVs, refrigerators, and air conditioners, which capture a user's gesture and operate the appliance in accordance with that gesture
 - Devices for medical and healthcare use, such as endoscopes and devices that perform angiography by receiving infrared light
 - Devices for security use, such as surveillance cameras for crime prevention and cameras for personal authentication
 - Devices for beauty use, such as skin measuring instruments that photograph the skin and microscopes that photograph the scalp
 - Devices for sports use, such as action cameras and wearable cameras for sports applications
 - Devices for agricultural use, such as cameras for monitoring the condition of fields and crops
 Note that embodiments of the present technology are not limited to the embodiments described above, and various modifications are possible without departing from the gist of the present technology.
 Furthermore, the present technology can also be configured as follows.
(1)
 A camera module including:
 a pixel array unit having an imaging region provided with pixels for capturing an image, and a sensing region provided with pixels capable of acquiring spectral information different from at least that of the pixels of the imaging region;
 an imaging optical system that guides external light to the imaging region; and
 a sensing optical system that guides external light to the sensing region.
(2)
 The camera module according to (1), in which the pixel arrangement in the sensing region is a pixel arrangement in which an arrangement unit, including pixels each provided with one of a plurality of different color filters, is repeatedly arranged.
(3)
 The camera module according to (1) or (2), in which the cross-sectional area of the imaging optical system, as viewed from the optical axis direction of the imaging optical system, is smaller than the area of the imaging region.
(4)
 The camera module according to any one of (1) to (3), in which the imaging region is provided at the center of the pixel array unit, and the sensing region is provided in at least one of the upper-end, lower-end, left-end, and right-end regions of the pixel array unit.
(5)
 The camera module according to any one of (1) to (3), in which the imaging region is provided at the center of the pixel array unit, and the sensing region is provided in at least one of the four corner regions of the pixel array unit.
(6)
 The camera module according to any one of (1) to (5), in which the sensing optical system includes a pinhole or a pinhole array provided immediately above the sensing region.
(7)
 The camera module according to (6), in which the pixel arrangement in the sensing region is a pixel arrangement in which an arrangement unit, including pixels each provided with one of a plurality of different color filters, is repeatedly arranged, and the area of the opening of the pinhole, or of each pinhole constituting the pinhole array, is larger than the area of the arrangement unit.
(8)
 The camera module according to (6) or (7), in which the ratio of the length in the depth direction to the length in the direction perpendicular to the depth direction of the pinhole, or of each pinhole constituting the pinhole array, is substantially equal to the ratio of the focal length of the imaging optical system to the length of the imaging region in the direction perpendicular to the depth direction.
(9)
 The camera module according to any one of (6) to (8), in which the sensing optical system includes the pinhole or the pinhole array, and a diffusion plate disposed on the side of the pinhole or the pinhole array opposite to the sensing region.
(10)
 The camera module according to any one of (1) to (5), in which the sensing optical system includes one or more optical lenses, and one or more diffusion plates disposed on the side of the optical lenses opposite to the sensing region.
(11)
 The camera module according to (10), in which the optical lens is a cylindrical lens.
(12)
 The camera module according to any one of (1) to (5), in which the sensing optical system includes one or more imaging lenses.
(13)
 The camera module according to any one of (1) to (5), including a plurality of the sensing optical systems, each of which guides external light at a mutually different angle of view to a corresponding one of a plurality of the sensing regions.
(14)
 The camera module according to any one of (1) to (13), in which the pixel array unit is provided with the sensing region consisting of pixels provided with an infrared-blocking filter, and the sensing region consisting of pixels not provided with the infrared-blocking filter.
(15)
 An imaging device including:
 a pixel array unit having an imaging region provided with pixels for capturing an image, and a sensing region provided with pixels capable of acquiring spectral information different from at least that of the pixels of the imaging region;
 an imaging optical system that guides external light to the imaging region; and
 a sensing optical system that guides external light to the sensing region.
 DESCRIPTION OF SYMBOLS: 11 camera module, 21 imaging unit, 22 imaging optical system, 23-1, 23-2, 23 sensing optical system, 121 light-shielding plate, 122-1 to 122-4, 122 pinhole, 151 diffusion plate, 181 light-shielding plate, 182 diffusion plate, 183 cylindrical lens, 211 light-shielding plate, 212-1 to 212-3, 212 imaging lens, 241 light-shielding plate, 242-1 to 242-3, 242 imaging lens

Claims (15)

  1.  A camera module comprising:
     a pixel array unit having an imaging region provided with pixels for capturing an image, and a sensing region provided with pixels capable of acquiring spectral information different from at least that of the pixels of the imaging region;
     an imaging optical system that guides external light to the imaging region; and
     a sensing optical system that guides external light to the sensing region.
  2.  The camera module according to claim 1, wherein the pixel arrangement in the sensing region is a pixel arrangement in which an arrangement unit, including pixels each provided with one of a plurality of different color filters, is repeatedly arranged.
  3.  The camera module according to claim 1, wherein the cross-sectional area of the imaging optical system, as viewed from the optical axis direction of the imaging optical system, is smaller than the area of the imaging region.
  4.  The camera module according to claim 1, wherein
     the imaging region is provided at the center of the pixel array unit, and
     the sensing region is provided in at least one of the upper-end, lower-end, left-end, and right-end regions of the pixel array unit.
  5.  The camera module according to claim 1, wherein
     the imaging region is provided at the center of the pixel array unit, and
     the sensing region is provided in at least one of the four corner regions of the pixel array unit.
  6.  The camera module according to claim 1, wherein the sensing optical system includes a pinhole or a pinhole array provided immediately above the sensing region.
  7.  The camera module according to claim 6, wherein
     the pixel arrangement in the sensing region is a pixel arrangement in which an arrangement unit, including pixels each provided with one of a plurality of different color filters, is repeatedly arranged, and
     the area of the opening of the pinhole, or of each pinhole constituting the pinhole array, is larger than the area of the arrangement unit.
  8.  The camera module according to claim 6, wherein the ratio of the length in the depth direction to the length in the direction perpendicular to the depth direction of the pinhole, or of each pinhole constituting the pinhole array, is substantially equal to the ratio of the focal length of the imaging optical system to the length of the imaging region in the direction perpendicular to the depth direction.
  9.  The camera module according to claim 6, wherein the sensing optical system includes the pinhole or the pinhole array, and a diffusion plate disposed on the side of the pinhole or the pinhole array opposite to the sensing region.
  10.  The camera module according to claim 1, wherein the sensing optical system includes one or more optical lenses, and one or more diffusion plates disposed on the side of the optical lenses opposite to the sensing region.
  11.  The camera module according to claim 10, wherein the optical lens is a cylindrical lens.
  12.  The camera module according to claim 1, wherein the sensing optical system includes one or more imaging lenses.
  13.  The camera module according to claim 1, comprising a plurality of the sensing optical systems, each of which guides external light at a mutually different angle of view to a corresponding one of a plurality of the sensing regions.
  14.  The camera module according to claim 1, wherein the pixel array unit is provided with the sensing region consisting of pixels provided with an infrared-blocking filter, and the sensing region consisting of pixels not provided with the infrared-blocking filter.
  15.  An imaging device comprising:
     a pixel array unit having an imaging region provided with pixels for capturing an image, and a sensing region provided with pixels capable of acquiring spectral information different from at least that of the pixels of the imaging region;
     an imaging optical system that guides external light to the imaging region; and
     a sensing optical system that guides external light to the sensing region.
PCT/JP2018/026658 2017-07-31 2018-07-17 Camera module and image capture device WO2019026600A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017148001A JP2019029851A (en) 2017-07-31 2017-07-31 Camera module and image capture device
JP2017-148001 2017-07-31

Publications (1)

Publication Number Publication Date
WO2019026600A1 true WO2019026600A1 (en) 2019-02-07

Family

ID=65232676

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/026658 WO2019026600A1 (en) 2017-07-31 2018-07-17 Camera module and image capture device

Country Status (2)

Country Link
JP (1) JP2019029851A (en)
WO (1) WO2019026600A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766017A (en) * 2019-10-21 2021-05-07 广州印芯半导体技术有限公司 Optical identification module

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022107235A1 (en) * 2020-11-18 2022-05-27 株式会社ソシオネクスト Image processing device, image processing method, program, and image processing system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001128071A (en) * 1999-10-22 2001-05-11 Fuji Photo Film Co Ltd Digital camera
JP2004320568A (en) * 2003-04-17 2004-11-11 Nagasaki Prefecture Spectroscopic image photographing equipment
JP2005150592A (en) * 2003-11-19 2005-06-09 Nippon Telegr & Teleph Corp <Ntt> Ccd imaging device
WO2014091706A1 (en) * 2012-12-14 2014-06-19 コニカミノルタ株式会社 Image capture device
WO2016117597A1 (en) * 2015-01-21 2016-07-28 Jsr株式会社 Solid-state imaging device and infrared absorbent composition



Also Published As

Publication number Publication date
JP2019029851A (en) 2019-02-21

Similar Documents

Publication Publication Date Title
JP7180658B2 (en) Solid-state image sensor and electronic equipment
JP7264187B2 (en) Solid-state imaging device, its driving method, and electronic equipment
US10134797B2 (en) Solid-state image sensor, imaging device, and electronic equipment
JP4483951B2 (en) Imaging device
CN107112339B (en) Image pickup device and electronic apparatus
US20160234435A1 (en) Image generation method, image generation apparatus, program, and storage medium
US9658367B2 (en) Image acquisition apparatus
US10805560B2 (en) Solid-state imaging device and electronic apparatus
WO2017065019A1 (en) Solid-state imaging element and electronic device
WO2019026600A1 (en) Camera module and image capture device
CN109076178B (en) Solid-state image pickup element and electronic apparatus
WO2017130725A1 (en) Focal point detection device and imaging device
WO2016194577A1 (en) Imaging element, imaging method, program, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18840459

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18840459

Country of ref document: EP

Kind code of ref document: A1