CN114615447A - Image sensing device

Image sensing device

Info

Publication number: CN114615447A
Application number: CN202111180049.6A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 卞庆洙
Applicant and current assignee: SK Hynix Inc
Legal status: Pending

Classifications

    • H01L27/14621 Colour filter arrangements
    • H01L27/14627 Microlenses
    • H01L27/14605 Structural or functional details relating to the position of the pixel elements, e.g. smaller pixel elements in the center of the imager compared to pixel elements at the periphery
    • H01L27/14634 Assemblies, i.e. hybrid structures
    • H01L27/14636 Interconnect structures
    • H04N25/70 SSIS architectures; circuits associated therewith
    • H04N25/76 Addressed sensors, e.g. MOS or CMOS sensors

Abstract

The application discloses an image sensing device, which can include: a first sensor layer configured to include a plurality of first photoelectric conversion elements to receive light and generate photocharges corresponding to the light; a second sensor layer disposed below the first sensor layer, the second sensor layer configured to include a plurality of second photoelectric conversion elements vertically overlapping the first photoelectric conversion elements to receive light and generate photocharges corresponding to the light having passed through the first sensor layer; and a bonding layer disposed between the first sensor layer and the second sensor layer. The bonding layer includes a lens layer configured to refract light rays that have passed through the first sensor layer toward the second sensor layer such that an incident angle of the light rays is greater than a refraction angle of the light rays.

Description

Image sensing device
Technical Field
Various embodiments relate generally to an image sensing device including photoelectric conversion elements implemented in a stacked structure.
Background
An image sensing device is a device for capturing an optical image by converting light into an electrical signal using a photosensitive semiconductor material that reacts to light. With the development of the automotive, medical, computer, and communication industries, demand for high-performance image sensing devices is increasing in various fields such as smartphones, digital cameras, game machines, IoT (Internet of Things) devices, robots, security cameras, and medical miniature cameras.
Image sensing devices can be broadly classified into CCD (charge coupled device) image sensing devices and CMOS (complementary metal oxide semiconductor) image sensing devices. CCD image sensing devices provide better image quality, but they tend to consume more power and be larger than CMOS image sensing devices, which are smaller and consume less power. Further, a CMOS sensor is manufactured using CMOS fabrication technology, so the photosensor and other signal processing circuits can be integrated into a single chip, enabling miniaturized image sensing devices to be produced at lower cost. For these reasons, CMOS image sensing devices are being developed for many applications, including mobile devices.
Disclosure of Invention
Embodiments of the disclosed technology relate to an image sensing device having an optimized stacked structure.
In some embodiments of the disclosed technology, an image sensing device has two or more image sensor layers stacked on top of each other and one or more bonding layers disposed between the two or more image sensor layers. The one or more bonding layers may include one or more lenses configured to correct or adjust an optical path from one image sensor layer to another image sensor layer.
In an embodiment, an image sensing device may include: a first sensor layer configured to include a plurality of first photoelectric conversion elements to receive light and generate photocharges corresponding to the light; a second sensor layer disposed below the first sensor layer, the second sensor layer configured to include a plurality of second photoelectric conversion elements vertically overlapping the first photoelectric conversion elements to receive light and generate photocharges corresponding to the light having passed through the first sensor layer; and a bonding layer disposed between the first sensor layer and the second sensor layer. The bonding layer includes a lens layer configured to refract light rays that have passed through the first sensor layer toward the second sensor layer such that an incident angle of the light rays is greater than a refraction angle of the light rays.
In an embodiment, an image sensing device may include: a plurality of first photoelectric conversion elements configured to respond to incident light, each of the first photoelectric conversion elements being configured to convert the incident light into a first electrical signal; a plurality of second photoelectric conversion elements disposed under the plurality of first photoelectric conversion elements so as to vertically overlap the plurality of first photoelectric conversion elements, each of the second photoelectric conversion elements being configured to convert incident light passing through the first sensor layer into a second electrical signal; and a lens layer disposed below the plurality of first photoelectric conversion elements and above the plurality of second photoelectric conversion elements. The lens layer includes: a first slit having a first width; a second slit having a second width narrower than the first width; and a dielectric layer configured to surround the first slit and the second slit, and the first slit is disposed at a position where a principal ray incident on the lens layer reaches.
According to the embodiments, a digital lens may be inserted between vertically stacked photoelectric conversion elements to calibrate the optical path, thereby preventing optical crosstalk.
In addition, various other effects may be directly or indirectly understood from this document.
Drawings
Fig. 1 is a diagram schematically illustrating an example configuration of an image sensing device based on an embodiment of the disclosed technology.
Fig. 2 is a diagram illustrating an example of a stacked structure of the first sensor layer, the second sensor layer, and the logic layer shown in fig. 1.
Fig. 3 is a sectional view of the stacked structure shown in fig. 2.
Fig. 4 is a diagram illustrating an example of light rays propagating through the lens module toward the first sensor layer.
Fig. 5 is a diagram illustrating an example of a stacked structure corresponding to the central region of fig. 4.
Fig. 6 is a diagram illustrating an example of a stacked structure corresponding to the first edge region of fig. 4.
Fig. 7 is a diagram illustrating another example of a stacked structure corresponding to the first edge region of fig. 4.
Fig. 8 is a diagram illustrating another example of a stacked structure corresponding to the first edge region of fig. 4.
Detailed Description
Hereinafter, various embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be noted, however, that the present disclosure is not limited to the particular embodiments, but includes various modifications, equivalents, and/or alternatives.
Fig. 1 is a diagram schematically illustrating an example configuration of an image sensing device based on an embodiment of the disclosed technology.
Referring to fig. 1, an image sensing device 100 is a CMOS (complementary metal oxide semiconductor) image sensing device and may include a light source 10, a lens module 20, a first sensor layer 200, a second sensor layer 300, and a logic layer 400.
A separate depth sensor may be used to measure the distance between an image sensing device and an object, but such a sensor requires additional space in the electronic device, which imposes limitations on the design of the image sensing device.
The first sensor layer 200 and the second sensor layer 300 may acquire images of the same type or of different types. In some implementations, the first sensor layer 200 may be used to acquire a color image corresponding to a specific color (e.g., R (red), G (green), or B (blue)), and the second sensor layer 300 may be used to acquire a depth image for measuring a distance between the image sensing device and the target object 1 by a ToF (time of flight) method. In some implementations, the image sensing device 100 acquires a color image and a depth image. In some implementations, the depth image may include an image or optical signal captured by an image sensor for measuring a distance between the image sensing device and the object.
In an implementation, the image sensing device 100 may acquire a color image and an IR (infrared) image simultaneously. In another implementation, the image sensing device 100 may acquire a pair of color images corresponding to different sensitivities. In this case, the image sensing device 100 may not include some components that other image sensing devices may have, such as the light source 10 and the light source driver 420 of the logic layer 400.
The light source 10 emits light toward the target object 1 in response to the modulated light signal MLS from the logic layer 400. Examples of the light source 10 include laser diodes (LDs), light-emitting diodes (LEDs), NIR (near-infrared) lasers, point sources, monochromatic illumination sources (e.g., a white lamp combined with a monochromator), and combinations of these and/or other light sources. An LD or LED emits light in a specific wavelength band (e.g., near-infrared light or visible light). For example, the light source 10 may emit infrared light having a wavelength of 800 nm to 1000 nm. The light generated by the light source 10 may be modulated at a preset frequency. For convenience of description, fig. 1 illustrates only one light source 10, but a plurality of light sources may be arranged around the lens module 20.
The lens module 20 may collect light reflected from the target object 1 and transmit the collected light to the sensor layers 200 and 300. The light reflected from the target object 1 may include infrared light generated by the light source 10 and reflected by the target object 1, and visible light generated by an external light source (e.g., sunlight or illumination) and reflected by the target object 1. Light reflected from a target object 1 (e.g., an object or a scene) is transmitted through the lens module 20. For example, the lens module 20 may include a focusing lens or a cylindrical optical element having a glass or plastic surface. The lens module 20 may include a plurality of lenses aligned with the optical axis.
The first sensor layer 200 may include a plurality of color pixels arranged along rows and columns in a two-dimensional matrix array for capturing a color image. The plurality of color pixels are adjacent image sensing pixels with different color filters arranged to capture color information. For example, the color filters may be arranged in a Bayer filter pattern consisting of 50% green, 25% red, and 25% blue filters. The image sensing pixels may be formed in a semiconductor substrate and configured to convert light transmitted through the lens module 20 and the color filter into an electrical signal corresponding to the intensity of light at the wavelengths passed by that color filter, and to output the electrical signal as an image sensing pixel signal.
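To make the Bayer layout concrete, here is a minimal sketch (Python; illustrative only, not part of the patent) that tiles the 2x2 Bayer cell whose two green, one red, and one blue filters yield the 50% / 25% / 25% ratio described above:

    import numpy as np

    def bayer_pattern(rows, cols):
        # One 2x2 Bayer tile: two green, one red, one blue filter.
        tile = np.array([["G", "R"],
                         ["B", "G"]])
        reps = (rows // 2 + 1, cols // 2 + 1)
        return np.tile(tile, reps)[:rows, :cols]

    print(bayer_pattern(4, 4))
    # [['G' 'R' 'G' 'R']
    #  ['B' 'G' 'B' 'G']
    #  ['G' 'R' 'G' 'R']
    #  ['B' 'G' 'B' 'G']]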
The second sensor layer 300 may include a plurality of depth pixels arranged along rows and columns in a 2D matrix array. The depth pixel may be formed in the semiconductor substrate, and may be configured to convert light transmitted through the lens module 20 into an electrical signal corresponding to the intensity of light at an infrared wavelength and output the electrical signal as a depth pixel signal. In some implementations, the depth pixel signal may be used to measure a distance between the image sensing device and the object.
Each image sensing pixel and each depth pixel may include: a photoelectric conversion element for generating photocharges corresponding to the intensity of incident light; and one or more transistors configured to generate an electrical signal based on the photocharges. For example, each image sensing pixel may have a 3-TR (transistor) structure, a 4-TR structure, or a 5-TR structure. In some implementations, each depth pixel may include a SPAD (single-photon avalanche diode) pixel operating based on the direct ToF method. In some implementations, each depth pixel may be a CAPD (current-assisted photonic demodulator) pixel that may operate based on the indirect ToF method.
The resolution of the image sensing pixels may be equal to or different from the resolution of the depth pixels. In some implementations, the resolution of the depth pixels may be less than the resolution of the image sensing pixels. In some implementations, the first sensor layer 200 and the second sensor layer 300 have the same resolution, and the image sensing pixels are respectively mapped to depth pixels.
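As an illustration of the resolution mapping discussed above, a lower-resolution depth pixel can be associated with a block of image sensing pixels by integer index scaling. The 2x2 ratio below is an assumed example; the patent does not specify a particular ratio:

    def depth_pixel_for(color_row, color_col, ratio=2):
        # Each depth pixel covers a ratio x ratio block of color pixels.
        return color_row // ratio, color_col // ratio

    # Color pixels (0,0), (0,1), (1,0), (1,1) all map to depth pixel (0,0).
    print(depth_pixel_for(1, 1))  # (0, 0)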
The logic layer 400 may include circuitry that may control the light source 10 to transmit light toward the target object 1, activate the pixels of the first and second sensor layers 200 and 300, and generate a color image and a depth image of the target object 1 by processing the image sensing pixel signals and the depth pixel signals corresponding to the light reflected from the target object 1. The color image may be an image indicating the color of the target object 1, and the depth image may be an image indicating the distance from the target object 1.
The logic layer 400 may include a sensor driver 410, a light source driver 420, a timing controller (T/C) 430, and a logic circuit 440.
The sensor driver 410 may activate and/or control the image sensing pixels of the first sensor layer 200 and the depth pixels of the second sensor layer 300 in response to timing signals generated by the timing controller 430. For example, the sensor driver 410 may generate control signals to select and control one or more row lines among a plurality of row lines of each of the first and second sensor layers 200 and 300. Such control signals may include a reset signal for controlling the reset transistor, a transfer signal for controlling the transfer transistor, a selection signal for controlling the selection transistor, and the like. In some implementations, when the second sensor layer 300 includes pixels operating based on the indirect ToF method, the control signal may further include a demodulation control signal having a certain phase difference (e.g., 0 degrees, 90 degrees, 180 degrees, or 270 degrees) from the modulated light signal MLS.
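The phase relationship between the modulated light signal MLS and the demodulation control signals can be sketched as follows (Python; the square-wave model and all parameter values are assumptions for illustration, not taken from the patent):

    import numpy as np

    def demodulation_signals(f_mod, fs, n_samples):
        # Model the demodulation control signals as square waves offset
        # by 0, 90, 180, and 270 degrees from the modulated light signal.
        t = np.arange(n_samples) / fs
        signals = {}
        for deg in (0, 90, 180, 270):
            phase = np.deg2rad(deg)
            signals[deg] = (np.sin(2 * np.pi * f_mod * t - phase) >= 0).astype(int)
        return signals

    # Example: 20 MHz modulation sampled at 1 GHz.
    demod = demodulation_signals(f_mod=20e6, fs=1e9, n_samples=100)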
The light source driver 420 may generate the modulated light signal MLS to control the light source 10 based on a command or control signal generated by the timing controller 430. The modulated optical signal MLS may comprise a signal modulated at a predetermined frequency.
The timing controller 430 may generate timing signals for controlling the operations of the sensor driver 410, the light source driver 420, and the logic circuit 440.
The logic circuit 440 may generate digital pixel data by processing analog pixel signals generated by the first and second sensor layers 200 and 300 based on command or control signals generated by the timing controller 430. In some implementations, the logic circuit 440 may include a CDS (correlated double sampler) configured to perform correlated double sampling on pixel signals output from the first and second sensor layers 200 and 300. The logic circuit 440 may include an ADC (analog-to-digital converter) configured to convert the analog output signal from the CDS into a digital signal. In some implementations, the logic circuit 440 may include a buffer circuit configured to temporarily store pixel data output from the ADC and output the pixel data based on a command or control signal generated by the timing controller 430.
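A minimal sketch of what the CDS stage accomplishes, under an assumed noise model: the reset sample and the signal sample of one read share the same reset (kTC) noise and offset, so subtracting one from the other cancels both:

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def cds_read(signal_level, offset, ktc_sigma):
        # The same reset noise appears in both samples of one read.
        reset_noise = rng.normal(0.0, ktc_sigma)
        sample_reset = offset + reset_noise
        sample_signal = offset + reset_noise + signal_level
        # Their difference leaves only the photo-induced signal.
        return sample_signal - sample_reset

    print(cds_read(signal_level=0.35, offset=1.2, ktc_sigma=0.05))  # 0.35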
In some implementations, the logic circuit 440 may generate a color image based on light captured by the first sensor layer 200 and a depth image based on light captured by the second sensor layer 300. In some implementations, an image signal processor (not shown), provided separately from the logic circuit 440 or outside the image sensing device 100, may generate a 3D image by synthesizing the color image and the depth image, or may determine the distance between the second sensor layer 300 and the target object 1 based on the depth image.
As an example of a method for calculating the distance to the target object 1, the indirect ToF method is discussed below. The light source 10 emits light modulated at a predetermined frequency toward the target object 1. The image sensing device 100 generates a depth image by using the depth pixels to sense the modulated light reflected from the target object 1. The time delay between emission of the modulated light and arrival of the reflected light depends on the distance between the image sensing device 100 and the target object 1. This time delay appears as a phase difference between the signal generated by the image sensing device 100 and the modulated light signal MLS that controls the light source 10. An image signal processor (not shown) may calculate depth information for each depth pixel by computing this phase difference from the depth image output by the image sensing device 100.
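The phase-to-distance conversion can be sketched numerically as follows, assuming the common four-phase demodulation scheme (the patent does not fix a particular scheme). The phase difference phi recovered from four correlation samples maps to distance as d = c * phi / (4 * pi * f_mod), the factor of 4 pi reflecting the round trip of the light:

    import math

    C = 299_792_458.0  # speed of light in m/s

    def itof_distance(q0, q90, q180, q270, f_mod):
        # Phase difference from four phase-correlation samples.
        phi = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
        # Light travels to the object and back, hence the factor 4*pi.
        return C * phi / (4 * math.pi * f_mod)

    # Samples consistent with an object at ~1.5 m under 20 MHz modulation.
    print(itof_distance(q0=0.31, q90=0.95, q180=-0.31, q270=-0.95, f_mod=20e6))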
Fig. 2 is a diagram illustrating an example of a stacked structure of the first sensor layer, the second sensor layer, and the logic layer shown in fig. 1.
Referring to fig. 2, the first sensor layer 200, the second sensor layer 300, and the logic layer 400 of the image sensing device 100 shown in fig. 1 may be stacked on top of each other and aligned with the optical axis OA of the lens module 20.
In some implementations, a bonding layer 500 may be disposed between the first sensor layer 200 and the second sensor layer 300 to bond the first sensor layer 200 and the second sensor layer 300 and to transmit signals to the logic layer 400. In some implementations, the first sensor layer 200, the bonding layer 500, the second sensor layer 300, and the logic layer 400 may be sequentially arranged in a direction away from the lens module 20. Although fig. 2 illustrates the first sensor layer 200, the bonding layer 500, the second sensor layer 300, and the logic layer 400 as having the same cross-sectional area, the disclosed techniques are not limited thereto.
The cross-sectional areas of the first sensor layer 200, the bonding layer 500, the second sensor layer 300, and the logic layer 400, which are stacked on top of each other, may be divided into a central area CR and an edge area ER, as shown in fig. 2.
The central region CR through which the optical axis OA passes may include pixels corresponding to a predetermined number of rows and columns. Fig. 2 illustrates that the central region CR is formed in a rectangular shape, but the disclosed technique is not limited thereto. For example, the central region CR may have a sectional shape corresponding to the shape of the lens module 20.
The edge region ER surrounding the central region CR may include the remaining pixels.
Fig. 3 is a sectional view of the stacked structure shown in fig. 2.
Fig. 3 illustrates, by way of example only, a cross section of a stacked structure of the first sensor layer 200, the bonding layer 500, the second sensor layer 300, and the logic layer 400 shown in fig. 2 with respect to three pixels.
The first sensor layer 200, the bonding layer 500, the second sensor layer 300, and the logic layer 400 are stacked in the order shown in fig. 3.
The first sensor layer 200 may include a first substrate 210, a first photoelectric conversion element 220, a filter 230, and a microlens 240.
The first substrate 210 may include a top surface and a bottom surface facing away from each other. In one example, the first substrate 210 may be a P-type or N-type bulk substrate. In another example, the first substrate 210 may be a substrate formed by epitaxially growing a P-type or N-type epitaxial layer on a P-type bulk substrate. In another example, the first substrate 210 may be a substrate formed by epitaxially growing a P-type or N-type epitaxial layer on an N-type bulk substrate.
The first photoelectric conversion element 220 may be disposed in a region corresponding to each image sensing pixel within the first substrate 210. The first photoelectric conversion element 220 may generate a photo-charge corresponding to the intensity of light in a specific visible light wavelength band. The first photoelectric conversion element 220 may have a large light receiving area to improve efficiency by having a large fill factor. Examples of the first photoelectric conversion element 220 may include a photodiode, a phototransistor, a photogate, a pinned photoelectric conversion element, or a combination thereof.
When the first photoelectric conversion element 220 is implemented as a photodiode, the first photoelectric conversion element 220 may be formed as an N-type doped region through an ion implantation process. In an embodiment, the photodiode may have a structure in which two or more doped regions are stacked. In this case, the lower doped region may be formed by implanting P-type ions and N+ ions, and the upper doped region may be formed by implanting N- ions.
The filter 230 may be formed over the first substrate 210 and may selectively transmit light (e.g., red, green, or blue light) in a specific wavelength band within the visible range. In some implementations, the optical filter 230 may include a color filter without an infrared cut filter. Accordingly, the light that has passed through the filter 230 may include light in the specific visible wavelength band corresponding to the filter 230 as well as light in the infrared band. Infrared light has a longer wavelength than visible light and can therefore penetrate a thicker layer of material. Therefore, even when visible light that has passed through the filter 230 is absorbed by the first photoelectric conversion element 220, infrared light that has passed through the filter 230 can pass through the first photoelectric conversion element 220 and reach the bonding layer 500.
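This depth dependence follows the Beer-Lambert law, I(z) = I0 * exp(-alpha * z). The sketch below uses rough, illustrative absorption depths for silicon (not values from the patent) to show why a photodiode layer a few micrometers thick absorbs most green light yet passes most near-infrared light:

    import math

    # Illustrative absorption depths (1/alpha) in silicon, in micrometers.
    ABSORPTION_DEPTH_UM = {"green_550nm": 1.5, "nir_940nm": 50.0}

    def transmitted_fraction(depth_um, thickness_um):
        # Beer-Lambert: fraction of light remaining after the layer.
        return math.exp(-thickness_um / depth_um)

    for band, depth in ABSORPTION_DEPTH_UM.items():
        frac = transmitted_fraction(depth, thickness_um=3.0)
        print(f"{band}: {frac:.2f} transmitted through 3 um of silicon")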
The microlenses 240 may be formed in a hemispherical shape over the optical filter 230 to improve light-receiving efficiency by increasing light-collecting power. In some implementations, an overcoat layer (not shown) may be formed on the top or bottom of the microlenses 240 to avoid lens glare by preventing diffuse reflection of incident light.
Although not shown in fig. 3, the source and drain electrodes may be formed in an inner region of the first substrate 210 adjacent to the bottom surface of the first substrate 210. The source and the drain may constitute each of a plurality of transistors for generating a pixel signal based on the photocharges accumulated in the first photoelectric conversion element 220. A plurality of transistors may be provided for each pixel or each pixel group (having, for example, four pixels). In an embodiment, each of the source and the drain may include an impurity region doped with P-type or N-type impurities.
A pixel gate (not shown) constituting a transistor with a source and a drain included in the first substrate 210 may be formed in an inner region of the bonding layer 500 adjacent to the bottom surface of the first substrate 210. The pixel gate generates an image sensing pixel signal by operating based on the control signal, so that each image sensing pixel can generate an image sensing pixel signal corresponding to the photo-charges generated by the first photoelectric conversion element 220. For example, the pixel gate may include a reset gate constituting a reset transistor, a transfer gate constituting a transfer transistor, and a selection gate constituting a selection transistor. Each pixel gate may include a gate dielectric layer for electrical isolation from the first substrate 210 and a gate electrode configured to receive a control signal.
The second sensor layer 300 may include a second substrate 310 and a second photoelectric conversion element 320.
In some implementations, the second substrate 310 and the second photoelectric conversion element 320 have the same function as the first substrate 210 and the first photoelectric conversion element 220 of the first sensor layer 200, and are manufactured in the same manner as the first substrate 210 and the first photoelectric conversion element 220.
In some implementations, the second photoelectric conversion element 320 may generate a photo-charge corresponding to the intensity of incident light that reaches the second photoelectric conversion element 320 after passing through the bonding layer 500 without being absorbed (or photoelectrically converted) by the first photoelectric conversion element 220. In some implementations, the first and second photoelectric conversion elements 220 and 320 may vertically overlap each other.
In an embodiment, an isolation layer may be formed between the first photoelectric conversion elements 220 adjacent to each other and/or the second photoelectric conversion elements 320 adjacent to each other. The isolation layer may have a DTI (deep trench isolation) structure. In some implementations, the isolation layer may be formed by etching the substrates on the left and right sides of the photoelectric conversion element 220 or 320 in the vertical direction through a deep trench process to form trenches and filling the trenches with a dielectric material having a different refractive index (e.g., a relatively high refractive index) from the corresponding substrate 210 or 310.
Although not shown in fig. 3, the source and drain electrodes may be formed in an inner region of the second substrate 310 adjacent to the top surface of the second substrate 310. The source and the drain may constitute each of a plurality of transistors for generating a pixel signal based on the photocharge accumulated in the second photoelectric conversion element 320. A plurality of transistors may be provided for each pixel or each pixel group (having, for example, four pixels). In an embodiment, each of the source and the drain may include an impurity region doped with P-type or N-type impurities.
A pixel gate (not shown) constituting a transistor with a source and a drain included in the second substrate 310 may be formed in an inner region of the bonding layer 500 adjacent to the top surface of the second substrate 310. The pixel gate may generate a depth pixel signal by operating based on the control signal, so that each depth pixel may generate a depth pixel signal corresponding to the photo-charge generated by the second photoelectric conversion element 320. For example, the pixel gate may include a reset gate constituting a reset transistor, a transfer gate constituting a transfer transistor, and a selection gate constituting a selection transistor. Each pixel gate may include a gate dielectric layer for electrical isolation from the second substrate 310 and a gate electrode configured to receive a control signal.
The bonding layer 500 may be disposed between the first sensor layer 200 and the second sensor layer 300 to bond the first sensor layer 200 and the second sensor layer 300 to each other. The bonding layer 500 may include an interconnect region 510, a first TSV (through silicon via) pad 520, and a second TSV pad 530.
The interconnection region 510 may include a plurality of interconnection layers (e.g., Ma to Md of fig. 5), and the pixel gate and the metal interconnection may be disposed in the plurality of interconnection layers.
The pixel gates may be configured in the same manner as the pixel gates described above with respect to the first and second sensor layers 200 and 300.
The metal interconnection (e.g., 540 of fig. 5) may transmit a control signal from the first TSV pad 520 or the second TSV pad 530 to the pixel gate, and transmit an image sensing pixel signal or a depth pixel signal generated by an operation of the pixel gate to the first TSV pad 520 or the second TSV pad 530.
In some implementations, no metal or fewer metal interconnects may be provided in a region corresponding to the bottom of the first photoelectric conversion element 220 (or the top of the second photoelectric conversion element 320) so that light that has passed through the first sensor layer 200 can be efficiently transmitted to the second sensor layer 300.
The first TSV pad 520 may be disposed in the uppermost layer (e.g., Md of fig. 5) among the plurality of interconnect layers, may be electrically coupled to the metal interconnects and the first TSV 525, and may transmit electrical signals (e.g., a control signal or an image sensing pixel signal). The first TSV pad 520 may have a larger horizontal area than the first TSV 525. The first TSV pad 520 may be disposed in an edge region where no image sensing pixel is disposed.
The second TSV pad 530 may be disposed at a lowermost layer (e.g., Ma of fig. 5) among the plurality of interconnect layers, and may be electrically coupled to the metal interconnect and the second TSV 535 to carry an electrical signal (e.g., a control signal or a depth pixel signal). The second TSV pad 530 may have a larger horizontal area than the second TSV 535. The second TSV pad 530 may be disposed in an edge region where no pixel is disposed.
As described with reference to fig. 1, the logic layer 400 may perform a series of operations for generating a color image and a depth image. Logic layer 400 may include a third TSV pad 410.
The third TSV pad 410 may be electrically coupled to the first and second TSVs 525 and 535 and to the logic circuits inside the logic layer 400, and may be used to transmit electrical signals (e.g., control signals, image sensing pixel signals, and depth pixel signals). The third TSV pad 410 may have a larger horizontal area than the first TSV 525 and the second TSV 535. The third TSV pad 410 may be disposed such that at least a portion thereof vertically overlaps the first TSV pad 520 corresponding to the first sensor layer 200 and the second TSV pad 530 corresponding to the second sensor layer 300.
The TSV pads 520, 530, and 410 and the metal interconnects of the interconnect region 510 may include silver (Ag), copper (Cu), aluminum (Al), or other conductive materials. The TSV pads 520 and 410 may be electrically coupled through the first TSV 525, and the TSV pads 530 and 410 may be electrically coupled through the second TSV 535.
The first TSV 525 and the second TSV 535 may extend vertically through at least portions of the bonding layer 500, the second sensor layer 300, and the logic layer 400 to electrically connect the respective TSV pads.
In some implementations, each of the first TSV 525 and the second TSV 535 may have a dual structure including an inner plug for electrical coupling and a barrier surrounding the inner plug to electrically isolate it. The inner plug may include a highly conductive material such as Ag, Cu, or Al. The barrier may include at least one of titanium (Ti), titanium nitride (TiN), tantalum (Ta), tantalum nitride (TaN), or other barrier metals.
Fig. 4 is a diagram illustrating an example of light rays propagating through the lens module toward the first sensor layer.
Fig. 4 illustrates a cross section of the image sensing device 100 of fig. 2. The image sensing device 100 may be divided into a central region CR including the optical axis OA, a first edge region ER1 disposed on one side (i.e., the left side) of the central region CR, and a second edge region ER2 disposed on the other side (i.e., the right side) of the central region CR. The first edge region ER1 and the second edge region ER2 may be included in the edge region ER of fig. 2.
The chief ray CR1 incident on the central region CR through the lens module 20 may be perpendicularly incident on the top surface of the first sensor layer 200. That is, the incident angle of the chief ray incident on the central region CR may be 0 degrees or an angle close to 0 degrees.
However, the chief rays incident on the first and second edge regions ER1 and ER2 may be obliquely incident on the top surface of the first sensor layer 200. That is, the incident angle of the chief ray CR2 incident on the first edge region ER1 and the incident angle of the chief ray CR3 incident on the second edge region ER2 may correspond to a predetermined angle between 0 degrees and 90 degrees. The predetermined angle may vary depending on the size of the first sensor layer 200, the curvature of the lens module 20, the distance between the lens module 20 and the first sensor layer 200, and the like.
The incident angles of the chief rays CR1 to CR3 may gradually increase from the optical axis OA toward both ends of the first sensor layer 200.
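Under a simple pinhole/thin-lens model, which is an assumption for illustration rather than the patent's lens design, the chief ray angle (CRA) grows with image height h roughly as atan(h / f):

    import math

    def chief_ray_angle_deg(image_height_mm, focal_length_mm):
        # Chief ray from the aperture center to image height h.
        return math.degrees(math.atan2(image_height_mm, focal_length_mm))

    # CRA is ~0 on the optical axis and grows toward the sensor edges.
    for h in (0.0, 1.0, 2.0, 3.0):
        print(f"h = {h:.1f} mm -> CRA = {chief_ray_angle_deg(h, 4.0):.1f} deg")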
Fig. 5 is a diagram illustrating an example of a stacked structure corresponding to the central region of fig. 4.
In some implementations, the stacked structure STK-CR may include the first sensor layer 200, the second sensor layer 300, and the bonding layer 500 in the central region CR. By way of example, fig. 5 illustrates a cross-section of three adjacent pixels of the first sensor layer 200 and three adjacent pixels of the second sensor layer 300. In some implementations, other pixels within the central region CR may have the same structure as the illustrated pixels. Fig. 5 illustrates first to third pixel groups PX1 to PX3, and each of the pixel groups PX1 to PX3 may include one pixel of the first sensor layer 200 and one pixel of the second sensor layer 300, whose photoelectric conversion elements vertically overlap each other. Assume that the following coincide with one another: the central axis of the first pixel group PX1; the straight line parallel to the optical axis OA that passes through the center of the photoelectric conversion element 220 within the first sensor layer 200 pixel of the first pixel group PX1; the straight line parallel to the optical axis OA that passes through the center of the photoelectric conversion element 320 within the second sensor layer 300 pixel of the first pixel group PX1; and the straight line parallel to the optical axis OA that passes through the center of the digital lens 600. The same assumption applies to the other pixel groups described below. Further, the central axis of a pixel group is defined as the pixel central line.
In some implementations, the internal structures of the first sensor layer 200 and the second sensor layer 300 are the same as those discussed above with reference to fig. 3.
The bonding layer 500 may include a plurality of interconnect layers Ma to Md. The plurality of interconnect layers Ma to Md may be included in the interconnect region 510 described with reference to fig. 3.
The first interconnect layer Ma, the second interconnect layer Mb, the third interconnect layer Mc, and the fourth interconnect layer Md may be sequentially stacked from bottom to top in fig. 5. In the manufacturing process of the bonding layer 500, the plurality of interconnect layers Ma to Md may be stacked in the order of the fourth interconnect layer Md to the first interconnect layer Ma or the first interconnect layer Ma to the fourth interconnect layer Md in fig. 5.
The bonding layer 500 may include a pixel gate, a metal interconnect 540, a dielectric layer 550, and a digital lens 600.
The pixel gate may be disposed in an interconnect layer Md adjacent to a bottom surface of the first substrate 210 or an interconnect layer Ma adjacent to a top surface of the second substrate 310. The metal interconnects 540 may be disposed in the interconnect layers Ma to Md, respectively, and the metal interconnects included in different interconnect layers may be electrically coupled to each other through the dielectric layer 550. Since the pixel gate and the metal interconnection 540 have already been described with reference to fig. 3, overlapping description thereof will be omitted here.
Dielectric layer 550 may surround metal interconnect 540, electrically insulating metal interconnect 540. In some implementations, the dielectric layer 550 may include at least one of silicon oxide, silicon nitride, and silicon oxynitride.
Where the first sensor layer 200 configured to generate a color image is disposed above the second sensor layer 300 configured to generate a depth image, the disclosed techniques may be implemented in some embodiments to provide a lens layer disposed below the first sensor layer 200 and above the second sensor layer 300 to modify the path of light transmitted to the second sensor layer 300. The digital lens 600 may calibrate the optical path of light transmitted through the first sensor layer 200 to increase the amount of light reaching the second sensor layer 300. Here, calibrating the optical path means refracting light such that its angle of incidence on the digital lens 600 is greater than its angle of refraction.
The digital lens 600 may include one or more first slits 610 having a relatively large width and one or more second slits 620 having a relatively small width. Further, the digital lens 600 may include a dielectric layer 550, the dielectric layer 550 being disposed to surround the first slit 610 and the second slit 620 and being located at a portion of the digital lens 600 where the first slit 610 or the second slit 620 is not disposed. The number of first slits 610 and the number of second slits 620 in fig. 5 are merely examples for illustrative purposes, and the disclosed technology is not limited thereto.
In an implementation, the digital lens 600 may be disposed in the second interconnect layer Mb. In another implementation, the digital lens 600 may be disposed in another interconnect layer, such as the third interconnect layer Mc.
The first slit 610 and the second slit 620 may each include a material having a higher refractive index than the dielectric layer 550. The first and second slits 610 and 620 and the dielectric layer 550 in the digital lens 600 may be arranged such that the cross-sections of the slits and the cross-section of the dielectric layer 550 are alternately arranged. Although not shown, the horizontal section of the digital lens 600 may have ring-shaped dielectric layers 550 and second slits 620 alternately arranged around the first slit 610.
In some implementations, the digital lens 600 may be formed as follows. After the dielectric layer 550 of the second interconnect layer Mb is formed, a photomask is disposed to define the regions where the first slit 610 and the second slits 620 will be formed. An etching process may be performed to form empty patterns corresponding to the first slit 610 and the second slits 620. The empty patterns may then be gap-filled with a material having a relatively high refractive index, thereby forming the digital lens 600.
A region of the digital lens 600 in which the first slit 610 having a relatively large width is disposed becomes an optically dense region, and a region in which the second slit 620 having a relatively small width is disposed becomes a less optically dense region. The dielectric layer 550 in the interconnect layers Mc and Md above the digital lens 600 is optically less dense than the region of the digital lens 600 in which the second slit 620 is disposed. That is, the refractive index of the medium increases in the following order: (1) the dielectric layer 550 disposed in the interconnect layers Mc and Md; (2) the region of the digital lens 600 where the second slit 620 is disposed; and (3) the region of the digital lens 600 where the first slit 610 is disposed.
Therefore, when light is incident on the digital lens 600 from the dielectric layer 550 in the interconnect layers Mc and Md, the light propagates from an optically less dense region to an optically denser region, so its angle of refraction is smaller than its angle of incidence. Further, since the region of the digital lens 600 where the first slit 610 is disposed is a denser medium than the region where the second slit 620 is disposed, light passing through the first slit region is refracted at a smaller angle than light passing through the second slit region.
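This behavior is Snell's law, n1 * sin(theta_i) = n2 * sin(theta_r): light entering a denser medium (n2 > n1) bends toward the normal, and the denser the target medium, the smaller the refraction angle. The refractive indices below are placeholders for illustration, not values disclosed in the patent:

    import math

    def refraction_angle_deg(n1, n2, incident_deg):
        # Snell's law: n1 * sin(theta_i) = n2 * sin(theta_r).
        s = (n1 / n2) * math.sin(math.radians(incident_deg))
        return math.degrees(math.asin(s))

    n_dielectric = 1.46  # e.g., silicon oxide
    for label, n_region in (("second-slit region", 1.8),
                            ("first-slit region", 2.4)):
        theta_r = refraction_angle_deg(n_dielectric, n_region, incident_deg=25.0)
        print(f"{label}: 25.0 deg -> {theta_r:.1f} deg")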
A principal ray L1 incident on each of the first to third pixel groups PX1 to PX3 included in the central region CR may enter the top surface of the first substrate 210 perpendicularly and travel along the pixel central line of each of the pixel groups PX1 to PX3. In addition, the principal ray L1 may pass through the first slit 610 of the digital lens 600. That is, the first slit 610 may be disposed at the position where the principal ray L1 reaches the digital lens 600.
Since the incident angle of the principal ray L1 is 0 degrees (or a value close to 0 degrees), the principal ray L1' that has passed through the digital lens 600 can reach the second photoelectric conversion element 320 with a refraction angle of 0 degrees (or a value close to 0 degrees).
Fig. 6 is a diagram illustrating an example of a stacked structure corresponding to the first edge region of fig. 4.
In some implementations, the stacked structure STK-ERa may include the first sensor layer 200, the second sensor layer 300, and the bonding layer 500 in the first edge region ER1. By way of example, fig. 6 illustrates a cross-section of the first to third pixel groups PX1 to PX3, including three adjacent pixels of the first sensor layer 200 and three adjacent pixels of the second sensor layer 300. In some implementations, other pixels within the first edge region ER1 may have the same structure as the illustrated pixels. In some implementations, the structure of the second edge region ER2 and the structure of the first edge region ER1 are symmetric with respect to the optical axis OA, and thus the second edge region ER2 may have the same structure as the first edge region ER1.
In the first edge region ER1, the filter 230 and the microlens 240 may be shifted to the right with respect to the first photoelectric conversion element 220, that is, shifted from the pixel central line toward the optical axis OA. The filter 230 and the microlens 240 may be shifted by different amounts. As shown in fig. 6, the microlens 240 may be shifted more than the filter 230 so that light collection and filtering are performed effectively on the chief ray L2, which is obliquely incident on the top surface of the first substrate 210, for each of the first to third pixel groups PX1 to PX3 in the first edge region ER1.
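The different shift amounts follow from simple geometry. Under an assumed geometric model (all numbers illustrative), an element sitting at height t above the photoelectric conversion element should be offset by about t * tan(theta) for a chief ray arriving at angle theta to still strike the pixel center, so the higher microlens needs a larger shift than the filter:

    import math

    def lateral_shift_um(height_um, cra_deg):
        # Offset toward the optical axis so a chief ray at cra_deg
        # still reaches the pixel center below.
        return height_um * math.tan(math.radians(cra_deg))

    # The microlens sits higher above the photodiode than the filter,
    # so it is shifted more, as in fig. 6 (heights are illustrative).
    print(f"microlens: {lateral_shift_um(1.2, 30.0):.2f} um")
    print(f"filter:    {lateral_shift_um(0.5, 30.0):.2f} um")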
In some implementations, the internal structure of the bonding layer 500 is the same as or similar to that discussed above with reference to fig. 5, and the following description will focus on differences from the bonding layer 500 of fig. 5.
The principal ray L2 incident on each of the first to third pixel groups PX1 to PX3 included in the first edge region ER1 may be obliquely incident with respect to the top surface of the first substrate 210 and may arrive to the left of the pixel central line of each of the pixel groups PX1 to PX3. The first slit 710 of the digital lens 700 may therefore be shifted leftward from the pixel central line of each of the pixel groups PX1 to PX3 and disposed at the position where the principal ray L2 reaches the digital lens 700. That is, the first slit 710 may be displaced from the center of the digital lens 700 in the direction away from the optical axis OA.
In an embodiment, the position of the first slit 710 within the digital lens 700 may vary across the first edge region ER1, with the first slit 710 being displaced further as the distance from the optical axis OA increases.
The first slit 710 of the digital lens 700 may refract the principal ray L2 at a smaller refraction angle than the incident angle of the principal ray L2.
When the principal ray L2 has the first incident angle, the principal ray L2' having passed through the digital lens 700 may reach the second photoelectric conversion element 320 while having a refraction angle smaller than the first incident angle.
Further, light rays other than the principal ray L2 incident on the second slit 720 of the digital lens 700 may also be transmitted to the second photoelectric conversion element 320 while having a refraction angle smaller than the corresponding incident angle.
If the digital lens 700 were not present in the second pixel group PX2, the principal ray L2 incident on the second pixel group PX2 would not be refracted toward the photoelectric conversion element 320 of the second pixel group PX2, but would instead be incident on the photoelectric conversion element 320 of the first pixel group PX1, as indicated by the ray L2″, causing optical crosstalk that degrades the signal-to-noise ratio.
Fig. 7 is a diagram illustrating another example of a stacked structure corresponding to the first edge region of fig. 4.
In some implementations, the stacked structure STK-ERb may include the first sensor layer 200, the second sensor layer 300, and the bonding layer 500 in the first edge region ER1. By way of example, fig. 7 illustrates a cross-section of the first to third pixel groups PX1 to PX3, including three adjacent pixels of the first sensor layer 200 and three adjacent pixels of the second sensor layer 300. In some implementations, other pixels within the first edge region ER1 may have the same structure as the illustrated pixels. In some implementations, the structure of the second edge region ER2 and the structure of the first edge region ER1 are symmetric with respect to the optical axis OA, and thus the second edge region ER2 may have the same structure as the first edge region ER1.
In some implementations, the stacked structure shown in fig. 7 is the same or similar to that discussed above with reference to fig. 6, and the following description will focus on the differences from fig. 6.
The digital lens 800 may have substantially the same internal structure as the digital lens 700 of fig. 6. However, the digital lens 800 may be provided in the third interconnect layer Mc located above the second interconnect layer Mb.
The principal ray L3 incident on each of the first to third pixel groups PX1 to PX3 included in the first edge region ER1 may have a relatively large second incident angle. In this case, because the principal ray L3 deviates significantly from the pixel central line before reaching the second interconnect layer Mb, a digital lens disposed in the second interconnect layer Mb could not refract the ray effectively.
However, the digital lens 800 provided in the third interconnect layer Mc located above the second interconnect layer Mb may effectively perform a refraction operation on the principal ray L3 before the principal ray L3 is significantly deviated from the pixel central line, thereby transmitting the principal ray L3' having passed through the digital lens 800 to the second photoelectric conversion element 320.
If the digital lens 800 were not present in the second pixel group PX2, the principal ray L3 incident on the second pixel group PX2 would not be refracted toward the photoelectric conversion element 320 of the second pixel group PX2, but would instead be incident on the photoelectric conversion element 320 of the first pixel group PX1, as indicated by the ray L3″, causing optical crosstalk that degrades the signal-to-noise ratio.
Fig. 8 is a diagram illustrating another example of a stacked structure corresponding to the first edge region of fig. 4.
In some implementations, the stacked structure STK-ERc may include the first sensor layer 200, the second sensor layer 300, and the bonding layer 500 in the first edge region ER1. By way of example, fig. 8 illustrates a cross-section of the first to third pixel groups PX1 to PX3, including three adjacent pixels of the first sensor layer 200 and three adjacent pixels of the second sensor layer 300. In some implementations, other pixels within the first edge region ER1 may have the same structure as the illustrated pixels. In some implementations, the structure of the second edge region ER2 and the structure of the first edge region ER1 are symmetric with respect to the optical axis OA, and thus the second edge region ER2 may have the same structure as the first edge region ER1.
In some implementations, the stacked structure illustrated in fig. 8 is the same or similar to that discussed above with reference to fig. 6, and the following description will focus on differences from fig. 6.
The digital lenses 900 and 1000 may have substantially the same internal structure as the digital lens 700 of fig. 6. However, the digital lens 900 may be disposed in the second interconnection layer Mb, and the digital lens 1000 may be disposed in the third interconnection layer Mc. That is, a plurality of digital lenses 900 and 1000 may be stacked in each of the pixel groups PX1 to PX 3. Fig. 8 illustrates two digital lenses stacked as an example. However, three or more digital lenses may be stacked.
The principal ray L4 incident on each of the first to third pixel groups PX1 to PX3 included in the first edge region ER1 may have a relatively large third incident angle. In this case, even if the principal ray L4 passes through one digital lens, the principal ray L4 may not be refracted to have a sufficiently small refraction angle. Therefore, the principal ray L4 may not be transmitted to the second photoelectric conversion element 320.
However, the digital lenses 900 and 1000, disposed in the second interconnect layer Mb and the third interconnect layer Mc respectively, may refract the principal ray L4 twice, substantially reducing the refraction angle of the ray L4' that has passed through the digital lenses 900 and 1000. Therefore, the principal ray L4' can be stably transmitted to the second photoelectric conversion element 320.
In an embodiment, in order to maximize the refraction effect of the plurality of digital lenses 900 and 1000, the distance between the first slit 1010 of the digital lens 1000 and the pixel central line may be smaller than the distance between the first slit 910 of the digital lens 900 and the pixel central line to allow the principal ray L4' to more reliably pass through the first slits 910 and 1010 of the digital lenses 900 and 1000.
If the digital lenses 900 and 1000 were absent from the second pixel group PX2, the principal ray L4 incident on the second pixel group PX2 would not be refracted toward the photoelectric conversion element 320 of the second pixel group PX2; instead, like the principal ray L4″, it would strike the photoelectric conversion element 320 of the first pixel group PX1, causing optical crosstalk that degrades the signal-to-noise ratio.
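The crosstalk condition above reduces to a simple landing test: a ray causes crosstalk when its lateral offset, at the depth of the second photoelectric conversion elements, exceeds half the pixel pitch. The pitch, depth, and angles below are made-up values for illustration only.

```python
import math

# Hypothetical geometry (not from the patent document).
pitch_um = 1.0    # pixel pitch of the second sensor layer
depth_um = 1.5    # depth from the digital lenses down to the elements 320

def lands_in_own_pixel(theta_deg: float) -> bool:
    """True if a ray at theta_deg stays within half a pitch of the center line."""
    offset = depth_um * math.tan(math.radians(theta_deg))
    return offset < pitch_um / 2

print(lands_in_own_pixel(15.9))  # True:  L4' after two lenses reaches element 320
print(lands_in_own_pixel(35.0))  # False: unrefracted L4'' crosses into PX1 (crosstalk)
```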
The embodiments of the digital lens described with reference to fig. 6 to 8 may be combined if desired.
For example, the digital lens 700 of fig. 6 may be disposed in the second interconnect layer Mb in a region relatively close to the central region CR within the first edge region ER1. As shown in fig. 7, the digital lens 800 may be disposed in the third interconnect layer Mc in a region relatively far from the central region CR, and a digital lens may be disposed in the fourth interconnect layer Md in the region farthest from the central region CR. Alternatively, as shown in fig. 8, the digital lenses 900 and 1000 may be disposed in the second interconnect layer Mb and the third interconnect layer Mc, respectively.
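Read as a placement policy, this combination keys the digital-lens configuration to a pixel group's distance from the central region CR. The sketch below encodes that policy with hypothetical thresholds; the document only orders the regions qualitatively and names no boundaries.

```python
def edge_lens_placement(r: float) -> str:
    """Interconnect layer for a pixel group at normalized distance r from
    the central region CR (0 = inner edge, 1 = sensor corner).
    Thresholds 0.4 and 0.8 are hypothetical; alternatively, lenses may be
    stacked in Mb and Mc as in fig. 8."""
    if r < 0.4:
        return "digital lens 700 in Mb (fig. 6)"
    if r < 0.8:
        return "digital lens 800 in Mc (fig. 7)"
    return "digital lens in Md (farthest from CR)"

for r in (0.2, 0.6, 0.9):
    print(f"r = {r}: {edge_lens_placement(r)}")
```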
While various embodiments have been described above as specific examples, variations and modifications of those embodiments and other embodiments may be made based on what is disclosed and illustrated in this patent document.
Cross-Reference to Related Applications
This application claims priority to and the benefit of Korean patent application No. 10-2020-0170452, filed on December 8, 2020, which is incorporated herein by reference in its entirety.

Claims (17)

1. An image sensing device, comprising:
a first sensor layer configured to include a plurality of first photoelectric conversion elements to receive light and generate photocharges corresponding to the light;
a second sensor layer disposed below the first sensor layer, the second sensor layer configured to include a plurality of second photoelectric conversion elements vertically overlapping the first photoelectric conversion elements to receive light and generate photocharges corresponding to the light that has passed through the first sensor layer; and
a bonding layer disposed between the first sensor layer and the second sensor layer,
wherein the bonding layer includes a lens layer configured to refract a light ray having passed through the first sensor layer toward the second sensor layer such that an incident angle of the light ray is greater than a refraction angle of the light ray.
2. The image sensing device of claim 1, wherein the lens layer comprises a digital lens comprising:
a first slit having a first width;
a second slit having a second width narrower than the first width; and
a dielectric layer configured to surround the first slit and the second slit.
3. The image sensing device of claim 2, wherein each of the first and second slits has a higher refractive index than the dielectric layer.
4. The image sensing device according to claim 2, wherein the first slit is provided at a position where a principal ray incident on the digital lens reaches.
5. The image sensing device according to claim 2, wherein the first sensor layer, the second sensor layer, and the bonding layer are divided into a central region through which an optical axis of a lens module for transmitting light to the first photoelectric conversion element passes and an edge region surrounding the central region, and
the digital lens located in the central region is disposed in a first interconnect layer of the bonding layer.
6. The image sensing device according to claim 5, wherein the first slit in the edge region is shifted from a center of the digital lens in a direction opposite to the optical axis.
7. The image sensing device according to claim 5, wherein the digital lens in the edge region is provided in a second interconnect layer of the bonding layer, and
wherein the second interconnect layer is disposed above the first interconnect layer.
8. The image sensing device according to claim 5, wherein the digital lens in the edge region is provided in each of the first interconnect layer of the bonding layer and a second interconnect layer provided above the first interconnect layer.
9. The image sensing device according to claim 5, wherein a microlens provided above the first photoelectric conversion element in the edge region is shifted from a center of the digital lens toward the optical axis.
10. The image sensing device according to claim 1, further comprising: a logic layer including a circuit that processes an electrical signal corresponding to the photo-charges converted from the light by the first and second photoelectric conversion elements.
11. The image sensing device of claim 10, wherein the logic layer is disposed below the second sensor layer, and
the bonding layer and the logic layer are electrically coupled through a through-silicon via (TSV) formed through the second sensor layer.
12. An image sensing device, comprising:
a plurality of first photoelectric conversion elements configured to respond to incident light, each first photoelectric conversion element configured to convert the incident light into a first electrical signal;
a plurality of second photoelectric conversion elements disposed under the plurality of first photoelectric conversion elements so as to vertically overlap the plurality of first photoelectric conversion elements, each of the second photoelectric conversion elements being configured to convert incident light passing through the first sensor layer into a second electric signal; and
a lens layer disposed below the plurality of first photoelectric conversion elements and disposed above the plurality of second photoelectric conversion elements,
wherein the lens layer includes a first slit having a first width, a second slit having a second width narrower than the first width, and a dielectric layer configured to surround the first slit and the second slit, and
the first slit is provided at a position where a principal ray incident on the lens layer reaches.
13. The image sensing device of claim 12, wherein each of the first and second slits has a higher refractive index than the dielectric layer.
14. The image sensing device according to claim 12, wherein the plurality of first photoelectric conversion elements are formed in the first sensor layer, the plurality of second photoelectric conversion elements are formed in a second sensor layer, and the lens layer is formed in a bonding layer configured to bond the first sensor layer to the second sensor layer, and wherein the first sensor layer, the second sensor layer, and the bonding layer are divided into a central region through which an optical axis of a lens module for transmitting light to the first photoelectric conversion elements passes and an edge region surrounding the central region, and
the lens layer located in the central region is disposed in a first interconnect layer of the bonding layer.
15. The image sensing device according to claim 14, wherein the first slit in the edge region is displaced from a center of the lens layer in a direction opposite to the optical axis.
16. The image sensing device according to claim 14, wherein the lens layer in the edge region is provided in a second interconnection layer of the bonding layer, and
wherein the second interconnect layer is disposed above the first interconnect layer.
17. The image sensing device according to claim 14, wherein the lens layer in the edge region is provided in each of the first interconnect layer of the bonding layer and a second interconnect layer provided over the first interconnect layer.
CN202111180049.6A 2020-12-08 2021-10-11 Image sensing device Pending CN114615447A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0170452 2020-12-08
KR1020200170452A KR20220081034A (en) 2020-12-08 2020-12-08 Image Sensing Device

Publications (1)

Publication Number Publication Date
CN114615447A (en) 2022-06-10

Family

ID=81849460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111180049.6A Pending CN114615447A (en) 2020-12-08 2021-10-11 Image sensing device

Country Status (3)

Country Link
US (1) US20220181371A1 (en)
KR (1) KR20220081034A (en)
CN (1) CN114615447A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050280882A1 (en) * 2004-06-15 2005-12-22 Shoichi Nakano Method of reading photographic film and photographic film reading apparatus for implementing the method
US20120273683A1 (en) * 2011-04-26 2012-11-01 Canon Kabushiki Kaisha Solid-state image sensor and image sensing apparatus
JP2013219180A (en) * 2012-04-09 2013-10-24 Nikon Corp Image sensor and image pick-up device
US9184198B1 (en) * 2013-02-20 2015-11-10 Google Inc. Stacked image sensor with cascaded optical edge pass filters
US20170278887A1 (en) * 2016-03-24 2017-09-28 SK Hynix Inc. Image sensor with inner light-condensing scheme
CN109309103A (en) * 2017-07-28 2019-02-05 爱思开海力士有限公司 Imaging sensor with phase difference detection pixel
CN109786403A (en) * 2017-11-13 2019-05-21 爱思开海力士有限公司 Imaging sensor

Also Published As

Publication number Publication date
US20220181371A1 (en) 2022-06-09
KR20220081034A (en) 2022-06-15

Similar Documents

Publication Publication Date Title
KR102437162B1 (en) Image sensor
CN107706200B (en) Image sensor
US9520423B2 (en) Image sensors including non-aligned grid patterns
KR102306670B1 (en) image sensor and manufacturing method thereof
US9876044B2 (en) Image sensor and method of manufacturing the same
JP5766663B2 (en) Backside image sensor pixel with silicon microlens and metal reflector
US10998365B2 (en) Image sensor
US8716769B2 (en) Image sensors including color adjustment path
US11031428B2 (en) Image sensor
KR20120001895A (en) An image sensor and package comprising the same
KR102653045B1 (en) Solid-state imaging devices, and electronic devices
CN113540133A (en) Image sensor with a plurality of pixels
CN113225541B (en) Image sensor
KR20210112055A (en) Pixel, and Image Sensor including the same
US20220293659A1 (en) Image sensing device
US20220181371A1 (en) Image sensing device
US20210280623A1 (en) Phase detection pixels with stacked microlenses
CN112635500A (en) Image sensing device
US20230343800A1 (en) Image sensor and electronic system including the same
US20220139989A1 (en) Image sensor
JP7457989B2 (en) Photodetector, solid-state imaging device, and method for manufacturing photodetector
US20240014241A1 (en) Image sensor
KR100872710B1 (en) Image sensor and method for manufacturing thereof
KR20230095687A (en) Image sensor
KR20240010396A (en) Image sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination