WO2023138355A1 - Image sensor and electronic device - Google Patents

Image sensor and electronic device

Info

Publication number
WO2023138355A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
pattern
splitting unit
light splitting
image sensor
Prior art date
Application number
PCT/CN2023/070113
Other languages
English (en)
French (fr)
Inventor
孟培雯
郭睿
姚湛史
谢振威
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023138355A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 - Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N 25/70 - SSIS architectures; Circuits associated therewith
    • H04N 25/76 - Addressed sensors, e.g. MOS or CMOS sensors

Definitions

  • the present application relates to the field of optical technology, in particular to an image sensor and electronic equipment.
  • the camera has become an increasingly important function of electronic devices.
  • the realization of the camera function mainly depends on the image sensor in the electronic device, such as a complementary metal oxide semiconductor (complementary metal oxide semiconductor, CMOS) sensor.
  • An image sensor based on a metasurface structure includes a cover plate, a metasurface layer, and a detection layer.
  • the incident light passes through the cover plate and reaches the metasurface layer.
  • the metasurface layer refracts the incident light so that the light of different colors is refracted to the detection area of the corresponding color on the detection layer, and the detection layer converts the received light into an electrical signal.
  • the metasurface layer includes a plurality of light splitting units, and different light splitting units receive incident light at different angles of incidence, and different light splitting units rely on different patterns to achieve light splitting processing for incident light at different incident angles.
  • the patterns of each light-splitting unit are designed separately using algorithms, resulting in a complex design process for large-area metasurface layers and low design efficiency.
  • the present application provides an image sensor and an electronic device. Based on the designed pattern transformation of the first light splitting unit, the pattern of the second light splitting unit is obtained. In this way, it is no longer necessary to design and manufacture the pattern of each light splitting unit. While ensuring the light splitting efficiency, the design process of the large-area metasurface structure is simplified, and the design and processing efficiency of the metasurface layer is improved.
  • the present application provides an image sensor, the image sensor comprising: a cover plate, a first metasurface layer and a detection layer, the first metasurface layer is located between the cover plate and the detection layer.
  • the cover plate plays a protective role, and the incident light enters through the cover plate.
  • the first metasurface layer is used to receive incident light, and the incident light includes incident light rays at multiple incident angles.
  • the metasurface layer includes a plurality of light-splitting units, and different light-splitting units correspondingly receive incident rays from different incident angles.
  • the plurality of light-splitting units include a first light-splitting unit and a second light-splitting unit.
  • the patterns of the first light-splitting unit and the second light-splitting unit are different, and each light-splitting unit splits the correspondingly received incident light into light of various colors.
  • the pattern of the second light splitting unit is obtained by transforming the pattern of the first light splitting unit.
  • the detection layer is used for receiving the multiple colors of light separated by the multiple light splitting units, and converting the received multiple colors of light into electrical signals.
  • the first spectroscopic unit and the second spectroscopic unit are two units used to split the incident light. Since different incident angles need to be split, the patterns of the two are different.
  • the pattern of the first light-splitting unit can be designed by using an algorithm, and the pattern of the second light-splitting unit is obtained based on the pattern conversion of the first light-splitting unit. Therefore, when designing a large-area metasurface structure layer, it is no longer necessary to carry out targeted optimization design on the pattern of each light-splitting unit, which simplifies the design process of the metasurface layer and improves the production efficiency of the metasurface layer.
  • the foregoing transformation may be translation transformation, that is, the pattern of the second light splitting unit is obtained through translation transformation of the pattern of the first light splitting unit.
  • the direction and distance of the translation transformation are determined according to the relative positional relationship between the second light splitting unit and the first light splitting unit.
  • the angle sensitivity of the first light splitting unit and the second light splitting unit to the received incident light is different. Therefore, if the second light splitting unit directly adopts the same pattern as the first light splitting unit, the angular sensitivity of the incident light is ignored, causing degradation of the light splitting effect on the detection layer.
  • the pattern of the second light splitting unit formed by translating the pattern of the first light splitting unit avoids this degradation, so that the translated pattern can achieve efficient light splitting for the incident light received by the second light splitting unit.
  • the foregoing transformation may also be transformation forms such as rotation and flipping, which are not limited in the present application.
  • the second light splitting unit is located on the first side of the first light splitting unit, the pattern of the second light splitting unit is obtained by translational transformation of the pattern of the first light splitting unit to the second side, and the second side and the first side are opposite sides of the first light splitting unit.
  • if the second light-splitting unit is located on the left side of the first light-splitting unit, the pattern of the second light-splitting unit is obtained by shifting the pattern of the first light-splitting unit to the right; if the second light-splitting unit is located on the right side, its pattern is obtained by shifting the pattern of the first light-splitting unit to the left;
  • if the second light splitting unit is located on the lower side of the first light splitting unit, the pattern of the second light splitting unit is obtained by shifting the pattern of the first light splitting unit upward;
  • if the second light-splitting unit is located on the upper left side of the first light-splitting unit, the pattern of the second light-splitting unit is obtained by shifting the pattern of the first light-splitting unit to the lower right side;
  • if the second light splitting unit is located on the upper right side of the first light splitting unit, the pattern of the second light splitting unit is obtained by translating the pattern of the first light splitting unit to the lower left side.
  • the distance of the translation transformation is positively correlated with the distance between the second light splitting unit and the first light splitting unit.
  • the patterns of any two adjacent light splitting units are different.
  • the plurality of light splitting units are arranged in an array, one row or one column of light splitting units is divided into multiple groups of light splitting units, and one group among the multiple groups includes a plurality of consecutively arranged light splitting units whose patterns are the same.
  • the multiple light splitting units include multiple second light splitting units, and the multiple second light splitting units are arranged around the first light splitting unit.
  • the incident angle of the incident light corresponding to the first light splitting unit is within a range centered on 0°, such as -2° to 2°.
  • the first light splitting unit corresponds to incident light with an incident angle of 0°.
  • the first metasurface layer can not only divide the color of the incident light, but also divide the polarization of the incident light, that is, the first metasurface layer is also used to divide the corresponding incident light into multiple polarized light.
  • the detection layer is used to receive multiple lights output by the first metasurface layer, and convert the received multiple lights into electrical signals, and at least one of color and polarization of any two lights in the multiple lights is different.
  • the functions of polarization division and color division may be implemented by different metasurface layers.
  • the image sensor further includes a second metasurface layer for separating received light into multiple polarized lights.
  • the second metasurface layer is located between the cover plate and the first metasurface layer, and the detection layer is used to receive multiple lights output by the first metasurface layer, and convert the received multiple lights into electrical signals; or, the first metasurface layer is located between the cover plate and the second metasurface layer, and the detection layer is used to receive various lights output by the second metasurface layer, and convert the received multiple lights into electrical signals.
  • At least one of color and polarization of any two lights in the plurality of lights is different.
  • the pattern of the second light splitting unit is directly obtained through the translation transformation of the pattern of the first light splitting unit.
  • the pattern of the second light splitting unit is obtained by changing a part of the pattern in the shifted transformed pattern after the translation transformation of the pattern of the first light splitting unit.
  • the pattern of the first light splitting unit includes a plurality of pixels arranged in an array, and the ratio of the changed pixels in the pattern after translation to the total number of pixels in the pattern of the first light splitting unit does not exceed a threshold.
  • the value range of the threshold is 20%-30%.
  • the form in which the pixels are changed includes at least one of the following: changing the shape of a pixel point, changing a first pixel point into a second pixel point, and changing a second pixel point into a first pixel point.
  • the first pixel point and the second pixel point are two kinds of pixel points corresponding to materials with different refractive indices among the plurality of pixel points.
  • the translation transformation distance is an integer multiple of the pixel size.
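  • As an illustration of the threshold check above, the sketch below compares a translated pattern with a locally modified version of it and verifies that the fraction of changed pixel points stays within a chosen threshold (20% here); the helper names are hypothetical, and the pattern is assumed to be a 2D array of 0/1 values, with 1 marking the high-refractive-index material:

```python
import numpy as np

def changed_fraction(translated, modified):
    """Fraction of pixel points that differ between the translated pattern
    and the modified pattern (both 2D arrays of 0/1 values)."""
    assert translated.shape == modified.shape
    return np.count_nonzero(translated != modified) / translated.size

def within_threshold(translated, modified, threshold=0.20):
    """True if no more than `threshold` of the pixels were changed
    (the application gives a threshold range of 20%-30%)."""
    return changed_fraction(translated, modified) <= threshold

# toy 20x20 pattern: a translated version and a version with a few pixels flipped
rng = np.random.default_rng(0)
translated = rng.integers(0, 2, size=(20, 20))
modified = translated.copy()
flip = rng.choice(translated.size, size=30, replace=False)   # change 30 of 400 pixels (7.5%)
modified.flat[flip] ^= 1

print(changed_fraction(translated, modified))   # 0.075
print(within_threshold(translated, modified))   # True
```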
  • the image sensor further includes a filter located between the first metasurface layer and the detection layer.
  • the filter is used to filter the light of multiple colors separated by the light splitting unit, filtering out, in each of the multiple colors of light, the stray light of colors other than the intended color, so as to reduce crosstalk.
  • the stray light here refers to light of a certain color mixed into the light of a certain color separated by the light splitting unit, for example, green and blue light mixed in the red light separated by the light splitting unit.
  • the metasurface layer divides the incident light into three colors of red, green and blue, and the filter filters out the stray light of other colors except red in the red light, the stray light of other colors except green in the green light, and the stray light of other colors except blue in the blue light.
  • the present application provides an image sensing method, the image sensing method comprising:
  • receiving incident light through a first metasurface layer, where the incident light includes incident light rays at multiple incident angles;
  • splitting, by the plurality of light splitting units of the first metasurface layer, the correspondingly received incident light into light of multiple colors, and converting the multiple colors of light into electrical signals.
  • the method also includes: splitting, by the first metasurface layer, the correspondingly received incident light into multiple polarized lights.
  • the described conversion of the multiple colors of light into electrical signals then includes: converting the multiple lights output by the first metasurface layer into electrical signals.
  • any two lights in the multiple lights are different in at least one of color and polarization.
  • the method also includes: splitting, by a second metasurface layer, received light into multiple polarized lights, in either of the following two manners:
  • the second metasurface layer receives the light output by the first metasurface layer, and converting the light of the multiple colors into electrical signals includes: converting the multiple lights output by the second metasurface layer into electrical signals, and at least one of the colors and polarizations of any two lights in the multiple lights is different;
  • or, the second metasurface layer receives the incident light, the first metasurface layer receives the light output by the second metasurface layer, and converting the light of the multiple colors into electrical signals includes: converting the multiple lights output by the first metasurface layer into electrical signals, and at least one of the colors and polarizations of any two lights in the multiple lights is different.
  • the method also includes:
  • the multiple colors of light are respectively filtered, to filter out stray light of other colors in each of the multiple colors of light.
  • the present application provides an electronic device, the electronic device includes a processor and the image sensor according to any one of the first aspect, and the processor is configured to process an electrical signal output by the image sensor.
  • FIG. 1 is a schematic structural diagram of an image sensor provided in an embodiment of the present application.
  • Fig. 2 is a schematic structural diagram of an exemplary metasurface layer provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural view of a row of light splitting units provided by an embodiment of the present application.
  • Fig. 4 is a schematic structural diagram of a row of light splitting units provided by an embodiment of the present application.
  • Fig. 5 is a schematic diagram of the position change of the imaging when the angle of the incident light changes according to the embodiment of the present application;
  • Figs. 6 to 8 are schematic diagrams of pattern transformation of the light splitting unit provided by the embodiment of the present application.
  • Fig. 9 is a schematic diagram of the pattern of the large-area metasurface layer provided by the embodiment of the present application.
  • Fig. 10 is a schematic diagram of the relationship between the metasurface layer and the detection layer provided by the embodiment of the present application.
  • Fig. 11 is a schematic diagram of the pattern of the light splitting unit provided by the embodiment of the present application.
  • Fig. 12 is a schematic diagram of a simulated spectrum obtained by vertically incident on the first light splitting unit provided by the embodiment of the present application;
  • Figures 13 to 15 are schematic diagrams of light intensity distributions of red, green, and blue colors in the detection sub-region provided by the embodiment of the present application;
  • Figures 16 to 21 are schematic diagrams of simulated spectra obtained by incident light incident on the first light splitting unit at different angles provided by the embodiment of the present application;
  • Figures 22 to 27 are schematic diagrams of simulated spectra obtained by incident light incident on the second light splitting unit at different angles provided by the embodiment of the present application;
  • Figure 28 is a schematic diagram of the relationship between the metasurface layer and the detection layer provided by the embodiment of the present application.
  • Fig. 29 is a simulated spectrum diagram of horizontally polarized light provided by the embodiment of the present application.
  • Figure 30 is a simulated spectrum diagram of vertically polarized light provided by the embodiment of the present application.
  • Figures 31 to 40 are schematic diagrams of the distribution of specific colors and polarized light separated by the light splitting unit provided by the embodiment of the present application;
  • Fig. 41 is a schematic structural diagram of an image sensor provided by an embodiment of the present application.
  • Fig. 42 is a schematic diagram of a pattern after translation transformation of the first pattern provided by the embodiment of the present application.
  • Fig. 43 is a schematic diagram of the pattern obtained by changing the shape of the pixel point in the pattern in Fig. 42 provided by the embodiment of the present application;
  • Figures 44 to 46 are schematic diagrams of patterns obtained by changing the number and arrangement of pixels in the pattern in Figure 42 provided by the embodiment of the present application;
  • Fig. 47 is a schematic structural diagram of an image sensor provided by an embodiment of the present application.
  • Figure 48 and Figure 49 are schematic diagrams of the assembly tolerance of the metasurface layer provided by the embodiment of the present application.
  • Fig. 50 is a schematic structural diagram of a camera module provided by an embodiment of the present application.
  • Fig. 51 is a flowchart of an image sensing method provided by an embodiment of the present application.
  • FIG. 1 is a schematic structural diagram of an image sensor provided by an embodiment of the present application.
  • the image sensor includes: a cover plate 10 , a first metasurface layer 11 and a detection layer 12 , and the first metasurface layer 11 is located between the cover plate 10 and the detection layer 12 .
  • the cover plate 10 plays a protective role, and the incident light enters through the cover plate.
  • the first metasurface layer 11 is used to receive incident light, and the incident light includes incident light rays at multiple incident angles.
  • Fig. 2 is a schematic structural diagram of a metasurface layer provided by an embodiment of the present application.
  • the first metasurface layer 11 includes a plurality of light splitting units 110, and different light splitting units 110 correspond to receive incident light from different incident angles.
  • the plurality of light splitting units 110 include a first light splitting unit 111 and a second light splitting unit 112, and the pattern of the second light splitting unit 112 is obtained through transformation of the pattern of the first light splitting unit 111.
  • the incident angles of the incident light corresponding to the first light splitting unit 111 and the second light splitting unit 112 are different.
  • the detection layer 12 is used to receive the light of various colors separated by the multiple light splitting units 110, and convert the received light of various colors into electrical signals.
  • the first metasurface layer 11 is used for color separation of light to realize routing of light of different colors, and may also be called a light routing device.
  • FIG. 1 is an exploded view of the components of the image sensor, and the actual distance between the components is not limited in this application; for example, the cover plate 10 and the first metasurface layer 11 may be attached to each other, and there may be a certain gap between the first metasurface layer 11 and the detection layer 12.
  • the first spectroscopic unit and the second spectroscopic unit are two units used to split the incident light, and the patterns of the two are different due to the need to split the light at different incident angles.
  • the pattern of the first spectroscopic unit can be designed using an algorithm, and the pattern of the second spectroscopic unit is obtained based on the pattern transformation of the first spectroscopic unit. Therefore, when designing a large-area metasurface structure layer, it is no longer necessary to design the pattern of each spectroscopic unit one by one, which simplifies the design process of the metasurface layer and improves the design and processing efficiency of the metasurface layer.
  • the cover 10 is a transparent cover, such as a glass cover, a resin cover, and the like.
  • the detection layer 12 is a CMOS detection layer, including a plurality of detection regions (or called imaging regions) corresponding to the plurality of spectroscopic units 110, each detection region includes a plurality of detection sub-regions, and the plurality of detection sub-regions are respectively used to receive light of various colors separated by each corresponding spectroscopic unit, and convert the received multiple lights into electrical signals.
  • the foregoing transformation may be translation transformation, that is, the pattern of the second light splitting unit is obtained through translation transformation of the pattern of the first light splitting unit.
  • the direction and distance of translation transformation are determined according to the positional relationship between the second light splitting unit and the first light splitting unit and the incident angle.
  • the translation transformation refers to taking the outer boundary of the original pattern as the picture frame, moving the pattern of the first light splitting unit, and cutting and supplementing the part protruding outside the picture frame into the vacated position in the picture frame after moving.
  • the pattern of the first light-splitting unit 111 is moved to the left by one column.
  • the columns of A and C reach the left boundary, and the column of B protrudes out of the picture frame.
  • the position of B is added to the right side of the column of D, and the pattern of the second light-splitting unit 112 on the right side of the first light-splitting unit 111 is obtained.
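  • The cut-and-supplement operation described above is equivalent to a cyclic (wrap-around) shift of the mosaic pattern. A minimal sketch in Python, assuming the pattern is stored as a 2D array; the function name and the 4-column toy pattern are illustrative only:

```python
import numpy as np

def translate_pattern(pattern, rows=0, cols=0):
    """Cyclically shift the mosaic pattern of a light splitting unit.

    A negative `cols` moves the pattern to the left; the columns that would
    protrude beyond the picture frame wrap around and are spliced back into
    the vacated positions on the right, which is the cut-and-supplement
    operation described above. Passing both `rows` and `cols` covers second
    light splitting units located in a different row and column from the
    first one.
    """
    return np.roll(pattern, shift=(rows, cols), axis=(0, 1))

# toy pattern with 4 labelled columns (0, 1, 2, 3 stand in for the columns of the figure)
first_pattern = np.tile(np.arange(4), (4, 1))
second_pattern = translate_pattern(first_pattern, cols=-1)   # move left by one column
print(first_pattern[0])    # [0 1 2 3]
print(second_pattern[0])   # [1 2 3 0] -> the leftmost column reappears on the right
```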
  • the foregoing transformation may also be transformation forms such as rotation and flipping, which are not limited in the present application.
  • the second light splitting unit is located on the first side of the first light splitting unit, the pattern of the second light splitting unit is obtained by translational transformation of the pattern of the first light splitting unit to the second side, and the second side and the first side are opposite sides of the first light splitting unit.
  • FIG. 3 is a schematic structural diagram of a row of light splitting units provided by an embodiment of the present application.
  • a row of light splitting units 110 includes: a first light splitting unit 111 located in the middle and second light splitting units 112 located on both sides of the first light splitting unit 111 .
  • the first light splitting unit 111 has a first pattern
  • the second light splitting unit 112 has a second pattern
  • the second pattern refers to a pattern transformed from the first pattern.
  • the multiple second patterns may be the same pattern or include multiple different patterns.
  • the second pattern on the left side of the first pattern is obtained by shifting the first pattern to the right, and the second pattern on the right side of the first pattern is obtained by shifting the first pattern to the left.
  • the protruding part of the pattern after the translation is spliced to the vacated position after the translation.
  • the second light-splitting unit 112 on the right side of the first light-splitting unit 111 is obtained by moving the pattern of the first light-splitting unit 111 to the left. After the pattern of the first light-splitting unit 111 is moved to the left by one column, the columns A and C are located on the far left, the column B protrudes out of the picture frame, and the column B is then spliced into the vacated position on the right side.
  • FIG. 4 is a schematic structural diagram of a row of light splitting units provided by an embodiment of the present application.
  • a column of light splitting units 110 includes: a first light splitting unit 111 located in the middle and second light splitting units 112 located on both sides of the first light splitting unit 111 .
  • the second pattern on the upper side of the first pattern is obtained by shifting the first pattern downward, and the second pattern on the lower side of the first pattern is obtained by shifting the first pattern upward.
  • the first metasurface layer 11 includes only one first light splitting unit 111, and the rest of the light splitting units are all second light splitting units 112, that is, they are all transformed from the first light splitting unit 111.
  • the first metasurface layer 11 includes a plurality of first light splitting units 111, and the rest of the light splitting units are second light splitting units 112. Usually, the second light splitting unit 112 is transformed from the adjacent first light splitting unit 111.
  • the incident angle of the incident light corresponding to the first light splitting unit 111 is within a range centered on 0°, such as -2° to 2°.
  • the first light splitting unit 111 corresponds to an incident light with an incident angle of 0°, and a plurality of second light splitting units 112 are arranged around the first light splitting unit 111 .
  • for a second light splitting unit 112 in a different row and column from the first light splitting unit 111, a first translation transformation obtains the pattern of the light splitting unit in the same row or column as the first light splitting unit 111, and the pattern obtained from the first translation transformation is then translated again to obtain the pattern of that second light splitting unit 112.
  • the pattern of the spectroscopic unit in a different row and column from the first spectroscopic unit 111 can also be obtained by a translation transformation along the oblique direction, and the effect is the same as the aforementioned two translation transformations.
  • the pattern of the spectroscopic unit located on the upper left side of the first spectroscopic unit 111 can be obtained by translating the pattern of the first spectroscopic unit 111 to the lower right side.
  • the situation in which there are a plurality of first light splitting units 111 can include the following two situations: the first one is to independently design a plurality of first light splitting units 111 during design; the second one is to initially design only the first light splitting unit 111 corresponding to the incident light with an incident angle of 0°, and to treat some of the patterns transformed from it as further first light splitting units 111, from which the translation transformation forms the patterns of the subsequent second light splitting units 112.
  • the first light splitting unit 111 and a second light splitting unit 112 transformed from it are usually within a certain range of each other: the distance between the two is not too large, and the difference between the incident angles of the light received by the two is not too large, so as to ensure that the shifted pattern of the first light splitting unit 111 can fulfil the light splitting function of the second light splitting unit 112.
  • the difference between the incident angles of the first light splitting unit 111 and the incident light received by the second light splitting unit 112 transformed by the first light splitting unit 111 is within 10°.
  • the pattern of the light splitting unit 110 includes a plurality of pixel points arranged in an array, and these pixel points are formed of two different materials to obtain a mosaic pattern as shown in FIG. 3 and FIG. 4 .
  • the first pattern of the first spectroscopic unit 111 in the metasurface layer is shown in the dotted line box in the middle.
  • the mosaic pattern is translated in the x direction to adjust the arrangement of internal pixels, and the second pattern of the second light splitting unit 112 corresponding to the angles of ±1° and ±2° in the x direction is obtained.
  • the relative positional relationship between the mosaic pattern and the incident light angle here refers to the positional relationship of the patterns corresponding to the incident light at different incident angles in the x direction.
  • for example, if the pattern corresponding to an incident angle of 1° is located on the right side of the pattern corresponding to 0°, then the pattern corresponding to 1° is obtained by moving the pattern corresponding to 0° to the left.
  • the mosaic pattern can be adjusted in the y direction by shifting the internal pixel arrangement to obtain the second pattern of the second light splitting unit 112 corresponding to angles such as ±1° and ±2° in the y direction.
  • the two-dimensional supersurface layer structure can be designed by using the above method, and the design method is simpler, and there is no need to design pixel by pixel, which simplifies the process of design and processing.
  • the relative positional relationship between the mosaic pattern and the incident light angle here refers to the positional relationship in the y direction of the patterns corresponding to the incident light at different incident angles.
  • for example, if the pattern corresponding to an incident angle of 1° is located on the upper side of the pattern corresponding to 0°, then the pattern corresponding to 1° is obtained by moving the pattern corresponding to 0° down.
  • the translation distance is positively correlated with the distance between the first light splitting unit 111 and the second light splitting unit 112 .
  • the distance s by which the pattern of the first light-splitting unit 111 is translated to obtain the pattern of the second light-splitting unit 112 can be calculated according to the following formula (1):
  • s = β · z · tan(θ)    (1)
  • where θ is the incident angle of the incident light corresponding to the second light splitting unit, z is the imaging distance, and β is the imaging coefficient including conditions such as the refractive index; here β ∈ [0, 10] and z ∈ [max(x, y)/5, 5·max(x, y)] are limited, where (x, y) are the dimensions of the pixel points in the x and y directions respectively.
  • Figure 6 shows that if the incident angle of the incident light is changed from 0° (vertical) incidence to +1° incidence along the x direction, the part in the dotted line on the left side of the pattern corresponding to 0° is moved to the dotted line part on the right side of the original pattern, that is, the pattern corresponding to 0° is moved to the left, and the part beyond the boundary after the movement is filled to the vacated position on the right side. If the incident angle of the incident light changes from 0° (vertical) incidence to -1° incidence along the x direction, then move the part in the solid line on the right side of the pattern corresponding to 0° to the left solid line part of the original pattern.
  • Figure 7 shows that the incident angle of the incident light along the y direction changes from vertical incidence to +1°, -1° incidence, and the corresponding pattern changes in the y direction.
  • the change method is similar to that in FIG. 6 and will not be repeated here.
  • a pattern of spectroscopic units applicable to any incident angle can be constructed.
  • the angle and moving distance of the pixels here are only used for illustration, the actual moving distance is determined by formula (1) combined with the size of the pixel in the light splitting unit, the refractive index of the light splitting unit, and the size of the incident angle.
  • the translation distance is usually an integer multiple of the pixel size.
  • the distance calculated according to the formula (1) is not an integer multiple of the pixel size, it can be approximated as an integer multiple of the pixel size.
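  • As a rough numerical illustration of this rounding step, the sketch below evaluates formula (1) in the reconstructed form s = β·z·tan(θ) and approximates the result as an integer number of pixel rows or columns; the function name and all parameter values are assumptions for illustration, not values from the application:

```python
import math

def translation_in_pixels(theta_deg, z, beta, pixel_size):
    """Translation distance for the pattern of a second light splitting unit.

    theta_deg  : incident angle of the incident light for that unit, in degrees
    z          : imaging distance (same length unit as pixel_size)
    beta       : imaging coefficient accounting for e.g. the refractive index
    pixel_size : size of one pixel point of the mosaic pattern

    Returns the raw distance s from formula (1) and its approximation as an
    integer multiple of the pixel size, as described in the text.
    """
    s = beta * z * math.tan(math.radians(theta_deg))
    n_pixels = round(s / pixel_size)
    return s, n_pixels

# illustrative values only: 80 nm pixels, 4 um imaging distance, beta = 1.2
s, n = translation_in_pixels(theta_deg=1.0, z=4000.0, beta=1.2, pixel_size=80.0)
print(f"s = {s:.1f} nm -> shift by {n} pixel column(s)")   # s = 83.8 nm -> shift by 1 pixel column(s)
```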
  • when the distance between the second light splitting unit 112 and the first light splitting unit 111 is 1, the distance translated when the pattern of the first light splitting unit 111 is translated is 1 row or 1 column of pixels; when the distance between the second light splitting unit 112 and the first light splitting unit 111 is 2, the distance translated when the pattern of the first light splitting unit 111 is translated is 2 rows or 2 columns of pixels, and so on.
  • here, an example is given in which the number of rows or columns of shifted pixels is equal to the distance between the first light splitting unit 111 and the second light splitting unit 112.
  • the two may not be equal.
  • the number of rows or columns of pixels to be shifted is 2 times the distance between the first light splitting unit 111 and the second light splitting unit 112, or the two are not in a multiple relationship.
  • for example, for one second light splitting unit 112 the translation distance for the pattern of the first light splitting unit 111 is 3 rows or 3 columns of pixels, while for another, more distant one, the translation distance is 5 rows or 5 columns of pixels, and so on.
  • since each light splitting unit 110 is further translated relative to its adjacent light splitting units 110, in a row or column of light splitting units 110 the patterns of any two adjacent light splitting units 110 are different.
  • for example, for one second light-splitting unit 112 the distance translated when the pattern of the first light-splitting unit 111 is translated is 1 row or 1 column of pixels; for the next one, the translation distance is 2 rows or 2 columns of pixel points; for another, the translation distance of the pattern of the first light-splitting unit 111 may be 2 rows or 1 column of pixels, and so on.
  • a plurality of light-splitting units are arranged in an array, one row or column of light-splitting units is divided into multiple groups, each group includes a plurality of consecutively arranged light-splitting units, and the patterns of the consecutively arranged light-splitting units within a group are the same.
  • the multiple light splitting units in the aforementioned metasurface layer are arranged in an array as an example. In other embodiments, the light splitting units may not be arranged in an array, which is not limited in this application.
  • the first metasurface layer 11 is a micro-nano structure formed of two materials with different refractive indices.
  • Micro-nano structures refer to sub-wavelength-scale micro-nano planar structures with special electromagnetic properties, which can be fabricated using micro-nano fabrication techniques and are easy to mass produce.
  • the first metasurface layer 11 is a two-dimensional structure with a compact structure and can be combined with the current CMOS process. Therefore, a compact and large-area metasurface layer matching the CMOS (such as field of view (FOV), size, etc.) is designed, so that it can be easily integrated into the module of a camera or mobile phone.
  • the luminous flux can be improved, thereby improving the imaging quality.
  • the black part is a material with a high refractive index, such as titanium oxide
  • the white part is a material with a low refractive index, such as air.
  • the black part can also be called a micro-nano unit element
  • the white part can also be called a substrate, and the distribution of the two forms the aforementioned mosaic pattern.
  • the materials here are not limited to the above two.
  • the material of the black part and the material of the white part can be selected from the following types: titanium oxide, silicon nitride, silicon oxide, silicon, metal, etc., but the refractive index of the material of the black part must be guaranteed to be higher than that of the material of the white part.
  • FIG. 9 is a schematic diagram of a pattern of a large-area metasurface layer provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of the designed metasurface layer of a mosaic pattern of 5*5.
  • the corresponding incident light is irradiated in the range of -5° to 5° in the x direction and -5° to 5° in the y direction.
  • the 5*5 mosaic pattern corresponds to 5*5 light splitting units, and the accuracy of the incident light corresponding to each light splitting unit is 1°. Of course, the accuracy of 1° is only an example; it may be smaller or larger.
  • the following describes, with reference to Fig. 9, how the pattern of the metasurface layer is designed:
  • the pattern of the light splitting unit provided in the present application is distributed in a mosaic pattern.
  • the pattern of the first light splitting unit 111 is optimized by using the reverse design algorithm, which is the pattern in the middle of FIG. 9 .
  • the pattern of the light splitting unit 110 corresponding to any other incident angle is then obtained by transforming this pattern, giving the patterns surrounding the pattern in the middle of FIG. 9.
  • the patterns corresponding to different angles are spliced in the two-dimensional direction according to the positional relationship, and the pattern of the metasurface layer shown in FIG. 9 can be obtained.
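  • To make the splicing step concrete, the following sketch assembles a small metasurface-layer pattern in the way Fig. 9 describes: one centre pattern is designed, each surrounding unit's pattern is a cyclic shift of it proportional to that unit's offset from the centre, and the shifted patterns are spliced together in two dimensions. The 5×5 grid, the one-pixel shift per unit of offset and the random stand-in centre pattern are illustrative assumptions only:

```python
import numpy as np

def build_metasurface(center_pattern, grid=5, pixels_per_step=1):
    """Tile a grid x grid metasurface layer from one designed centre pattern.

    The unit at offset (di, dj) from the centre gets the centre pattern
    cyclically shifted by (-di, -dj) * pixels_per_step pixels, mirroring the
    rule that a unit to the right of the centre uses a left-shifted pattern
    and a unit below it uses an up-shifted pattern.
    """
    n = center_pattern.shape[0]
    layer = np.zeros((grid * n, grid * n), dtype=center_pattern.dtype)
    half = grid // 2
    for i in range(grid):
        for j in range(grid):
            di, dj = i - half, j - half            # offset from the centre unit
            shifted = np.roll(center_pattern,
                              shift=(-di * pixels_per_step, -dj * pixels_per_step),
                              axis=(0, 1))
            layer[i * n:(i + 1) * n, j * n:(j + 1) * n] = shifted
    return layer

rng = np.random.default_rng(1)
center = rng.integers(0, 2, size=(20, 20))   # stand-in for the optimised centre pattern
layer = build_metasurface(center)            # 100 x 100 pixel mosaic, 5 x 5 units
print(layer.shape)                           # (100, 100)
```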
  • the metasurface layer provided by the present application can achieve a large-area color separation function, and various colors in the incident light are separated on the detection layer and refocused in the corresponding area.
  • the metasurface layer provided by the embodiment of the present application can realize the color separation of light at any incident angle, thereby improving the utilization efficiency of light.
  • the following analyzes the spectrum and light intensity distribution of the metasurface layer designed in the embodiments of the present application to illustrate the spectroscopic effect of the metasurface layer:
  • the size of the pattern of the spectroscopic unit is 1.6 ⁇ m ⁇ 1.6 ⁇ m, the thickness is 300 nm, and consists of 400 (20 ⁇ 20) pixels, each pixel can be titanium dioxide (TiO 2 ) or air.
  • the hollow arrow in the figure indicates the incident direction of the incident light with an incident angle of 0°, and the wavelength of the incident light is in the range of 400 to 700 nm, that is, visible light.
  • a CMOS detection area with four detection sub-areas G, B, R, and G is set up.
  • the four detection sub-areas are respectively used to detect green (500-600 nm), blue (400-500 nm), red (600-700 nm), and green (500-600 nm) light; the corresponding transmittances T R , (T G1 + T G2 ) and T B are averaged to obtain the light-splitting efficiency of the light-splitting unit.
  • the four detection sub-regions G, B, R, and G of the CMOS detection area exhibit 45° diagonal symmetry, so the design of the light splitting unit can also adopt a 45° diagonal symmetric structure, such as shown in Figure 11, with the diagonal line a as the axis, and the patterns on both sides are the same. In this way, only half of the patterns need to be designed to obtain the other half, which can greatly reduce the number of solutions in the reverse design optimization process and speed up the optimization process.
  • the reverse design optimization process refers to the process of using the particle swarm algorithm, simulated annealing algorithm, etc. to determine the pattern of the first light splitting unit, and using the designed pattern to split light, optimizing the designed pattern according to the actual effect of light splitting, and finally obtaining the qualified pattern of the first light splitting unit.
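  • To illustrate how the 45° diagonal symmetry halves the search space of the reverse design, the sketch below symmetrises a candidate pattern about its diagonal before it would be evaluated, so an optimiser (particle swarm, simulated annealing, etc.) only has to propose roughly half of the pixels; the axis choice, the 0/1 encoding and the helper names are assumptions for illustration:

```python
import numpy as np

def symmetrize(candidate):
    """Mirror the upper triangle (including the diagonal) onto the lower one,
    so the pattern is identical on both sides of the diagonal axis."""
    return np.triu(candidate) + np.triu(candidate, k=1).T

def free_parameters(n):
    """Number of independent pixels an optimiser has to choose for an n x n
    diagonally symmetric pattern (roughly half of n * n)."""
    return n * (n + 1) // 2

rng = np.random.default_rng(2)
candidate = rng.integers(0, 2, size=(20, 20))
pattern = symmetrize(candidate)
assert np.array_equal(pattern, pattern.T)        # symmetric about the diagonal
print(free_parameters(20), "of", 20 * 20)        # 210 of 400 independent pixels
```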
  • Fig. 12 is a simulated spectrogram obtained by the incident light provided by the embodiment of the present application perpendicularly incident on the first spectroscopic unit shown in Fig. 11.
  • the abscissa is the wavelength
  • the unit is nm
  • the ordinate is the transmittance.
  • for the R curve, the average transmittance T R in the red band (600-700 nm) is about 49.9% (≈0.499); for the G curve, the average transmittance T G in the green band (500-600 nm) is about 42.0% (where T G1 ≈ 19.5% and T G2 ≈ 22.5%); for the B curve, the average transmittance T B in the blue band (400-500 nm) is about 50.6%. Calculated from T R , T G , and T B , the light splitting efficiency of the light splitting unit is 47.5% when the incident light is vertically incident.
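  • A small sketch of the efficiency bookkeeping described above, using the band-average transmittances reported for vertical incidence; the helper name is hypothetical, while the averaging rule (the mean of T R, T G1 + T G2 and T B) and the numerical values come from the text:

```python
def splitting_efficiency(t_r, t_g1, t_g2, t_b):
    """Light-splitting efficiency as the average of the band transmittances
    reaching the R, G (both G sub-areas) and B detection sub-areas."""
    return (t_r + (t_g1 + t_g2) + t_b) / 3.0

# values reported for vertical incidence on the first light splitting unit
eff = splitting_efficiency(t_r=0.499, t_g1=0.195, t_g2=0.225, t_b=0.506)
print(f"{eff:.1%}")   # about 47.5%
```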
  • Figure 13 to Figure 15 are the light intensity distribution diagrams obtained by the incident light provided by the embodiment of the present application perpendicularly incident on the first light splitting unit shown in Figure 11, and the light intensity distribution diagrams shown in Figure 13 to Figure 15 respectively show the light intensity distribution of the three colors of red, green and blue in the entire detection area.
  • most of the red light band is imaged in the R detection sub-region in the lower left corner after passing through the spectroscopic unit, and the light spots are relatively concentrated.
  • the green light band is mainly imaged in the G area in the upper left corner and the G area in the lower right corner after passing through the spectroscopic unit, and the light spots are relatively scattered, showing an obvious diagonal symmetrical distribution.
  • the abscissa and ordinate represent the size of the detection area, and the color depth in the area represents the detected light intensity of the corresponding color.
  • the light-splitting efficiency of the second light splitting unit corresponding to the incident light rays at each incident angle is 47.0%, 47.5%, 45.2%, 45.5%, and 39.4%.
  • the spectroscopic unit 110 of the first metasurface layer 11 can divide the incident light into three colors of light: red (red, R), green (green, G) and blue (blue, B).
  • the first metasurface layer 11 can also split light according to red-yellow-blue (RYB), red-green-blue-emerald (RGBE), or cyan-yellow-green-magenta (CYGM).
  • the first metasurface layer 11 in addition to separating the color of light, can also divide light into different polarizations, that is, the first metasurface layer 11 is also used to divide the corresponding incident light into multiple polarizations of light.
  • the detection layer 12 is used to receive multiple lights output by the first metasurface layer 11 and convert the received multiple lights into electrical signals, and at least one of the colors and polarizations of any two lights in the multiple lights is different.
  • the metasurface layer of this application extracts all polarization information, so that more polarization information is obtained, which in turn makes the final imaging more efficient.
  • the metasurface layer of the present application can not only separate and converge light colors, but also separate polarization states, thereby providing more imaging information and broadening the application scenarios of imaging systems.
  • the light-splitting unit 110 of the first metasurface layer 11 divides the incident light into three colors of R, G, and B. At the same time, the light-splitting unit 110 divides the incident light into horizontally polarized light and vertically polarized light.
  • the actual light-splitting unit 110 divides the incident light into six beams of light, namely: red horizontally polarized light, green horizontally polarized light, blue horizontally polarized light, red vertically polarized light, green vertically polarized light, and blue vertically polarized light.
  • in addition to dividing the polarization according to the two orthogonal polarizations of horizontal polarization and vertical polarization, the polarization can also be divided according to four basic polarization states (0°, 45°, 90°, 135°), or according to linear polarization and circular polarization.
  • the color is divided according to CYGM, and the polarization is divided according to the orthogonal polarization mode.
  • the light splitting unit 110 divides the incident light into 8 beams of light, which are: C-color horizontally polarized light, Y-color horizontally polarized light, G-color horizontally polarized light, M-color horizontally polarized light, C-color vertically polarized light, Y-color vertically polarized light, G-color vertically polarized light, and M-color vertically polarized light.
  • the pattern of light-splitting units in the metasurface layer has asymmetric randomness, which can cause different color separation responses to light with different polarizations.
  • a spectroscopic unit with a polarization splitting function can be obtained, as shown in Figure 28, taking the spectroscopic unit with a size of 3 ⁇ m ⁇ 3 ⁇ m and a thickness of 300nm, consisting of 900 (30 ⁇ 30) pixels as an example, each pixel can be titanium dioxide (TiO2) or air.
  • the incident light is vertically incident on the light-splitting unit from the normal direction (0°) of the light-splitting unit, and the incident light is visible light with a wavelength range of 400 to 700 nm. A CMOS detection area matching the 3 μm light-splitting unit is set up with 6 detection sub-areas, namely R, R, G, G, B and B, for the red (600-700 nm), green (500-600 nm) and blue (400-500 nm) bands; the corresponding transmittances are denoted T R1 , T R2 , T G1 , T G2 , T B1 and T B2 , where T R1 , T G1 and T B1 are obtained under horizontally polarized light and T R2 , T G2 and T B2 under vertically polarized light.
  • the simulated spectra of the spectroscopic unit under horizontal and vertical polarized light are shown in Fig. 29 and Fig. 30 respectively.
  • the average transmittance T R1 of curve R in the 600-700nm band, the average transmittance T G1 of curve G in the 500-600nm band, and the average transmittance T B1 of curve B in the 400-500nm band are 23.1%, 24.8%, and 23.4%, respectively.
  • from these, the light splitting efficiency of the light splitting unit under horizontally polarized light is 23.8%.
  • the average transmittance T R2 of curve R in the 600-700nm band, the average transmittance T G2 of curve G in the 500-600nm band, and the average transmittance T B2 of curve B in the 400-500nm band are 23.1%, 25.1%, and 25.1%, respectively, and the light-splitting efficiency of the light-splitting unit under vertically polarized light can be obtained as 24.4%.
  • the efficiency of the polarizing color splitter in the related art is about 16.7%, which shows that the light splitting efficiency of the light splitting unit designed in the embodiment of the present application is not only improved, but also has better uniformity in terms of color separation and polarization separation.
  • when the light splitting unit provided in this application implements color separation and polarization separation at the same time, for the distribution of the separated light of the various color and polarization combinations, reference can be made to the arrangement of detection sub-regions in the detection layer 12 in FIG. 28.
  • the distribution of the light separated by the spectroscopic unit in the present application is not limited thereto, and the arrangement of the detection sub-regions in the corresponding detection layer 12 is not limited thereto either.
  • for example, compared with the manner shown in FIG. 31, the positions of the horizontally polarized light and the vertically polarized light can also be exchanged.
  • the distribution of separated light can be referred to the two implementations in Fig. 33 and Fig. 34 .
  • the distribution of the separated light can refer to the implementation methods in Fig. 35 and Fig. 36 .
  • Fig. 37 to Fig. 39 give some examples of the arrangement of the detection sub-regions, of course, these are only some examples, and the arrangement of the detection sub-regions under any combination of the above-mentioned color separation and polarization separation can also be in other forms.
  • the detection sub-regions can be arranged in a regular arrangement as shown in the preceding figures, or in an irregular arrangement, such as shown in FIG. 40 , which is not limited in this embodiment of the present application.
  • the shape of the detection sub-region is not limited, and may be a regular shape such as a rectangle or a hexagon, or other regular or irregular shapes.
  • in the foregoing, color separation and polarization separation are both implemented by the same metasurface layer; in another implementation manner, color separation and polarization separation can also be implemented by two metasurface layers, where the aforementioned first metasurface layer 11 is used to realize color separation.
  • FIG. 41 is a schematic structural diagram of an image sensor provided by an embodiment of the present application.
  • the image sensor further includes a second metasurface layer 13 , and the first metasurface layer 11 is located between the second metasurface layer 13 and the cover plate 10 .
  • the first metasurface layer 11 is used to transform the corresponding incident light into light of various colors.
  • the second metasurface layer 13 is used to divide the light of each color separated by the first metasurface layer 11 into multiple polarizations.
  • the light-splitting units of the second metasurface layer 13 are not designed in the same way as those of the first metasurface layer 11; the pattern of each light-splitting unit in the second metasurface layer 13 needs to be designed and optimized separately, and is obtained by using reverse design algorithm optimization.
  • the detection layer 12 is used to receive multiple lights output by the second metasurface layer 13, and convert the received multiple lights into electrical signals, and at least one of the colors and polarizations of any two lights in the multiple lights is different.
  • first metasurface layer 11 and the second metasurface layer 13 can also be interchanged, that is, the second metasurface layer 13 is located between the first metasurface layer 11 and the cover plate 10, at this time, the incident light first passes through the second metasurface layer 13 for polarization, and then enters the first metasurface layer 11 for color division.
  • the detection layer 12 is used to receive various lights output by the first metasurface layer 11, and convert the received various lights into electrical signals.
  • the second pattern is obtained directly through translation transformation of the first pattern, without further processing.
  • the second pattern is obtained by changing part of the pattern after the translation transformation of the first pattern.
  • the pixels in the pattern after translation transformation are changed, and the ratio of the changed pixels in the pattern after translation transformation to the total number of pixels in the pattern of the first light-splitting unit does not exceed a threshold.
  • the value range of the threshold is 20%-30%.
  • the second pattern is obtained by changing no more than 20% of the pixels in the first pattern after translation transformation.
  • the form in which the pixels are changed includes at least one of the following: changing the shape of a pixel point, changing a first pixel point into a second pixel point, and changing a second pixel point into a first pixel point.
  • the first pixel point and the second pixel point are two kinds of pixel points corresponding to materials with different refractive indices among the plurality of pixel points.
  • the refractive index of the material corresponding to the first pixel is higher than that of the material corresponding to the second pixel, that is, the first pixel is a black pixel in the pattern, and the second pixel is a white pixel in the pattern.
  • the shape of the pixel point is a rectangle.
  • the shape of the pixel point may also be a circle, a hexagon or other regular or irregular figures.
  • when the pixel points are circular, hexagonal or another non-rectangular shape, gaps exist between adjacent pixel points, and these gaps are usually filled with air or a material with the same low refractive index as the white part.
  • Fig. 42 is a schematic diagram of a pattern after translation transformation of the first pattern provided by the embodiment of the present application.
  • FIG. 43 is a schematic diagram of a pattern obtained by changing the shape of pixels in the pattern in FIG. 42 provided by the embodiment of the present application. Referring to Fig. 42 and Fig. 43, the shape of some pixels changes from rectangle to circle, but the number of such pixels is small, less than 20% of the total pixels in the first pattern.
  • FIG. 44 is a schematic diagram of the pattern obtained by changing the refractive index of pixel points in the pattern in FIG. 42 provided by the embodiment of the present application. Referring to Figure 42 and Figure 44, some of the first pixels are changed into second pixels, so that part of the first pixels appear to be missing, that is, the number of first pixels is reduced, see the position corresponding to the dotted box in Figure 44.
  • FIG. 45 is a schematic diagram of the pattern obtained by changing the refractive index of pixel points in the pattern in FIG. 42 provided by the embodiment of the present application. Referring to Figure 42 and Figure 45, some of the second pixels are changed into first pixels, so that additional first pixels appear, that is, the number of first pixels is increased, see the position corresponding to the dotted box in Figure 45.
  • FIG. 46 is a schematic diagram of a pattern obtained by changing the refractive index of pixels in the pattern in FIG. 42 provided by the embodiment of the present application. Referring to Figure 42 and Figure 46, some of the first pixels are changed into second pixels and, at the same time, some of the second pixels are changed into first pixels, so that the first pixels and the second pixels are rearranged, see the position corresponding to the dashed box in Figure 46.
  • FIG. 47 is a schematic structural diagram of an image sensor provided by an embodiment of the present application.
  • the image sensor can also include a spacer 14, the spacer 14 is located between the first metasurface layer 11 and the detection layer 12, and the spacer 14 is used to limit the distance between the first metasurface layer 11 and the detection layer 12, thereby ensuring that the light separated by the first metasurface layer 11 can be imaged in the detection area of the detection layer 12.
  • the spacer 14 of the present application may be provided with a filler, such as a transparent material, or may not be provided with a filler.
  • the image sensor may further include a filter 15, which is located between the spacer 14 and the detection layer 12.
  • the filter 15 is used to filter the light of various colors separated by the light splitting unit, and to filter out other stray light other than the light of a specific color, that is to say, to filter out the stray light of other colors in each color of light among the multiple colors of light, so as to reduce crosstalk and further improve the color separation performance of the device. For example, if the multiple colors of light separated by the first metasurface layer 11 include a beam of red light, when the filter 15 filters the red light, it will filter out components other than red, reducing the interference of other colors, that is, reducing crosstalk.
  • the optical filter 15 includes a plurality of sub-pixels, and the distribution of the sub-pixels of the optical filter 15 corresponds to the arrangement of the detection sub-regions in the detection layer.
  • for example, when the first metasurface layer 11 divides the incident light into three colors and two polarizations, each detection area of the detection layer is divided into six detection sub-areas, which are respectively: R-horizontal, G-horizontal, B-horizontal, R-vertical, G-vertical, and B-vertical.
  • the red sub-pixels of the filter 15 are correspondingly located above the R-horizontal and R-vertical detection sub-regions.
  • Each sub-pixel filters out other colors of light, for example, a red sub-pixel filters out other colors of light and only passes red light. Due to the two-dimensional property of the supersurface layer, although the image sensor has both the supersurface layer and the optical filter, the volume of the image sensor is small and miniaturization can be realized.
  • a metasurface layer with a large-area two-dimensional mosaic pattern is designed by utilizing the relationship between the incident angle of the incident light and the position of the light splitting unit under the paraxial approximation condition, which solves the problem of the angle sensitivity of the metasurface layer in the traditional design, and at the same time simplifies the design method and improves the design efficiency.
  • the mosaic pattern designed by using the relationship between the incident angle and the position of the light splitting unit has higher diffraction efficiency, which improves the performance of the image sensor.
  • the metasurface layer can realize color separation and polarization separation at the same time, realizing the separation, reconstruction and utilization of color and polarization information simultaneously, which improves the utilization efficiency of light, increases the imaging information, and thereby improves the imaging quality.
  • since the metasurface layer is a micro-nano structure, it is small and compact, is compatible with CMOS technology, can be directly integrated on a CMOS chip, and can be easily integrated into any optical system.
  • the metasurface structure can be realized based on the mature micro-nano preparation process, the preparation difficulty is low, and mass production is easy to achieve.
  • the structure has a certain assembly tolerance, which relaxes the requirements on the fabrication process during processing.
  • the image sensor provided in the embodiment of the present application can not only be applied in the visible light band to realize imaging, but also can be used in the infrared band, ultraviolet band, and even terahertz, microwave, radio and other bands to realize light splitting or beam splitting of waves in different bands.
  • An embodiment of the present application also provides an electronic device, which includes the image sensor shown in any one of FIGS. 1 to 49 .
  • the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a camera, a video camera, and the like.
  • the above-mentioned image sensor is applied in the camera module of the above-mentioned electronic device.
  • FIG. 50 is a schematic structural diagram of a camera module provided by an embodiment of the present application.
  • the camera module includes: a lens 1, a reflector 2, a pentaprism 3, a viewfinder 4 and an image sensor 5.
  • the incident light enters the viewfinder 4 through the lens 1, reflector 2, and pentaprism 3, and then enters the human eye.
  • when the human eye determines that it is a shooting scene, the shutter is pressed, the reflector 2 is lifted quickly, the incident light shines directly on the image sensor 5 on the right, and the image sensor 5 converts the optical signal into an electrical signal.
  • the electronic device also includes a processor for receiving the electrical signal output by the image sensor 5 and processing the electrical signal. For example, the electrical signal is recombined to generate an image and saved to the memory card of the electronic device.
  • the structure of the above-mentioned camera module is only an example, and the present application does not limit the structure of the camera module, as long as it includes the above-mentioned image sensor.
  • the image sensor proposed in this application can not only be used in consumer-grade cameras and mobile phone terminals, but can also be integrated into industrial-grade cameras, imaging systems, and display systems to realize applications in fields such as environmental monitoring and agricultural monitoring.
  • Fig. 51 is a flowchart of an image sensing method provided by an embodiment of the present application. This method is implemented by the aforementioned image sensor. Referring to Figure 51, the image sensing method includes:
  • S51: Receive incident light, where the incident light includes incident light rays at multiple incident angles.
  • S52: Split the incident light rays at different incident angles into light of multiple colors through two different patterns in the first metasurface layer, where one of the two different patterns is obtained by transforming the other of the two different patterns.
  • S53: Convert the light of multiple colors into electrical signals.
  • In the image sensing method provided by the present application, two different patterns in the first metasurface layer are used to split light at different incident angles, where one of the two different patterns is obtained by transforming the other of the two different patterns. Therefore, during the fabrication of the metasurface layer, it is no longer necessary to design and manufacture the pattern of each light splitting unit individually, which simplifies the fabrication process of the metasurface layer and improves the fabrication efficiency of the metasurface layer.
  • the method also includes:
  • the incident light rays at different incident angles are further split into light of multiple polarizations through the two different patterns in the first metasurface layer; converting the light of multiple colors into electrical signals includes: converting the multiple lights output by the first metasurface layer into electrical signals, where at least one of the color and the polarization of any two of the multiple lights is different.
  • the method also includes:
  • the received light is split into light of multiple polarizations through a second metasurface layer; the second metasurface layer receives the light output by the first metasurface layer, and converting the light of multiple colors into electrical signals includes: converting the multiple lights output by the second metasurface layer into electrical signals, where at least one of the color and the polarization of any two of the multiple lights is different;
  • or, the second metasurface layer receives the incident light, the first metasurface layer receives the light output by the second metasurface layer, and converting the light of multiple colors into electrical signals includes: converting the multiple lights output by the first metasurface layer into electrical signals, where at least one of the color and the polarization of any two of the multiple lights is different.
  • when two metasurface layers are used, color separation can be performed first and then polarization separation, or polarization separation can be performed first and then color separation.
  • the method also includes:
  • before the light of multiple colors is converted into electrical signals, the light of multiple colors is filtered separately to filter out stray light of other colors within each color of light among the multiple colors of light, thereby reducing crosstalk and further improving the color separation performance.
  • for example, if the multiple colors of light separated by the first metasurface layer include a beam of red light, then when the red light is filtered, components other than red are filtered out, which reduces the interference of other colors, that is, reduces crosstalk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

公开了一种图像传感器和电子设备。图像传感器包括: 盖板、第一超表面层和探测层,第一超表面层位于盖板和探测层之间。其中,第一超表面层用于接收入射光,入射光包括多个入射角度的入射光线; 第一超表面层包括多个分光单元,不同分光单元对应接收来自不同入射角度的入射光线,多个分光单元包括第一分光单元以及第二分光单元,第一分光单元和第二分光单元的图案不同,第一分光单元和第二分光单元分别通过不同的图案将对应的入射光线分为多种颜色的光,第二分光单元的图案是通过第一分光单元的图案变换得到的; 探测层用于接收多个分光单元分出的多种颜色的光,将接收到的多种光转为电信号。

Description

图像传感器和电子设备
本申请要求于2022年1月21日提交的申请号202210074665.1、申请名称为“图像传感器和电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及光学技术领域,特别涉及一种图像传感器和电子设备。
背景技术
在手机等电子设备中,拍照功能成为越来越重要的功能。拍照功能的实现主要依赖于电子设备中的图像传感器,例如互补金属氧化物半导体(complementary metal oxide semiconductor,CMOS)传感器。
相关技术中提出了一种基于超表面结构的图像传感器,包括盖板、超表面层和探测层,入射光线经过盖板到达超表面层,超表面层对入射光线进行折射,使得不同颜色的光折射到探测层上对应颜色的探测区域,探测层将接收到的光转换为电信号。
超表面层包括多个分光单元,不同的分光单元会接收到不同入射角度的入射光线,不同的分光单元依靠不同的图案实现对不同入射角度的入射光线的分光处理。为了保证各个分光单元的分光效果,在设计超表面层时,各个分光单元的图案都是利用算法分别设计得到的,造成大面积超表面层设计过程复杂,设计效率低。
发明内容
本申请提供了一种图像传感器和电子设备,基于设计出的第一分光单元的图案变换得到第二分光单元的图案,这样就不再需要对每个分光单元的图案进行针对性设计和制作,在保证分光效率的同时,简化了大面积超表面结构的设计过程,提高了超表面层的设计与加工效率。
第一方面,本申请提供了一种图像传感器,所述图像传感器包括:盖板、第一超表面层和探测层,所述第一超表面层位于所述盖板和所述探测层之间。
其中,盖板起保护作用,且入射光从盖板进入。所述第一超表面层用于接收入射光,所述入射光包括多个入射角度的入射光线。超表面层包括多个分光单元,不同分光单元对应接收来自不同入射角度的入射光线,所述多个分光单元包括第一分光单元以及第二分光单元,所述第一分光单元和所述第二分光单元的图案不同,所述第一分光单元和所述第二分光单元分别通过不同的图案将对应的入射光线分为多种颜色的光,也即,第一分光单元通过具有的图案将对应的入射光线分为多种颜色的光,第二分光单元通过具有的图案将对应的入射光线分为多种颜色的光。所述第二分光单元的图案是通过所述第一分光单元的图案变换得到的。探测层用于接收所述多个分光单元分出的多种颜色的光,将接收到的所述多种光转为电信号。
在本申请提供的图像传感器的超表面层中,第一分光单元和第二分光单元是两个用来对 入射光线进行分光的单元,由于需要对不同入射角度进行分光,二者的图案是不同的。在本申请提供的图像传感器中,第一分光单元的图案可以利用算法设计得到,第二分光单元的图案是基于第一分光单元的图案变换得到的,因而在设计大面积超表面结构层时,不再需要对每个分光单元的图案进行针对性优化设计,简化了超表面层的设计过程,提高了超表面层的制作效率。
在本申请的一些可能的实现方式中,上述变换可以是平移变换,也即所述第二分光单元的图案是通过所述第一分光单元的图案平移变换得到的。其中,所述平移变换的方向及距离根据所述第二分光单元和所述第一分光单元的相对位置关系确定。
这里,第一分光单元和第二分光单元对接收到的入射光线角度敏感性存在差异,因此,若第二分光单元直接采用和第一分光单元相同的图案,则忽略入射光的角度敏感性,造成探测层上的分光效果发生劣化。而通过对第一分光单元的图案进行平移形成的第二分光单元的图案,能够避免分光效果的劣化,使得采用平移形成的第二分光单元的图案能够对第二分光单元接收到的入射光线实现高效率分光。
在本申请的另一些可能的实现方式中,上述变换还可以是旋转、翻转等变换形式,本申请对此不做限定。
示例性地,所述第二分光单元位于所述第一分光单元的第一侧,所述第二分光单元的图案是通过所述第一分光单元的图案向第二侧平移变换得到的,所述第二侧和所述第一侧是所述第一分光单元的相对两侧。
例如,第二分光单元位于第一分光单元的左侧,则第二分光单元的图案是通过第一分光单元的图案向右侧平移变换得到的;第二分光单元位于第一分光单元的右侧,则第二分光单元的图案是通过第一分光单元的图案向左侧平移变换得到的;第二分光单元位于第一分光单元的上侧,则第二分光单元的图案是通过第一分光单元的图案向下侧平移变换得到的;第二分光单元位于第一分光单元的下侧,则第二分光单元的图案是通过第一分光单元的图案向上侧平移变换得到的。
再例如,第二分光单元位于第一分光单元的左上侧,则第二分光单元的图案是通过第一分光单元的图案向右下侧平移变换得到的;第二分光单元位于第一分光单元的右下侧,则第二分光单元的图案是通过第一分光单元的图案向左上侧平移变换得到的;第二分光单元位于第一分光单元的左下侧,则第二分光单元的图案是通过第一分光单元的图案向右上侧平移变换得到的;第二分光单元位于第一分光单元的右上侧,则第二分光单元的图案是通过第一分光单元的图案向左下侧平移变换得到的。
示例性地,所述平移变换的距离与所述第二分光单元和所述第一分光单元的距离正相关。
在本申请的一些可能的实现方式中,任意相邻的两个分光单元的图案均不同。
在本申请的另一些可能的实现方式中,所述多个分光单元阵列布置,一行或一列分光单元分为多组分光单元,所述多组分光单元中的一组分光单元包括多个连续排列的分光单元,所述多个连续排列的分光单元的图案相同。
示例性地,所述多个分光单元包括多个第二分光单元,所述多个第二分光单元围绕所述第一分光单元布置。
在本申请的一些可能的实现方式中,所述第一分光单元对应的入射光线的入射角度在以0°为中心的范围内,比如-2°~2°。
示例性地,第一分光单元对应入射角度为0°的入射光线。
在本申请的一些可能的实现方式中,第一超表面层除了可以对入射光线分颜色外,还可以对入射光线分偏振,也即所述第一超表面层还用于将对应的入射光线分成多种偏振的光。
相应地,所述探测层用于接收所述第一超表面层输出的多种光,将接收到的所述多种光转为电信号,所述多种光中任意两种光的颜色和偏振中的至少一个不同。
在本申请的另一些可能的实现方式中,分偏振的功能和分颜色的功能可以由不同的超表面层实现。
例如,所述图像传感器还包括第二超表面层,第二超表面层用于将接收到的光分为多种偏振的光。
所述第二超表面层位于所述盖板和所述第一超表面层之间,所述探测层用于接收所述第一超表面层输出的多种光,将接收到的所述多种光转为电信号;或者,所述第一超表面层位于所述盖板和所述第二超表面层之间,所述探测层用于接收所述第二超表面层输出的多种光,将接收到的所述多种光转为电信号。
其中,所述多种光中任意两种光的颜色和偏振中的至少一个不同。
在本申请的一些可能的实现方式中,第二分光单元的图案是通过所述第一分光单元的图案平移变换直接得到的。
在本申请的另一些可能的实现方式中,所述第二分光单元的图案是通过所述第一分光单元的图案平移变换后,更改平移变换后的图案中的部分图形得到的。
示例性地,所述第一分光单元的图案包括阵列布置的多个像素点,平移变换后的图案中被更改的像素点占所述第一分光单元的图案中像素点总数的比例不超过阈值。
例如,所述阈值的取值范围为20%~30%。
示例性地,所述像素点被更改的形式包括如下至少一种:
改变所述像素点的形状、改变第一像素点的数量、改变第一像素点和第二像素点的排列;
其中,所述第一像素点和所述第二像素点是所述多个像素点中对应的材料折射率不同的两种像素点。
示例性地,所述平移变换的距离为像素点尺寸的整数倍。
可选地,所述图像传感器还包括滤光片,位于所述第一超表面层和所述探测层之间。所述滤光片用于对所述分光单元分出的多种颜色的光分别进行滤光,滤除特定颜色光以外的其他杂散光,也即是说滤除所述多种颜色的光中每种颜色的光内的其他颜色杂散光,以降低串扰。这里杂散光是指分光单元分出的某种颜色光中混入的其他颜色光,例如分光单元分出的红色光中混入的绿色和蓝色光。
例如,超表面层将入射光分为红绿蓝三种颜色,则滤光片滤除红光中除红色外其他颜色杂散光,滤除绿光中除绿色外其他颜色杂散光,滤除蓝光中除蓝色外其他颜色杂散光。
第二方面,本申请提供了一种图像传感方法,所述图像传感方法包括:
接收入射光,所述入射光包括多个入射角度的入射光线;
通过第一超表面层中两种不同的图案分别将不同入射角度的入射光线分为多种颜色的光,所述两种不同的图案中的一种图案是通过所述两种不同的图案中的另一种图案变换得到的;
将所述多种颜色的光转为电信号。
可选地,所述方法还包括:
通过所述第一超表面层中两种不同的图案分别将不同入射角度的入射光线分为多种偏振的光;
所述将所述多种颜色的光转为电信号,包括:
将所述第一超表面层输出的多种光转为电信号,所述多种光中任意两种光的颜色和偏振中的至少一个不同。
可选地,所述方法还包括:
通过第二超表面层将接收到的光分为多种偏振的光;
所述第二超表面层接收所述第一超表面层输出的光,所述将所述多种颜色的光转为电信号,包括:将所述第二超表面层输出的多种光转为电信号,所述多种光中任意两种光的颜色和偏振中的至少一个不同;
或者,所述第二超表面层接收所述入射光,所述第一超表面层接收所述第二超表面层输出的光,所述将所述多种颜色的光转为电信号,包括:将所述第一超表面层输出的多种光转为电信号,所述多种光中任意两种光的颜色和偏振中的至少一个不同。
可选地,所述方法还包括:
在将所述多种颜色的光转为电信号之前,对所述多种颜色的光分别进行滤光,滤除所述多种颜色的光中每种颜色的光内的其他颜色杂散光。
第三方面,本申请提供了一种电子设备,所述电子设备包括处理器以及如第一方面任一项所述的图像传感器,所述处理器用于处理所述图像传感器输出的电信号。
附图说明
图1是本申请实施例提供的一种图像传感器的结构示意图;
图2是本申请实施例提供的一种示例性地超表面层的结构示意图;
图3是本申请实施例提供的一行分光单元的结构示意图;
图4是本申请实施例提供的一列分光单元的结构示意图;
图5是本申请实施例提供的入射光线角度变化时成像的位置变化示意图;
图6~图8是本申请实施例提供的分光单元的图案变换示意图;
图9是本申请实施例提供的大面积超表面层的图案示意图;
图10是本申请实施例提供的超表面层和探测层的关系示意图;
图11是本申请实施例提供的分光单元的图案示意图;
图12是本申请实施例提供的入射光线垂直入射到第一分光单元得到的模拟光谱示意图;
图13~图15是本申请实施例提供的红、绿、蓝三种颜色在探测子区域中的光强分布示意图;
图16~图21是本申请实施例提供的入射光线以不同角度入射到第一分光单元得到的模拟光谱示意图;
图22~图27是本申请实施例提供的入射光线以不同角度入射到第二分光单元得到的模拟光谱示意图;
图28是本申请实施例提供的超表面层和探测层的关系示意图;
图29是本申请实施例提供的水平偏振光的模拟光谱图;
图30是本申请实施例提供的垂直偏振光的模拟光谱图;
图31~图40是本申请实施例提供的分光单元分离出特定颜色和偏振光的分布示意图;
图41是本申请实施例提供的一种图像传感器的结构示意图;
图42是本申请实施例提供的第一图案平移变换后的图案的示意图;
图43是本申请实施例提供的图42中的图案改变像素点形状得到的图案的示意图;
图44~图46是本申请实施例提供的图42中的图案改变像素点数量、排列得到的图案的示意图;
图47是本申请实施例提供的一种图像传感器的结构示意图;
图48和图49是本申请实施例提供的超表面层的装配容差示意图;
图50是本申请实施例提供的一种相机模组的结构示意图;
图51是本申请实施例提供的一种图像传感方法的流程图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
图1是本申请实施例提供的一种图像传感器的结构示意图。参见图1,图像传感器包括:盖板10、第一超表面层11和探测层12,第一超表面层11位于盖板10和探测层12之间。
其中,盖板10起保护作用,入射光从盖板进入。第一超表面层11用于接收入射光,入射光包括多个入射角度的入射光线。
图2是本申请实施例提供的一种超表面层的结构示意图。参见图2,第一超表面层11包括多个分光单元110,不同分光单元110对应接收来自不同入射角度的入射光线,多个分光单元110包括第一分光单元111以及第二分光单元112,第一分光单元111和第二分光单元112的图案不同,第一分光单元111和第二分光单元112分别通过不同的图案将对应的入射光线分为多种颜色的光,第二分光单元112的图案是通过第一分光单元111的图案变换得到的。其中,第一分光单元111和第二分光单元112对应的入射光线的入射角度不同。
探测层12用于接收多个分光单元110分出的多种颜色的光,将接收到的所述多种光转为电信号。
其中,第一超表面层11用于对光进行分色,实现不同颜色光的选路,也可以称为光路由器件。
图1示出的是图像传感器各个部件分解示意图,实际上各个部件之间的距离,本申请不做限定,例如盖板10和第一超表面层11之间可以贴合,第一超表面层11和探测层12之间可以具有一定间隙等。
在本申请提供的图像传感器的超表面层中,第一分光单元和第二分光单元是两个用来对入射光线进行分光的单元,由于需要对不同入射角度进行分光,二者的图案是不同的。在本申请提供的图像传感器中,第一分光单元的图案可以利用算法设计得到,第二分光单元的图案是基于第一分光单元的图案变换得到的,因而在设计大面积超表面结构层时,不再需要对每个分光单元的图案进行逐个设计,简化了超表面层的设计过程,提高了超表面层的设计与 加工效率。
示例性地,盖板10为透明盖板,例如玻璃盖板、树脂盖板等。
示例性地,探测层12为CMOS探测层,包括与多个分光单元110对应的多个探测区域(或称为成像区域),每个探测区域包括多个探测子区域,多个探测子区域分别用于接收对应的每个分光单元分出的多种颜色的光,将接收到的所述多种光转为电信号。
在本申请的一些可能的实现方式中,上述变换可以是平移变换,也即第二分光单元的图案是通过第一分光单元的图案平移变换得到的。其中,平移变换的方向及距离根据第二分光单元和第一分光单元的位置关系及入射角度确定。
这里,平移变换是指以原图案的外边界为相框,移动第一分光单元的图案,将突出相框外的部分切割补入相框内移动后空出的位置。以图2为例,将第一分光单元111的图案向左移一列,此时A、C所在列到达左边界,B所在列突出相框外,将B所在列补入D所在列的右侧,得到第一分光单元111右侧的第二分光单元112的图案。
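下面给出一个示意性的Python片段,帮助理解上述“以原图案外边界为相框、将移出部分补回空位”的平移变换(该片段并非本申请设计流程的一部分,其中的函数名与示例图案均为便于说明而假设的;对0/1矩阵做带回绕的循环移位即可等价实现这种变换):

    import numpy as np

    def translate_pattern(pattern, shift_rows, shift_cols):
        # 对分光单元图案做带回绕的平移:pattern为二维0/1矩阵,
        # 1表示高折射率像素点(如氧化钛),0表示低折射率像素点(如空气);
        # shift_rows/shift_cols为沿行、列方向平移的像素点个数,正值表示向下/向右,
        # 移出边界的部分补回到另一侧空出的位置。
        return np.roll(pattern, shift=(shift_rows, shift_cols), axis=(0, 1))

    # 示例:将一个4×4的示意图案整体左移一列,最左一列补到最右侧
    first_pattern = np.array([[1, 0, 1, 0],
                              [0, 1, 0, 1],
                              [1, 1, 0, 0],
                              [0, 0, 1, 1]])
    second_pattern = translate_pattern(first_pattern, 0, -1)
    print(second_pattern)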
在本申请的另一些可能的实现方式中,上述变换还可以是旋转、翻转等变换形式,本申请对此不做限定。
示例性地,第二分光单元位于第一分光单元的第一侧,第二分光单元的图案是通过第一分光单元的图案向第二侧平移变换得到的,第二侧和第一侧是第一分光单元的相对两侧。
下面结合图示,对第二分光单元的图案是如何从第一分光单元的图案平移变换而成进行说明。
图3是本申请实施例提供的一行分光单元的结构示意图。参见图3,一行分光单元110包括:位于中部的第一分光单元111和位于第一分光单元111两侧的第二分光单元112。
其中,第一分光单元111具有第一图案,第二分光单元112具有第二图案。
值得说明的是,本申请实施例中,第二图案是指从第一图案变换而来的图案。第二图案可以有多个,这多个第二图案可以是同一种图案或者包括多种不同图案。例如存在多个第二分光单元112对应的入射光线角度不同,这些第二分光单元112的第二图案可以不同,但都是从第一图案变换来的。
示例性地,在一行分光单元110中,位于第一图案左侧的第二图案是由第一图案向右平移得到的,位于第一图案右侧的第二图案是由第一图案向左平移得到的。图案平移后突出的部分拼接到平移后空出的位置。
为了方便观察,可以参考图2中简化的分光单元的图案,位于第一分光单元111右侧的第二分光单元112是通过第一分光单元111左移得到的,将第一分光单元111左移一列后,A和C所在列到达最左侧,此时B所在列平移后突出相框外,此时将B所在列拼接到右侧,得到位于第一分光单元111右侧的第二分光单元112的图案。
图4是本申请实施例提供的一列分光单元的结构示意图。参见图4,一列分光单元110包括:位于中部的第一分光单元111和位于第一分光单元111两侧的第二分光单元112。
示例性地,在一列分光单元110中,位于第一图案上侧的第二图案是由第一图案向下平移得到的,位于第一图案下侧的第二图案是由第一图案向上平移得到的。
在本申请实施例一些可能的实现方式中,第一超表面层11仅包括一个第一分光单元111,其余分光单元均为第二分光单元112,也即均是从第一分光单元111的基础上变换得到的。在本申请实施例另一些可能的实现方式中,第一超表面层11包括多个第一分光单元111,其余 分光单元均为第二分光单元112,通常,第二分光单元112是由与之相邻的第一分光单元111变换得到的。
下面先对仅有一个第一分光单元111的情况进行说明,通常在这种情况下,第一分光单元111对应的入射光线的入射角度在以0°为中心的范围内,比如-2°~2°。示例性地,第一分光单元111对应入射角度为0°的入射光线,多个第二分光单元112围绕第一分光单元111布置。
对于和第一分光单元111同行或者同列的分光单元,只需要平移变换一次即可从第一分光单元111的图案得到第二分光单元112的图案。而和第一分光单元111不同行也不同列的分光单元,则需要经过两次平移变换,例如第一次平移变换得到和第一分光单元111同行或者同列的分光单元的图案,然后从第一次平移变换得到的分光单元的图案基础上平移变换,得到第二分光单元112的图案。当然,和第一分光单元111不同行也不同列的分光单元的图案也可以通过沿倾斜方向的一次平移变换得到,效果和前述两次平移变换相同,例如,位于第一分光单元111左上侧的分光单元的图案,可以通过将第一分光单元111的图案向右下侧平移得到。
而存在多个第一分光单元111的情况,又可以包括如下两种情形:第一种,设计时即独立设计多个第一分光单元111,第二种,设计初仅设计入射角度为0°的入射光线对应的第一分光单元111,但当第一分光单元111的平移变换的幅度较大时,导致分光效果下降较多,此时可以对平移变换的幅度较大后得到的第二分光单元112的图案进行二次设计,作为新的第一分光单元111,平移变换形成后续第二分光单元112的图案。
因此,在本申请实施例中,通常第一分光单元111和由第一分光单元111变换得到的第二分光单元112通常在一定范围内,二者距离不会过远,二者接收到的入射光线角度差值不过大,这样才能保证第一分光单元111平移后的图案能够完成第二分光单元112的分光功能。例如,第一分光单元111和由第一分光单元111变换得到的第二分光单元112接收到的入射光线的入射角度的差值范围在10°以内。
在本申请实施例中,分光单元110的图案包括阵列布置的多个像素点,这些像素点由两种不同材料形成,得到如图3和图4所示的马赛克图案。
以图3为例,最中间的虚线框中所示为超表面层中的第一分光单元111的第一图案,根据在行方向(x方向)上的马赛克图案与入射光线角度相对位置关系,通过对马赛克图案进行x方向平移调整内部像素点排列,得到x方向上的±1°、±2°等角度对应的第二分光单元112的第二图案。
如图3所示,这里的马赛克图案与入射光线角度相对位置关系是指,不同入射角度的入射光线对应的图案在x方向上的位置关系,例如,入射角度1°对应的图案在入射角度0°的右侧,那么从入射角度0°对应的图案得到入射角度1°对应的图案,则需要左移入射角度0°对应的图案。
同理,以图4为例,可以根据在列方向(y方向)上的马赛克图案与入射光线角度相对位置关系,通过对马赛克图案进行y方向平移调整内部像素点排列,得到y方向上的±1°、±2°等角度对应的第二分光单元112的第二图案。利用上述方式可以设计出二维的超表面层结构,设计方式更为简单,不需要逐个像素点去设计,简化了设计与加工的过程。
如图4所示,这里的马赛克图案与入射光线角度相对位置关系是指,不同入射角度的入射光线对应的图案在y方向上的位置关系,例如,入射角度1°对应的图案在入射角度0°的上侧,那么从入射角度0°对应的图案得到入射角度1°对应的图案,则需要下移入射角度0°对应的图案。
示例性地,平移的距离与第一分光单元111和第二分光单元112的距离正相关。
也即,第一分光单元111和第二分光单元112的距离越远,则从第一分光单元111的图案平移得到第二分光单元112的图案时,需要移动的距离越大。
例如,以第一分光单元111是入射角度为0°的入射光线对应的分光单元为例,可以按照如下公式(1)计算从第一分光单元111的图案平移得到第二分光单元112的图案的距离s:
s=α·z·tanθ
或s=α·z·cosθ
或s=α·z·sinθ
或s=α·z·θ                  (1)
在公式(1)中,θ为第二分光单元对应的入射光线的入射角度,z为成像距离,α为包含折射率等条件在内的成像系数。为保证分光单元分光得到的光线在探测层上的成像效果,这里限制α∈[0,10],z∈[max(x,y)/5,5*max(x,y)],(x,y)是像素点分别在x和y方向上的尺寸。需要注意的是,公式(1)中使用了傍轴近似的条件,因此,此公式仅在小角度入射的条件下有效,例如-10°至10°。
当入射光线从0°(对应输出的光线为虚线部分)正入射变为以θ角度(对应输出的光线为实线部分)入射时,则像的位置变化了s,如图5所示,为了使像的位置保持不变,则分光单元的图案需要移动-s,即为θ角度下对应的分光单元的图案,该移动过程如图6和图7所示。
图6所示出的是若入射光线的入射角沿x方向由0°(垂直)入射变为+1°入射,则将0°对应的图案左侧虚线中的部分移动到原图案的右侧虚线部分,也即将0°对应的图案向左移动,并将移动后超出边界的部分补到右侧空出的位置。若入射光线的入射角沿x方向由0°(垂直)入射变为-1°入射,则将0°对应的图案右侧实线中的部分移动到原图案的左侧实线部分。
图7所示出的是入射光线的入射角沿y方向由垂直入射变为+1°、-1°入射,对应的图案在y方向上的变化,变化方式和图6类似,这里不再赘述。
根据这样的位置与角度平移关系,可以构造出任意入射角度适用的分光单元的图案。这里的角度与像素点的移动距离仅示意所用,实际的移动距离由公式(1)结合分光单元中像素点的尺寸、分光单元的折射率、入射角的大小来决定。
在本申请实施例中,由于分光单元的图案是由像素点组成的,因此平移的距离通常为像素点尺寸的整数倍。当根据公式(1)计算出的距离不是像素点尺寸的整数倍时,可以近似成像素点尺寸的整数倍。
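为便于理解公式(1)的使用方式,下面给出一个示意性的计算片段(其中的函数名与具体数值均为假设,实际取值应由器件参数和公式(1)的适用条件决定),按s=α·z·tanθ估算平移距离,并将其近似为像素点尺寸的整数倍:

    import math

    def shift_in_pixels(theta_deg, z, alpha, pixel_size):
        # 按公式(1)中 s = α·z·tanθ 估算平移距离,并量化为像素点个数;
        # theta_deg为第二分光单元对应的入射角(度),应处于傍轴近似适用的小角度范围(如-10°~10°);
        # z为成像距离,alpha为成像系数,pixel_size为单个像素点尺寸(与z取相同长度单位)。
        s = alpha * z * math.tan(math.radians(theta_deg))
        return round(s / pixel_size)

    # 假设性示例:alpha=1,成像距离2μm,像素点尺寸0.08μm,入射角2°
    print(shift_in_pixels(2, 2.0, 1.0, 0.08))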
例如,以相邻2个图案的距离为1,第二分光单元112距离第一分光单元111距离为1时,对第一分光单元111的图案平移时平移的距离为1行或1列像素点,第二分光单元112距离第一分光单元111距离为2时,对第一分光单元111的图案平移时平移的距离为2行或2列像素点,依次类推。
这里是以平移像素点的行数或列数(数值)与第一分光单元111和第二分光单元112的 距离相等来进行示例性说明的。在其他实现中,二者可以不等。例如,平移像素点的行数或列数是第一分光单元111和第二分光单元112的距离的2倍,或者,二者并非倍数关系,比如,第二分光单元112距离第一分光单元111距离为1时,对第一分光单元111的图案平移时平移的距离为3行或3列像素点,第二分光单元112距离第一分光单元111距离为2时,对第一分光单元111的图案平移时平移的距离为5行或5列像素点,等等。
在采用上述举例的实现方式时,由于每一个分光单元110相比于相邻的分光单元110都进行了进一步平移,因此,在一行或一列分光单元110中,任意相邻的两个分光单元110的图案均不同。
再例如,以相邻2个图案的距离为1,第二分光单元112距离第一分光单元111距离为1时,对第一分光单元111的图案平移时平移的距离为1行或1列像素点,第二分光单元112距离第一分光单元111距离为2时,对第一分光单元111的图案平移时平移的距离为1行或1列像素点,第二分光单元112距离第一分光单元111距离为3时,对第一分光单元111的图案平移时平移的距离为2行或2列像素点,第二分光单元112距离第一分光单元111距离为4时,对第一分光单元111的图案平移时平移的距离为2行或1列像素点,依次类推。
在这种实现方式中,存在连续的分光单元采用相同的平移距离,采用相同的平移距离得到的图案相同。也即,多个分光单元阵列布置,一行或一列分光单元分为多组,每组包括多个连续排列的分光单元,多个连续排列的分光单元的图案相同。
这里,在生成大面积二维超表面层时,利用每个图案覆盖的角度范围(也称为角带宽)能够进一步降低结构复杂性,提高制作效率。由于存在角带宽,因此当入射光线的角度变化很小时,不需要每次变化对应的分光单元的图案,如图8所示,当入射光线的角度从x=0°变成x=±θ1°时,对应的分光单元的图案可以直接使用x=0°时的图案,当入射光线的角度超过角带宽时,例如,入射光线的角度变成x=±θ2°时,再移动x=0°时的图案得到新的图案。同样的方法也适用于y方向。这样可以在不影响超表面层性能的基础上,使超表面层结构复杂度再次降低,同时由于存在相同的图案,使得设计和加工过程也进一步简化。
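结合上文的角带宽概念,可以用如下示意性片段说明“角带宽内复用同一图案、超出角带宽再继续平移”的做法(函数名、角带宽取值与每步平移的像素数均为假设性示例):

    def shift_steps_with_bandwidth(theta_deg, bandwidth_deg, pixels_per_step):
        # 将入射角量化为角带宽的整数倍:落在同一角带宽内的角度得到相同的平移量,
        # 因而对应的分光单元可以直接复用同一图案。
        step = round(theta_deg / bandwidth_deg)
        return step * pixels_per_step

    # 假设角带宽为2°、每跨过一个角带宽平移1行/列像素点:
    for theta in (0.0, 0.8, 1.2, 2.4, 3.1, 5.0):
        print(theta, "->", shift_steps_with_bandwidth(theta, 2.0, 1))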
前述超表面层中多个分光单元均以阵列排布为例,在其他实施例中,分光单元排布也可以不是阵列排布的,本申请对此不做限制。
在本申请实施例中,第一超表面层11是由两种折射率不同的材料形成的微纳结构。微纳结构是指具有特殊电磁属性的亚波长尺度的微纳平面结构,可以使用微纳制备工艺制作,易于量产。该第一超表面层11是二维结构,结构紧凑,能够与当前CMOS工艺相结合,因此设计出与CMOS匹配(如视场角(field of view,FOV),尺寸等)的紧凑型大面积超表面层,从而易于集成在相机或手机的模组中,并且相比于相关技术中采用拜耳滤光片实现的图像传感器,能够提高光通量,从而提升成像质量。
以图8为例,黑色部分是折射率高的材料,例如氧化钛,白色部分是折射率低的材料,例如空气。黑色部分也可以称为微纳单元元件,白色部分也可以称为衬底,二者的分布形成前述马赛克图案。
当然,这里的材料不限于以上两种,黑色部分的材料和白色部分的材料均可以从以下几种中选择:氧化钛、氮化硅、氧化硅、硅、金属等,但要保证黑色部分的材料的折射率高于白色部分的材料的折射率。
图9是本申请实施例提供的大面积超表面层的图案示意图。图9是设计出的5*5的马赛克图案的超表面层的示意图,对应的入射光线在x方向以-5°~5°、y方向以-5°~5°范围照射,5*5的马赛克图案对应5*5个分光单元,每个分光单元对应的入射光线精度为1°,当然以1°作为精度仅为一种示例,在图9及其他附图所对应的超表面层结构中,每个分光单元对应的入射光线精度也可以不是1°,例如更小或更大。下面结合图9说明超表面层的图案如何设计:
本申请提供的分光单元的图案呈马赛克图形分布,在正入射(也即入射光线的入射角度为0°)条件下,利用逆向设计算法优化得到第一分光单元111的图案,图9正中部的图案。再利用马赛克图案与入射光线角度相对位置关系,变换得到对应任意入射角的分光单元110的图案,也即图9中位于正中部的图案周围的图案,该变换得到的图案能够保证对于该区域入射光线有较好的衍射效率,进而保证分光效果。最后,将对应不同角度的图案按照位置关系在二维方向上拼接起来,即可得到图9所示的超表面层的图案。
本申请提供的超表面层能够实现大面积的分色功能,入射光线中各种颜色在探测层上实现了分离并且在相应区域实现重新聚焦。
也即,本申请实施例提供的超表面层可以实现对任意入射角度的光的颜色分离,从而提升光的利用效率。以下对本申请实施例中设计的超表面层进行光谱和光强分布分析,用以说明该超表面层的分光效果:
以具有2*2分光效果的分光单元的超表面层为例,如图10所示。分光单元的图案的尺寸为1.6μm×1.6μm,厚度为300nm,由400(20×20)个像素点组成,每个像素点可以是二氧化钛(TiO2),也可以是空气。图中空心箭头表示入射角度为0°的入射光线的入射方向,入射光线的波长范围是400至700nm,也即可见光。在分光单元下方2μm处,设置了具有四个探测子区域G、B、R、G的CMOS探测区域,四个探测子区域分别用于探测绿(500-600nm)、蓝(400-500nm)、红(600-700nm)、绿(500-600nm)光的平均透过率TG1、TR、TB、TG2,对TR、TG(TG=TG1+TG2)、TB取平均即为该分光单元的分光效率。
这里CMOS探测区域的四个探测子区域G、B、R、G呈现45°对角对称,因此设计分光单元也可以采用45°对角对称结构,例如图11所示,以对角线a为轴线,两侧的图案相同,这样设计时仅需要设计一半图案,即可得到另一半,可以大大减少逆向设计优化过程求解的个数,加快寻优过程。其中,逆向设计优化过程是指利用粒子群算法、模拟退火算法等确定第一分光单元的图案,并采用设计出的图案进行分光,根据分光的实际效果对设计的图案进行优化,最终得到符合条件的第一分光单元的图案的过程。
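对于上述逆向设计优化过程,下面给出一个高度简化的示意性片段(仅为便于理解的假设性示例,并非本申请实际采用的算法实现):以45°对角对称为约束、用随机翻转加贪心接受的方式搜索图案,其中的评价函数figure_of_merit仅为占位符,实际应替换为由电磁仿真得到的分光效率(如TR、TG、TB的平均值),寻优策略也可替换为粒子群算法或模拟退火算法:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 20  # 图案的像素点行列数,例如20×20

    def symmetrize(p):
        # 沿45°对角线对称,使得只需设计一半图案即可得到另一半
        return np.triu(p) + np.triu(p, 1).T

    def figure_of_merit(p):
        # 占位评价函数:实际中应由仿真得到的分光效率代替
        return float(p.mean())

    pattern = symmetrize(rng.integers(0, 2, size=(N, N)))
    best = figure_of_merit(pattern)
    for _ in range(200):
        i, j = rng.integers(0, N, size=2)
        trial = pattern.copy()
        trial[i, j] ^= 1                # 翻转一个像素点(高/低折射率互换)
        trial = symmetrize(trial)
        fom = figure_of_merit(trial)
        if fom >= best:                 # 仅接受不变差的扰动(贪心准则)
            pattern, best = trial, fom
    print("最终评价值:", best)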
图12是本申请实施例提供的入射光线垂直入射到图11所示的第一分光单元得到的模拟光谱图,在该光谱图中,横坐标是波长,单位是nm,纵坐标是透过率,R、G、B三条曲线分别展示了整个可见光波段在R、G(G=G1+G2)、B探测子区域上的透过率。可以看出,在R探测子区域中,红光波段(600-700nm)透过率高于其他波段,红光波段透过率TR为49.9%(也即对应图中的0.499),这里的透过率是600-700nm波段的平均透过率;在G探测子区域中,则是绿光波段(500-600nm)透过率高于其他波段,绿光波段透过率TG为42.0%(其中TG1为19.5%,TG2为22.5%);在B探测子区域中,蓝光波段(400-500nm)透过率高于其他波段,蓝光波段透过率TB为50.6%,相对TR和TG高出较多,表明分光单元对蓝光分色效率最高。通过TR、TG、TB计算得到入射光线垂直入射下分光单元的分光效率为47.5%。
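按照上述计算方法可以示意性地复核该结果:TG=TG1+TG2=19.5%+22.5%=42.0%,分光效率=(TR+TG+TB)/3=(49.9%+42.0%+50.6%)/3≈47.5%,与上文给出的数值一致。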
图13至图15所是本申请实施例提供的入射光线垂直入射到图11所示的第一分光单元 得到的光强分布图,图13至图15所示出的光强分布图分别展示了红、绿、蓝三种颜色在整个探测区域中的光强分布情况。如图13所示,红光波段经过分光单元后大部分成像在左下角R探测子区域,且光斑比较集中。如图14所示,绿光波段经过分光单元后主要成像在左上角G区域和右下角G区域,光斑相对分散,呈现出明显的对角线对称分布。如图15,而蓝光波段经过分光单元后则绝大部分成像在右上角B区域,光斑集中最明显。在图13至图15中,横坐标和纵坐标表示探测区域的尺寸,区域内的颜色深浅代表探测到的对应颜色的光强。
采用图11中的第一分光单元对其他入射角度的入射光线进行分光,分析第一分光单元的角度敏感性,得到如图16~图21所示出的入射光线以不同入射角度入射到第一分光单元得到的模拟光谱。如图16~图21所示,在入射光线的入射角度分别为-2°(图16)、2°(图17)、-5°(图18)、5°(图19)、-10°(图20)、10°(图21)时,按照垂直入射时相同的计算方法,可以计算出,前述分光单元的分色效率分别为42.8%、46.5%、31.4%、40.1%、30.0%、29.0%,表明该分光单元对角度敏感,且入射光线倾斜角度越大分光效率越低。
利用图11中的第一分光单元的图案生成对应各个入射角度的入射光线的第二分光单元的图案,利用对应各个入射角度的入射光线的第二分光单元进行分光,得到如图22~图27所示出的入射光线以不同角度入射到对应的第二分光单元得到的模拟光谱图。如图22~图27所示,在入射光线的入射角度分别为-2°(图22)、2°(图23)、-5°(图24)、5°(图25)、-10°(图26)、10°(图27)时,对应各个入射角度的入射光线的第二分光单元的分光效率分别为47.0%、47.5%、45.2%、45.5%、39.4%、41.2%,相较于图16~图21中的分光效率具有明显提升,且每个入射角度下的模拟光谱图都与图12中垂直入射的模拟光谱图相似,表明本申请实施例提出的通过对第一分光单元的图案平移得到的第二分光单元的图案的方法,具有较好的分光效果。
在本申请实施例中,第一超表面层11的分光单元110可以将入射光线分为红色(red,R)、绿色(green,G)和蓝色(blue,B)三种颜色的光。
第一超表面层11除了可以按照RGB进行分光外,还可以按照红黄蓝(red yellow blue,RYB),或者按照红绿蓝宝石蓝(red green blue emerald,RGBE),或者按照青黄绿品红(cyan yellow green magenta,CYGM)进行分光。
在本申请实施例中,第一超表面层11除了可以对光的颜色进行分离外,还可以将光分成不同偏振,也即,第一超表面层11还用于将对应的入射光线分成多种偏振的光。相应地,探测层12用于接收第一超表面层11输出的多种光,将接收到的所述多种光转为电信号,多种光中任意两种光的颜色和偏振中的至少一个不同。
不同于传统的偏振片将无关的偏振态信息直接过滤,本申请的超表面层是将偏振信息全部提取出来,这样得到的偏振信息更多,进而使得最终成像的效率更高。本申请的超表面层不仅能够实现对光的颜色的分离及会聚,还能够实现对偏振态的分离,从而提供更多的成像信息,拓宽了成像系统的应用场景。
例如,第一超表面层11的分光单元110将入射光线分为R、G、B三种颜色的光,同时,分光单元110将入射光线分为水平偏振和垂直偏振的光,则实际分光单元110将入射光线分成了6束光,分别是:红色水平偏振光、绿色水平偏振光、蓝色水平偏振光、红色垂直偏振光、绿色垂直偏振光、蓝色垂直偏振光。
偏振除了按照水平偏振和垂直偏振两种正交偏振进行划分外,还可以按照四种基本偏振态(0°、45°、90°、135°)进行划分,或者按照线偏振和圆偏振进行划分。
上述任一种偏振划分方式和任一种颜色划分方式均可以组合,例如,颜色按照CYGM进行划分,偏振按照正交偏振方式进行划分,则分光单元110将入射光线分成了8束光,分别是:C色水平偏振光、Y色水平偏振光、G色水平偏振光、M色水平偏振光、C色垂直偏振光、Y色垂直偏振光、G色垂直偏振光、M色垂直偏振光。
超表面层中分光单元的图案具有非对称随机性,可以引起不同偏振的光产生不同的分色响应。利用这个特性,能够得到具有分偏振功能的分光单元,如图28所示,以分光单元的尺寸为3μm×3μm,厚度为300nm,由900(30×30)个像素点组成为例,每个像素点可以是二氧化钛(TiO2),也可以是空气。入射光线由分光单元法线方向(0°)垂直入射到分光单元上,入射光线是波长范围400至700nm的可见光。在超表面层下方3μm处,设置了具有6个探测子区域的CMOS探测区域,6个探测子区域分别是R、R、G、G、B、B,分别用于监测水平和垂直偏振下的红(600-700nm)、绿(500-600nm)、蓝(400-500nm)波段的平均透过率TR1、TR2、TG1、TG2、TB1、TB2,对TR1、TG1、TB1取平均即为分光单元在水平偏振下的分光效率,对TR2、TG2、TB2取平均即为分光单元在垂直偏振下的分光效率。
该分光单元在水平和垂直偏振光下的模拟光谱分别展示在图29和图30中,如图29所示,曲线R在600-700nm波段的平均透过率TR1、曲线G在500-600nm波段的平均透过率TG1、曲线B在400-500nm波段的平均透过率TB1分别为23.1%、24.8%、23.4%,可得到水平偏振光下分光单元的分光效率为23.8%。如图30所示,曲线R在600-700nm波段的平均透过率TR2、曲线G在500-600nm波段的平均透过率TG2、曲线B在400-500nm波段的平均透过率TB2分别为23.1%、25.1%、25.1%,可得到垂直偏振光下分光单元的分光效率为24.4%。而相关技术中偏振分色器的效率约为16.7%,可以表明利用本申请实施例设计的分光单元的分光效率不仅有所提升,而且无论是在分色还是分偏振方面都有较好的均匀性。
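按同样的方法可以示意性地复核:水平偏振下的分光效率为(23.1%+24.8%+23.4%)/3≈23.8%,垂直偏振下的分光效率为(23.1%+25.1%+25.1%)/3≈24.4%,与上文给出的数值一致。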
在利用上述分光单元同时进行分颜色和偏振时,以图28中的6个探测子区域为例,在水平偏振下,红光波段经过分光单元后大部分成像在左上角R区域,光斑比较集中,绿光波段经过分光单元后主要成像在中上方G区域,光斑相对分散,而蓝光波段经过分光单元后则大部分成像在右上角B区域和中上方G区域交接处,光斑比较集中。在垂直偏振下,红光波段经过分光单元后大部分成像在左下角R区域,光斑比较集中,但也有相当一部分光强分布在中下方G区域中心,绿光波段经过分光单元后主要成像在中下方G区域,光斑相对分散,而蓝光波段经过分光单元后则大部分成像在右下角B区域,光斑较集中。由此可知,本申请实施例提供的分光单元起到了较好的分离偏振和颜色作用。
本申请提供的分光单元在同时实现分颜色和分偏振时,对于分离出的各种颜色及偏振组合的光的分布,可以参考图28的探测层12中探测子区域的排列方式。
但是,本申请中分光单元分离出的光的分布也不限于此,对应的探测层12中探测子区域的排列方式也不限制于此,例如也可以按照图31的方式,相比于图28的方式,水平偏振和垂直偏振的光的位置互换,也可以按照图32的方式,仅对绿色光的偏振进行分离,而不分离红色和蓝色光的偏振。对于颜色按照RGB、偏振按照基本偏振态(0°、45°、90°、135°)的分离方式,分离出的光的分布可以参见图33和图34的两种实现方式。对于颜色按照RGB、偏振按照线偏振和圆偏振的分离方式,分离出的光的分布可以参见图35和图36的实现方式。 而对于采用其他颜色分离方案加偏振分离的方式,图37~图39给出了探测子区域的排列方式的一些示例,当然这里仅是一些示例,上述任一种颜色分离和偏振分离组合下的探测子区域的排列方式都还可以是其他形式。
另外,需要说明的是,探测子区域的排列方式既可以按照前述附图中示出的规则排布方式,也可以按照不规则方式排布,例如图40所示,本申请实施例对此不做限制。
在本申请实施例中,探测子区域的形状也不做限制,既可以是矩形、六边形等规则形状,也可以是其他规则或不规则形状。
在前述实现方式中,分颜色和分偏振均由同一超表面层完成,在另一种实现方式中分颜色和分偏振也可以由两个超表面层实现。其中,前述第一超表面层11用于实现分颜色。
图41是本申请实施例提供的一种图像传感器的结构示意图。参见图41,该图像传感器还包括第二超表面层13,第一超表面层11位于第二超表面层13和盖板10之间。
第一超表面层11用于将对应的入射光线分为多种颜色的光。第二超表面层13用于将第一超表面层11分出的每种颜色的光分为多种偏振。第二超表面层13不是按照第一超表面层11的方式设计分光单元,第二超表面层13中每一个分光单元的图案都需要单独进行设计和优化,利用逆向设计算法优化得到。
相应地,探测层12用于接收第二超表面层13输出的多种光,将接收到的所述多种光转为电信号,多种光中任意两种光的颜色和偏振中的至少一个不同。
当然,上述第一超表面层11和第二超表面层13的位置也可以互换,也即,第二超表面层13位于第一超表面层11和盖板10之间,此时入射光线先经过第二超表面层13分偏振,然后进入到第一超表面层11分颜色。
相应地,探测层12用于接收第一超表面层11输出的多种光,将接收到的所述多种光转为电信号。
对于第一超表面层11而言,在本申请的一些实施方式中,第二图案是直接通过第一图案平移变换得到的,无需再进行其他处理。
在本申请的另一些实施方式中,第二图案是在第一图案平移变换后,更改平移变换后的图案中的部分图形得到的。
例如,更改平移变换后的图案中的像素点,平移变换后的图案中被更改的像素点占第一分光单元的图案中像素点总数的比例不超过阈值。
示例性地,阈值的取值范围为20%~30%。
例如,第二图案是在第一图案平移变换后,改变其中不超过20%的像素点得到的。
示例性地,像素点被更改的形式包括如下至少一种:
改变像素点的形状、改变第一像素点的数量、改变第一像素点和第二像素点的排列;
其中,第一像素点和第二像素点是多个像素点中对应的材料折射率不同的两种像素点。例如,第一像素点对应的材料折射率高于第二像素点对应的材料折射率,也即第一像素点是图案中的黑色像素点,第二像素点是图案中的白色像素点。
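结合上文的阈值要求,下面给出一个示意性的Python片段(函数名、阈值与随机种子均为假设性示例),在平移变换得到的图案基础上随机互换少量第一像素点与第二像素点,并检查被更改像素点占像素点总数的比例不超过阈值(如20%):

    import numpy as np

    def modify_pattern(shifted, ratio, rng):
        # 随机翻转不超过ratio比例的像素点(第一像素点与第二像素点互换)
        modified = shifted.copy()
        total = shifted.size
        k = int(total * ratio)                          # 允许更改的像素点个数上限
        idx = rng.choice(total, size=k, replace=False)  # 随机选取待更改的像素点
        flat = modified.reshape(-1)
        flat[idx] ^= 1
        changed = np.count_nonzero(modified != shifted) / total
        assert changed <= ratio                         # 校验不超过阈值
        return modified

    rng = np.random.default_rng(1)
    shifted = rng.integers(0, 2, size=(20, 20))         # 平移变换后的示意图案
    modified = modify_pattern(shifted, 0.2, rng)
    print("被更改像素点比例:", np.count_nonzero(modified != shifted) / shifted.size)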
在本申请的一些实施方式中,像素点的形状为矩形。
在本申请的另一些实施方式中,像素点的形状还可以为圆形、六边形或其他规则或不规则图形。对于圆形等形状而言,相邻像素点之间会存在空隙,这些空隙通常用空气或者采用和白色部分相同的折射率低的材料填充。
图42是本申请实施例提供的第一图案平移变换后的图案的示意图。图43是本申请实施例提供的图42中的图案改变像素点形状得到的图案的示意图。参见图42和图43,部分像素点的形状从矩形改变成圆形,但这部分像素点较少,低于第一图案中总像素点的20%。
还是以图42作为第一图案平移变换后的图案,对于改变第一像素点的数量包括如下2种情况:
第一种,图44是本申请实施例提供的图42中的图案改变像素点折射率得到的图案的示意图。参见图42和图44,将部分第一像素点改成第二像素点,形成第一像素点部分缺失的效果,也即减少第一像素点的数量,参见图44中虚线框对应的位置。
第二种,图45是本申请实施例提供的图42中的图案改变像素点折射率得到的图案的示意图。参见图42和图45,将部分第二像素点改成第一像素点,形成第一像素点部分增多的效果,也即增加第一像素点的数量,参见图45中虚线框对应的位置。
图46是本申请实施例提供的图42中的图案改变像素点折射率得到的图案的示意图。参见图42和图46,将部分第一像素点改成第二像素点,同时将部分第二像素点改成第一像素点,形成第一像素点和第二像素点重新排列的效果,参见图46中虚线框对应的位置。
图47是本申请实施例提供的一种图像传感器的结构示意图。参见图47,相比于图1所示的图像传感器的结构,该图像传感器还可以包括间隔区14,间隔区14位于第一超表面层11和探测层12之间,间隔区14用于限制第一超表面层11和探测层12之间的距离,从而保证第一超表面层11分离的光能够在探测层12的探测区域成像。
示例性地,本申请的间隔区14可以设置有填充物,例如透明材料,也可以不设置填充物。
再次参见图47,该图像传感器还可以包括滤光片15,滤光片15位于间隔区14和探测层12之间,滤光片15用于对分光单元分出的多种颜色的光分别进行滤光,滤除特定颜色光以外的其他杂散光,也即是说滤除多种颜色的光中每种颜色的光内的其他颜色杂散光,从而起到降低串扰的作用,可以进一步提升器件分色性能。例如,第一超表面层11分出的多种颜色的光中包括一束红色光,则滤光片15对该红色光进行滤光时,会滤除红色以外的成分,降低其他颜色的干扰,也即减小串扰。
示例性地,滤光片15包括多个子像素,滤光片15的子像素的分布和探测层中探测子区域的排列对应。
例如,如果超表面层实现的是分色与分偏振功能,即一方面进行R、G、B三色分色,另一方面将水平和垂直两个方向的偏振光分离开,则探测层的每个探测区域应分为六个探测子区域,分别为:R-水平,G-水平,B-水平,R-垂直,G-垂直,B-垂直,分布如图28所示,此时滤光片15对应布置3个子像素,分别为红、绿、蓝,且和六个探测子区域对应,例如,红色子像素对应位于R-水平和R-垂直两个探测子区域上方。每种子像素滤除其他颜色的光,如红色子像素滤除其他颜色光仅通过红色光。由于超表面层的二维属性,因此该图像传感器虽然同时具有超表面层和滤光片,但是图像传感器的体积较小,能够实现小型化。
为了保证图像传感器的质量,本申请实施例提供的超表面层在设计时,需要考虑实际加工和使用中可能存在的xyz三个维度的装配容差,如图48和图49所示,如果dx(x方向装配容差),dy(y方向装配容差),dz(第一超表面层和探测层在z方向的间隔容差)过大则势必影响整个图像传感器的效果,因此需要控制dx、dy和dz在一定精度范围内。例如,dx∈[-x/3,x/3],dy∈[-y/3,y/3],dz∈[-z0/4,z0/4],其中x,y为单个分光单元分别在x和y方向上的尺寸,z0为分光单元距离探测层的距离。
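结合上述容差范围,下面给出一个示意性的检查片段(函数名与示例数值均为假设),用于判断装配误差dx、dy、dz是否落在允许范围内:

    def within_tolerance(dx, dy, dz, x, y, z0):
        # 检查 dx∈[-x/3, x/3]、dy∈[-y/3, y/3]、dz∈[-z0/4, z0/4],
        # 其中x、y为单个分光单元在x、y方向上的尺寸,z0为分光单元距离探测层的距离
        return abs(dx) <= x / 3 and abs(dy) <= y / 3 and abs(dz) <= z0 / 4

    # 假设性示例:分光单元尺寸1.6μm×1.6μm,与探测层间距2μm
    print(within_tolerance(0.3, -0.4, 0.4, 1.6, 1.6, 2.0))   # True,均在容差内
    print(within_tolerance(0.6, 0.0, 0.0, 1.6, 1.6, 2.0))    # False,dx超出±x/3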
本申请实施例利用傍轴近似条件下,入射光线的入射角度与分光单元的位置关系,设计出具有大面积二维马赛克图案的超表面层,解决了传统设计中超表面层角度敏感性问题,同时简化了设计方法,提高了设计效率。利用入射角度与分光单元的位置关系,设计出的马赛克图案,衍射效率更高,使得图像传感器的性能得到提升。
该超表面层同时能够实现分颜色和偏振,同时实现色彩和偏振信息的分离、重构和利用,提升了对光的利用效率,增加了成像信息,进而提升了成像质量。
另外,由于该超表面层是微纳结构,尺寸小、体积紧凑,与CMOS工艺兼容,可以直接集成在CMOS芯片上,易于集成到任何光学系统中。超表面结构可基于成熟的微纳制备工艺而实现,制备难度低,易实现量产。并且,该结构具有一定的装配容差,在加工过程中,降低了对加工工艺的要求。
本申请实施例提供的图像传感器不仅可以应用于可见光波段,实现成像,还可以用于在红外波段、紫外波段,甚至太赫兹、微波、无线电等波段,实现分光或不同波段的波的分束。
本申请实施例还提供了一种电子设备,该电子设备包括如图1至图49任一幅所示的图像传感器。
示例性地,该电子设备包括但不限于手机、平板电脑、相机、摄像头等。
上述图像传感器应用在上述电子设备的相机模组中。
图50是本申请实施例提供的相机模组的结构示意图。参见图50,该相机模组包括:镜头1、反光镜2、五棱镜3、取景器4和图像传感器5。
如图50所示,入射光线通过镜头1、反光镜2、五棱镜3进入到取景器4中,进而进入到人眼,当人眼判断是拍照场景时,按下快门,此时反光镜2迅速抬起,入射光线径直地照向右侧的图像传感器5,图像传感器5将光信号转化为电信号。
该电子设备还包括处理器,处理器用于接收图像传感器5输出的电信号,并对电信号进行处理。例如,对电信号进行重组处理,生成图像,并保存到电子设备的存储卡中。
当然上述相机模组的结构仅为示例,本申请对相机模组的结构不做限定,只要包括上述图像传感器即可。
值得说明的是,本申请提出的图像传感器,不仅可以利用在消费级相机、手机终端,也可以集成在工业级相机、成像系统、显示系统中,实现在环境监测、农业监测等领域的应用。
图51是本申请实施例提供的一种图像传感方法的流程图。该方法由前述图像传感器实现,参见图51,该图像传感方法包括:
S51:接收入射光,入射光包括多个入射角度的入射光线。
S52:通过第一超表面层中两种不同的图案分别将不同入射角度的入射光线分为多种颜色的光,两种不同的图案中的一种图案是通过两种不同的图案中的另一种图案变换得到的。
S53:将多种颜色的光转为电信号。
在本申请提供的图像传感方法中,通过第一超表面层中两种不同的图案对不同入射角度进行分光,其中,两种不同的图案中的一种图案是通过两种不同的图案中的另一种图案变换 得到的,因而在超表面层的制作时,不再需要对每个分光单元的图案进行针对性设计和制作,简化了超表面层的制作过程,提高了超表面层的制作效率。
可选地,该方法还包括:
通过第一超表面层中两种不同的图案分别将不同入射角度的入射光线分为多种偏振的光;
将多种颜色的光转为电信号,包括:
将第一超表面层输出的多种光转为电信号,多种光中任意两种光的颜色和偏振中的至少一个不同。
可选地,该方法还包括:
通过第二超表面层将接收到的光分为多种偏振的光;
第二超表面层接收第一超表面层输出的光,将多种颜色的光转为电信号,包括:将第二超表面层输出的多种光转为电信号,多种光中任意两种光的颜色和偏振中的至少一个不同;
或者,第二超表面层接收入射光,第一超表面层接收第二超表面层输出的光,将多种颜色的光转为电信号,包括:将第一超表面层输出的多种光转为电信号,多种光中任意两种光的颜色和偏振中的至少一个不同。
这里采用两个超表面层时,既可以先进行分颜色再进行分偏振,也可以先进行分偏振再进行分颜色。
可选地,该方法还包括:
在将多种颜色的光转为电信号之前,对多种颜色的光分别进行滤光,滤除多种颜色的光中每种颜色的光内的其他颜色杂散光,从而起到降低串扰的作用,可以进一步提升分色性能。
例如,第一超表面层分出的多种颜色的光中包括一束红色光,则对该红色光进行滤光时,会滤除红色以外的成分,降低其他颜色的干扰,也即减小串扰。

Claims (17)

  1. 一种图像传感器,其特征在于,所述图像传感器包括盖板、第一超表面层和探测层,所述第一超表面层位于所述盖板和所述探测层之间;
    所述第一超表面层用于接收入射光,所述入射光包括多个入射角度的入射光线;所述第一超表面层包括多个分光单元,不同分光单元对应接收来自不同入射角度的入射光线,所述多个分光单元包括第一分光单元以及第二分光单元,所述第一分光单元和所述第二分光单元的图案不同,所述第一分光单元和所述第二分光单元分别通过不同的图案将对应的入射光线分为多种颜色的光,所述第二分光单元的图案是通过所述第一分光单元的图案变换得到的;
    所述探测层用于接收所述多个分光单元分出的多种颜色的光,将接收到的所述多种颜色的光转为电信号。
  2. 根据权利要求1所述的图像传感器,其特征在于,
    所述第二分光单元的图案是通过所述第一分光单元的图案平移变换得到的,所述平移变换的方向及距离根据所述第二分光单元和所述第一分光单元的位置关系确定。
  3. 根据权利要求1或2所述的图像传感器,其特征在于,
    所述第二分光单元位于所述第一分光单元的第一侧,所述第二分光单元的图案是通过所述第一分光单元的图案向第二侧平移变换得到的,所述第二侧和所述第一侧是所述第一分光单元的相对两侧。
  4. 根据权利要求2或3所述的图像传感器,其特征在于,所述平移变换的距离与所述第二分光单元和所述第一分光单元的距离正相关。
  5. 根据权利要求1至4任一项所述的图像传感器,其特征在于,任意相邻的两个分光单元的图案均不同。
  6. 根据权利要求1至4任一项所述的图像传感器,其特征在于,所述多个分光单元阵列布置,一行或一列分光单元分为多组分光单元,所述多组分光单元中的一组分光单元包括多个连续排列的分光单元,所述多个连续排列的分光单元的图案相同。
  7. 根据权利要求1至6任一项所述的图像传感器,其特征在于,所述多个分光单元包括多个第二分光单元,所述多个第二分光单元围绕所述第一分光单元布置。
  8. 根据权利要求1至7任一项所述的图像传感器,其特征在于,所述第一分光单元对应的入射光线的入射角度在以0°为中心的范围内。
  9. 根据权利要求1至8任一项所述的图像传感器,其特征在于,所述第一超表面层还用于将对应的入射光线分成多种偏振的光;
    所述探测层,用于接收所述第一超表面层输出的多种光,将接收到的所述多种光转为电信号,所述多种光中任意两种光的颜色和偏振中的至少一个不同。
  10. 根据权利要求1至8任一项所述的图像传感器,其特征在于,所述图像传感器还包括第二超表面层;
    所述第二超表面层用于将接收到的光分为多种偏振的光;
    所述第二超表面层位于所述盖板和所述第一超表面层之间,所述探测层用于接收所述第一超表面层输出的多种光,将接收到的所述多种光转为电信号;或者,所述第一超表面层位于所述盖板和所述第二超表面层之间,所述探测层用于接收所述第二超表面层输出的多种光, 将接收到的所述多种光转为电信号;
    其中,所述多种光中任意两种光的颜色和偏振中的至少一个不同。
  11. 根据权利要求1至10任一项所述的图像传感器,其特征在于,所述第二分光单元的图案是通过所述第一分光单元的图案平移变换后,更改平移变换后的图案中的部分图形得到的。
  12. 根据权利要求11所述的图像传感器,其特征在于,所述第一分光单元的图案包括阵列布置的多个像素点;
    平移变换后的图案中被更改的像素点占所述第一分光单元的图案中像素点总数的比例不超过阈值。
  13. 根据权利要求12所述的图像传感器,其特征在于,所述阈值的取值范围为20%~30%。
  14. 根据权利要求12或13所述的图像传感器,其特征在于,所述像素点被更改的形式包括如下至少一种:
    改变所述像素点的形状、改变第一像素点的数量、改变第一像素点和第二像素点的排列;
    其中,所述第一像素点和所述第二像素点是所述多个像素点中对应的材料折射率不同的两种像素点。
  15. 根据权利要求12至14任一项所述的图像传感器,其特征在于,所述平移变换的距离为像素点尺寸的整数倍。
  16. 根据权利要求1至15任一项所述的图像传感器,其特征在于,所述图像传感器还包括滤光片;
    所述滤光片位于所述第一超表面层和所述探测层之间,所述滤光片用于对所述分光单元分出的多种颜色的光分别进行滤光,滤除所述多种颜色的光中每种颜色的光内的其他颜色杂散光。
  17. 一种电子设备,其特征在于,所述电子设备包括处理器以及如权利要求1至16任一项所述的图像传感器,所述处理器用于处理所述图像传感器输出的电信号。
PCT/CN2023/070113 2022-01-21 2023-01-03 图像传感器和电子设备 WO2023138355A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210074665.1 2022-01-21
CN202210074665.1A CN116528069A (zh) 2022-01-21 2022-01-21 图像传感器和电子设备

Publications (1)

Publication Number Publication Date
WO2023138355A1 true WO2023138355A1 (zh) 2023-07-27

Family

ID=87347760

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/070113 WO2023138355A1 (zh) 2022-01-21 2023-01-03 图像传感器和电子设备

Country Status (2)

Country Link
CN (1) CN116528069A (zh)
WO (1) WO2023138355A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101779288A (zh) * 2008-06-18 2010-07-14 松下电器产业株式会社 固体摄像装置
WO2021070305A1 (ja) * 2019-10-09 2021-04-15 日本電信電話株式会社 分光素子アレイ、撮像素子および撮像装置
CN112701132A (zh) * 2019-10-23 2021-04-23 三星电子株式会社 图像传感器和包括该图像传感器的电子装置
CN113055575A (zh) * 2021-03-30 2021-06-29 Oppo广东移动通信有限公司 图像传感器、摄像头模组及电子设备
CN113286067A (zh) * 2021-05-25 2021-08-20 Oppo广东移动通信有限公司 图像传感器、摄像装置、电子设备及成像方法

Also Published As

Publication number Publication date
CN116528069A (zh) 2023-08-01

Similar Documents

Publication Publication Date Title
US10403665B2 (en) Two-dimensional solid-state image capture device with polarization member, color filter and light shielding layer for sub-pixel regions and polarization-light data processing method to obtain polarization direction and polarization component intensity
US9532033B2 (en) Image sensor and imaging device
US8208052B2 (en) Image capture device
US10348990B2 (en) Light detecting device, solid-state image capturing apparatus, and method for manufacturing the same
CN106412389A (zh) 具有选择性红外滤光片阵列的传感器组件
US11659289B2 (en) Imaging apparatus and method, and image processing apparatus and method
KR20170037452A (ko) 색분리 소자를 포함하는 이미지 센서 및 이를 포함하는 촬상 장치
US11460666B2 (en) Imaging apparatus and method, and image processing apparatus and method
US9425229B2 (en) Solid-state imaging element, imaging device, and signal processing method including a dispersing element array and microlens array
WO2013164902A1 (ja) 固体撮像装置
WO2013094178A1 (ja) 撮像装置
US20130135502A1 (en) Color separation filter array, solid-state imaging element, imaging device, and display device
WO2023051475A1 (zh) 图像传感器、摄像设备及显示装置
WO2021136469A1 (zh) 一种图像传感器、分光滤色器件及图像传感器的制备方法
KR20210028808A (ko) 이미지 센서 및 이를 포함하는 촬상 장치
CN216748162U (zh) 一种多光谱大视场曲面复眼透镜系统
CN102918355A (zh) 三维摄像装置、光透过部、图像处理装置及程序
US20140168485A1 (en) Solid-state image sensor, image capture device and signal processing method
WO2021070305A1 (ja) 分光素子アレイ、撮像素子および撮像装置
WO2023138355A1 (zh) 图像传感器和电子设备
WO2023185915A1 (zh) 偏振成像传感器及电子设备
WO2022111459A1 (zh) 芯片结构、摄像组件和电子设备
US20230239552A1 (en) Image sensor and imaging device
CN209105345U (zh) 一种图像传感器以及成像模组
WO2022023170A1 (en) Color splitter system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23742671

Country of ref document: EP

Kind code of ref document: A1