WO2021036721A1 - Image sensor, imaging system, and terminal - Google Patents

Info

Publication number
WO2021036721A1
Authority
WO
WIPO (PCT)
Prior art keywords
light, pixel, sub, lens, group
Application number
PCT/CN2020/106985
Other languages
French (fr)
Chinese (zh)
Inventor
张海裕
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2021036721A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026 Details of the structure or mounting of specific components
    • H04M1/0264 Details of the structure or mounting of specific components for a camera module assembly
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices

Definitions

  • This application relates to the technical field of consumer electronics, and in particular to an image sensor, an imaging system, and a terminal.
  • a conventional image sensor generally splits light through a color filter array (CFA), which separates the light into the three colors red, green, and blue; the light then enters the pixel array of the image sensor for photoelectric conversion and imaging.
  • the embodiments of the present application provide an image sensor, an imaging system, and a terminal.
  • the image sensor of the embodiments of the present application includes a metalens and a pixel array.
  • the pixel array is located on the light exit side of the metalens, and the metalens is used to split the incident light from the light entrance side of the metalens into multiple kinds of outgoing light with different wavelengths.
  • the outgoing light exits from the light exit side toward the pixel array at different exit angles.
  • the imaging system of the embodiments of the present application includes a lens group and an image sensor.
  • the image sensor is arranged on the image side of the lens group.
  • the image sensor includes a metalens and a pixel array.
  • the pixel array is located on the light exit side of the metalens, and the metalens is used to split the incident light from the light entrance side of the metalens into multiple kinds of outgoing light with different wavelengths.
  • the outgoing light exits from the light exit side toward the pixel array at different exit angles.
  • the terminal of the present application includes a housing and an imaging system.
  • the imaging system is installed on the housing.
  • the imaging system includes a lens group and an image sensor.
  • the image sensor is arranged on the image side of the lens group.
  • the image sensor includes a metalens and a pixel array.
  • the pixel array is located on the light exit side of the metalens, and the metalens is used to split the incident light from the light entrance side of the metalens into multiple kinds of outgoing light with different wavelengths.
  • the outgoing light exits from the light exit side toward the pixel array at different exit angles.
  • FIG. 1 is a schematic plan view of a terminal according to some embodiments of the present application.
  • FIG. 2 is a schematic plan view from another perspective of the terminal according to some embodiments of the present application.
  • FIG. 3 is a schematic structural diagram of an imaging system according to some embodiments of the present application.
  • FIG. 4 is a three-dimensional exploded schematic diagram of an image sensor according to some embodiments of the present application.
  • FIG. 5 is a three-dimensional schematic diagram of a microlens, a microstructure group, and a pixel group in an image sensor according to some embodiments of the present application.
  • FIG. 6 is a schematic diagram of the offset between the microlenses and the microstructure groups on a sub-photosensitive surface of the image sensor according to some embodiments of the present application.
  • FIG. 7 is a three-dimensional schematic diagram of a pixel array according to some embodiments of the present application.
  • FIG. 8 is a schematic plan view of an imaging system according to some embodiments of the present application.
  • FIG. 9 is a schematic plan view of a sub-photosensitive surface in the image sensor of FIG. 8.
  • FIG. 10 is a schematic plan view of an imaging system according to some embodiments of the present application.
  • FIG. 11 is a schematic diagram of the field of view of the lens group according to some embodiments of the present application.
  • FIGS. 12 and 13 are three-dimensional assembly diagrams of imaging systems according to certain embodiments of the present application.
  • FIG. 14 is a schematic flowchart of an image acquisition method according to some embodiments of the present application.
  • FIGS. 15a and 15b are schematic diagrams of the principle of image acquisition methods in some embodiments of the present application.
  • FIG. 16 is a schematic plan view of an imaging system according to some embodiments of the present application.
  • FIG. 17 is a schematic flowchart of an image acquisition method according to some embodiments of the present application.
  • FIGS. 18a and 18b are schematic diagrams of the principle of image acquisition methods in some embodiments of the present application.
  • FIGS. 19 and 20 are schematic flowcharts of image acquisition methods in some embodiments of the present application.
  • the first feature being "on" or "under" the second feature may mean that the first and second features are in direct contact, or that the first and second features are in indirect contact through an intermediate medium.
  • the first feature being "on", "above", or "over" the second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature.
  • the first feature being "below", "under", or "beneath" the second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
  • the image sensor 10 includes a metalens 16 and a pixel array 13.
  • the pixel array 13 is located on the light exit side 166 of the metalens 16, and the metalens 16 is used to split the incident light L from the light entrance side 165 of the metalens 16 into multiple kinds of outgoing light L′ with different wavelengths.
  • the outgoing light L′ exits from the light exit side 166 toward the pixel array 13 at different exit angles.
  • the metalens 16 includes a lens body 161 and a microstructure array 162.
  • the lens body 161 includes a light incident surface 163 located on the light entrance side 165 of the metalens 16 and a light exit surface 164 located on the light exit side 166 of the metalens 16.
  • the microstructure array 162 is arranged on the light incident surface 163.
  • the microstructure array 162 includes a plurality of microstructure groups 1621.
  • each microstructure group 1621 includes a plurality of microstructure units 1622.
  • the pixel array 13 includes a plurality of pixel groups 132, and the pixel groups 132 and the microstructure groups 1621 correspond one to one.
  • the shape, size, arrangement, and angle of the multiple microstructure units 1622 of each microstructure group 1621 are determined according to the wavelength and the exit angle of the outgoing light L′.
  • the pixel group 132 includes a first pixel 1311, a second pixel 1312, a third pixel 1313, and a fourth pixel 1314.
  • in one embodiment, the multiple kinds of outgoing light with different wavelengths include red light, first green light, second green light, and blue light; the first pixel 1311 is used to receive the red light, the second pixel 1312 is used to receive the first green light, the third pixel 1313 is used to receive the blue light, and the fourth pixel 1314 is used to receive the second green light.
  • in another embodiment, the pixel group 132 includes a first pixel 1311, a second pixel 1312, a third pixel 1313, and a fourth pixel 1314, and the multiple kinds of outgoing light with different wavelengths include red light, first yellow light, blue light, and second yellow light.
  • the first pixel 1311 is used to receive the red light
  • the second pixel 1312 is used to receive the first yellow light
  • the third pixel 1313 is used to receive the blue light
  • the fourth pixel 1314 is used to receive the second yellow light.
  • the image sensor 10 includes a microlens array 12, and the microlens array 12 is disposed on the light incident side 165.
  • the microlens array 12 includes a plurality of microlenses 121.
  • the pixel group 132, the microstructure group 1621, and the microlens 121 correspond one to one.
  • the image sensor 10 includes a photosensitive surface 11 located on the imaging surface S1, and the photosensitive surface 11 includes a plurality of sub-photosensitive surfaces 111. On each sub-photosensitive surface 111, the microlens 121 at the center position of the sub-photosensitive surface 111 is aligned with its corresponding microstructure group 1621, while the microlenses 121 at non-central positions are offset from their corresponding microstructure groups 1621.
  • a plurality of circles centered on the center position are all located at non-central positions. As the radius of the circle on which a microlens 121 lies gradually increases, the offset between the microlens 121 and its corresponding microstructure group 1621 also gradually increases.
  • the imaging system 100 of the embodiments of the present application includes an image sensor 10 and a lens group 20.
  • the image sensor 10 is provided on the image side of the lens group 20.
  • the image sensor 10 includes a metalens 16 and a pixel array 13.
  • the pixel array 13 is located on the light exit side 166 of the metalens 16, and the metalens 16 is used to split the incident light L from the light entrance side 165 of the metalens 16 into multiple kinds of outgoing light L′ with different wavelengths.
  • the outgoing light L′ exits from the light exit side 166 toward the pixel array 13 at different exit angles.
  • the metalens 16 includes a lens body 161 and a microstructure array 162.
  • the lens body 161 includes a light incident surface 163 located on the light entrance side 165 of the metalens 16 and a light exit surface 164 located on the light exit side 166 of the metalens 16.
  • the microstructure array 162 is arranged on the light incident surface 163.
  • the microstructure array 162 includes a plurality of microstructure groups 1621.
  • each microstructure group 1621 includes a plurality of microstructure units 1622.
  • the pixel array 13 includes a plurality of pixel groups 132, and the pixel groups 132 and the microstructure groups 1621 correspond one to one.
  • the shape, size, arrangement, and angle of the multiple microstructure units 1622 of each microstructure group 1621 are determined according to the wavelength and the exit angle of the outgoing light L′.
  • the pixel group 132 includes a first pixel 1311, a second pixel 1312, a third pixel 1313, and a fourth pixel 1314.
  • in one embodiment, the multiple kinds of outgoing light with different wavelengths include red light, first green light, second green light, and blue light; the first pixel 1311 is used to receive the red light, the second pixel 1312 is used to receive the first green light, the third pixel 1313 is used to receive the blue light, and the fourth pixel 1314 is used to receive the second green light.
  • in another embodiment, the pixel group 132 includes a first pixel 1311, a second pixel 1312, a third pixel 1313, and a fourth pixel 1314, and the multiple kinds of outgoing light with different wavelengths include red light, first yellow light, blue light, and second yellow light.
  • the first pixel 1311 is used to receive the red light
  • the second pixel 1312 is used to receive the first yellow light
  • the third pixel 1313 is used to receive the blue light
  • the fourth pixel 1314 is used to receive the second yellow light.
  • the image sensor 10 includes a microlens array 12, and the microlens array 12 is disposed on the light incident side 165.
  • the microlens array 12 includes a plurality of microlenses 121.
  • the pixel group 132, the microstructure group 1621, and the microlens 121 correspond one to one.
  • the image sensor 10 includes a photosensitive surface 11 located on the imaging surface S1, and the photosensitive surface 11 includes a plurality of sub-photosensitive surfaces 111. On each sub-photosensitive surface 111, the microlens 121 at the center position of the sub-photosensitive surface 111 is aligned with its corresponding microstructure group 1621, while the microlenses 121 at non-central positions are offset from their corresponding microstructure groups 1621.
  • a plurality of circles centered on the center position are all located at non-central positions. As the radius of the circle on which a microlens 121 lies gradually increases, the offset between the microlens 121 and its corresponding microstructure group 1621 also gradually increases.
  • the image sensor 10 includes a photosensitive surface 11 located on the imaging surface S1.
  • the lens group 20 includes multiple groups of lenses 21; the imaging area 215 corresponding to each group of lenses 21 on the imaging surface S1 covers part of the photosensitive surface 11, and the imaging areas 215 corresponding to the multiple groups of lenses 21 on the imaging surface S1 collectively cover the entire photosensitive surface 11.
  • the terminal 1000 of the embodiment of the present application includes a housing 200 and the imaging system 100 of the above-mentioned embodiment.
  • the imaging system 100 is installed on the housing 200.
  • the terminal 1000 in the embodiment of the present application includes a housing 200 and an imaging system 100.
  • the imaging system 100 is installed on the housing 200.
  • the imaging system 100 includes an image sensor 10 and a lens group 20.
  • the image sensor 10 is provided on the image side of the lens group 20.
  • the image sensor 10 includes a metalens 16 and a pixel array 13.
  • the pixel array 13 is located on the light exit side 166 of the metalens 16, and the metalens 16 is used to split the incident light L from the light entrance side 165 of the metalens 16 into multiple kinds of outgoing light L′ with different wavelengths.
  • the outgoing light L′ exits from the light exit side 166 toward the pixel array 13 at different exit angles.
  • in a conventional image sensor, as the light passes through each CFA unit, only light of one color is transmitted, and the rest of the light is filtered out and lost, so the light utilization rate is low.
  • in the image sensor 10 of the present application, the metalens 16 divides the incident light L from the light entrance side 165 into a plurality of outgoing light beams L′ of different wavelengths, and the outgoing light beams L′ of different wavelengths exit at different exit angles.
  • the light is not filtered, suffers almost no loss, and the light utilization rate is high.
  • the terminal 1000 may be a mobile phone, a tablet computer, a monitor, a notebook computer, a teller machine, a gate, a smart watch, a head-mounted display device, a game console, and the like.
  • the embodiments of this application are described by taking the terminal 1000 as a mobile phone as an example. It can be understood that the specific form of the terminal 1000 is not limited to a mobile phone.
  • the housing 200 can be used to install the imaging system 100, or in other words, the housing 200 can be used as an installation carrier of the imaging system 100.
  • the terminal 1000 includes a front 901 and a back 902.
  • the imaging system 100 can be set on the front 901 as a front camera, and the imaging system 100 can also be set on the back 902 as a rear camera. In the embodiments of the present application, the imaging system 100 is set on the back 902 as a rear camera.
  • the housing 200 can also be used to install functional modules of the terminal 1000 such as the imaging system 100, the power supply device, and the communication device, so that the housing 200 provides dust, drop, and water protection for these functional modules.
  • the image sensor 10 includes a photosensitive surface 11, a micro lens array 12, a super lens 16 and a pixel array 13.
  • the photosensitive surface 11 is located on the imaging surface S1.
  • the photosensitive surface 11 is rectangular.
  • the photosensitive surface 11 includes a plurality of sub-photosensitive surfaces 111.
  • the photosensitive surface 11 may include one, two, three, four, or even more sub-photosensitive surfaces 111.
  • the photosensitive surface 11 includes four sub-photosensitive surfaces 111; the four sub-photosensitive surfaces 111 are all rectangular, with the same length and the same width.
  • alternatively, the four sub-photosensitive surfaces 111 may all be circular, diamond-shaped, etc., or some of the four sub-photosensitive surfaces 111 may be rectangular while the rest are circular, diamond-shaped, or the like.
  • the sizes of the four sub-photosensitive surfaces 111 may also differ from one another, or two of them may be the same, or three of them may be the same.
  • the microlens array 12 is located on the photosensitive surface 11, between the lens group 20 and the metalens 16, i.e., on the light entrance side 165 of the metalens 16.
  • the microlens array 12 includes a plurality of microlenses 121.
  • each microlens 121 may be a convex lens that condenses the light emitted from the lens group 20 onto the microlens 121, so that more light irradiates the metalens 16.
  • the metalens 16 is located between the microlens array 12 and the pixel array 13.
  • the metalens 16 includes a lens body 161 and a microstructure array 162.
  • the lens body 161 includes a light incident surface 163 located on the light entrance side 165 of the metalens 16 and a light exit surface 164 located on the light exit side 166 of the metalens 16.
  • the light entrance side 165 is the side of the metalens 16 facing the microlens array 12
  • the light exit side 166 is the side of the metalens 16 facing away from the microlens array 12.
  • the lens body 161 can be made of materials with high light transmittance.
  • for example, the lens body 161 can be made of plastic or glass with high light transmittance (a transmittance greater than 90%).
  • the lens body 161 serves as a carrier of the microstructure array 162, and the light entering from the light entrance side 165 passes through the lens body 161 with almost no loss, which is beneficial to improving the light utilization rate.
  • the microstructure array 162 is arranged on the incident surface 163.
  • the microstructure array 162 includes a plurality of microstructure groups 1621.
  • each microstructure group 1621 corresponds to one or more microlenses 121.
  • for example, one microstructure group 1621 may correspond to one microlens 121, two microlenses 121, three microlenses 121, or four microlenses 121.
  • the microstructure group 1621 can also correspond to more (more than four) microlenses 121, which will not be listed here. In the embodiments of the present application, each microstructure group 1621 corresponds to one microlens 121.
  • the microstructure group 1621 includes a plurality of microstructure units 1622.
  • the shape, size, arrangement and angle of the plurality of microstructure units 1622 are determined according to the wavelength and the exit angle of the emitted light L'.
  • the shape of the microstructure unit 1622 may be a rectangular parallelepiped, a cube, a cylinder, or even other irregular shapes (such as a rectangular parallelepiped with a portion of which is cut off). In the embodiment of the present application, the microstructure unit 1622 is a rectangular parallelepiped.
  • the sizes of the microstructure units 1622 may be the same or different.
  • for example, the sizes of the multiple microstructure units 1622 may all be the same, or the multiple microstructure units 1622 may be divided into multiple parts (for example, two parts, three parts, etc.), where the microstructure units 1622 within each part have the same size and the microstructure units 1622 in different parts have different sizes.
  • in the embodiments of the present application, the size of the microstructure units 1622 in each microstructure group 1621 is the same.
  • the microstructure units 1622 in each microstructure group 1621 can be arranged in a regular pattern (such as a rectangle, circle, "L" shape, "T" shape, etc.) or in an irregular pattern (such as a truncated part of a rectangle, circle, etc.).
  • the angle of the microstructure units 1622 refers to the included angle between the microstructure unit 1622 and the light incident surface 163, and the included angle can be any angle in the interval [0 degrees, 90 degrees].
  • in the embodiments of the present application, the angle between the microstructure units 1622 in each microstructure group 1621 and the light incident surface 163 is 90 degrees, that is, the angle between the long side of the rectangular-parallelepiped microstructure unit 1622 and the light incident surface 163 is 90 degrees.
  • the microstructure units 1622 of each microstructure group 1621 have the same shape, size, arrangement, and angle.
  • the microstructure unit 1622 is formed of nanoscale titanium dioxide, so that the microstructure unit 1622 can achieve high smoothness and a precise length-to-width ratio, which helps the microstructure group 1621 accurately divide the incident light L into multiple beams of outgoing light L′ of different wavelengths.
  • the metalens 16 (specifically, the microstructure groups 1621) is used to split the incident light L from the light entrance side 165 into multiple kinds of outgoing light L′ with different wavelengths, and the outgoing light L′ with different wavelengths exits from the light exit side 166 toward the pixel array 13 at different exit angles.
  • the incident light L, after passing through the microstructure array 162, is divided into a plurality of outgoing light beams L′ of different wavelengths, namely red light R, first green light G1, second green light G2, and blue light B.
  • the wavelengths of the first green light G1 and the second green light G2 may be the same or different.
  • the pixel array 13 is located on the light exit side 166 of the metalens 16.
  • the pixel array 13 includes a plurality of pixel groups 132, and the pixel groups 132, the microstructure groups 1621, and the microlenses 121 are arranged in one-to-one correspondence.
  • each pixel group 132 includes four pixels 131 (the first pixel 1311, the second pixel 1312, the third pixel 1313, and the fourth pixel 1314), and each microstructure group 1621 divides the incident light L passing through it into four kinds of outgoing light L′ with different wavelengths (red light R, first green light G1, blue light B, and second green light G2); the red light R, first green light G1, blue light B, and second green light G2 respectively enter the first pixel 1311, the second pixel 1312, the third pixel 1313, and the fourth pixel 1314 of the corresponding pixel group 132 for photoelectric conversion.
  • the red light R can include part or all of the light within the wavelength interval [622 nanometers (nm), 770nm]
  • the first green light G1 can include part or all of the light within the wavelength interval [492nm, 500nm]
  • the second green light G2 may include part or all of the light within the wavelength interval (500nm, 577nm)
  • the blue light B may include part or all of the light within the wavelength interval [455nm, 492nm).
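As a rough illustration of this wavelength-to-pixel mapping, the band edges quoted above can be sketched as a small lookup. The function name and return strings are illustrative only and not part of the patent; the interval endpoints follow the ranges in this description.

```python
# Illustrative mapping from wavelength (nm) to the pixel that receives it,
# following the bands quoted above: B [455, 492), G1 [492, 500],
# G2 (500, 577), R [622, 770]. Wavelengths outside these bands return None.
def pixel_for_wavelength(nm: float):
    if 455 <= nm < 492:
        return "third pixel 1313 (blue B)"
    if 492 <= nm <= 500:
        return "second pixel 1312 (first green G1)"
    if 500 < nm < 577:
        return "fourth pixel 1314 (second green G2)"
    if 622 <= nm <= 770:
        return "first pixel 1311 (red R)"
    return None  # outside the quoted bands

print(pixel_for_wavelength(650))  # falls in the red band
print(pixel_for_wavelength(530))  # falls in the second-green band
```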
  • alternatively, each microstructure group 1621 divides the incident light L passing through it into four kinds of outgoing light L′ with different wavelengths (red light R, first yellow light Y1, blue light B, and second yellow light Y2); the red light R, first yellow light Y1, blue light B, and second yellow light Y2 respectively enter the first pixel 1311, the second pixel 1312, the third pixel 1313, and the fourth pixel 1314 of the corresponding pixel group 132 for photoelectric conversion.
  • the red light R may include part or all of the light within the wavelength interval [622nm, 770nm]
  • the first yellow light Y1 may include part or all of the light within the wavelength interval [577nm, 580nm]
  • the second yellow light Y2 may include part or all of the light within the wavelength interval (580nm, 597nm]
  • the blue light B may include part or all of the light within the wavelength interval [455nm, 492nm].
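Both splitting schemes map one microstructure group onto the same 2 × 2 pixel group; the two variants described above can be summarized as below. The layout tuples and channel labels are shorthand introduced here for illustration, not notation from the patent.

```python
# The two splitting schemes described above, as 2x2 pixel-group layouts.
# Row-major order: (first pixel 1311, second pixel 1312) /
#                  (third pixel 1313, fourth pixel 1314).
RGGB_VARIANT = (("R", "G1"),
                ("B", "G2"))   # red, two greens, blue
RYYB_VARIANT = (("R", "Y1"),
                ("B", "Y2"))   # red, two yellows, blue

# Both variants place red and blue on the same pixels; only the pair of
# intermediate-wavelength channels (green vs. yellow) differs.
assert RGGB_VARIANT[0][0] == RYYB_VARIANT[0][0] == "R"
assert RGGB_VARIANT[1][0] == RYYB_VARIANT[1][0] == "B"
```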
  • in a conventional image sensor, a filter is used to filter and absorb the light so that only light of the corresponding wavelength enters the corresponding pixel; in the present application, the metalens 16 replaces the role of the filter.
  • the light is not filtered and absorbed but is directly divided by the microstructure group 1621 into multiple outgoing light beams L′ of different wavelengths that travel to the corresponding pixels 131; there is almost no light loss, so the light utilization rate is higher.
  • moreover, the microlenses 121 do not need to be set in one-to-one correspondence with the pixels as in a traditional image sensor, where each microlens converges light into its corresponding pixel; each microlens 121 only needs to converge the light onto the corresponding microstructure group 1621, and the corresponding microstructure group 1621 then divides the light into light of different wavelengths directed to the corresponding pixels 131. Since no light is lost to filtering, even with fewer microlenses 121 the amount of light received by the pixel array 13 can still meet shooting requirements, and the manufacturing requirements and cost of the microlens array 12 can be reduced.
  • the size of the microlens 121 may be larger than the size of the microlens in a conventional image sensor, so that the microlens 121 can condense more light onto the microstructure group 1621, thereby increasing the amount of light reaching the pixel array 13.
  • on each sub-photosensitive surface 111, the microlens 121 at the center position of the sub-photosensitive surface 111 is aligned with its corresponding microstructure group 1621, while the microlenses 121 at non-central positions are offset from their corresponding microstructure groups 1621.
  • the center position of the sub-photosensitive surface 111 is the intersection of the diagonals of the rectangle.
  • the multiple circles centered on the center position, with radii greater than 0 and less than half of the diagonal length, are all located at non-central positions.
  • the offsets between the microstructure groups 1621 and the corresponding microlenses 121 distributed on the same circle are the same, and the offset between a microstructure group 1621 and the corresponding microlens 121 is positively correlated with the size of the radius.
  • the offset refers to the distance between the center of the orthographic projection of the microlens 121 on the microstructure array 162 and the center of the corresponding microstructure group 1621.
  • the offset between the microlens 121 and the corresponding microstructure group 1621 is positively correlated with the radius of the circle on which the microlens 121 is located. This means that as the radius of the circle on which the microlens 121 is located gradually increases, the offset between the microlens 121 and the corresponding microstructure group 1621 also gradually increases. For example, if the radii of three circles r1, r2, and r3 gradually increase, and the offsets of the microlenses 121 and the corresponding microstructure groups 1621 distributed on the circumferences of r1, r2, and r3 are X1, X2, and X3, respectively, then X1 < X2 < X3.
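The monotone relation between a microlens's radial position and its offset can be sketched numerically. The linear law below is an assumption made purely for illustration; the description only requires that the offset increase with radius, not any particular functional form.

```python
# Illustrative offset model: the offset grows monotonically with the radius
# of the circle on which a microlens 121 lies. A linear law k * r is assumed
# here only for illustration; any monotonically increasing law would match
# the description equally well.
def offset_for_radius(r: float, k: float = 0.05) -> float:
    return k * r  # all microlenses on the same circle share this offset

r1, r2, r3 = 1.0, 2.0, 3.0   # three circles of increasing radius
x1, x2, x3 = (offset_for_radius(r) for r in (r1, r2, r3))
assert x1 < x2 < x3          # matches X1 < X2 < X3 in the description
```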
  • a shading member 14 is formed at the junction of two adjacent sub-photosensitive surfaces 111.
  • the light-shielding member 14 may be arranged at the junction of the two sub-photosensitive surfaces 111 by gluing or the like.
  • the shading member 14 may be made of an opaque material, and the shading member 14 may also be made of a material that can absorb light.
  • the lens group 20 includes multiple groups of lenses 21.
  • for example, the lens group 20 may include one group of lenses 21, two groups of lenses 21, three groups of lenses 21, four groups of lenses 21, or even more groups of lenses 21.
  • the lens group 20 of the embodiments of the present application includes four groups of lenses 21.
  • the imaging area 215 corresponding to each group of lenses 21 on the imaging surface S1 covers part of the photosensitive surface 11.
  • the imaging area 215 corresponding to each group of lenses 21 on the imaging surface S1 refers to the coverage area, on the imaging surface S1, of the light rays emitted after passing through that group of lenses 21.
  • the imaging area 215 corresponding to each group of lenses 21 on the imaging surface S1 covers at least one corresponding sub-photosensitive surface 111.
  • the imaging areas 215 of the four groups of lenses 21 collectively cover the entire photosensitive surface 11; that is to say, the photosensitive surface 11 is located within the range jointly covered by the imaging areas 215 of the four groups of lenses 21.
  • the first imaging area 2151 corresponding to the first group lens 211 on the imaging surface S1 covers the first sub-photosensitive surface 1111
  • the second imaging area 2152 corresponding to the second group lens 212 on the imaging surface S1 covers the second sub-photosensitive surface 1112
  • the third imaging area 2153 corresponding to the third group lens 213 on the imaging surface S1 covers the third sub-photosensitive surface 1113
  • the fourth imaging area 2154 corresponding to the fourth group lens 214 on the imaging surface S1 covers the fourth sub-photosensitive surface 1114, so that the first imaging area 2151, the second imaging area 2152, the third imaging area 2153, and the fourth imaging area 2154 collectively cover the entire photosensitive surface 11.
  • Each lens group 21 may include one or more lenses.
• for example, each group of lenses 21 may include one lens, which may be a convex lens or a concave lens; for another example, each group of lenses 21 may include multiple lenses (two or more) arranged sequentially along the direction of the optical axis O'; the multiple lenses may all be convex lenses, may all be concave lenses, or may be partly convex and partly concave.
  • each group of lenses 21 includes one lens.
• the imaging area 215 corresponding to each group of lenses 21 on the imaging surface S1 may be circular, rectangular, rhombic, etc. In the embodiment of the present application, each group of lenses 21 adopts an aspheric lens, and the imaging area 215 is circular.
• the circular imaging area 215 is exactly the circumscribed circle of the rectangular sub-photosensitive surface 111. In the area of the circular imaging area 215 that does not overlap the rectangular sub-photosensitive surface 111, part of the corresponding light does not enter the range of the photosensitive surface 11, and the other part of the corresponding light is blocked and absorbed by the shading member 14, so it cannot reach the adjacent sub-photosensitive surfaces 111, thereby preventing the light rays from different groups of lenses 21 from interfering with each other.
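Because the circular imaging area 215 circumscribes the rectangular sub-photosensitive surface 111, the fraction of the image circle that falls outside the rectangle (light that misses the photosensitive surface 11 or is absorbed by the shading member 14) follows directly from the geometry. The dimensions below are hypothetical; only the formula is implied by the circumscribed-circle relationship.

```python
import math

# Sketch (dimensions hypothetical): for a rectangular sub-photosensitive
# surface of width w and height h, the circumscribed circle has diameter
# equal to the rectangle's diagonal, so its area is pi * (w^2 + h^2) / 4.
# The returned value is the fraction of the circular imaging area 215 that
# lies outside the rectangle.
def outside_fraction(w: float, h: float) -> float:
    circle_area = math.pi * (w**2 + h**2) / 4.0
    return 1.0 - (w * h) / circle_area

# For a square sub-surface the fraction is 1 - 2/pi, i.e. roughly 36% of the
# image circle lies outside the rectangle regardless of the square's size.
print(round(outside_fraction(4.0, 4.0), 3))
```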
• FIGS. 8 and 9 take the first sub-photosensitive surface 1111 and the corresponding first imaging area 2151 as an example for description.
• the light corresponding to the area 2155 in FIG. 9 does not enter the range of the first sub-photosensitive surface 1111 and does not fall within the range of the photosensitive surface 11, so it cannot be received by the pixels 131 corresponding to the photosensitive surface 11 for imaging.
• the light of the first lens group 211 cannot affect the imaging of the pixels 131 corresponding to the second sub-photosensitive surface 1112 or the imaging of the pixels 131 corresponding to the fourth sub-photosensitive surface 1114.
• the light from the second lens group 212 cannot affect the imaging of the pixels 131 corresponding to the first sub-photosensitive surface 1111 or the imaging of the pixels 131 corresponding to the third sub-photosensitive surface 1113.
• the light from the third lens group 213 cannot affect the imaging of the pixels 131 corresponding to the second sub-photosensitive surface 1112 or the imaging of the pixels 131 corresponding to the fourth sub-photosensitive surface 1114.
• the light of the fourth lens group 214 cannot affect the imaging of the pixels 131 corresponding to the third sub-photosensitive surface 1113 or the imaging of the pixels 131 corresponding to the first sub-photosensitive surface 1111.
• in this way, the light passing through the first group of lenses 211, the second group of lenses 212, the third group of lenses 213, and the fourth group of lenses 214 does not interfere mutually, thereby ensuring the accuracy of imaging.
  • At least one surface of at least one lens in each group of lenses 21 is a free-form surface.
• a lens 21 including a free-form surface has a non-rotationally symmetrical design and may include multiple axes of symmetry.
• the design of the imaging area 215 is therefore not restricted to a circle, and the imaging area 215 can be designed as a rectangle, a rhombus, or even an irregular shape (such as a "D" shape), etc.
• the imaging area 215 corresponding to each group of lenses 21 of the present application is rectangular and has the same rectangular size as the corresponding sub-photosensitive surface 111. In this case, there is no need to provide the shading member 14, and the light from different groups of lenses 21 will not interfere with each other.
• the optical axis O of each lens group 21 is inclined with respect to the photosensitive surface 11, and the optical axes O of the multiple groups of lenses 21 converge on the object side of the lens group 20 (that is, the side of the lens group 20 facing away from the photosensitive surface 11).
• the optical axis O of each group of lenses 21 may intersect a central axis O' that is perpendicular to the photosensitive surface 11 and passes through the center of the photosensitive surface 11, and intersect it on the object side.
• the included angle β between the optical axis O and the central axis O' of each group of lenses 21 is any angle within the interval (0 degrees, 15 degrees); for example, the included angle β is 1 degree, 2 degrees, 3 degrees, 5 degrees, or 7 degrees.
• the included angle β of different groups of lenses 21 may be the same or different.
• for example, the included angles β of the first group lens 211, the second group lens 212, the third group lens 213, and the fourth group lens 214 are the same, all being 10 degrees; or, the included angles β of the first group lens 211, the second group lens 212, the third group lens 213, and the fourth group lens 214 are all different, for example being 5 degrees and 7 degrees, respectively.
• the optical axis O of each lens group 21 is located in the plane in which the diagonal of the corresponding sub-photosensitive surface 111 and the central axis O' lie; specifically, the projection of the optical axis O of each group of lenses 21 on the photosensitive surface 11 is located on the diagonal of the corresponding sub-photosensitive surface 111.
• the optical axis O of each lens group 21 is inclined with respect to the photosensitive surface 11, and the optical axes O of the multiple groups of lenses 21 converge on the image side of the lens group 20.
• the optical axis O of each group of lenses 21 intersects a central axis O' that is perpendicular to the photosensitive surface 11 and passes through the center of the photosensitive surface 11, and intersects it on the image side.
• the included angle β between the optical axis O and the central axis O' of each group of lenses 21 is any angle within the interval (0 degrees, 15 degrees); for example, the included angle β is 1 degree, 2 degrees, 3 degrees, 5 degrees, 7 degrees, 10 degrees, 13 degrees, 15 degrees, etc.
• the optical axis O of each group of lenses 21 is located in the plane in which the diagonal of the corresponding sub-photosensitive surface 111 and the central axis O' lie; specifically, the projection of the optical axis O of each group of lenses 21 on the photosensitive surface 11 is located on the diagonal of the corresponding sub-photosensitive surface 111.
  • the field of view FOV of each lens group 21 is any angle in the interval [60 degrees, 80 degrees], for example, the field of view FOV is 60 degrees, 62 degrees, 65 degrees, 68 degrees, 70 degrees, 75 degrees, 78 degrees, 80 degrees and so on.
  • the angle of view FOV of the lenses 21 of different groups may be the same or different.
• for example, the fields of view FOV of the first group lens 211, the second group lens 212, the third group lens 213, and the fourth group lens 214 are the same, all being 60 degrees; or, the fields of view FOV of the first group lens 211, the second group lens 212, the third group lens 213, and the fourth group lens 214 are different, being 60 degrees, 65 degrees, 70 degrees, and 75 degrees, respectively; and so on.
• the fields of view of the multiple groups of lenses 21 sequentially form a blind zone range a0, a first field of view distance a1, and a second field of view distance a2.
  • the blind zone range a0, the first field of view distance a1, and the second field of view distance a2 are all distance ranges from the optical center plane S2, and the optical centers of the multiple lenses 21 are all on the optical center plane S2.
• the blind zone range a0 is the distance range in which the fields of view of the multiple groups of lenses 21 do not overlap.
• the blind zone range a0 is determined by the field of view FOV of the multiple groups of lenses 21 and the included angle β between the optical axis O and the central axis O' of the multiple groups of lenses 21.
• for example, when the field of view FOV of the multiple groups of lenses 21 is unchanged, the blind zone range a0 is negatively correlated with the included angle β between the optical axis O of the multiple groups of lenses 21 and the central axis O'; for another example, when the included angle β between the optical axis O and the central axis O' of the multiple groups of lenses 21 is unchanged, the blind zone range a0 is negatively correlated with the field of view FOV of the multiple groups of lenses 21.
• the included angle β between the optical axis O and the central axis O' of each group of lenses 21 is any angle within the interval (0 degrees, 15 degrees), and the blind zone range a0 is relatively small.
• the blind zone range a0 is the interval [1mm, 7mm]
• the first field of view distance a1 is the interval (7mm, 400mm]
• the second field of view distance a2 is the interval (400mm, +∞).
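The negative correlations stated above (a0 shrinks as β grows with FOV fixed, and as FOV grows with β fixed) can be illustrated with a simplified two-dimensional sketch. The lens spacing `s`, the specific angles, and the crossing-distance formula below are illustrative assumptions, not the patent's exact model.

```python
import math

# Simplified 2-D sketch: two lens groups a lateral distance s apart, each
# tilted toward the central axis by beta degrees, each with field of view
# fov degrees. Each lens's inner field edge makes an angle (beta + fov/2)
# with the central axis, so the fields begin to overlap at roughly
# d = (s / 2) / tan(beta + fov / 2). All numeric values are hypothetical.
def blind_zone_depth(s_mm: float, fov_deg: float, beta_deg: float) -> float:
    half = math.radians(beta_deg + fov_deg / 2.0)
    return (s_mm / 2.0) / math.tan(half)

d_small_beta = blind_zone_depth(8.0, 60.0, 2.0)
d_large_beta = blind_zone_depth(8.0, 60.0, 10.0)
assert d_large_beta < d_small_beta   # a0 shrinks as beta grows (FOV fixed)

d_small_fov = blind_zone_depth(8.0, 60.0, 5.0)
d_large_fov = blind_zone_depth(8.0, 80.0, 5.0)
assert d_large_fov < d_small_fov     # a0 shrinks as FOV grows (beta fixed)
```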
• the first field of view distance a1 is located between the blind zone range a0 and the second field of view distance a2. As the distance from the optical center plane S2 increases within the first field of view distance a1, the overlapping area within the combined field of view of the multiple groups of lenses 21 gradually increases, and reaches its maximum at the junction of the first field of view distance a1 and the second field of view distance a2 (where the overlapping area accounts for 100% of the total combined field of view). Within the second field of view distance a2, in the direction from the lens 21 toward the object side, the proportion of the overlapping area in the combined field of view of the multiple groups of lenses 21 to the entire combined field of view gradually decreases, and approaches a limit value at infinity.
• the combined field of view of the imaging system 100 of the present application at infinity is shown in FIG. 11, where the overlapping area 711 is the overlapping portion of the fields of view 71 of the four groups of lenses 21.
  • the present application limits the range of the field of view of each group of lenses 21.
• the field of view FOV and the included angle β between the optical axis O and the central axis O' of each group of lenses 21 are such that the proportion of the overlapping area 711 at infinity to the entire combined field of view (the range jointly covered by the fields of view of the four groups of lenses 21) is greater than 25%, which ensures that the image in the overlapping area 711 has sufficient sharpness.
  • the shading member 14 can also be used as an extension of the image sensor 10 and integrally formed with the image sensor 10.
• the shading member 14 is also provided with a microlens array 12, a super lens 16, and a pixel array 13, so that the shading member 14 can receive light for imaging.
• the light directed from each group of lenses 21 toward the sub-photosensitive surfaces 111 corresponding to the two adjacent groups of lenses 21 can be received by the shading member 14.
• the light emitted by the first lens group 211 toward the second sub-photosensitive surface 1112 and the fourth sub-photosensitive surface 1114 can be received by the shading member 14, and the light emitted by the second lens group 212 toward the first sub-photosensitive surface 1111 and the third sub-photosensitive surface 1113 can be received by the shading member 14.
• the light from the third lens group 213 directed toward the second sub-photosensitive surface 1112 and the fourth sub-photosensitive surface 1114 can be received by the shading member 14, and the light from the fourth lens group 214 directed toward the first sub-photosensitive surface 1111 and the third sub-photosensitive surface 1113 can be received by the shading member 14.
• compared with a shading member 14 that only shields and absorbs the light in the area 2156 (resulting in image loss in the area 2156), here the light in the portion of the imaging area 215 of each group of lenses 21 that is located in the area 2156 is received by the shading member 14 for imaging, so the image loss is small.
  • the imaging system 100 may further include a substrate 30 and a lens holder 40.
  • the substrate 30 may be a flexible circuit board, a rigid circuit board, or a rigid-flex circuit board.
  • the substrate 30 is a flexible circuit board, which is convenient for installation.
  • the substrate 30 includes a carrying surface 31.
  • the lens holder 40 is arranged on the bearing surface 31.
  • the lens holder 40 can be installed on the carrying surface 31 by gluing or the like.
• the lens holder 40 includes a holder body 41 and a plurality of lens barrels 42 provided on the holder body 41.
• the image sensor 10 (shown in FIG. 4) is arranged on the carrying surface 31 and is received in the holder body 41.
  • the number of lens barrels 42 may be one, two, three, four, or even more. In this embodiment, the number of lens barrels 42 is four.
  • the four lens barrels 42 are arranged at independent intervals and are used to install four groups of lenses 21. Each group of lenses 21 is installed in the corresponding lens barrel 42.
• in other embodiments, the number of lens barrels 42 is one, and the four groups of lenses 21 are installed in the same lens barrel 42. In this case, the four groups of lenses 21 can be separately manufactured and molded and then installed in the one lens barrel 42, or the four groups of lenses 21 can be integrally formed and installed in the one lens barrel 42.
  • the manufacturing process of the lens barrel 42 does not need to be changed.
• on the one hand, the traditional lens barrel manufacturing process can be used; on the other hand, the positional relationship between the four groups of lenses 21 is precisely determined by the mold when the lenses 21 are manufactured, and compared with installing the four groups of lenses 21 in four lens barrels 42 respectively, this avoids the situation in which the positional relationship between the four groups of lenses 21 fails to meet requirements due to installation errors.
  • the image acquisition method of the embodiment of the present application can be applied to the imaging system 100 of any embodiment of the present application.
• the imaging system 100 includes an image sensor 10 and a lens group 20. The image sensor 10 includes a photosensitive surface 11 located on the imaging surface S1, a super lens 16, and a pixel array 13; the pixel array 13 is located on the light exit side 166 of the super lens 16.
• the super lens 16 is used to split the incident light L entering the light incident side 165 of the super lens 16 to form a plurality of outgoing rays L' with different wavelengths, and the outgoing rays L' of different wavelengths are emitted from the light exit side 166 to the pixel array 13 at different exit angles.
  • the photosensitive surface 11 includes a plurality of sub-photosensitive surfaces 111, and the lens group 20 includes multiple groups of lenses 21.
  • the imaging area 215 corresponding to each group of lenses 21 on the imaging surface S1 covers part of the photosensitive surface 11.
• the imaging areas 215 corresponding to the multiple groups of lenses 21 on the imaging surface S1 collectively cover the entire photosensitive surface 11, and at least one surface of each group of lenses 21 is a free-form surface, so that the imaging area 215 corresponding to each group of lenses 21 on the imaging surface S1 is rectangular.
• the image acquisition method includes:
  • the imaging system 100 may further include a processor 60 (shown in FIG. 1 ), and the processor 60 is connected to the image sensor 10. All the pixels 131 on the image sensor 10 can be individually exposed.
• the processor 60 can control all the pixels 131 of the image sensor 10 to be exposed at the same time, so as to obtain the initial images corresponding to the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114, respectively.
• it is only necessary that the pixels 131 corresponding to the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114 all complete exposure.
• for example, if the exposure durations of the pixels 131 corresponding to the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114 are all the same duration T, then the pixels 131 corresponding to the four sub-photosensitive surfaces can start exposure at the same time and stop exposure at the same time; or, the exposure durations of the pixels 131 corresponding to the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114 are different, being 1/4T, 1/2T, 3/4T, and T, respectively.
• in this case, the processor 60 can control the pixels 131 corresponding to the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114 to start exposure at the same time; because the exposure durations differ, the times at which exposure ends also differ.
• the first sub-photosensitive surface 1111 stops exposure at 1/4T, the second sub-photosensitive surface 1112 stops at 1/2T, the third sub-photosensitive surface 1113 stops at 3/4T, and the fourth sub-photosensitive surface 1114 stops at T.
• after the pixels 131 corresponding to a sub-photosensitive surface 111 are exposed, a corresponding initial image P0 can be obtained.
• after the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114 are exposed, the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04 are obtained, respectively.
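The simultaneous-start scheme just described can be sketched as a small exposure schedule. The durations 1/4T, 1/2T, 3/4T, and T come from the passage; the surface names and the unit period are illustrative.

```python
# Sketch of the simultaneous-start exposure scheme: all four
# sub-photosensitive surfaces begin exposing at t = 0, with durations
# 1/4T, 1/2T, 3/4T and T, so the exposure end times are staggered.
T = 1.0  # one exposure period (arbitrary units)
durations = {"sub1": T / 4, "sub2": T / 2, "sub3": 3 * T / 4, "sub4": T}
schedule = {name: (0.0, dur) for name, dur in durations.items()}  # (start, end)

assert all(start == 0.0 for start, _ in schedule.values())  # same start time
ends = [end for _, end in schedule.values()]
assert ends == sorted(ends) and ends[-1] == T               # staggered ends
```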
• the processor 60 may control the pixels 131 corresponding to multiple regions of the image sensor 10 to be exposed sequentially, for example, sequentially exposing the pixels 131 corresponding to the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114.
• referring to FIG. 15a, T is taken as an exposure period (within one exposure period, the four sub-photosensitive surfaces 111 are exposed sequentially) as an example.
• within (0, 1/4T], all the pixels 131 corresponding to the first sub-photosensitive surface 1111 are exposed to obtain an initial image P0 (hereinafter referred to as the first initial image P01; the first initial image P01 includes the four image areas 1, 2, 3, and 4 in FIG. 15a).
• here, the exposure start times of all the pixels 131 corresponding to the first sub-photosensitive surface 1111 may be the same and the exposure end times may also be the same; that is, the exposure durations experienced by all the pixels 131 corresponding to the first sub-photosensitive surface 1111 are the same, for example 1/4T. Alternatively, the exposure start times of all the pixels 131 corresponding to the first sub-photosensitive surface 1111 may be different while the exposure end times are the same; that is, the exposure durations experienced by all the pixels 131 corresponding to the first sub-photosensitive surface 1111 may differ, but by 1/4T all the pixels 131 corresponding to the first sub-photosensitive surface 1111 must have completed exposure. For example, the exposure duration experienced by some of the pixels 131 is 1/4T, and the exposure duration experienced by the remaining pixels 131 is less than 1/4T, such as 1/5T, 1/6T, 1/7T, 1/8T, etc.
• within (1/4T, 2/4T], all the pixels 131 corresponding to the second sub-photosensitive surface 1112 are exposed to obtain an initial image P0 (hereinafter referred to as the second initial image P02; the second initial image P02 includes the four image areas 5, 6, 7, and 8 in FIG. 15a). The second initial image P02 is obtained based only on the electrical signals generated by exposure within (1/4T, 2/4T].
• here, the exposure start times of all the pixels 131 corresponding to the second sub-photosensitive surface 1112 may be the same and the exposure end times may also be the same; that is, the exposure durations experienced by all the pixels 131 corresponding to the second sub-photosensitive surface 1112 are the same, for example 1/4T. Alternatively, the exposure start times of all the pixels 131 corresponding to the second sub-photosensitive surface 1112 may be different while the exposure end times are the same; that is, the exposure durations experienced by all the pixels 131 corresponding to the second sub-photosensitive surface 1112 may differ, but by 2/4T all the pixels 131 corresponding to the second sub-photosensitive surface 1112 must have completed exposure.
• for example, the exposure duration experienced by some of the pixels 131 is 1/4T, and the exposure duration experienced by the remaining pixels 131 is less than 1/4T, such as 1/5T, 1/6T, 1/7T, 1/8T, etc.
• within (2/4T, 3/4T], all the pixels 131 corresponding to the third sub-photosensitive surface 1113 are exposed to obtain a third initial image P03; the third initial image P03 includes the four image areas 9, 10, 11, and 12 in FIG. 15a.
• the third initial image P03 is obtained based only on the electrical signals generated by exposure within (2/4T, 3/4T], where the exposure start times of all the pixels 131 corresponding to the third sub-photosensitive surface 1113 may be the same and the exposure end times may also be the same; that is, the exposure durations experienced by all the pixels 131 corresponding to the third sub-photosensitive surface 1113 are the same, for example 1/4T. Alternatively, the exposure start times of all the pixels 131 corresponding to the third sub-photosensitive surface 1113 may be different while the exposure end times are the same; that is, the exposure durations experienced by the pixels 131 corresponding to the third sub-photosensitive surface 1113 may differ, but by 3/4T all of them must have completed exposure.
• within (3/4T, T], all the pixels 131 corresponding to the fourth sub-photosensitive surface 1114 are exposed to obtain a fourth initial image P04; the fourth initial image P04 includes the four image areas 13, 14, 15, and 16 in FIG. 15a.
• the fourth initial image P04 is obtained based only on the electrical signals generated by exposure within (3/4T, T], where the exposure start times of all the pixels 131 corresponding to the fourth sub-photosensitive surface 1114 may be the same and the exposure end times may also be the same; that is, the exposure durations experienced by all the pixels 131 corresponding to the fourth sub-photosensitive surface 1114 are the same, for example 1/4T. Alternatively, the exposure start times of all the pixels 131 corresponding to the fourth sub-photosensitive surface 1114 may be different while the exposure end times are the same; that is, the exposure durations experienced may differ, but by T all the pixels 131 corresponding to the fourth sub-photosensitive surface 1114 must have completed exposure.
• for example, the exposure duration experienced by some of the pixels 131 is 1/4T, and the exposure duration experienced by the remaining pixels 131 is less than 1/4T, such as 1/5T, 1/6T, 1/7T, 1/8T, etc.
• generally, the light passing through the central area of each group of lenses 21 is strong, while the light passing through the edge area is relatively weak. Therefore, in order to prevent the central area from being overexposed, the exposure duration of the part of the pixels 131 corresponding to the central area can be set relatively short (such as 1/8T), while the exposure duration of the part of the pixels 131 corresponding to the edge area is set to 1/4T. This prevents the pixels 131 corresponding to the central area from being overexposed while also preventing the pixels 131 corresponding to the edge area from being underexposed, thereby improving image quality.
• in this way, four initial images P0 with better imaging quality (respectively the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04) can be obtained by sequential exposure within one exposure period.
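The sequential time-division scheme above can be sketched as four non-overlapping quarter-period windows within one exposure period T, each window producing one initial image from the electrical signals generated only within it. The window arithmetic below follows the passage; the data structure is illustrative.

```python
from fractions import Fraction

# Sketch of the sequential (time-division) exposure described above: within
# one exposure period T, the four sub-photosensitive surfaces are exposed in
# consecutive quarter-period windows, and each initial image P0x is formed
# only from the signals generated within its own window.
T = Fraction(1)
windows = {f"P0{i + 1}": (i * T / 4, (i + 1) * T / 4) for i in range(4)}

# The windows tile (0, T] without overlap: each ends where the next begins.
names = sorted(windows)
for a, b in zip(names, names[1:]):
    assert windows[a][1] == windows[b][0]
assert windows["P01"] == (Fraction(0), Fraction(1, 4))
assert windows["P04"][1] == T
```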
  • the processor 60 obtains the final image P2 according to the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04.
• the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04 will have an area with the same scene (i.e., corresponding to the overlapping area 711 in FIG. 9), and the initial images P0 of any two adjacent groups of lenses 21 will also have an area with the same scene (i.e., corresponding to the area 712 in FIG. 9).
• the processor 60 can identify the area with the same scene in the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04 (hereinafter referred to as the first overlapping area M1; the image of the first overlapping area M1 corresponds to the overlapping area 711 in FIG. 9). It can be understood that there are four first overlapping areas M1 (respectively the four areas 3, 8, 9, and 14 in FIG. 15a), and the four areas 3, 8, 9, and 14 correspond to the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04, respectively.
• the processor 60 retains only the first overlapping area M1 of one initial image P0 (for example, the first overlapping area M1 of the first initial image P01, that is, area 3), and deletes the remaining first overlapping areas M1 (that is, areas 8, 9, and 14).
• the processor 60 also identifies the areas with the same scene in every two adjacent initial images P0 (hereinafter referred to as the second overlapping area M2; the second overlapping area M2 is the area with the same scene that appears only in the two initial images P0 obtained by exposing two adjacent sub-photosensitive surfaces 111, and corresponds to the area 712 in FIG. 9).
• each initial image P0 is adjacent to two other initial images P0, so each initial image P0 corresponds to two second overlapping areas M2; that is, the number of second overlapping areas M2 is eight. The second overlapping areas M2 with the same scene in the first initial image P01 and the second initial image P02 are area 2 and area 5, respectively; those in the second initial image P02 and the third initial image P03 are area 7 and area 10, respectively; those in the third initial image P03 and the fourth initial image P04 are area 12 and area 15, respectively; and those in the fourth initial image P04 and the first initial image P01 are area 13 and area 4, respectively.
• the processor 60 may retain either one of the second overlapping areas M2 of two adjacent initial images P0 and delete the other. For example, the second overlapping area M2 in the first initial image P01 that has the same scene as the second initial image P02 (i.e., area 2) is retained, while the corresponding second overlapping area M2 in the second initial image P02 (i.e., area 5) is deleted; the second overlapping area M2 in the second initial image P02 that has the same scene as the third initial image P03 (i.e., area 7) is retained, while the corresponding second overlapping area M2 in the third initial image P03 (i.e., area 10) is deleted; the second overlapping area M2 in the third initial image P03 that has the same scene as the fourth initial image P04 (i.e., area 12) is retained, while the corresponding second overlapping area M2 in the fourth initial image P04 (i.e., area 15) is deleted; and the second overlapping area M2 in the fourth initial image P04 that has the same scene as the first initial image P01 (i.e., area 13) is retained, while the corresponding second overlapping area M2 in the first initial image P01 (i.e., area 4) is deleted.
• in this way, one first overlapping area M1 and four second overlapping areas M2 are finally retained.
• the processor 60 stitches the one first overlapping area M1 (i.e., area 3), the four second overlapping areas M2 (i.e., areas 2, 7, 12, and 13), and the areas of the four initial images P0 other than the first overlapping area M1 and the second overlapping areas M2, so as to generate the final image P2.
  • multiple sub-photosensitive surfaces 111 are time-divisionally exposed to acquire multiple initial images P0, and the final image P2 can be quickly generated based on the multiple initial images P0.
  • the lens group 20 is divided into multiple groups of lenses 21.
• since the imaging area 215 of each group of lenses 21 on the imaging surface S1 covers only part of the photosensitive surface 11 of the image sensor 10 while the imaging areas 215 of the multiple groups of lenses 21 collectively cover the entire photosensitive surface 11, compared with one group of lenses 21 corresponding to the entire photosensitive surface 11, the total length (the length along the direction of the central axis O') of each group of lenses 21 corresponding to part of the photosensitive surface 11 is shorter, so that the overall length (the length in the direction of the central axis O') of the lens group 20 is shorter, and the imaging system 100 is easier to install in the terminal 1000.
  • the imaging system 100 further includes a plurality of diaphragms 70.
• the multiple diaphragms 70 are respectively used to control the amount of light entering the multiple groups of lenses 21.
• the diaphragm 70 is arranged on the side of each group of lenses 21 opposite to the image sensor 10. The number of diaphragms 70 can be two, three, four, or more, and can be determined according to the number of groups of lenses 21. In the embodiment of the present application, the number of diaphragms 70 is the same as the number of groups of lenses 21, namely four (hereinafter referred to as the first diaphragm, the second diaphragm, the third diaphragm, and the fourth diaphragm).
• the first diaphragm, the second diaphragm, the third diaphragm, and the fourth diaphragm are respectively arranged on the four groups of lenses 21 and are respectively used to control the amount of light reaching the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114.
• the plurality of diaphragms 70 can be driven by a driving structure to change the size of the light inlet of each diaphragm 70, thereby controlling the amount of light entering the corresponding group of lenses 21.
  • the processor 60 (shown in FIG. 1) is connected to the driving structure, and the processor 60 controls the time-sharing exposure of the image sensor 10.
• when the pixels 131 corresponding to the first sub-photosensitive surface 1111 are exposed, the processor 60 controls the driving structure to drive the second diaphragm, the third diaphragm, and the fourth diaphragm to close so that light cannot reach the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114; when the pixels 131 corresponding to the second sub-photosensitive surface 1112 are exposed, the processor 60 controls the driving structure to drive the first diaphragm, the third diaphragm, and the fourth diaphragm to close so that light cannot reach the first sub-photosensitive surface 1111, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114; when the pixels 131 corresponding to the third sub-photosensitive surface 1113 are exposed, the processor 60 controls the driving structure to drive the first diaphragm, the second diaphragm, and the fourth diaphragm to close so that light cannot reach the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, and the fourth sub-photosensitive surface 1114; and when the pixels 131 corresponding to the fourth sub-photosensitive surface 1114 are exposed, the processor 60 controls the driving structure to drive the first diaphragm, the second diaphragm, and the third diaphragm to close so that light cannot reach the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, and the third sub-photosensitive surface 1113.
  • In this way, the processor 60 controls the time-sharing exposure of the image sensor 10 by controlling the driving structure to close the corresponding diaphragms 70, which ensures that different groups of lenses 21 do not cause light interference with one another, and there is no need to provide a light shielding member 14 on the image sensor 10.
  • As the area occupied by the light shielding member 14 is saved, the area of the image sensor 10 can be reduced.
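As a rough illustration of this time-sharing exposure sequence, the sketch below steps through four diaphragms, opening only the one whose sub-photosensitive surface is being exposed. The `Diaphragm` class and `expose_time_shared` function are hypothetical stand-ins for the driving structure and processor 60, not an interface defined in this application.

```python
# Sketch of the time-sharing exposure sequence, assuming a hypothetical
# Diaphragm abstraction for the driving structure (not from the source).

class Diaphragm:
    def __init__(self, name):
        self.name = name
        self.is_open = True  # light passes by default

    def open(self):
        self.is_open = True

    def close(self):
        self.is_open = False


def expose_time_shared(diaphragms):
    """For each sub-photosensitive surface in turn, close every other
    diaphragm so only the corresponding group of lenses admits light."""
    exposure_order = []
    for active in diaphragms:
        for d in diaphragms:
            if d is active:
                d.open()
            else:
                d.close()
        # record which diaphragms pass light during this exposure
        exposure_order.append([d.name for d in diaphragms if d.is_open])
    return exposure_order


diaphragms = [Diaphragm(f"diaphragm_{i}") for i in range(1, 5)]
order = expose_time_shared(diaphragms)
```

During each exposure exactly one diaphragm admits light, so the four exposures never interfere with one another.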
  • Step 02 includes: 021: rotating the multiple initial images P0; 022: obtaining a first overlapping image N1 and second overlapping images N2, where the first overlapping image N1 is a partial image of the same scene in all the initial images P0, and each second overlapping image N2 is a partial image of the same scene that appears only in the two initial images P0 obtained by the exposure of two adjacent sub-photosensitive surfaces 111; and 023: splicing the first overlapping image N1, the second overlapping images N2, and the partial images among the multiple initial images P0 whose scenes differ from those of the first overlapping image N1 and the second overlapping images N2.
  • Because the initial image P0 formed by each group of lenses 21 is an inverted image of the actual scene, the initial image P0 should be rotated, specifically by 180 degrees, so that the orientation of the initial image P0 matches that of the actual scene. This ensures the accuracy of the orientation of the scene in the image when the multiple initial images P0 are subsequently spliced to generate the final image P2.
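The 180-degree rotation of step 021 can be sketched as follows; the use of NumPy arrays as stand-ins for the initial images P0 is an illustrative assumption.

```python
# Sketch of step 021: each initial image P0 is inverted by its lens group,
# so rotating it 180 degrees restores the orientation of the actual scene.
import numpy as np

def rotate_initial_images(initial_images):
    """Rotate every initial image P0 by 180 degrees."""
    return [np.rot90(img, 2) for img in initial_images]

# A toy 2x2 "image": after a 180-degree rotation the corners swap.
p0 = np.array([[1, 2],
               [3, 4]])
p1 = rotate_initial_images([p0])[0]
```

Rotating by 180 degrees (two quarter turns) maps each pixel to its point reflection through the image center, which is exactly the inversion introduced by the lens.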
  • The processor 60 (shown in FIG. 1) may take the image of the first overlapping area M1 of any initial image P0 (for example, the first overlapping area M1 of the first initial image P01, that is, area 3) as the first overlapping image N1.
  • The processor 60 identifies the second overlapping areas M2 in each pair of adjacent initial images P0, and then obtains a second overlapping image N2 according to the second overlapping areas M2 in the two adjacent initial images P0. For example, the processor 60 can take either of the images of the second overlapping areas M2 of two adjacent initial images P0 as the second overlapping image N2, so that four second overlapping images N2 can be obtained (such as areas 2, 7, 12, respectively).
  • That is, the first overlapping image N1 is a partial image with the same scene in all the initial images P0, and each second overlapping image N2 is a partial image with the same scene only in the two initial images P0 obtained by exposing two adjacent sub-photosensitive surfaces 111.
  • The processor 60 splices the first overlapping image N1, the second overlapping images N2, and the partial images of the multiple initial images P0 whose scenes differ from those of the first overlapping image N1 and the second overlapping images N2 (that is, the multiple initial images P0 with the images corresponding to the first overlapping area M1 and the second overlapping areas M2 removed) to generate the final image P2. In this way, only the first overlapping area M1 and the second overlapping areas M2 need to be identified; the amount of calculation is small, and the final image P2 can be generated quickly.
  • The regions with the same scene in the multiple initial images P0 are defined as first overlapping areas M1; each first overlapping area M1 includes multiple sub-regions, and the multiple first overlapping areas M1 include multiple sub-regions with the same scene. The regions with the same scene in two adjacent initial images P0 are defined as second overlapping areas M2; each second overlapping area M2 includes multiple sub-regions, and two adjacent second overlapping areas M2 include multiple sub-regions with the same scene. Step 022 includes: 0221: comparing the sub-regions of the same scene in the multiple first overlapping areas M1 to obtain the sub-region at a non-edge position in each first overlapping area M1 as a first splicing area N3; 0222: comparing the sub-regions of the same scene in adjacent second overlapping areas M2 to obtain the sub-region at a non-corner position in each second overlapping area M2 as a second splicing area N4; and 0223: splicing the multiple first splicing areas N3 to obtain the first overlapping image N1, and splicing the corresponding second splicing areas N4 to obtain the second overlapping images N2.
  • The processor 60 compares the sub-regions of the same scene in the plurality of first overlapping areas M1 to obtain the sub-region at a non-edge position in each first overlapping area M1 as a first splicing area N3. It can be understood that when each group of lenses 21 is imaging, the sharpness and accuracy of the image in the edge area are generally lower than those of the image in the central area. As shown in FIG. 18a, for example, the first overlapping area M1 in the first initial image P01 is divided into four sub-regions A1, A2, A3, and A4; the first overlapping area M1 in the second initial image P02 is divided into four sub-regions B1, B2, B3, and B4; the first overlapping area M1 in the third initial image P03 is divided into four sub-regions C1, C2, C3, and C4; and the first overlapping area M1 in the fourth initial image P04 is divided into four sub-regions D1, D2, D3, and D4.
  • The four sub-regions A1, B1, C1, and D1 represent the same scene; the four sub-regions A2, B2, C2, and D2 represent the same scene; the four sub-regions A3, B3, C3, and D3 represent the same scene; and the four sub-regions A4, B4, C4, and D4 represent the same scene.
  • The processor 60 selects the sub-region at a non-edge position among the multiple sub-regions with the same scene as a first splicing area N3, and then splices the multiple first splicing areas N3 to obtain the first overlapping image N1. Since A1 is close to the center of the first initial image P01, B2 is close to the center of the second initial image P02, C3 is close to the center of the third initial image P03, and D4 is close to the center of the fourth initial image P04, the four sub-regions A1, B2, C3, and D4 are all at non-edge positions, with high definition and accuracy.
  • In contrast, the three sub-regions B1, C1, and D1, which share the same scene as the A1 sub-region, are at edge positions, so their definition and accuracy are lower; likewise, the three sub-regions A2, C2, and D2 (same scene as B2), the three sub-regions A3, B3, and D3 (same scene as C3), and the three sub-regions A4, B4, and C4 (same scene as D4) are at edge positions, so their definition and accuracy are lower.
  • Therefore, the processor 60 can select the four sub-regions A1, B2, C3, and D4 as the four first splicing areas N3, and then splice the four first splicing areas N3 together to obtain the first overlapping image N1.
  • During splicing, the first splicing areas N3 are joined according to the positions of the scenes they correspond to, which ensures the accuracy of the spliced first overlapping image N1.
  • Because the images of the four first splicing areas N3 (the A1, B2, C3, and D4 sub-regions) of the first overlapping image N1 are the clearest and most accurate among the images with the same scene, the definition and accuracy of the first overlapping image N1 are relatively high.
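The selection of the non-edge sub-regions A1, B2, C3, and D4 as the first splicing areas N3 can be sketched as below. The dictionaries, string placeholders, and `pick_first_splicing_areas` helper are illustrative assumptions, not names defined in this application.

```python
# Sketch of steps 0221/0223: from each first overlapping area M1, keep the
# sub-region nearest the centre of its initial image (A1, B2, C3, D4) and
# splice those first splicing areas N3 into the first overlapping image N1.

def pick_first_splicing_areas(overlap_areas, non_edge_keys):
    """Select the non-edge sub-region from each first overlapping area M1."""
    return [overlap_areas[i][key] for i, key in enumerate(non_edge_keys)]

# Each M1 is divided into four sub-regions; only one per image is non-edge.
m1_p01 = {"A1": "img_A1", "A2": "img_A2", "A3": "img_A3", "A4": "img_A4"}
m1_p02 = {"B1": "img_B1", "B2": "img_B2", "B3": "img_B3", "B4": "img_B4"}
m1_p03 = {"C1": "img_C1", "C2": "img_C2", "C3": "img_C3", "C4": "img_C4"}
m1_p04 = {"D1": "img_D1", "D2": "img_D2", "D3": "img_D3", "D4": "img_D4"}

n3_areas = pick_first_splicing_areas(
    [m1_p01, m1_p02, m1_p03, m1_p04], ["A1", "B2", "C3", "D4"])
```

Splicing the selected areas in their scene positions then yields the first overlapping image N1 built entirely from the sharpest available sub-regions.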
  • The processor 60 compares the sub-regions of the same scene in adjacent second overlapping areas M2 to obtain the sub-region at a non-corner position in each second overlapping area M2 as a second splicing area N4.
  • For example, the second overlapping area M2 in the first initial image P01 that has the same scene as the second initial image P02 includes two sub-regions E1 and E2, and the second overlapping area M2 in the second initial image P02 that has the same scene as the first initial image P01 includes two sub-regions F1 and F2.
  • The scenes of E1 and F1 are the same, and the scenes of E2 and F2 are the same; however, the E1 sub-region is close to the center of the first initial image P01 at a non-corner position, so its definition and accuracy are higher than those of the F1 sub-region located at a corner. Similarly, the definition and accuracy of the F2 sub-region at a non-corner position are higher than those of the E2 sub-region at a corner position. Likewise, in the second overlapping areas M2 of the adjacent second initial image P02 and third initial image P03, the definition and accuracy of the H1 sub-region are higher than those of the I1 sub-region, and the definition and accuracy of the I2 sub-region are higher than those of the H2 sub-region; in the second overlapping areas M2 of the adjacent third initial image P03 and fourth initial image P04, the definition and accuracy of the J1 sub-region are higher than those of the K1 sub-region, and the definition and accuracy of the K2 sub-region are higher than those of the J2 sub-region; and in the second overlapping areas M2 of the adjacent fourth initial image P04 and first initial image P01, the definition and accuracy of the L1 sub-region are higher than those of the Q1 sub-region, and the definition and accuracy of the Q2 sub-region are higher than those of the L2 sub-region. In short, in each second overlapping area M2, the sub-region at the non-corner position has the higher definition and accuracy.
  • The processor 60 may take the E1 sub-region in the first initial image P01 and the F2 sub-region in the second initial image P02 as the two second splicing areas N4 of the first second overlapping image N2; the H1 sub-region in the second initial image P02 and the I2 sub-region in the third initial image P03 as the two second splicing areas N4 of the second second overlapping image N2; the J1 sub-region in the third initial image P03 and the K2 sub-region in the fourth initial image P04 as the two second splicing areas N4 of the third second overlapping image N2; and the L1 sub-region in the fourth initial image P04 and the Q2 sub-region in the first initial image P01 as the two second splicing areas N4 of the fourth second overlapping image N2.
  • The processor 60 splices the two second splicing areas N4 corresponding to each pair of adjacent initial images P0 together according to the corresponding scene positions to obtain the four second overlapping images N2: the first initial image P01 and the second initial image P02 contribute the two second splicing areas N4 (i.e., the E1 sub-region and the F2 sub-region); the second initial image P02 and the third initial image P03 contribute the two second splicing areas N4 (i.e., the H1 sub-region and the I2 sub-region); the third initial image P03 and the fourth initial image P04 contribute the two second splicing areas N4 (i.e., the J1 sub-region and the K2 sub-region); and the fourth initial image P04 and the first initial image P01 contribute the two second splicing areas N4 (i.e., the L1 sub-region and the Q2 sub-region).
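The pairing of non-corner sub-regions into second splicing areas N4 can be summarized as below; the `ADJACENT_PAIRS` table and helper function are illustrative assumptions rather than structures defined in this application.

```python
# Sketch of step 0222: for each pair of adjacent initial images, keep the
# non-corner sub-region of each second overlapping area M2 and pair them
# as the two second splicing areas N4 of one second overlapping image N2.

ADJACENT_PAIRS = [
    # (adjacent image pair, non-corner sub-region kept from each image)
    (("P01", "P02"), ("E1", "F2")),
    (("P02", "P03"), ("H1", "I2")),
    (("P03", "P04"), ("J1", "K2")),
    (("P04", "P01"), ("L1", "Q2")),
]

def pick_second_splicing_areas(pairs):
    """Return, per adjacent image pair, its two second splicing areas N4."""
    return {images: regions for images, regions in pairs}

n4 = pick_second_splicing_areas(ADJACENT_PAIRS)
```

Each entry pairs the sharper, non-corner sub-region from each image of an adjacent pair, so every second overlapping image N2 is spliced from the two best candidates.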
  • The processor 60 then splices the first overlapping image N1, the four second overlapping images N2, and the four initial images with the parts of the first overlapping area M1 and the second overlapping areas M2 removed to form the final image P2 shown in FIG. 18b.
  • This splicing can likewise be performed according to the positions of the scenes corresponding to the first overlapping image N1, the four second overlapping images N2, and the parts of the four initial images excluding the first overlapping area M1 and the second overlapping areas M2, so as to ensure the accuracy of the spliced final image P2.
  • In some embodiments, step 022 includes: 0225: obtaining the first pixel value of each pixel 131 in the plurality of first overlapping areas; 0226: taking the first average value of the first pixel values of the pixels 131 corresponding to the same scene in the multiple first overlapping areas, and generating the first overlapping image according to the multiple first average values; 0227: obtaining the second pixel value of each pixel 131 in the plurality of second overlapping areas; and 0228: taking the second average value of the second pixel values of the pixels 131 corresponding to the same scene in two adjacent second overlapping areas, and generating the plurality of second overlapping images according to the plurality of second average values.
  • Specifically, the processor 60 obtains the first pixel value of each pixel 131 in the plurality of first overlapping areas M1 in the multiple initial images P0, and calculates the first average value from the first pixel values of the pixels 131 corresponding to the same scene in the multiple first overlapping areas M1. For example, assuming that each sub-region corresponds to one pixel 131, as shown in FIG. 18a, in the first initial image P01 to the fourth initial image P04, the scenes of the four sub-regions A1, B1, C1, and D1 are the same, and the pixels 131 of these four sub-regions correspond one-to-one; the first pixel values of the pixels 131 corresponding to the four sub-regions A1, B1, C1, and D1 are added together and averaged to obtain a first average value.
  • Similarly, the pixels 131 corresponding to the four sub-regions A2, B2, C2, and D2 correspond one-to-one, the pixels 131 corresponding to the four sub-regions A3, B3, C3, and D3 correspond one-to-one, and the pixels 131 corresponding to the four sub-regions A4, B4, C4, and D4 correspond one-to-one; a first average value is obtained for each in the same way.
  • Note that each sub-region is assumed to correspond to one pixel 131 only to facilitate the description of the principle of obtaining the first overlapping image N1. This should not be understood as meaning that each sub-region can correspond to only one pixel 131; each sub-region can correspond to multiple pixels 131, such as 2, 3, 5, 10, 100, 1,000, or even 100,000, millions, etc.
  • The processor 60 also obtains the second pixel value of each pixel 131 in the second overlapping areas M2 in the multiple initial images P0, and calculates the second average value from the second pixel values of the pixels 131 corresponding to the same scene in adjacent second overlapping areas M2.
  • For example, the scenes of the E1 area of the first initial image P01 and the F1 area of the second initial image P02 are the same, and the pixels 131 of the two areas E1 and F1 correspond one-to-one; the second pixel values of the corresponding pixels 131 in the two areas E1 and F1 are summed and then averaged to obtain a second average value. Likewise, the second pixel values of the corresponding pixels 131 in the two areas E2 and F2 can be summed and then averaged to obtain another second average value.
  • The two second average values are then used as the pixel values of the two pixels 131 of the second overlapping image N2 to generate that second overlapping image N2. It can be understood that the methods for acquiring the other three second overlapping images N2 are basically the same as described above, and will not be repeated here.
  • In this way, the processor 60 calculates each first average value from the first pixel values of the corresponding pixels 131 of the four first overlapping areas M1 and uses it as the pixel value of the corresponding pixel of the first overlapping image N1, and calculates each second average value from the second pixel values of the corresponding pixels 131 of the second overlapping areas M2 of two adjacent initial images P0 and uses it as the pixel value of the corresponding pixel of the second overlapping image N2, so that the obtained first overlapping image N1 and second overlapping images N2 are clearer.
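The averaging of steps 0225 and 0226 can be sketched as follows; the toy 2x2 arrays stand in for the first overlapping areas M1 and are an illustrative assumption.

```python
# Sketch of steps 0225-0226: average the pixel values of corresponding
# pixels across the four first overlapping areas M1 to form the first
# overlapping image N1.
import numpy as np

def average_overlap_areas(areas):
    """Pixel-wise mean over corresponding pixels of the overlap areas."""
    return np.mean(np.stack(areas), axis=0)

# Four first overlapping areas M1 with one-to-one corresponding pixels.
m1_areas = [np.full((2, 2), v, dtype=float) for v in (10, 20, 30, 40)]
n1 = average_overlap_areas(m1_areas)  # each pixel becomes (10+20+30+40)/4
```

Averaging corresponding pixel values suppresses per-image noise, which is why the resulting overlapping images are clearer; the same function applied to the two areas of a second overlapping area M2 yields a second overlapping image N2.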
  • The terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features.
  • Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
  • In the description of this application, "a plurality of" means at least two, for example two or three, unless otherwise specifically defined.

Abstract

An image sensor (10), an imaging system (100), and a terminal (1000). The image sensor (10) comprises a metalens (16) and a pixel array (13) positioned at a light exiting side (166) of the metalens (16). The metalens (16) is used to split incident light rays entering from a light entering side (165) into exiting light rays having multiple different wavelengths. The exiting light rays having different wavelengths are emitted to the pixel array (13) from the light exiting side (166) at different exit angles.

Description

Image sensor, imaging system and terminal
Priority information
This application claims the priority and benefit of the patent application No. 201910809194.2 filed with the China National Intellectual Property Administration on August 29, 2019, the entire content of which is incorporated herein by reference.
Technical field
This application relates to the field of consumer electronics technology, and in particular to an image sensor, an imaging system, and a terminal.
Background
In the related art, an image sensor generally splits light through a color filter array (CFA) into light of three colors, red, green, and blue, which then enters the pixel array of the image sensor for photoelectric conversion to form an image.
Summary of the invention
The embodiments of the present application provide an image sensor, an imaging system, and a terminal.
The image sensor of the embodiments of the present application includes a metalens and a pixel array. The pixel array is located on the light exit side of the metalens, and the metalens is used to split the incident light entering from the light entrance side of the metalens to form multiple kinds of exit light with different wavelengths; the exit light of different wavelengths is emitted from the light exit side toward the pixel array at different exit angles.
The imaging system of the embodiments of the present application includes a lens group and an image sensor. The image sensor is arranged on the image side of the lens group. The image sensor includes a metalens and a pixel array. The pixel array is located on the light exit side of the metalens, and the metalens is used to split the incident light entering from the light entrance side of the metalens to form multiple kinds of exit light with different wavelengths; the exit light of different wavelengths is emitted from the light exit side toward the pixel array at different exit angles.
The terminal of the present application includes a housing and an imaging system. The imaging system is installed on the housing. The imaging system includes a lens group and an image sensor. The image sensor is arranged on the image side of the lens group. The image sensor includes a metalens and a pixel array. The pixel array is located on the light exit side of the metalens, and the metalens is used to split the incident light entering from the light entrance side of the metalens to form multiple kinds of exit light with different wavelengths; the exit light of different wavelengths is emitted from the light exit side toward the pixel array at different exit angles.
Additional aspects and advantages of the embodiments of the present application will be partly given in the following description, will partly become obvious from the following description, or will be understood through practice of the embodiments of the present application.
Description of the drawings
The above and/or additional aspects and advantages of the embodiments of the present application will become obvious and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic plan view of a terminal according to some embodiments of the present application.
FIG. 2 is a schematic plan view of the terminal from another perspective according to some embodiments of the present application.
FIG. 3 is a schematic structural diagram of an imaging system according to some embodiments of the present application.
FIG. 4 is a three-dimensional exploded schematic diagram of an image sensor according to some embodiments of the present application.
FIG. 5 is a three-dimensional schematic diagram of a microlens, a microstructure group, and a pixel group in an image sensor according to some embodiments of the present application.
FIG. 6 is a schematic diagram of the offset between the microlenses and microstructure groups of a sub-photosensitive surface of an image sensor according to some embodiments of the present application.
FIG. 7 is a three-dimensional schematic diagram of a pixel array according to some embodiments of the present application.
FIG. 8 is a schematic plan view of an imaging system according to some embodiments of the present application.
FIG. 9 is a schematic plan view of one sub-photosensitive surface in the image sensor of FIG. 8.
FIG. 10 is a schematic plan view of an imaging system according to some embodiments of the present application.
FIG. 11 is a schematic diagram of the field of view of a lens group according to some embodiments of the present application.
FIG. 12 and FIG. 13 are three-dimensional assembly diagrams of imaging systems according to some embodiments of the present application.
FIG. 14 is a schematic flowchart of an image acquisition method according to some embodiments of the present application.
FIG. 15a and FIG. 15b are schematic diagrams of the principle of an image acquisition method according to some embodiments of the present application.
FIG. 16 is a schematic plan view of an imaging system according to some embodiments of the present application.
FIG. 17 is a schematic flowchart of an image acquisition method according to some embodiments of the present application.
FIG. 18a and FIG. 18b are schematic diagrams of the principle of an image acquisition method according to some embodiments of the present application.
FIG. 19 and FIG. 20 are schematic flowcharts of image acquisition methods according to some embodiments of the present application.
Detailed description
The embodiments of the present application are further described below in conjunction with the accompanying drawings. The same or similar reference numerals in the drawings indicate the same or similar elements or elements with the same or similar functions throughout.
In addition, the embodiments of the present application described below in conjunction with the drawings are exemplary and are only used to explain the embodiments of the application; they should not be construed as limiting the application.
In this application, unless expressly stipulated and defined otherwise, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that the first and second features are in indirect contact through an intermediary. Moreover, the first feature being "on", "above", or "over" the second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the level of the first feature is higher than that of the second feature. The first feature being "under", "below", or "beneath" the second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the level of the first feature is lower than that of the second feature.
Referring to FIG. 4 and FIG. 5, the image sensor 10 according to an embodiment of the present application includes a metalens 16 and a pixel array 13. The pixel array 13 is located on the light exit side 166 of the metalens 16, and the metalens 16 is used to split the incident light L entering from the light entrance side 165 of the metalens 16 to form multiple kinds of exit light L' with different wavelengths; the exit light L' of different wavelengths is emitted from the light exit side 166 toward the pixel array 13 at different exit angles.
Referring to FIG. 4, in some embodiments, the metalens 16 includes a lens body 161 and a microstructure array 162. The lens body 161 includes a light entrance surface 163 located on the light entrance side 165 of the metalens 16 and a light exit surface 164 located on the light exit side 166 of the metalens 16. The microstructure array 162 is arranged on the light entrance surface 163.
Referring to FIG. 4, in some embodiments, the microstructure array 162 includes a plurality of microstructure groups 1621. Each microstructure group 1621 includes a plurality of microstructure units 1622. The pixel array 13 includes a plurality of pixel groups 132, and the pixel groups 132 correspond to the microstructure groups 1621 one-to-one.
Referring to FIG. 5, in some embodiments, the shapes, sizes, arrangement, and angles of the multiple microstructure units 1622 of each microstructure group 1621 are determined according to the wavelengths and exit angles of the exit light L'.
Referring to FIG. 5, in some embodiments, the pixel group 132 includes a first pixel 1311, a second pixel 1312, a third pixel 1313, and a fourth pixel 1314, and the multiple kinds of exit light with different wavelengths include red light, first green light, second green light, and blue light. The first pixel 1311 is used to receive the red light, the second pixel 1312 is used to receive the first green light, the third pixel 1313 is used to receive the blue light, and the fourth pixel 1314 is used to receive the second green light.
Referring to FIG. 5, in some embodiments, the pixel group 132 includes a first pixel 1311, a second pixel 1312, a third pixel 1313, and a fourth pixel 1314, and the multiple kinds of exit light with different wavelengths include red light, first yellow light, second yellow light, and blue light. The first pixel 1311 is used to receive the red light, the second pixel 1312 is used to receive the first yellow light, the third pixel 1313 is used to receive the blue light, and the fourth pixel 1314 is used to receive the second yellow light.
Referring to FIG. 4, in some embodiments, the image sensor 10 includes a microlens array 12 disposed on the light entrance side 165. The microlens array 12 includes a plurality of microlenses 121. The pixel groups 132, the microstructure groups 1621, and the microlenses 121 correspond one-to-one.
Referring to FIG. 6 to FIG. 8, in some embodiments, the image sensor 10 includes a photosensitive surface 11 located on the imaging plane S1, and the photosensitive surface 11 includes a plurality of sub-photosensitive surfaces 111. On each sub-photosensitive surface 111, the microlens 121 and the microstructure group 1621 corresponding to the center position of the sub-photosensitive surface 111 are aligned, while the microlenses 121 and the microstructure groups 1621 corresponding to non-central positions are offset from each other.
Referring to FIG. 6 to FIG. 8, in some embodiments, a plurality of circles centered on the center position all lie at non-central positions; as the radius of the circle on which a microlens 121 is located gradually increases, the offset between the microlens 121 and the corresponding microstructure group 1621 also gradually increases.
请参阅图3至图5,本申请实施方式的成像系统100包括图像传感器10和透镜组20。图像传感器10设置在透镜组20的像侧。图像传感器10包括超透镜16(metelenses)和像素阵列13。像素阵列13位于超透镜16的出光侧166,超透镜16用于对从超透镜16的入光侧165射入的入射光线L进行分光以形成多种波长不同的出射光线L’,不同波长的出射光线L’以不同的出射角度从出光侧166射向像素阵列13。Please refer to FIGS. 3 to 5, the imaging system 100 of the embodiment of the present application includes an image sensor 10 and a lens group 20. The image sensor 10 is provided on the image side of the lens group 20. The image sensor 10 includes a super lens 16 (metelenses) and a pixel array 13. The pixel array 13 is located on the light exit side 166 of the hyper lens 16, and the hyper lens 16 is used to split the incident light L from the light entrance side 165 of the hyper lens 16 to form a variety of exit light L'with different wavelengths. The emitted light L′ is emitted from the light emitting side 166 toward the pixel array 13 at different emission angles.
请参阅图4,在某些实施方式中,超透镜16包括透镜本体161和微结构阵列162。透镜本体161包括位于超透镜16的入光侧165的入光面163及位于超透镜16的出光侧166的出光面164。微结构阵列162设置在入射面163。Please refer to FIG. 4. In some embodiments, the super lens 16 includes a lens body 161 and a microstructure array 162. The lens body 161 includes a light incident surface 163 located on the light incident side 165 of the super lens 16 and a light output surface 164 located on the light output side 166 of the super lens 16. The microstructure array 162 is arranged on the incident surface 163.
请参阅图4,在某些实施方式中,微结构阵列162包括多个微结构组1621。微结构组1621包括多个微结构单元1622。像素阵列13包括多个像素组132,像素组132和微结构组1621一一对应。Please refer to FIG. 4, in some embodiments, the microstructure array 162 includes a plurality of microstructure groups 1621. The microstructure group 1621 includes a plurality of microstructure units 1622. The pixel array 13 includes a plurality of pixel groups 132, and the pixel groups 132 and the microstructure groups 1621 correspond one to one.
请参阅图5,在某些实施方式中,微结构组1621的多个微结构单元1622的形状、尺寸、排列和角度根据出射光线L’的波长及出射角度确定。Referring to FIG. 5, in some embodiments, the shape, size, arrangement, and angle of the multiple microstructure units 1622 of the microstructure group 1621 are determined according to the wavelength and the exit angle of the emitted light L'.
请参阅图5,在某些实施方式中,像素组132包括第一像素1311、第二像素1312、第三像素1313和第四像素1314,多种波长不同的出射光线包括红光、第一绿光、第二绿光和蓝光,第一像素1311用于接收红光,第二像素1312用于接收第一绿光,第三像素1313用于接收蓝光,第四像素1314用于接收第二绿光。Referring to FIG. 5, in some embodiments, the pixel group 132 includes a first pixel 1311, a second pixel 1312, a third pixel 1313, and a fourth pixel 1314. A variety of emitted light with different wavelengths includes red light, first green light Light, second green light and blue light, the first pixel 1311 is used to receive red light, the second pixel 1312 is used to receive the first green light, the third pixel 1313 is used to receive blue light, and the fourth pixel 1314 is used to receive the second green light. Light.
Referring to FIG. 5, in some embodiments the pixel group 132 includes a first pixel 1311, a second pixel 1312, a third pixel 1313, and a fourth pixel 1314, and the outgoing rays of different wavelengths include red light, first yellow light, second yellow light, and blue light. The first pixel 1311 receives the red light, the second pixel 1312 receives the first yellow light, the third pixel 1313 receives the blue light, and the fourth pixel 1314 receives the second yellow light.
Referring to FIG. 4, in some embodiments the image sensor 10 includes a microlens array 12 disposed on the light-incident side 165. The microlens array 12 includes a plurality of microlenses 121. The pixel groups 132, the microstructure groups 1621, and the microlenses 121 correspond one to one.
Referring to FIGS. 6 to 8, in some embodiments the image sensor 10 includes a photosensitive surface 11 located on the imaging plane S1, and the photosensitive surface 11 includes a plurality of sub-photosensitive surfaces 111. On each sub-photosensitive surface 111, the microlens 121 and the microstructure group 1621 at the central position of the sub-photosensitive surface 111 are aligned, while a microlens 121 and its microstructure group 1621 at any non-central position are offset from each other.
Referring to FIGS. 6 to 8, in some embodiments a plurality of circles centered on the central position pass only through non-central positions; as the radius of the circle on which a microlens 121 lies increases, the offset between that microlens 121 and its corresponding microstructure group 1621 also increases.
Referring to FIGS. 3 and 5, in some embodiments the image sensor 10 includes a photosensitive surface 11 located on the imaging plane S1, and the lens group 20 includes multiple groups of lenses 21. The imaging area 215 of each group of lenses 21 on the imaging plane S1 covers part of the photosensitive surface 11, and the imaging areas 215 of the multiple groups of lenses 21 on the imaging plane S1 together cover the entire photosensitive surface 11.
Referring to FIGS. 1 and 2, a terminal 1000 according to an embodiment of the present application includes a housing 200 and the imaging system 100 of the above embodiments. The imaging system 100 is mounted on the housing 200.
Referring to FIGS. 1 and 2, the terminal 1000 of the embodiment of the present application includes a housing 200 and an imaging system 100. The imaging system 100 is mounted on the housing 200.
Referring to FIG. 3, the imaging system 100 includes an image sensor 10 and a lens group 20. The image sensor 10 is disposed on the image side of the lens group 20.
Referring to FIGS. 4 and 5, the image sensor 10 of the embodiment of the present application includes a metalens 16 and a pixel array 13. The pixel array 13 is located on the light-exit side 166 of the metalens 16. The metalens 16 splits the incident light L entering from its light-incident side 165 into multiple outgoing rays L' of different wavelengths, and the outgoing rays L' of different wavelengths leave the light-exit side 166 toward the pixel array 13 at different exit angles.
In a conventional sensor, when light passes through each element of a color filter array (CFA), only light of one color is transmitted; all other light is filtered out and lost, so the light utilization rate is low.
In the image sensor 10 of the embodiment of the present application, the metalens 16 splits the incident light L entering from the light-incident side 165 into multiple outgoing rays L' of different wavelengths, and the outgoing rays L' of different wavelengths travel at different exit angles to the pixel array 13 for imaging. The light is not filtered and suffers almost no loss, so the light utilization rate is high.
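The light-budget argument above can be illustrated with a toy calculation. The numbers are idealizations, not measurements from the disclosure: a Bayer-type CFA is commonly idealized as transmitting roughly one of three color bands per pixel and absorbing the rest, while an ideal splitting element redirects all bands to pixels instead of absorbing them.

```python
# Idealized light-budget comparison (illustrative only, not from the patent):
# a CFA pixel passes ~one of three bands and absorbs the rest, whereas an
# ideal splitting metalens redirects all bands onto pixels.
incident = 1.0                  # normalized light entering one pixel group
cfa_used = incident * (1 / 3)   # idealized CFA: ~1/3 of the light is used
metalens_used = incident * 1.0  # idealized lossless splitting

print(f"CFA: {cfa_used:.2f}, metalens: {metalens_used:.2f}")
```

Under these idealizations the splitting approach uses about three times the light of a filtering approach, which is the qualitative point made in the paragraph above.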
Referring to FIGS. 1 and 2, more specifically, the terminal 1000 may be a mobile phone, a tablet computer, a monitor, a notebook computer, an automated teller machine, a gate machine, a smart watch, a head-mounted display, a game console, or the like. The embodiments of this application take a mobile phone as the example of the terminal 1000; it can be understood that the specific form of the terminal 1000 is not limited to a mobile phone.
The housing 200 can be used to mount the imaging system 100; in other words, the housing 200 serves as a mounting carrier for the imaging system 100. The terminal 1000 includes a front face 901 and a back face 902. The imaging system 100 may be arranged on the front face 901 as a front camera, or on the back face 902 as a rear camera; in the embodiment of the present application, the imaging system 100 is arranged on the back face 902 as a rear camera. The housing 200 can also mount functional modules of the terminal 1000 such as the imaging system 100, a power supply device, and a communication device, so that the housing 200 provides the functional modules with protection against dust, drops, water, and the like.
Referring to FIG. 3, more specifically, the image sensor 10 includes a photosensitive surface 11, a microlens array 12, a metalens 16, and a pixel array 13. The photosensitive surface 11 is located on the imaging plane S1.
The photosensitive surface 11 is rectangular and includes a plurality of sub-photosensitive surfaces 111; for example, it may include one, two, three, four, or even more sub-photosensitive surfaces 111. In this embodiment, the photosensitive surface 11 includes four sub-photosensitive surfaces 111, all rectangular, the four rectangles being equal in length and equal in width. In other embodiments, the four sub-photosensitive surfaces 111 may all be circular, rhombic, or the like, or some may be rectangular while others are circular, rhombic, or the like. The sizes of the four sub-photosensitive surfaces 111 may also differ from one another, or two of them may be the same, or three of them may be the same, and so on.
Referring to FIGS. 3 to 5, the microlens array 12 is located on the photosensitive surface 11, between the lens group 20 and the metalens 16, on the light-incident side 165 of the metalens 16. The microlens array 12 includes a plurality of microlenses 121. Each microlens 121 may be a convex lens that converges the light traveling from the lens group 20 toward the microlens 121, so that more light strikes the metalens 16.
The metalens 16 is located between the microlens array 12 and the pixel array 13 and includes a lens body 161 and a microstructure array 162.
The lens body 161 includes a light-incident surface 163 located on the light-incident side 165 of the metalens 16 and a light-exit surface 164 located on the light-exit side 166. The light-incident side 165 is the side of the metalens 16 facing the microlens array 12, and the light-exit side 166 is the side of the metalens 16 facing away from the microlens array 12.
The lens body 161 may be made of a material with high light transmittance, for example plastic or glass with a transmittance greater than 90%. The lens body 161 serves as a carrier for the microstructure array 162, and light entering from the light-incident side 165 passes through the lens body 161 with essentially no loss, which helps improve light utilization.
The microstructure array 162 is arranged on the light-incident surface 163 and includes a plurality of microstructure groups 1621. Each microstructure group 1621 corresponds to one or more microlenses 121: for example, to one, two, three, four, or even more (more than four) microlenses 121, which are not listed here one by one. In the embodiment of the present application, each microstructure group 1621 corresponds to one microlens 121.
Each microstructure group 1621 includes a plurality of microstructure units 1622. The shape, size, arrangement, and angle of the microstructure units 1622 are determined by the wavelengths and exit angles of the outgoing rays L'. A microstructure unit 1622 may be shaped as a cuboid, a cube, a cylinder, or even an irregular solid (such as a cuboid with a portion cut away). In the embodiment of the present application, the microstructure unit 1622 is a cuboid. The sizes of the microstructure units 1622 may be the same or different; for example, within one microstructure group 1621, all microstructure units 1622 may have the same size, or the units may be divided into several parts (for example, two or three parts) such that the units within each part share one size while the sizes differ between parts. In the embodiment of the present application, the microstructure units 1622 within each microstructure group 1621 all have the same size. The microstructure units 1622 within a microstructure group 1621 may be arranged in a regular pattern (for example, a rectangle, a circle, an "L" shape, or a "T" shape) or in an irregular pattern (such as a partially truncated rectangle or circle).
The angle of a microstructure unit 1622 refers to the included angle between the microstructure unit 1622 and the light-incident surface 163, which may be any angle in the interval [0 degrees, 90 degrees]. In the embodiment of the present application, this angle is 90 degrees for every microstructure unit 1622 in each microstructure group 1621; that is, the long side of the cuboid microstructure unit 1622 makes a 90-degree angle with the light-incident surface 163.
The microstructure units 1622 of each microstructure group 1621 are identical in shape, size, arrangement, and angle. The microstructure units 1622 are formed of nanoscale titanium dioxide, which allows high smoothness and a precise length-to-width-to-height ratio, helping the microstructure group 1621 split the incident light L accurately into multiple beams of outgoing light L' of different wavelengths.
The metalens 16 (specifically, the microstructure groups 1621) splits the incident light L entering from the light-incident side 165 into multiple outgoing rays L' of different wavelengths, and the outgoing rays L' of different wavelengths leave the light-exit side 166 toward the pixel array 13 at different exit angles. In one example, the incident light L, after passing through the microstructure array 162, is split into multiple outgoing rays L' of different wavelengths: red light R, first green light G1, second green light G2, and blue light B, where the wavelengths of the first green light G1 and the second green light G2 may be the same or different.
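The patent does not state the dispersion relation that maps each wavelength to its exit angle. As a hedged illustration of wavelength-dependent exit angles, the sketch below uses a first-order grating-style relation, sin(θ) = λ/Λ; both the use of this relation and the period value Λ are assumptions for illustration, not part of the disclosure.

```python
# Illustrative only: first-order grating-style dispersion model showing that
# different wavelengths leave a periodic microstructure at different angles.
# The period (1600 nm) is a hypothetical value, not taken from the patent.
import math

def exit_angle_deg(wavelength_nm: float, period_nm: float = 1600.0) -> float:
    """Exit angle (degrees) under sin(theta) = wavelength / period."""
    return math.degrees(math.asin(wavelength_nm / period_nm))

for name, wl in [("blue B", 470), ("green G1", 496), ("green G2", 540), ("red R", 650)]:
    print(f"{name} ({wl} nm): {exit_angle_deg(wl):.1f} deg")
```

Longer wavelengths leave at larger angles in this model, which is one way the four bands can be steered to four different pixels of a pixel group.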
Referring to FIGS. 4 and 5, the pixel array 13 is located on the light-exit side 166 of the metalens 16. The pixel array 13 includes a plurality of pixel groups 132, and the pixel groups 132, the microstructure groups 1621, and the microlenses 121 are arranged in one-to-one correspondence.
Specifically, each pixel group 132 includes four pixels 131 (a first pixel 1311, a second pixel 1312, a third pixel 1313, and a fourth pixel 1314). Each microstructure group 1621 splits the incident light L passing through it into four outgoing rays L' of different wavelengths (red light R, first green light G1, blue light B, and second green light G2), which respectively enter the first pixel 1311, the second pixel 1312, the third pixel 1313, and the fourth pixel 1314 of the corresponding pixel group 132 for photoelectric conversion. The red light R may include some or all light with wavelengths in the interval [622 nanometers (nm), 770 nm], the first green light G1 some or all light in [492 nm, 500 nm], the second green light G2 some or all light in (500 nm, 577 nm], and the blue light B some or all light in [455 nm, 492 nm).
In other embodiments, each microstructure group 1621 splits the incident light L passing through it into four outgoing rays L' of different wavelengths (red light R, first yellow light Y1, blue light B, and second yellow light Y2), which respectively enter the first pixel 1311, the second pixel 1312, the third pixel 1313, and the fourth pixel 1314 of the corresponding pixel group 132 for photoelectric conversion. The red light R may include some or all light with wavelengths in the interval [622 nm, 770 nm], the first yellow light Y1 some or all light in [577 nm, 580 nm], the second yellow light Y2 some or all light in (580 nm, 597 nm], and the blue light B some or all light in [455 nm, 492 nm].
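The band-to-pixel routing for the red/green/green/blue variant can be sketched directly from the interval boundaries stated above. The function name and return strings are illustrative conveniences, not part of the disclosure.

```python
# Sketch of the wavelength-to-pixel routing for the R/G1/B/G2 variant, using
# the band edges stated in the text. Function name and labels are hypothetical.

def receiving_pixel(wavelength_nm: float) -> str:
    """Return which pixel of a pixel group 132 receives the given wavelength."""
    if 622 <= wavelength_nm <= 770:
        return "first pixel 1311 (red R)"
    if 492 <= wavelength_nm <= 500:
        return "second pixel 1312 (first green G1)"
    if 500 < wavelength_nm <= 577:
        return "fourth pixel 1314 (second green G2)"
    if 455 <= wavelength_nm < 492:
        return "third pixel 1313 (blue B)"
    return "outside the bands listed in this embodiment"

print(receiving_pixel(650))  # falls in the red band [622, 770]
print(receiving_pixel(495))  # falls in the first green band [492, 500]
```

Note how the half-open interval boundaries (for example, blue ending just below 492 nm where G1 begins) make the routing unambiguous: every wavelength lands in at most one pixel.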
In this case, no filter is needed between the microlens array 12 and the pixel array 13. In a conventional imaging system the filter transmits one wavelength band and absorbs the rest so that light of the corresponding wavelength enters the corresponding pixel; here the metalens 16 takes over the role of the filter, and instead of being filtered and absorbed, the light is directly split by the microstructure groups 1621 into multiple outgoing beams L' of different wavelengths directed at the corresponding pixels 131, with almost no loss, so the light utilization rate is higher. Nor does the microlens array need the one-to-one microlens-to-pixel arrangement of a conventional image sensor, in which each microlens converges light into its own pixel; each microlens 121 only needs to converge light onto the corresponding microstructure group 1621, which then splits the light into rays of different wavelengths directed at the corresponding pixels 131. Because no light is lost to filtering, fewer microlenses 121 suffice to give the pixel array 13 an amount of light that meets the shooting requirements, which lowers the manufacturing requirements and cost of the microlens array 12.
In other embodiments, the microlenses 121 may be larger than the microlenses in a conventional image sensor, so that each microlens 121 can converge more light onto the microstructure group 1621, increasing the amount of light reaching the pixel array 13.
Referring to FIG. 6, on each sub-photosensitive surface 111, the microlens 121 and the microstructure group 1621 at the central position of the sub-photosensitive surface 111 are aligned, while a microlens 121 and its microstructure group 1621 at any non-central position are offset from each other. Specifically, the central position of the sub-photosensitive surface 111 is the intersection of the diagonals of the rectangle. Circles centered on the central position with radii greater than 0 and less than half the diagonal length pass only through non-central positions; microstructure groups 1621 and their corresponding microlenses 121 distributed on the same circle share the same offset, and the offset between a microstructure group 1621 and its corresponding microlens 121 is positively correlated with the radius. Here, the offset is the distance between the center of the orthographic projection of the microlens 121 onto the microstructure array 162 and the center of the corresponding microstructure group 1621.
Specifically, the offset between a microlens 121 and its corresponding microstructure group 1621 being positively correlated with the radius of its circle means that as the radius of the circle on which the microlens 121 lies increases, the offset between that microlens 121 and its corresponding microstructure group 1621 also increases. For example, for three circles of increasing radii r1, r2, and r3, the offsets between the microlenses 121 distributed on their circumferences and the corresponding microstructure groups 1621 are X1, X2, and X3 respectively, with X1 < X2 < X3.
If the microlenses 121 and microstructure groups 1621 were all exactly aligned with no offset, then for a given sub-photosensitive surface 111 part of the light converged by the microlenses 121 at edge positions could not be received by the corresponding microstructure groups 1621, wasting light. By setting a suitable offset between each non-centrally located microlens 121 and its corresponding microstructure group 1621, the image sensor 10 of the embodiment of the present application improves the converging effect of the microlenses 121, so that all the light received and converged by a microlens 121 can be received by the corresponding microstructure group 1621.
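The offset rule described above (zero at the center, equal on a given circle, increasing with the circle's radius) can be sketched with a simple radial model. The disclosure gives no formula, so the linear dependence and the coefficient k below are assumptions chosen only to satisfy the stated monotonicity.

```python
# Illustrative radial offset model (the linear form and coefficient k are
# hypothetical; the patent only states that the offset grows with radius and
# is equal for microlenses on the same circle).
import math

def offset(mx: float, my: float, cx: float, cy: float, k: float = 0.05) -> float:
    """Offset between a microlens at (mx, my) and its microstructure group,
    for a sub-photosensitive surface 111 centered at (cx, cy)."""
    r = math.hypot(mx - cx, my - cy)  # radius of the circle the microlens lies on
    return k * r                      # zero at the center, increasing with r

# Radii r1 < r2 < r3 give offsets X1 < X2 < X3, matching the example above.
x1, x2, x3 = offset(1, 0, 0, 0), offset(2, 0, 0, 0), offset(3, 0, 0, 0)
print(x1, x2, x3)
```

Because the model depends only on the distance from the center, any two microlenses on the same circle automatically receive the same offset, as the text requires.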
Referring to FIGS. 4 and 7, a shading member 14 is formed at the junction of two sub-photosensitive surfaces 111. Specifically, the shading member 14 may be attached at the junction of the two sub-photosensitive surfaces 111 by gluing or the like. The shading member 14 may be made of an opaque material, or of a material that absorbs light.
Referring again to FIG. 3, the lens group 20 includes multiple groups of lenses 21; for example, one, two, three, four, or even more groups of lenses 21. The lens group 20 of the embodiment of the present application includes four groups of lenses 21.
Referring to FIG. 8, the imaging area 215 of each group of lenses 21 on the imaging plane S1 partially covers the photosensitive surface 11. The imaging area 215 of a group of lenses 21 on the imaging plane S1 is the region of the imaging plane S1 covered by the light emerging from that group of lenses 21. Specifically, the imaging area 215 of each group of lenses 21 on the imaging plane S1 covers at least one corresponding sub-photosensitive surface 111. The imaging areas 215 of the four groups of lenses 21 together cover the entire photosensitive surface 11; that is, the photosensitive surface 11 lies within the region jointly covered by the imaging areas 215 of the four groups of lenses 21.
For example, the first imaging area 2151 of the first group of lenses 211 on the imaging plane S1 covers the first sub-photosensitive surface 1111, the second imaging area 2152 of the second group of lenses 212 covers the second sub-photosensitive surface 1112, the third imaging area 2153 of the third group of lenses 213 covers the third sub-photosensitive surface 1113, and the fourth imaging area 2154 of the fourth group of lenses 214 covers the fourth sub-photosensitive surface 1114, so that the first imaging area 2151, the second imaging area 2152, the third imaging area 2153, and the fourth imaging area 2154 together cover the entire photosensitive surface 11.
Each group of lenses 21 may include one or more lenses. For example, a group of lenses 21 may include a single lens, which may be a convex lens or a concave lens; or a group of lenses 21 may include multiple lenses (two or more) arranged in sequence along the optical axis, all convex, all concave, or partly convex and partly concave. In this embodiment, each group of lenses 21 includes one lens. The imaging area 215 of a group of lenses 21 on the imaging plane S1 may be circular, rectangular, rhombic, and so on. In the embodiment of the present application, each group of lenses 21 uses an aspheric lens, and the imaging area 215 is circular; the circular imaging area 215 is exactly the circumscribed circle of the rectangular sub-photosensitive surface 111. Of the region of the circular imaging area 215 that does not overlap the rectangular sub-photosensitive surface 111, one part corresponds to light that does not fall within the photosensitive surface 11 at all, and the other part corresponds to light that is blocked and absorbed by the shading member 14 and therefore cannot reach an adjacent sub-photosensitive surface 111, which prevents light from different groups of lenses 21 from interfering with one another.
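Since the circular imaging area 215 is stated to be the circumscribed circle of the rectangular sub-photosensitive surface 111, its radius is half the rectangle's diagonal, and the share of the circle that actually lands on the sub-surface follows from elementary geometry. The short sketch below works this out; the function names are illustrative only.

```python
# Geometry sketch: imaging area 215 = circumscribed circle of the rectangular
# sub-photosensitive surface 111, so radius = half the diagonal. The fraction
# of the circle overlapping the rectangle is the part of the light that can
# reach that sub-surface; the rest misses the sensor or hits shading member 14.
import math

def circumscribed_radius(width: float, height: float) -> float:
    return math.hypot(width, height) / 2

def fraction_inside_rectangle(width: float, height: float) -> float:
    r = circumscribed_radius(width, height)
    return (width * height) / (math.pi * r * r)

# For a square sub-surface the rectangle covers 2/pi of the circle (~63.7%).
print(round(fraction_inside_rectangle(1.0, 1.0), 3))
```

The calculation makes concrete why the shading member 14 matters: for a square sub-surface, roughly a third of the circular imaging area falls outside the rectangle and must be kept from spilling onto neighboring sub-surfaces.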
Referring to FIGS. 8 and 9, take the first sub-photosensitive surface 1111 and the corresponding first imaging area 2151 as an example. As shown in FIG. 9, the light corresponding to region 2155 in FIG. 9 falls neither within the first sub-photosensitive surface 1111 nor within the photosensitive surface 11 at all, and therefore cannot be received for imaging by the pixels 131 of the photosensitive surface 11. The light corresponding to region 2156 in FIG. 9 is blocked and absorbed by the shading member 14 and cannot enter the adjacent second sub-photosensitive surface 1112 or fourth sub-photosensitive surface 1114; that is, light from the first group of lenses 211 cannot affect the imaging of the pixels 131 corresponding to the second sub-photosensitive surface 1112 or of the pixels 131 corresponding to the fourth sub-photosensitive surface 1114. Likewise, light from the second group of lenses 212 cannot affect the imaging of the pixels 131 corresponding to the first sub-photosensitive surface 1111 or the third sub-photosensitive surface 1113; light from the third group of lenses 213 cannot affect the imaging of the pixels 131 corresponding to the second sub-photosensitive surface 1112 or the fourth sub-photosensitive surface 1114; and light from the fourth group of lenses 214 cannot affect the imaging of the pixels 131 corresponding to the third sub-photosensitive surface 1113 or the first sub-photosensitive surface 1111. Thus the light passing through the first group of lenses 211, the second group of lenses 212, the third group of lenses 213, and the fourth group of lenses 214 does not interfere across groups, ensuring the accuracy of imaging.
In other embodiments, at least one surface of at least one lens in each group of lenses 21 is a free-form surface. It can be understood that because an aspheric lens is of rotationally symmetric design and has only one axis of symmetry, its imaging area 215 is generally circular, whereas a lens 21 including a free-form surface is of non-rotationally symmetric design with multiple axes of symmetry, so its imaging area 215 is not restricted to a circle and can be designed as a rectangle, a rhombus, or even an irregular shape (such as a "D" shape). In this design of the present application, the imaging area 215 of each group of lenses 21 is rectangular and has the same dimensions as the rectangle of the corresponding sub-photosensitive surface 111; in that case no shading member 14 is needed, and light from different groups of lenses 21 still does not interfere across groups.
Referring to FIGS. 3 and 10, the optical axis O of each group of lenses 21 is inclined relative to the photosensitive surface 11, and the optical axes O of the multiple groups of lenses 21 converge on the object side of the lens group 20 (that is, the side of the lens group 20 facing away from the photosensitive surface 11). Specifically, the optical axis O of each group of lenses 21 may intersect, on the object side, a central axis O' that is perpendicular to the photosensitive surface 11 and passes through its center. The included angle α between the optical axis O of each group of lenses 21 and the central axis O' may be any angle in the interval (0 degrees, 15 degrees], for example 1, 2, 3, 5, 7, 10, 13, or 15 degrees. The angles α of different groups of lenses 21 may be the same or different. For example, the angles α of the first group of lenses 211, the second group of lenses 212, the third group of lenses 213, and the fourth group of lenses 214 may all be the same, for example all 10 degrees; or all different, for example 5, 7, 10, and 13 degrees respectively; or the first group of lenses 211 and the second group of lenses 212 may share a common angle α1 while the third group of lenses 213 and the fourth group of lenses 214 share a common angle α2, with α1 ≠ α2, for example α1 = 10 degrees and α2 = 13 degrees; and so on, not listed here one by one. The optical axis O of each group of lenses 21 lies in the plane containing the diagonal of the corresponding sub-photosensitive surface 111 and the central axis O'; specifically, the projection of the optical axis O of each group of lenses 21 onto the photosensitive surface 11 lies on the diagonal of the corresponding sub-photosensitive surface 111.
In other embodiments, the optical axis O of each lens group 21 is inclined with respect to the photosensitive surface 11, and the optical axes O of the multiple lens groups 21 converge on the image side of the lens group 20. Specifically, the optical axis O of each lens group 21 intersects, on the image side, the central axis O' that is perpendicular to the photosensitive surface 11 and passes through its center. The included angle α between the optical axis O of each lens group 21 and the central axis O' is any angle in the interval (0 degrees, 15 degrees], for example 1, 2, 3, 5, 7, 10, 13, or 15 degrees. The optical axis O of each lens group 21 lies in the plane containing the diagonal of the corresponding sub-photosensitive surface 111 and the central axis O'; specifically, the projection of the optical axis O of each lens group 21 on the photosensitive surface 11 lies on the diagonal of the corresponding sub-photosensitive surface 111.
The field of view (FOV) of each lens group 21 is any angle in the interval [60 degrees, 80 degrees], for example 60, 62, 65, 68, 70, 75, 78, or 80 degrees. The FOVs of different lens groups 21 may be the same or different. For example, the FOVs of the first lens group 211, the second lens group 212, the third lens group 213, and the fourth lens group 214 may all be 60 degrees; or they may all differ, being 60, 65, 70, and 75 degrees, respectively; or the FOVs of the first lens group 211 and the second lens group 212 may both be a first value FOV1 while those of the third lens group 213 and the fourth lens group 214 are both a second value FOV2, with FOV1 ≠ FOV2, for example FOV1 = 60 degrees and FOV2 = 75 degrees; and so on, which are not listed one by one here.
The fields of view of the multiple lens groups 21 successively form a blind zone range a0, a first field-of-view distance a1, and a second field-of-view distance a2. The blind zone range a0, the first field-of-view distance a1, and the second field-of-view distance a2 are all distance ranges measured from the optical center plane S2, on which the optical centers of the multiple lens groups 21 all lie. The blind zone range a0 is the distance range within which the fields of view of the multiple lens groups 21 do not overlap; it is determined by the FOV of the lens groups 21 and by the angle α between their optical axes O and the central axis O'. For example, with the FOV of the lens groups 21 held constant, the blind zone range a0 is negatively correlated with the angle α; likewise, with the angle α held constant, a0 is negatively correlated with the FOV. In the embodiments of the present application, the angle α between the optical axis O of each lens group 21 and the central axis O' is any angle in the interval (0 degrees, 15 degrees], so the blind zone range a0 is small. Specifically, the blind zone range a0 is [1 mm, 7 mm], the first field-of-view distance a1 is the interval (7 mm, 400 mm], and the second field-of-view distance a2 is the interval (400 mm, +∞).
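The negative correlations stated above can be illustrated with a minimal two-dimensional sketch (an illustrative assumption, not the actual optical design of the present application): two lens groups whose optical centers sit a baseline d apart on the optical center plane S2, each tilted inward by α, with half field of view FOV/2. Their inner field edges cross the central axis O' at distance (d/2)/tan(α + FOV/2), which marks the end of the blind zone a0.

```python
import math

def blind_zone_distance(baseline_mm: float, fov_deg: float, alpha_deg: float) -> float:
    """2D sketch: distance from the optical center plane S2 at which the
    fields of two inward-tilted lens groups first overlap (end of a0)."""
    half_fov = math.radians(fov_deg / 2.0)
    alpha = math.radians(alpha_deg)
    # The inner field edge of each lens group is tilted toward the central
    # axis O' by (alpha + half_fov); the two edges cross at this distance.
    return (baseline_mm / 2.0) / math.tan(alpha + half_fov)

# a0 shrinks as the tilt angle alpha grows (FOV held constant)...
assert blind_zone_distance(6.0, 60.0, 15.0) < blind_zone_distance(6.0, 60.0, 5.0)
# ...and as the FOV grows (alpha held constant), matching the text.
assert blind_zone_distance(6.0, 80.0, 10.0) < blind_zone_distance(6.0, 60.0, 10.0)
```

The 6 mm baseline is a made-up figure for illustration; the real spacing follows from the layout of the sub-photosensitive surfaces 111.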
The first field-of-view distance a1 lies between the blind zone range a0 and the second field-of-view distance a2. As the distance from the optical center plane S2 increases within a1, the overlapping region of the combined field of view of the multiple lens groups 21 grows, reaching its maximum at the boundary between a1 and a2 (where the overlapping region accounts for 100% of the entire combined field of view). Within the second field-of-view distance a2, moving from the lenses 21 toward the object side, the proportion of the combined field of view occupied by the overlapping region gradually decreases and approaches a limit value at infinity. The combined field of view of the imaging system 100 of the present application at infinity is shown in FIG. 11, where the overlapping region 711 is the common part of the fields of view 71 of the four lens groups 21. By constraining the FOV of each lens group 21 and the angle α between each optical axis O and the central axis O', the present application keeps the overlapping region 711 at infinity above 25% of the entire combined field of view (the range jointly covered by the fields of view of the four lens groups 21), which ensures that the image in the overlapping region 711 has sufficient sharpness.
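The limit approached at infinity can be sketched in a similarly simplified two-dimensional model (an illustrative assumption, not the four-group geometry of FIG. 11): two converging lens groups with half field of view FOV/2, each tilted by α, cover the angular intervals [±α − FOV/2, ±α + FOV/2] at infinity, so the overlap fraction of their combined angular coverage is (FOV/2 − α)/(FOV/2 + α).

```python
def overlap_fraction_at_infinity(fov_deg: float, alpha_deg: float) -> float:
    """2D sketch: fraction of the combined angular coverage of two
    converging lens groups that both groups still see at infinity."""
    half_fov = fov_deg / 2.0
    # The union of the coverage spans 2*(half_fov + alpha);
    # the intersection spans 2*(half_fov - alpha).
    return (half_fov - alpha_deg) / (half_fov + alpha_deg)

# Worst case permitted by the stated intervals: FOV = 60 degrees, alpha = 15 degrees.
assert overlap_fraction_at_infinity(60.0, 15.0) > 0.25
```

In this 2D sketch even the worst case permitted by the stated intervals keeps the overlap fraction above 25%, consistent with the claim about the overlapping region 711.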
Referring again to FIG. 7 and FIG. 8, in some embodiments the shading member 14 can also serve as an extension of the image sensor 10 and be integrally formed with it. The shading member 14 is likewise provided with the microlens array 12, the super lens 16, and the pixel array 13, so that the shading member 14 can receive light for imaging.
Referring to FIG. 8, specifically, the light that each lens group 21 directs toward the sub-photosensitive surfaces 111 corresponding to the two adjacent lens groups 21 (that is, the light in the region 2156 of the imaging area 215) can be received by the shading member 14 for imaging. For example, the light that the first lens group 211 directs toward the second sub-photosensitive surface 1112 and the fourth sub-photosensitive surface 1114 can be received by the shading member 14; the light that the second lens group 212 directs toward the first sub-photosensitive surface 1111 and the third sub-photosensitive surface 1113 can be received by the shading member 14; the light that the third lens group 213 directs toward the second sub-photosensitive surface 1112 and the fourth sub-photosensitive surface 1114 can be received by the shading member 14; and the light that the fourth lens group 214 directs toward the first sub-photosensitive surface 1111 and the third sub-photosensitive surface 1113 can be received by the shading member 14. Compared with a shading member 14 that merely blocks and absorbs the light in the region 2156, causing the image of that region to be lost, here the light of each lens group 21 falling within the region 2156 of its imaging area 215 is received by the shading member 14 for imaging, so the image loss is small.
Referring to FIG. 12, in some embodiments the imaging system 100 may further include a substrate 30 and a lens holder 40.
The substrate 30 may be a flexible circuit board, a rigid circuit board, or a rigid-flex circuit board. In the embodiments of the present application, the substrate 30 is a flexible circuit board, which is convenient to install. The substrate 30 includes a carrying surface 31.
The lens holder 40 is arranged on the carrying surface 31 and may be mounted there by gluing or the like. The lens holder 40 includes a lens seat 41 and a plurality of lens barrels 42 provided on the lens seat 41. The image sensor 10 (shown in FIG. 4) is arranged on the carrying surface 31 and housed in the lens seat 41. The number of lens barrels 42 may be one, two, three, four, or more. In this embodiment there are four lens barrels 42, arranged independently and at intervals, each used to mount one of the four lens groups 21, with each lens group 21 installed in its corresponding lens barrel 42. On the one hand, this is easy to install, and the lens manufacturing process needs no change, so the traditional lens manufacturing process can still be used; on the other hand, during imaging, the light converged by each lens group 21 is first blocked by its corresponding lens barrel 42, preventing crosstalk between the groups from affecting imaging. Referring to FIG. 13, in other embodiments there is a single lens barrel 42, and the four lens groups 21 are mounted in the same lens barrel 42 together. In this case the four lens groups 21 may be molded separately and mounted individually in that one lens barrel 42, or they may be integrally formed and mounted in it as one piece. When the four lens groups 21 are mounted together in the same lens barrel 42, on the one hand the barrel manufacturing process needs no change and the traditional lens barrel manufacturing process can still be used; on the other hand, the positional relationship among the four lens groups 21 is fixed precisely by the mold when the lenses 21 are made, which, compared with mounting four lenses 21 in four separate lens barrels 42, avoids the positional relationship among the four lens groups 21 failing to meet requirements because of mounting errors.
Referring to FIG. 3, FIG. 5, FIG. 14, FIG. 15a, and FIG. 15b, the image acquisition method of the embodiments of the present application can be applied to the imaging system 100 of any embodiment of the present application. Specifically, the imaging system 100 includes an image sensor 10 and a lens group 20. The image sensor 10 includes a photosensitive surface 11 located on the imaging plane S1, a super lens 16, and a pixel array 13. The pixel array 13 is located on the light exit side 166 of the super lens 16. The super lens 16 splits the incident light L entering from its light entrance side 165 into multiple outgoing rays L' of different wavelengths, and the outgoing rays L' of different wavelengths leave the light exit side 166 at different exit angles toward the pixel array 13 for photoelectric conversion. The photosensitive surface 11 includes a plurality of sub-photosensitive surfaces 111, and the lens group 20 includes multiple lens groups 21. The imaging area 215 of each lens group 21 on the imaging plane S1 covers part of the photosensitive surface 11, and the imaging areas 215 of the multiple lens groups 21 together cover the entire photosensitive surface 11. At least one surface of each lens group 21 is a free-form surface, so that the imaging area 215 of each lens group 21 on the imaging plane S1 is rectangular. The image acquisition method includes:
01: exposing the pixels 131 (shown in FIG. 4) corresponding to the multiple sub-photosensitive surfaces 111 to obtain multiple initial images P0; and
02: processing the multiple initial images P0 to obtain a final image P2.
Specifically, the imaging system 100 may further include a processor 60 (shown in FIG. 1) connected to the image sensor 10. All pixels 131 on the image sensor 10 can be exposed individually. The processor 60 can control all pixels 131 of the image sensor 10 to be exposed simultaneously, so as to obtain the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04 corresponding respectively to the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114.
Referring to FIG. 15a, take T as one exposure period; within one exposure period, the pixels 131 corresponding to the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114 all complete their exposure. For example, if the exposure durations of the pixels 131 corresponding to the four sub-photosensitive surfaces are all equal to T, the pixels 131 corresponding to the first through fourth sub-photosensitive surfaces can start exposure at the same time and stop at the same time. Alternatively, the exposure durations of the pixels 131 corresponding to the first through fourth sub-photosensitive surfaces may differ, being 1/4T, 1/2T, 3/4T, and T, respectively. The processor 60 can control the pixels 131 of all four sub-photosensitive surfaces to start exposure simultaneously; since the exposure durations differ, the ending times also differ: the first sub-photosensitive surface 1111 stops at 1/4T, the second sub-photosensitive surface 1112 at 1/2T, the third sub-photosensitive surface 1113 at 3/4T, and the fourth sub-photosensitive surface 1114 at T. In this way, each sub-photosensitive surface 111 yields a corresponding initial image P0 after exposure: the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114 yield the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04, respectively.
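The concurrent scheme above (a common start time with per-surface durations 1/4T, 1/2T, 3/4T, and T) can be written down as a simple schedule; the helper below is illustrative and not part of the present application.

```python
def concurrent_schedule(T: float) -> dict:
    """All four sub-photosensitive surfaces start exposing at t = 0;
    each stops after its own duration: 1/4T, 1/2T, 3/4T and T."""
    durations = {"1111": T / 4, "1112": T / 2, "1113": 3 * T / 4, "1114": T}
    return {surface: (0.0, stop) for surface, stop in durations.items()}

schedule = concurrent_schedule(1.0)
assert all(start == 0.0 for start, _ in schedule.values())  # simultaneous start
assert schedule["1114"] == (0.0, 1.0)  # the last surface stops at T
```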
Alternatively, the processor 60 can control the pixels 131 corresponding to the multiple regions of the image sensor 10 to be exposed in sequence, for example, exposing in turn the pixels 131 corresponding to the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114. Referring to FIG. 15a, take T as one exposure period (within one exposure period, the four sub-photosensitive surfaces 111 complete their exposure in sequence) as an example. Within [0, 1/4T], all pixels 131 corresponding to the first sub-photosensitive surface 1111 are exposed, yielding an initial image P0 (hereinafter the first initial image P01; the first initial image P01 comprises the four image regions 1, 2, 3, and 4 in FIG. 15a). The exposure start times of all pixels 131 corresponding to the first sub-photosensitive surface 1111 may all be the same and their end times all the same, that is, they all experience the same exposure duration, for example 1/4T. Alternatively, their start times may differ while their end times coincide, that is, their exposure durations may differ, but by the time 1/4T all pixels 131 corresponding to the first sub-photosensitive surface 1111 must have completed exposure; for example, some pixels 131 are exposed for 1/4T and the rest for less than 1/4T, such as 1/5T, 1/6T, 1/7T, or 1/8T.
Within (1/4T, 2/4T], all pixels 131 corresponding to the second sub-photosensitive surface 1112 are exposed, yielding an initial image P0 (hereinafter the second initial image P02; the second initial image P02 comprises the four image regions 5, 6, 7, and 8 in FIG. 15a); the second initial image P02 is obtained only from the electrical signals generated by exposure within (1/4T, 2/4T]. The exposure start times of all pixels 131 corresponding to the second sub-photosensitive surface 1112 may all be the same and their end times all the same, that is, they all experience the same exposure duration, for example 1/4T; or their start times may differ while their end times coincide, that is, their exposure durations may differ, but by the time 2/4T all pixels 131 corresponding to the second sub-photosensitive surface 1112 must have completed exposure, for example some pixels 131 exposed for 1/4T and the rest for less than 1/4T, such as 1/5T, 1/6T, 1/7T, or 1/8T.
Within (2/4T, 3/4T], all pixels 131 corresponding to the third sub-photosensitive surface 1113 are exposed, yielding an initial image P0 (hereinafter the third initial image P03; the third initial image P03 comprises the four image regions 9, 10, 11, and 12 in FIG. 15a); the third initial image P03 is obtained only from the electrical signals generated by exposure within (2/4T, 3/4T]. The exposure start times of all pixels 131 corresponding to the third sub-photosensitive surface 1113 may all be the same and their end times all the same, that is, they all experience the same exposure duration, for example 1/4T; or their start times may differ while their end times coincide, that is, their exposure durations may differ, but by the time 3/4T all pixels 131 corresponding to the third sub-photosensitive surface 1113 must have completed exposure, for example some pixels 131 exposed for 1/4T and the rest for less than 1/4T, such as 1/5T, 1/6T, 1/7T, or 1/8T.
Within (3/4T, T], all pixels 131 corresponding to the fourth sub-photosensitive surface 1114 are exposed, yielding an initial image P0 (hereinafter the fourth initial image P04; the fourth initial image P04 comprises the four image regions 13, 14, 15, and 16 in FIG. 15a); the fourth initial image P04 is obtained only from the electrical signals generated by exposure within (3/4T, T]. The exposure start times of all pixels 131 corresponding to the fourth sub-photosensitive surface 1114 may all be the same and their end times all the same, that is, they all experience the same exposure duration, for example 1/4T; or their start times may differ while their end times coincide, that is, their exposure durations may differ, but by the time 4/4T all pixels 131 corresponding to the fourth sub-photosensitive surface 1114 must have completed exposure, for example some pixels 131 exposed for 1/4T and the rest for less than 1/4T, such as 1/5T, 1/6T, 1/7T, or 1/8T.
It can be understood that the light exiting the central region of each lens group 21 is generally stronger, while the light exiting the edge region is relatively weaker. Therefore, to prevent the central region from being overexposed, the exposure duration of the pixels 131 corresponding to the central region can be set smaller (for example, 1/8T), while the exposure duration of the pixels 131 corresponding to the edge region is set to 1/4T. This both prevents the pixels 131 corresponding to the central region from being overexposed and prevents the pixels 131 corresponding to the edge region from being underexposed, thereby improving imaging quality. In this way, sequential exposure within one exposure period yields four initial images P0 of good imaging quality (the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04, respectively).
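The sequential scheme of the preceding paragraphs can be summarized the same way: surface k (k = 1..4) owns the window ((k−1)/4T, k/4T], every one of its pixels must finish by k/4T, and a center pixel may be given a shorter duration (for example 1/8T) than an edge pixel (1/4T). The sketch below (an illustrative helper, not from the present application) places each pixel's start time so that its exposure ends exactly at the window boundary.

```python
def sequential_exposure(T: float, surface_index: int, pixel_duration: float) -> tuple:
    """Return (start, stop) for a pixel on sub-photosensitive surface
    surface_index (1..4) exposed for pixel_duration inside its window;
    all pixels of a surface must complete by the window end k/4 * T."""
    window_end = surface_index * T / 4.0
    if pixel_duration > T / 4.0:
        raise ValueError("a pixel cannot be exposed longer than its 1/4T window")
    start = window_end - pixel_duration  # finish exactly at the boundary
    return (start, window_end)

# Surface 3: an edge pixel exposed for 1/4T, a center pixel for only 1/8T.
assert sequential_exposure(1.0, 3, 0.25) == (0.5, 0.75)
assert sequential_exposure(1.0, 3, 0.125) == (0.625, 0.75)
```

Aligning all end times at the window boundary is one simple way to satisfy the "start times may differ, end times coincide" condition described above.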
Referring to FIG. 15b, the processor 60 obtains the final image P2 from the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04. Referring to FIG. 9, since the fields of view of the four lens groups 21 overlap, as long as the object is outside the blind zone range a0, the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04 will contain a region with the same scene (the overlapping region 711 in FIG. 9), and any two adjacent lens groups 21 will also share a region with the same scene (the region 712 in FIG. 9). The processor 60 can identify the region with the same scene in all four initial images (hereinafter the first overlapping region M1; the image of the first overlapping region M1 corresponds to the overlapping region 711 in FIG. 9). It can be understood that there are four first overlapping regions M1 (regions 3, 8, 9, and 14 in FIG. 15a), corresponding respectively to the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04. The processor 60 then keeps the first overlapping region M1 of only one initial image P0 (for example, the first overlapping region M1 of the first initial image P01, that is, region 3) and deletes the first overlapping regions M1 of the other initial images P0 (that is, regions 8, 9, and 14).
Referring to FIG. 15a, the processor 60 identifies the regions with the same scene in each pair of adjacent initial images P0 (hereinafter the second overlapping region M2; a second overlapping region M2 is a region with the same scene that appears only in the two initial images P0 obtained from two adjacent sub-photosensitive surfaces 111, corresponding to the region 712 in FIG. 9). It can be understood that each initial image P0 is adjacent to two other initial images P0, so each initial image P0 has two second overlapping regions M2; that is, there are eight second overlapping regions M2 in total. The second overlapping regions M2 with the same scene in the first initial image P01 and the second initial image P02 are regions 2 and 5, respectively; those in the second initial image P02 and the third initial image P03 are regions 7 and 10, respectively; those in the third initial image P03 and the fourth initial image P04 are regions 12 and 15, respectively; and those in the fourth initial image P04 and the first initial image P01 are regions 13 and 4, respectively.
Referring to FIG. 15b, since the two second overlapping areas M2 of two adjacent initial images P0 show the same scene, the processor 60 may retain either one of them and delete the other. For example: retain the second overlapping area M2 in the first initial image P01 whose scene matches the second initial image P02 (i.e., area 2) and delete its counterpart in the second initial image P02 (i.e., area 5); retain the second overlapping area M2 in the second initial image P02 whose scene matches the third initial image P03 (i.e., area 7) and delete its counterpart in the third initial image P03 (i.e., area 10); retain the second overlapping area M2 in the third initial image P03 whose scene matches the fourth initial image P04 (i.e., area 12) and delete its counterpart in the fourth initial image P04 (i.e., area 15); and retain the second overlapping area M2 in the fourth initial image P04 whose scene matches the first initial image P01 (i.e., area 13) and delete its counterpart in the first initial image P01 (i.e., area 4). In this way, one first overlapping area M1 and four second overlapping areas M2 are finally retained. Finally, the processor 60 stitches the first overlapping area M1 (i.e., area 3), the four second overlapping areas M2 (i.e., areas 2, 7, 12, and 13), and the regions of the four initial images P0 outside the first overlapping area M1 and the second overlapping areas M2 (i.e., areas 1, 6, 11, and 16) to generate the final image P2.
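The selection-and-stitch procedure above can be sketched as follows. This is a minimal sketch, not the patent's implementation: it assumes each initial image P0 splits into four equal quadrants, that the images are laid out with P01 top-left, P02 top-right, P03 bottom-right, and P04 bottom-left, and that images are numpy arrays; `quadrants` and `stitch_final` are hypothetical helper names.

```python
import numpy as np

def quadrants(img):
    """Split an initial image into its four equal quadrants."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return {"tl": img[:h, :w], "tr": img[:h, w:],
            "bl": img[h:, :w], "br": img[h:, w:]}

def stitch_final(p01, p02, p03, p04):
    """Assemble the final image P2 from four initial images that each
    cover one corner of the scene and overlap their neighbours by one
    quadrant (assumed layout: P01 top-left, P02 top-right, P03
    bottom-right, P04 bottom-left).

    Only one copy of each pairwise overlap is kept (areas 2, 7, 12, 13
    in the text), and the all-four overlap at the scene centre (area 3)
    is taken from P01.
    """
    q1, q2, q3, q4 = map(quadrants, (p01, p02, p03, p04))
    top    = np.hstack([q1["tl"], q1["tr"], q2["tr"]])  # area 1 | kept 2 | area 6
    middle = np.hstack([q4["tl"], q1["br"], q2["br"]])  # kept 13 | centre 3 | kept 7
    bottom = np.hstack([q4["bl"], q3["bl"], q3["br"]])  # area 16 | kept 12 | area 11
    return np.vstack([top, middle, bottom])
```

The duplicate overlap regions are simply never copied into the output, which is equivalent to the "retain one, delete the other" step in the text.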
In the image acquisition method of the embodiments of the present application, the multiple sub-photosensitive surfaces 111 are exposed in a time-division manner to acquire multiple initial images P0, and the final image P2 can be generated quickly from those initial images P0. The lens group 20 is divided into multiple groups of lenses 21. The imaging area 215 of each group of lenses 21 on the imaging surface S1 covers part of the photosensitive surface 11 of the image sensor 10, and the imaging areas 215 of the multiple groups of lenses 21 together cover the entire photosensitive surface 11. Compared with a single group of lenses 21 corresponding to the entire photosensitive surface 11, the total length (along the central axis O') of each group of lenses 21 corresponding to only part of the photosensitive surface 11 is shorter, so the overall length of the lens group 20 along the central axis O' is shorter and the imaging system 100 is easier to mount in the terminal 1000.
Referring to FIGS. 3, 4, and 16, in some embodiments the imaging system 100 further includes multiple diaphragms 70, which respectively control the amount of light entering the multiple groups of lenses 21.
Specifically, a diaphragm 70 is arranged on the side of each group of lenses 21 facing away from the image sensor 10. The number of diaphragms 70 may be two, three, four, or more, and may be determined by the number of lens groups 21. In the embodiments of the present application, the number of diaphragms 70 equals the number of lens groups 21, namely four (hereinafter the first, second, third, and fourth diaphragms, which are respectively arranged on the four groups of lenses 21 and respectively control the amount of light reaching the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, and the fourth sub-photosensitive surface 1114). The diaphragms 70 can be driven by a driving structure to change the size of their light inlets and thereby control the amount of light entering the corresponding group of lenses 21. The processor 60 (shown in FIG. 1) is connected to the driving structure and controls the time-division exposure of the image sensor 10. When the pixels 131 corresponding to the first sub-photosensitive surface 1111 are exposed, the processor 60 controls the driving structure to close the second, third, and fourth diaphragms so that light cannot reach the second sub-photosensitive surface 1112, the third sub-photosensitive surface 1113, or the fourth sub-photosensitive surface 1114. When the pixels 131 corresponding to the second sub-photosensitive surface 1112 are exposed, the processor 60 controls the driving structure to close the first, third, and fourth diaphragms so that light cannot reach the first sub-photosensitive surface 1111, the third sub-photosensitive surface 1113, or the fourth sub-photosensitive surface 1114. When the pixels 131 corresponding to the third sub-photosensitive surface 1113 are exposed, the processor 60 controls the driving structure to close the first, second, and fourth diaphragms so that light cannot reach the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, or the fourth sub-photosensitive surface 1114. When the pixels 131 corresponding to the fourth sub-photosensitive surface 1114 are exposed, the processor 60 controls the driving structure to close the first, second, and third diaphragms so that light cannot reach the first sub-photosensitive surface 1111, the second sub-photosensitive surface 1112, or the third sub-photosensitive surface 1113. In this way, by controlling the driving structure to close the corresponding diaphragms 70, the processor 60 controls the time-division exposure of the image sensor 10, ensures that the different groups of lenses 21 do not optically interfere with one another, and makes it unnecessary to provide the light-shielding member 14 on the image sensor 10, which reduces the area occupied by the light-shielding member 14 and thus the area of the image sensor 10.
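The time-division control loop described above might be sketched as follows. `Aperture` and `read_subsurface` are hypothetical stand-ins for the diaphragm driving structure and the sensor readout, neither of which the patent specifies at the code level.

```python
from dataclasses import dataclass

@dataclass
class Aperture:
    """Hypothetical stand-in for one diaphragm 70 plus its driver."""
    is_open: bool = True

    def set_open(self, state: bool) -> None:
        self.is_open = state

def expose_time_shared(apertures, read_subsurface):
    """Expose the sub-photosensitive surfaces one at a time.

    Before each exposure, every aperture except the one feeding the
    current sub-surface is closed, so no stray light from the other
    lens groups reaches the sensor; read_subsurface(i) is an assumed
    callback that exposes and reads out the initial image P0 for
    sub-surface i.
    """
    images = []
    for i in range(len(apertures)):
        for j, ap in enumerate(apertures):
            ap.set_open(j == i)            # only aperture i stays open
        images.append(read_subsurface(i))  # expose + read this sub-surface
    for ap in apertures:                   # reopen everything afterwards
        ap.set_open(True)
    return images
```

One exposure pass returns the four initial images P0 in order, ready for the stitching steps that follow.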
Referring to FIGS. 15a, 15b, and 17, in some embodiments step 02 includes: 021: rotating the multiple initial images P0; 022: obtaining a first overlapping image N1 and second overlapping images N2 from the multiple initial images P0, the first overlapping image N1 being the partial image whose scene is the same in all initial images P0, and each second overlapping image N2 being a partial image whose scene is the same in only the two initial images P0 obtained by exposing two adjacent sub-photosensitive surfaces 111; and 023: stitching the first overlapping image N1, the second overlapping images N2, and the partial images of the multiple initial images P0 whose scenes differ from both the first overlapping image N1 and the second overlapping images N2.
Specifically, since the initial image P0 formed by each group of lenses 21 is an inverted image of the actual scene, each initial image P0 is rotated by 180 degrees before image processing so that its orientation matches the actual scene. This ensures that the scene orientation is correct when the multiple initial images P0 are subsequently stitched to generate the final image P2. When obtaining the first overlapping image N1 and the second overlapping images N2 from the multiple initial images P0, the processor 60 (shown in FIG. 1) first identifies the first overlapping area M1 in the first initial image P01, the second initial image P02, the third initial image P03, and the fourth initial image P04, and then obtains the first overlapping image N1 from the four first overlapping areas M1; for example, the processor 60 may take the first overlapping area M1 of any initial image P0 (such as that of the first initial image P01, i.e., area 3) as the first overlapping image N1. The processor 60 then identifies the second overlapping areas M2 in each pair of adjacent initial images P0 and obtains one second overlapping image N2 from each pair; for example, the processor 60 may take either of the two second overlapping areas M2 of a pair of adjacent initial images P0 as the second overlapping image N2, thereby obtaining four second overlapping images N2 (e.g., areas 2, 7, 12, and 13). Here, the first overlapping image N1 is the partial image whose scene is the same in all initial images P0, and each second overlapping image N2 is a partial image whose scene is the same in only the two initial images P0 obtained by exposing two adjacent sub-photosensitive surfaces 111.
Finally, the processor 60 stitches the first overlapping image N1, the second overlapping images N2, and the partial images of the multiple initial images P0 whose scenes differ from both (i.e., the regions of the initial images P0 outside the corresponding first overlapping area M1 and second overlapping areas M2) to generate the final image P2. In this way, only the first overlapping area M1 and the second overlapping areas M2 need to be identified, the computation load is small, and the final image P2 can be generated quickly.
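Step 021 amounts to an exact 180-degree rotation of each initial image before any overlap matching. A minimal sketch, assuming the images are numpy arrays (the patent does not prescribe a representation):

```python
import numpy as np

def prepare_initial_images(raw_images):
    """Step 021: each lens group forms an inverted image of the scene,
    so every initial image P0 is rotated 180 degrees before overlap
    identification.  Two 90-degree rotations give an exact, lossless
    180-degree rotation for arrays of any dtype."""
    return [np.rot90(img, 2) for img in raw_images]
```

Using an exact array rotation (rather than an interpolating rotation by an arbitrary angle) means no pixel values are altered, so the later overlap comparisons see the sensor data unchanged.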
Referring to FIGS. 15a, 15b, 18a, 18b, and 19, in some embodiments the region with the same scene in all initial images P0 is defined as the first overlapping area M1, each first overlapping area M1 includes multiple sub-regions, and the multiple first overlapping areas M1 include multiple sub-regions with the same scene; the region with the same scene in two adjacent initial images P0 is defined as the second overlapping area M2, each second overlapping area M2 includes multiple sub-regions, and two adjacent second overlapping areas M2 include multiple sub-regions with the same scene. Step 022 includes: 0221: comparing the sub-regions of the same scene in the multiple first overlapping areas M1 to obtain, as first stitching areas N3, the sub-regions at non-edge positions in the first overlapping areas M1; 0222: comparing the sub-regions of the same scene in adjacent second overlapping areas M2 to obtain, as second stitching areas N4, the sub-regions at non-corner positions in the second overlapping areas M2; 0223: stitching the multiple first stitching areas N3 to obtain the first overlapping image N1; and 0224: stitching the two second stitching areas N4 corresponding to each pair of adjacent initial images P0 to obtain the multiple second overlapping images N2.
Specifically, the processor 60 compares the sub-regions of the same scene in the multiple first overlapping areas M1 to obtain, as the first stitching areas N3, the sub-regions at non-edge positions. It can be understood that when each group of lenses 21 forms an image, the sharpness and accuracy of the edge region are generally lower than those of the central region. As shown in FIG. 18a, for example, the first overlapping area M1 in the first initial image P01 is divided into four sub-regions A1, A2, A3, and A4; that in the second initial image P02 into B1, B2, B3, and B4; that in the third initial image P03 into C1, C2, C3, and C4; and that in the fourth initial image P04 into D1, D2, D3, and D4. The four sub-regions A1, B1, C1, and D1 show the same scene, as do A2, B2, C2, and D2; A3, B3, C3, and D3; and A4, B4, C4, and D4.
The processor 60 selects, from each set of sub-regions showing the same scene, the sub-region at a non-edge position as a first stitching area N3, and then stitches the multiple first stitching areas N3 to obtain the first overlapping image N1. Since A1 is close to the center of the first initial image P01, B2 to the center of the second initial image P02, C3 to the center of the third initial image P03, and D4 to the center of the fourth initial image P04, the four sub-regions A1, B2, C3, and D4 are all at non-edge positions and have higher sharpness and accuracy. The sub-regions B1, C1, and D1, which show the same scene as A1, are at edge positions with lower sharpness and accuracy; likewise, A2, C2, and D2 (same scene as B2), A3, B3, and D3 (same scene as C3), and A4, B4, and C4 (same scene as D4) are at edge positions with lower sharpness and accuracy. Therefore, the processor 60 may select the four sub-regions A1, B2, C3, and D4 as the four first stitching areas N3 and stitch them together to obtain the first overlapping image N1, stitching according to the scene position of each first stitching area N3 to ensure the accuracy of the stitched first overlapping image N1. Compared with taking one of the four first overlapping areas M1 as the first overlapping image N1, the four first stitching areas N3 (sub-regions A1, B2, C3, and D4) are the sharpest and most accurate copies of their respective scenes, so the resulting first overlapping image N1 has higher sharpness and accuracy.
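The choice of A1, B2, C3, and D4 can be sketched as picking, for each scene, the copy whose sub-region lies nearest the centre of its own initial image. The distance metric, the 2x2 scene order, and the helper names below are assumptions for illustration:

```python
import numpy as np

def pick_sharpest_copy(candidates):
    """candidates: (sub_image, dist) pairs for one scene, where dist is
    how far that copy of the sub-region sits from the centre of the
    initial image it came from.  The copy nearest its own image centre
    (A1, B2, C3 or D4 in the text) is the sharpest, so it becomes the
    first stitching area N3 for that scene."""
    return min(candidates, key=lambda c: c[1])[0]

def build_first_overlap(scene_candidates):
    """Stitch the four chosen first stitching areas N3 into the 2x2
    first overlapping image N1 (assumed scene order: top-left,
    top-right, bottom-left, bottom-right)."""
    tl, tr, bl, br = (pick_sharpest_copy(c) for c in scene_candidates)
    return np.vstack([np.hstack([tl, tr]), np.hstack([bl, br])])
```

The same "keep the copy farthest from its image edge" rule, applied per half-strip instead of per quadrant, yields the second stitching areas N4 described next.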
Referring again to FIG. 18a, the processor 60 compares the sub-regions of the same scene in adjacent second overlapping areas M2 to obtain, as second stitching areas N4, the sub-regions at non-corner positions. For example, the second overlapping area M2 in the first initial image P01 whose scene matches the second initial image P02 includes two sub-regions E1 and E2, and the corresponding second overlapping area M2 in the second initial image P02 includes two sub-regions F1 and F2. E1 and F1 show the same scene, as do E2 and F2, but sub-region E1 is close to the center of the first initial image P01 at a non-corner position, so its sharpness and accuracy are higher than those of sub-region F1 at a corner position; likewise, sub-region F2 at a non-corner position has higher sharpness and accuracy than sub-region E2 at a corner position. Similarly, in the second overlapping areas M2 of the adjacent second initial image P02 and third initial image P03, sub-region H1 has higher sharpness and accuracy than sub-region I1, and sub-region I2 higher than sub-region H2; in those of the adjacent third initial image P03 and fourth initial image P04, sub-region J1 has higher sharpness and accuracy than sub-region K1, and sub-region K2 higher than sub-region J2; and in those of the adjacent fourth initial image P04 and first initial image P01, sub-region L1 has higher sharpness and accuracy than sub-region Q1, and sub-region Q2 higher than sub-region L2.
Referring again to FIG. 18b, the processor 60 may take sub-region E1 of the first initial image P01 and sub-region F2 of the second initial image P02 as the two second stitching areas N4 of the first second overlapping image N2; sub-region H1 of the second initial image P02 and sub-region I2 of the third initial image P03 as those of the second; sub-region J1 of the third initial image P03 and sub-region K2 of the fourth initial image P04 as those of the third; and sub-region L1 of the fourth initial image P04 and sub-region Q2 of the first initial image P01 as those of the fourth. The processor 60 stitches the two second stitching areas N4 corresponding to each pair of adjacent initial images P0 together according to their scene positions to obtain the four second overlapping images N2. Specifically, stitching the two second stitching areas N4 formed from the first initial image P01 and the second initial image P02 (i.e., sub-regions E1 and F2) gives the first second overlapping image N2; stitching those from the second initial image P02 and the third initial image P03 (i.e., H1 and I2) gives the second; stitching those from the third initial image P03 and the fourth initial image P04 (i.e., J1 and K2) gives the third; and stitching those from the fourth initial image P04 and the first initial image P01 (i.e., L1 and Q2) gives the fourth. Since the two second stitching areas N4 of each second overlapping image N2 are the sharper and more accurate copies of the same-scene regions in the second overlapping areas M2 of the two adjacent initial images P0, the second overlapping images N2 have higher sharpness and accuracy than if the second overlapping area M2 of either adjacent initial image P0 were simply taken as the second overlapping image N2. Finally, the processor 60 stitches the first overlapping image N1, the four second overlapping images N2, and the portions of the four initial images outside the first overlapping area M1 and the second overlapping areas M2 to form the final image P2 shown in FIG. 18b, stitching according to the scene positions of these portions to ensure the accuracy of the final image P2.
Referring to FIGS. 15a, 15b, 18a, 18b, and 20, in some embodiments step 022 includes: 0225: obtaining the first pixel value of each pixel 131 in the multiple first overlapping areas M1; 0226: taking the first mean of the first pixel values of the pixels 131 corresponding to each same scene in the multiple first overlapping areas M1, and generating the first overlapping image N1 from the multiple first means; 0227: obtaining the second pixel value of each pixel 131 in the multiple second overlapping areas M2; and 0228: taking the second mean of the second pixel values of the pixels 131 corresponding to each same scene in two adjacent second overlapping areas M2, and generating the multiple second overlapping images N2 from the multiple second means.
Specifically, the processor 60 obtains the first pixel value of each pixel 131 in the multiple first overlapping areas M1 of the multiple initial images P0 and computes a first mean from the first pixel values of the pixels 131 corresponding to each same scene. For example, suppose each sub-region corresponds to one pixel 131. As shown in FIG. 18a, in the first to fourth initial images P01 to P04 the four sub-regions A1, B1, C1, and D1 show the same scene and their pixels 131 correspond one to one; summing the first pixel values of the pixels 131 corresponding to A1, B1, C1, and D1 and taking the average gives one first mean. Likewise, the pixels 131 of sub-regions A2, B2, C2, and D2 correspond one to one, as do those of A3, B3, C3, and D3 and those of A4, B4, C4, and D4. Repeating the above process for these three sets, the first pixel values of the pixels 131 corresponding to each same scene in the four first overlapping areas M1 are summed and averaged to obtain four first means, and the first overlapping image N1 is generated from the four first means, for example by using them as the pixel values of the four pixels 131 of the first overlapping image N1. It should be noted that, in the above description, each sub-region corresponds to one pixel 131 only for convenience in describing the principle of obtaining the first overlapping image N1; this should not be understood to mean that each sub-region can correspond to only one pixel 131. Each sub-region may correspond to multiple pixels 131, such as 2, 3, 5, 10, 100, 1000, or even 100,000 or millions.
The processor 60 then obtains the second pixel value of each pixel 131 in the second overlapping areas M2 of the multiple initial images P0 and computes second means from the second pixel values of the pixels 131 corresponding to each same scene. For example, as shown in FIG. 18a, area E1 of the first initial image P01 and area F1 of the second initial image P02 show the same scene and their pixels 131 correspond one to one; summing the second pixel values of the corresponding pixels 131 in E1 and F1 and taking the average gives one second mean, and likewise for E2 and F2, giving another second mean. The second overlapping image N2 is then generated from the two second means, for example by using them as the pixel values of the two pixels 131 of the second overlapping image N2. It can be understood that the other three second overlapping images N2 are obtained in substantially the same way, which will not be repeated here. In this way, compared with taking the image of one first overlapping area M1 as the first overlapping image N1 or the image of one second overlapping area M2 as the second overlapping image N2, either of which contains edge regions of lower sharpness and accuracy, the processor 60 computes the first means from the first pixel values of the corresponding pixels 131 of the four first overlapping areas M1 as the pixel values of the corresponding pixels of the first overlapping image N1, and computes the second means from the second pixel values of the corresponding pixels 131 in the second overlapping areas M2 of two adjacent initial images P0 as the pixel values of the corresponding pixels of the second overlapping image N2, so the resulting first overlapping image N1 and second overlapping images N2 are clearer.
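Steps 0225-0228 reduce to averaging co-located pixel values across the aligned copies of an overlap region. A minimal numpy sketch, assuming the copies have already been cropped and aligned one to one (the patent does not detail the alignment):

```python
import numpy as np

def average_overlap(crops):
    """Mean-combine the co-located pixel values of one overlap region.

    crops are the aligned copies of the same overlap region (four
    copies for the first overlapping area M1, two for a second
    overlapping area M2); each output pixel is the mean of the
    corresponding first (or second) pixel values, per steps 0226/0228.
    """
    stack = np.stack([np.asarray(c, dtype=np.float64) for c in crops])
    return stack.mean(axis=0)
```

Accumulating in float64 before averaging avoids the overflow that summing uint8 sensor values directly would cause.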
In the description of this specification, reference to the terms "certain embodiments," "one embodiment," "some embodiments," "exemplary embodiments," "examples," "specific examples," or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present application. A person of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application, which is defined by the claims and their equivalents.

Claims (20)

  1. An image sensor, characterized in that the image sensor comprises:
    a hyperlens; and
    a pixel array located on a light-exit side of the hyperlens, the hyperlens being configured to split incident light entering from a light-entry side of the hyperlens to form multiple outgoing light rays of different wavelengths, the outgoing light rays of different wavelengths being emitted from the light-exit side toward the pixel array at different exit angles.
  2. The image sensor according to claim 1, wherein the superlens comprises:
    a lens body comprising a light-entry surface on the light-entry side and a light-exit surface on the light-exit side; and
    a microstructure array arranged on the light-entry surface.
  3. The image sensor according to claim 2, wherein the microstructure array comprises a plurality of microstructure groups, each microstructure group comprises a plurality of microstructure units, the pixel array comprises a plurality of pixel groups, and the pixel groups correspond to the microstructure groups one-to-one.
  4. The image sensor according to claim 3, wherein the shape, size, arrangement, and angle of the plurality of microstructure units of each microstructure group are determined according to the wavelengths and exit angles of the outgoing light rays.
  5. The image sensor according to claim 3, wherein each pixel group comprises a first pixel, a second pixel, a third pixel, and a fourth pixel; the plurality of outgoing light rays of different wavelengths comprise red light, first green light, second green light, and blue light; and the first pixel is configured to receive the red light, the second pixel is configured to receive the first green light, the third pixel is configured to receive the blue light, and the fourth pixel is configured to receive the second green light.
  6. The image sensor according to claim 3, wherein each pixel group comprises a first pixel, a second pixel, a third pixel, and a fourth pixel; the plurality of outgoing light rays of different wavelengths comprise red light, first yellow light, second yellow light, and blue light; and the first pixel is configured to receive the red light, the second pixel is configured to receive the first yellow light, the third pixel is configured to receive the blue light, and the fourth pixel is configured to receive the second yellow light.
  7. The image sensor according to claim 3, wherein the image sensor comprises a microlens array arranged on the light-entry side, the microlens array comprises a plurality of microlenses, and the microlenses, the pixel groups, and the microstructure groups correspond one-to-one.
  8. The image sensor according to claim 7, wherein the image sensor comprises a photosensitive surface located on an imaging plane, the photosensitive surface comprises a plurality of sub-photosensitive surfaces, and on each sub-photosensitive surface, the microlens corresponding to the center position of the sub-photosensitive surface is aligned with its microstructure group, while each microlens corresponding to a non-center position is offset from its microstructure group.
  9. The image sensor according to claim 8, wherein, in each sub-photosensitive surface, a plurality of circles centered on the center position all lie at non-center positions, and as the radius of the circle on which a microlens lies increases, the offset between that microlens and its corresponding microstructure group also increases.
  10. An imaging system, characterized in that it comprises:
    a lens group; and
    an image sensor arranged on an image side of the lens group;
    wherein the image sensor comprises a superlens and a pixel array, the pixel array is located on a light-exit side of the superlens, the superlens is configured to split incident light entering from a light-entry side of the superlens to form a plurality of outgoing light rays of different wavelengths, and the outgoing light rays of different wavelengths exit from the light-exit side toward the pixel array at different exit angles.
  11. The imaging system according to claim 10, wherein the superlens comprises:
    a lens body comprising a light-entry surface on the light-entry side and a light-exit surface on the light-exit side; and
    a microstructure array arranged on the light-entry surface.
  12. The imaging system according to claim 11, wherein the microstructure array comprises a plurality of microstructure groups, each microstructure group comprises a plurality of microstructure units, the pixel array comprises a plurality of pixel groups, and the pixel groups correspond to the microstructure groups one-to-one.
  13. The imaging system according to claim 12, wherein the shape, size, arrangement, and angle of the plurality of microstructure units of each microstructure group are determined according to the wavelengths and exit angles of the outgoing light rays.
  14. The imaging system according to claim 12, wherein each pixel group comprises a first pixel, a second pixel, a third pixel, and a fourth pixel; the plurality of outgoing light rays of different wavelengths comprise red light, first green light, second green light, and blue light; and the first pixel is configured to receive the red light, the second pixel is configured to receive the first green light, the third pixel is configured to receive the blue light, and the fourth pixel is configured to receive the second green light.
  15. The imaging system according to claim 12, wherein each pixel group comprises a first pixel, a second pixel, a third pixel, and a fourth pixel; the plurality of outgoing light rays of different wavelengths comprise red light, first yellow light, second yellow light, and blue light; and the first pixel is configured to receive the red light, the second pixel is configured to receive the first yellow light, the third pixel is configured to receive the blue light, and the fourth pixel is configured to receive the second yellow light.
  16. The imaging system according to claim 12, wherein the image sensor comprises a microlens array arranged on the light-entry side, the microlens array comprises a plurality of microlenses, and the microlenses, the pixel groups, and the microstructure groups correspond one-to-one.
  17. The imaging system according to claim 16, wherein the image sensor comprises a photosensitive surface located on an imaging plane, the photosensitive surface comprises a plurality of sub-photosensitive surfaces, and on each sub-photosensitive surface, the microlens corresponding to the center position of the sub-photosensitive surface is aligned with its microstructure group, while each microlens corresponding to a non-center position is offset from its microstructure group.
  18. The imaging system according to claim 17, wherein, in each sub-photosensitive surface, a plurality of circles centered on the center position all lie at non-center positions, and as the radius of the circle on which a microlens lies increases, the offset between that microlens and its corresponding microstructure group also increases.
  19. The imaging system according to claim 10, wherein the image sensor comprises a photosensitive surface located on an imaging plane, the lens group comprises a plurality of groups of lenses, the imaging area of each group of lenses on the imaging plane covers part of the photosensitive surface, and the imaging areas of the plurality of groups of lenses on the imaging plane together cover the entire photosensitive surface.
  20. A terminal, characterized in that it comprises:
    a housing; and
    the imaging system of claim 19, the imaging system being mounted on the housing.
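The geometry recited in the claims above — a superlens group steering four wavelength bands onto the four pixels of a 2×2 pixel group (claims 5 and 14), and a microlens-to-microstructure-group offset that grows with the radius of the circle on which the microlens sits (claims 9 and 18) — can be illustrated with a small numerical sketch. This is only an illustrative model: the pixel pitch, sensor depth, offset gain, and the linear offset law are assumed values for demonstration, not parameters taken from the patent.

```python
# Hypothetical geometric sketch of the claimed color-splitting and
# microlens-offset schemes. All numeric constants are illustrative
# assumptions, not values disclosed in the application.
import math

PIXEL_PITCH = 1.0  # pixel pitch, arbitrary units (assumed)
DEPTH = 2.0        # superlens exit surface to pixel array distance (assumed)

# Claims 5/14: one microstructure group steers four wavelength bands onto
# the four pixels of its 2x2 pixel group (an RGGB-style layout). Offsets
# are the target pixel centers relative to the group center, in pixels.
TARGET_OFFSETS = {
    "red":     (-0.5, -0.5),   # first pixel
    "green_1": ( 0.5, -0.5),   # second pixel
    "blue":    (-0.5,  0.5),   # third pixel
    "green_2": ( 0.5,  0.5),   # fourth pixel
}

def exit_angles(band):
    """Exit angles (radians, x and y) that land the band on its pixel."""
    dx, dy = TARGET_OFFSETS[band]
    return (math.atan2(dx * PIXEL_PITCH, DEPTH),
            math.atan2(dy * PIXEL_PITCH, DEPTH))

def landing_point(band):
    """Where a ray leaving the group center at those angles hits the array."""
    ax, ay = exit_angles(band)
    return (DEPTH * math.tan(ax), DEPTH * math.tan(ay))

# Claims 9/18: the offset between a microlens and its microstructure group
# grows with the radius of the circle (centered on the sub-surface center)
# on which the microlens lies; a linear law is assumed here for simplicity.
def lens_offset(radius, gain=0.05):
    """Monotonically increasing offset; zero at the center position."""
    return gain * radius
```

Because each band leaves the group at a different exit angle, the four bands land on four distinct pixel centers, which is the claimed splitting behavior; the offset function only captures the monotonic trend the claims recite, not any particular aberration-correction profile.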
PCT/CN2020/106985 2019-08-29 2020-08-05 Image sensor, imaging system, and terminal WO2021036721A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910809194.2A CN110493504B (en) 2019-08-29 2019-08-29 Image sensor, imaging system and terminal
CN201910809194.2 2019-08-29

Publications (1)

Publication Number Publication Date
WO2021036721A1 true WO2021036721A1 (en) 2021-03-04

Family

ID=68555160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/106985 WO2021036721A1 (en) 2019-08-29 2020-08-05 Image sensor, imaging system, and terminal

Country Status (2)

Country Link
CN (1) CN110493504B (en)
WO (1) WO2021036721A1 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110493504B (en) * 2019-08-29 2021-07-30 Oppo广东移动通信有限公司 Image sensor, imaging system and terminal
CN110954966B (en) * 2019-12-06 2021-06-15 中国科学院长春光学精密机械与物理研究所 Planar photoelectric detection system based on superlens array
WO2022104629A1 (en) * 2020-11-19 2022-05-27 华为技术有限公司 Image sensor, light splitting and color filtering device, and image sensor manufacturing method
US11373431B2 (en) 2020-01-20 2022-06-28 Visual Sensing Technology Co., Ltd. Electronic device
CN113140578A (en) * 2020-01-20 2021-07-20 胜薪科技股份有限公司 Electronic device
CN112804427A (en) * 2021-01-04 2021-05-14 广州立景创新科技有限公司 Image acquisition module
CN113345925B (en) * 2021-05-31 2024-04-12 北京京东方技术开发有限公司 Pixel unit, image sensor and spectrometer
CN113484939A (en) * 2021-06-08 2021-10-08 南京大学 Wide-view-angle imaging method based on planar lens

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101799589A (en) * 2009-02-09 2010-08-11 财团法人工业技术研究院 Color split optical element and image panel device
CN109164518A (en) * 2018-10-11 2019-01-08 业成科技(成都)有限公司 Super lens, preparation method and the optical module using it
US20190025464A1 (en) * 2017-05-24 2019-01-24 Uchicago Argonne, Llc Ultrathin, polarization-independent, achromatic metalens for focusing visible light
CN110049261A (en) * 2019-04-23 2019-07-23 Oppo广东移动通信有限公司 A kind of dot structure, imaging sensor and terminal
CN110445974A (en) * 2019-08-29 2019-11-12 Oppo广东移动通信有限公司 Imaging system, terminal and image acquiring method
CN110493504A (en) * 2019-08-29 2019-11-22 Oppo广东移动通信有限公司 Imaging sensor, imaging system and terminal

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JP5200349B2 (en) * 2006-08-31 2013-06-05 ソニー株式会社 Projection device and image display device
JP2010160313A (en) * 2009-01-08 2010-07-22 Sony Corp Imaging element and imaging apparatus
CN102547080B (en) * 2010-12-31 2015-07-29 联想(北京)有限公司 Camera module and comprise the messaging device of this camera module
CN105812625B (en) * 2014-12-30 2019-03-19 深圳超多维科技有限公司 Microlens array imaging device and imaging method
AU2016278201B2 (en) * 2015-06-15 2021-08-12 Agrowing Ltd Multispectral imaging apparatus
US20170184291A1 (en) * 2015-12-23 2017-06-29 Everready Precision Ind. Corp. Optical device
CN207094226U (en) * 2017-08-30 2018-03-13 京东方科技集团股份有限公司 Light guide plate, backlight module and display device
CN108650341A (en) * 2018-03-30 2018-10-12 联想(北京)有限公司 A kind of electronic equipment


Cited By (2)

Publication number Priority date Publication date Assignee Title
WO2023154946A1 (en) * 2022-02-14 2023-08-17 Tunoptix, Inc. Systems and methods for high quality imaging using a color-splitting meta-optical computation camera
US20230262307A1 (en) * 2022-02-14 2023-08-17 Tunoptix, Inc. Systems and methods for high quality imaging using a color-splitting meta-optical computation camera

Also Published As

Publication number Publication date
CN110493504A (en) 2019-11-22
CN110493504B (en) 2021-07-30

Similar Documents

Publication Publication Date Title
WO2021036721A1 (en) Image sensor, imaging system, and terminal
CN103037180B (en) Imageing sensor and picture pick-up device
CN103119516B (en) Light field camera head and image processing apparatus
WO2017202323A1 (en) Photosensitive image element, image collector, fingerprint collection device, and display device
CN100427971C (en) Light-absorbing member
CN107991838B (en) Self-adaptive three-dimensional imaging system
CA2403094A1 (en) High acuity lens system
CN101446679A (en) Solid-state imaging device
CN102955216A (en) Lens module
CN102959434B (en) Color separation filtering array, solid-state imager, camera head and display device
CN104270555A (en) Surface CMOS image sensor camera shooting module
CN110620861B (en) Image sensor, camera module and terminal
JPH01189685A (en) Liquid crystal light valve and video projector with liquid crystal light valve
CN110505387B (en) Imaging system, terminal and image acquisition method
CN110505384B (en) Imaging system, terminal and image acquisition method
CN110505385B (en) Imaging system, terminal and image acquisition method
CN107728417B (en) Liquid crystal projection screen
KR20090128103A (en) Led flash lens
CN106454018A (en) Plenoptic camera and method of controlling the same
CN110728184A (en) Multi-light-source iris image acquisition device capable of eliminating light and shadow in imaging area
CN110445974B (en) Imaging system, terminal and image acquisition method
CN203587870U (en) Multi-view camera shooting lens module
JPH01281426A (en) Liquid crystal light valve and projector having liquid crystal light valve
US10134791B1 (en) Backside illumination global shutter sensor and pixel thereof
CN103982857B (en) Optical lens, image pickup device and optical touch system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20859551

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20859551

Country of ref document: EP

Kind code of ref document: A1