CN112335049B - Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measurement method - Google Patents

Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measurement method

Info

Publication number
CN112335049B
Authority
CN
China
Prior art keywords
light
light guide
photoelectric converter
spacer
guide channel
Prior art date
Legal status
Active
Application number
CN201880095111.1A
Other languages
Chinese (zh)
Other versions
CN112335049A (en)
Inventor
陈振宇
周凯伦
蒋伟杰
Current Assignee
Ningbo Sunny Opotech Co Ltd
Original Assignee
Ningbo Sunny Opotech Co Ltd
Priority date
Filing date
Publication date
Application filed by Ningbo Sunny Opotech Co Ltd
Publication of CN112335049A
Application granted
Publication of CN112335049B


Classifications

    • H: ELECTRICITY
    • H01: ELECTRIC ELEMENTS
    • H01L: SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00: Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14: Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation

Abstract

The application provides an imaging assembly including a spacer and a photoelectric converter. The spacer is opaque and a light guide channel is formed therein. The photoelectric converter is parallel to and spaced apart from the spacer and positioned to correspond to the light guide channel. Light emitted by an object to be imaged passes through the light guide channel and then reaches the photoelectric converter. The application also provides a method for manufacturing the imaging assembly, a touch screen, a camera module, an intelligent terminal, a multi-view depth camera, a light field camera and a distance measurement method.

Description

Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measurement method
Technical Field
The present application relates to imaging assemblies, and in particular to optoelectronic imaging assemblies that utilize light guide channels for light confinement.
Background
With the development and popularization of mobile terminal devices, technologies for imaging assemblies that help users of mobile terminal devices acquire images and video have advanced rapidly. In recent years, imaging assemblies have also been widely used in fields such as medicine, security, and industrial production.
An important trend in mobile terminal devices is their ever-decreasing size. To meet increasingly broad market demands, small size and large aperture are irreversible development trends for existing camera modules. In addition, the market places ever higher demands on the imaging quality of camera modules.
As size requirements become more and more stringent, the structure of conventional imaging devices can no longer satisfy consumer demands for the size of electronic products.
In particular, conventional imaging apparatuses typically employ a lens imaging system, which inevitably introduces problems such as aberration and loss of brightness as light passes through the lens.
In addition, since the structure of a lens imaging system is complex, further downsizing inevitably increases cost, so the demand for thinner electronic products cannot be satisfied.
Furthermore, if the lens imaging system includes a plurality of lenses and barrels, the manufacturing tolerances of the individual components accumulate during assembly, and the assembly process itself introduces assembly tolerances. These tolerances limit further improvements in lens performance.
The maximum effective size of the chip (i.e., the area of the chip that can be illuminated) in a conventional lens imaging system is limited by the lens aperture size, and optical design leaves limited room for increasing the aperture.
Disclosure of Invention
The present invention aims to provide a solution that overcomes at least one of the above-mentioned drawbacks of the prior art.
According to one aspect of the present invention, there is provided an imaging assembly, which may include:
a spacer, which is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter, which may be parallel to and spaced apart from the spacer and arranged in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged reaches the photoelectric converter after passing through the light guide channels.
Wherein the spacer forms a plurality of light guide channels, which may form an array of light guide channels in the spacer.
Wherein the size of the light guide channel may be set to 800 nm or more.
The light guide channel may be sized to diffract specific wavelengths of the light passing through it so as to split the light, so that light of a specific band reaches a predetermined photoelectric converter.
Wherein the spacer may be made of a light absorbing material.
The photoelectric converter receives all light from the corresponding light guide channel, and the light of the corresponding light guide channel irradiates the whole light receiving surface of the corresponding photoelectric converter.
Wherein the spacer may be coated with a light blocking layer.
Wherein the light blocking layer may be a diffuse reflection coating or a light absorption coating.
According to one aspect of the present application, a method of making an imaging assembly is also provided. The method may include:
forming at least one light guide channel in the opaque spacer; and
disposing at least one photoelectric converter parallel to and spaced apart from the spacer, in correspondence with the light guide channels, so that light emitted from the object to be imaged may pass through the light guide channels and reach the photoelectric converter.
According to one aspect of the present application, there is also provided a touch screen. The touch screen may include:
a spacer, which is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter, which may be parallel to and spaced apart from the spacer and arranged in one-to-one correspondence with the light guide channels, so that light emitted by the object to be imaged may reach the photoelectric converter after passing through the light guide channels; and
a streamer positionable over the spacer, comprising:
a streamer body, which may comprise a total reflection plate;
a light input part, located within the streamer, which may output light at an angle to the total reflection plate; and
a light output part for outputting the light,
wherein light emitted by the light input part may be totally reflected within the streamer and output from the light output part.
According to one aspect of the present application, a touch screen is provided. The touch screen may include:
a spacer, which is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter, which may be parallel to and spaced apart from the spacer and arranged in one-to-one correspondence with the light guide channels, so that light emitted by the object to be imaged may reach the photoelectric converter after passing through the light guide channels;
a transparent elastic mechanism, which may be positioned above the spacer; and
a light source, which may be located on the side of the spacer facing the transparent elastic mechanism and emit light toward the transparent elastic mechanism.
Wherein the transparent elastic mechanism may be a transparent film.
According to one aspect of the present application, there is also provided a touch screen. The touch screen includes:
a spacer that is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter, parallel to and spaced apart from the spacer and arranged in one-to-one correspondence with the light guide channels, so that light emitted by the object to be imaged reaches the photoelectric converter after passing through the light guide channels; and
a transparent elastic mechanism, which may be positioned above the spacer and within which an opaque blocking piece is arranged.
According to an aspect of the present application, there is also provided a camera module. The camera module may comprise the imaging assembly described above and a display screen, wherein the imaging assembly is located below the display screen.
The display screen is one of an OLED screen, an LCD screen and an LED screen.
Wherein the substrate in the OLED screen may form a spacer.
Wherein the cathode layer in the OLED screen may form a spacer.
Wherein the anode layer in the OLED screen may form a spacer.
Wherein each light guide channel in the spacer of the imaging assembly may be provided with an optical element that converges light.
Wherein the optical element may be a convex lens.
Wherein a superlens for converging light rays may be arranged above each photoelectric converter of the imaging assembly.
Wherein an optical path turning element may be arranged above the light guide channel.
Wherein the optical path turning element may comprise a MEMS device and a mirror.
Wherein the camera module may be positioned on a substrate in the OLED screen; a driving member may be provided on the substrate, and the driving member may adjust the distance between the photoelectric converter of the imaging assembly and the spacer.
Wherein the color filters in the LCD screen may be integrated as color filters of the imaging assembly.
Wherein the aperture of the light guide channel may be sized for a specific wavelength.
According to one aspect of the application, an intelligent terminal is also provided. The intelligent terminal can comprise the camera module.
According to one aspect of the present application, a method of distance measurement is also provided. The method may include:
forming a plurality of light guide channels in the opaque spacer;
disposing a plurality of photoelectric converters parallel to and spaced apart from the spacer, in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged reaches the photoelectric converters after passing through the light guide channels;
obtaining, from the electrical signals output by the photoelectric converters, a plurality of images of the object to be imaged formed through the plurality of light guide channels; and
calculating the distance to the object to be imaged from the repetition degree of the plurality of images.
The repetition degree may be the repeated pixel area of all, or of a certain part, of the object to be photographed.
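The final two steps above can be sketched in Python. The overlap metric below (`repetition_degree`) and the toy 1-D binary "images" are illustrative assumptions only, since the patent does not specify the exact repetition measure; mapping the repetition degree to a distance would additionally require the channel geometry and calibration.

```python
def repetition_degree(img_a, img_b):
    """Fraction of lit pixel positions shared by two equally-sized binary
    images of the object. An illustrative stand-in for the patent's
    'repetition degree'; a real implementation would compare grey levels."""
    assert len(img_a) == len(img_b)
    lit_both = sum(a and b for a, b in zip(img_a, img_b))
    lit_any = sum(a or b for a, b in zip(img_a, img_b))
    return lit_both / lit_any if lit_any else 0.0

# Two 1-D "images" of the same object seen through adjacent light guide
# channels: the closer the object, the further apart its images fall and
# the smaller their overlap.
near = repetition_degree([1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1])
far = repetition_degree([1, 1, 1, 0, 0, 0], [0, 1, 1, 1, 0, 0])
print(near, far)  # 0.0 0.5
```

With a calibrated lookup from repetition degree to object distance, the higher overlap of the `far` pair would map to a larger distance than the `near` pair.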
According to one aspect of the present application, there is also provided a light field camera. The light field camera may have a microlens array, and may further include:
a spacer, the spacer being opaque and having at least one light guide channel formed therein; and
at least one photoelectric converter, which may be parallel to and spaced apart from the spacer and in one-to-one correspondence with the light guide channels respectively,
the micro lens array can be positioned between the spacer and the photoelectric converter, and light emitted by an object to be imaged passes through the light guide channel and the micro lens array and then reaches the photoelectric converter.
According to one aspect of the present application, there is also provided a light field camera. The light field camera may have a main lens, and may further include:
a spacer that is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter parallel to and spaced apart from the spacer and respectively corresponding to the light guide channels one by one,
the spacer can be positioned between the main lens and the photoelectric converter, and light emitted by the object to be imaged passes through the main lens and the light guide channel and then reaches the photoelectric converter.
According to one aspect of the present application, there is also provided a multi-view depth camera. The multi-view depth camera may include:
a spacer, which is opaque and in which a plurality of light guide channels are formed; and
the photoelectric converters can be parallel to the spacer and spaced apart from the spacer and can be in one-to-one correspondence with the light guide channels respectively, so that light emitted by an object to be imaged can reach the photoelectric converters after passing through the light guide channels;
Wherein the central axes of the light guide channels may be staggered with respect to each other.
According to one aspect of the present application, there is also provided a pixel color filter array device. The pixel color filter array may include:
a substrate;
a dielectric layer attached to the substrate; and
a plurality of pixel color filters attached to the dielectric layer and forming an array.
The dielectric layer is one of a photoelectric converter and a display screen.
According to one aspect of the present application, a method of forming a pixel color filter array device is also provided. The method of forming a pixel color filter array device may include the steps of:
setting a substrate;
attaching a dielectric layer to a substrate;
disposing a first color filter array on a first carrier plate;
transferring the first color filter array from the first carrier plate to a second carrier plate by a pad print head to form a second color filter array, wherein the pad print head is over-inflated during the transfer so that the gaps between the color filters match the gaps between the color filters in the second color filter array;
coating a transparent adhesive material on the substrate; and
the second color filter array on the second carrier plate is integrally bonded to the dielectric layer.
The dielectric layer is one of a photoelectric converter and a display screen.
Compared with the prior art, the invention has at least one of the following technical effects:
1. There is no aberration problem, and the brightness loss is smaller.
2. The size is smaller.
3. The structure is simple, and there are fewer assembly tolerance items.
4. The light guide channels are arranged on the screen at intervals, so the maximum effective size of the chip (the area that can be illuminated) can be increased by enlarging the distribution area of the light guide channels on the screen; the chip area is therefore not limited by the lens aperture size, and the adjustable range is large. In the projection direction, the light guide channels alternate with the imaging pixels and are not on the same horizontal plane. The number of pixels constituting the chip × the size of the pixels = the area of the chip; pixel size correlates positively with sensitivity, and pixel count correlates positively with resolution.
5. When used as a front camera, the apertures alternate with the display screen's imaging pixels, thereby increasing the screen-to-body ratio of the intelligent terminal.
6. When used as a rear camera, the overall thickness of the mobile phone is reduced; the rear camera is the largest contributor to the thickness of an intelligent terminal, and only by reducing its thickness can the overall thickness of the terminal be reduced.
7. Because imaging does not involve a lens, short-distance defocusing does not occur, and macro imaging can be realized.
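The chip-area relation stated in effect 4 can be illustrated with a minimal, hypothetical sketch; the function name and the pixel counts and sizes below are invented for illustration only.

```python
def chip_area(pixel_count, pixel_area):
    """Chip area = number of pixels x per-pixel area, per the relation above."""
    return pixel_count * pixel_area

# The same chip area can trade sensitivity (fewer, larger pixels)
# against resolution (more, smaller pixels).
print(chip_area(12_000_000, 1.0))   # e.g. 12 MP at 1.0 area unit per pixel
print(chip_area(48_000_000, 0.25))  # e.g. 48 MP at 0.25 area unit per pixel
```

Both calls yield the same area, showing why enlarging the distribution area of the light guide channels, rather than the lens aperture, is what sets the available trade-off space.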
Drawings
Exemplary embodiments are illustrated in referenced figures. The embodiments and figures disclosed herein are to be regarded as illustrative rather than restrictive.
FIGS. 1a to 1d show schematic views of an embodiment of an imaging assembly according to the present invention;
FIG. 2 shows a detailed schematic diagram illustrating a single light guide channel in an embodiment of an imaging assembly according to the present invention;
FIG. 3 illustrates a flow chart of a method of manufacturing an imaging assembly according to the present invention;
FIGS. 4a to 4b show schematic views of an embodiment of a touch screen according to the present invention;
FIG. 5 shows a schematic view of a streamer in an embodiment of a touch screen according to the present invention;
FIGS. 6a to 6c show schematic diagrams of another embodiment of a touch screen according to the present invention;
FIGS. 7a to 7b show schematic views of another embodiment of a touch screen according to the present invention;
FIG. 8 shows a schematic diagram of an embodiment of a camera module according to the present invention;
FIGS. 9a to 9b show schematic views of another embodiment of a camera module according to the present invention;
FIG. 10 shows a schematic view of the above embodiment of a camera module according to the present invention;
FIG. 11 shows a schematic diagram of another embodiment of a camera module in accordance with the present invention;
FIG. 12 shows a schematic view of another embodiment of a camera module according to the present invention;
FIG. 13 shows a schematic view of another embodiment of a camera module according to the present invention;
FIG. 14 shows a flow chart of a method of distance measurement according to the present invention;
FIGS. 15a to 15d show schematic diagrams of embodiments of a method of distance measurement according to the present invention;
FIG. 16 shows a schematic diagram of another embodiment of a camera module in accordance with the present invention;
FIG. 17 shows a schematic diagram of a prior art light field camera;
FIGS. 18a to 18b show schematic diagrams of prior art light field cameras;
FIG. 19 shows a refocusing schematic diagram of a prior art light field camera;
FIG. 20 shows a refocusing effect diagram of a prior art light field camera;
FIG. 21 shows a schematic diagram of an embodiment of a light field camera according to the invention;
FIG. 22 shows a refocusing effect diagram of an embodiment of a light field camera according to the present invention;
FIG. 23 shows a schematic diagram of another embodiment of a light field camera according to the invention;
FIG. 24 shows a schematic diagram of an embodiment of a multi-view depth camera according to the present invention;
FIG. 25 shows a flow chart of a prior art photolithography process; and
FIG. 26 shows a flow chart of a pad printing process.
Detailed Description
For a better understanding of the present application, various aspects of the present application will be described in more detail with reference to the accompanying drawings. It should be understood that this detailed description is merely illustrative of exemplary embodiments of the application and is not intended to limit the scope of the application in any way. Like reference numerals refer to like elements throughout the specification. The expression "and/or" includes any and all combinations of one or more of the associated listed items.
It should be noted that in this specification, the expressions first, second, etc. are only used to distinguish one feature from another feature, and do not represent any limitation of the feature. Thus, a first body discussed below may also be referred to as a second body without departing from the teachings of the present application.
In the drawings, the thickness, size and shape of the object have been slightly exaggerated for convenience of explanation. The figures are merely examples and are not drawn to scale.
It will be further understood that the terms "comprises," "comprising," "includes," "including," "having," and/or "containing," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, when a statement such as "at least one of" appears after a list of features, it modifies the entire list rather than an individual element of the list. Furthermore, when describing embodiments of the present application, "may" means "one or more embodiments of the present application." Also, the term "exemplary" is intended to refer to an example or illustration.
As used herein, the terms "substantially," "about," and the like are used as terms of approximation and not as terms of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by those of ordinary skill in the art.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1a to 1d show schematic views of an embodiment of an imaging assembly according to the invention. As shown in fig. 1a to 1d, the imaging assembly 1 includes a spacer 2 and a plurality of photoelectric converters 3. The spacer 2 is opaque and a plurality of light guide channels 21 are formed therein. The photoelectric converters 3 are parallel to and spaced apart from the spacers 2 and are respectively in one-to-one correspondence with the light guide channels 21, so that light emitted from an object to be imaged passes through the light guide channels 21 and reaches the photoelectric converters 3.
Fig. 2 shows a detailed schematic diagram illustrating a single light guide channel 21 in an embodiment of the imaging assembly 1 according to the invention. As shown in fig. 2, according to the principle of linear propagation of light, light from an object located on the object side can be received by the photoelectric converter 3 located on the other side of the spacer 2 through the light guide channel 21.
The spacer 2, including the light guide channel 21, and the photoelectric converter 3 together constitute the imaging assembly 1. The material surrounding the light guide channel 21 is the spacer, which blocks the light that strikes it; in other words, the light guide channel 21 restricts which light can pass through. The spacer may be made of a light absorbing material, such as a ferrous metal. In addition, in other embodiments, the spacer 2 may be coated with a light blocking layer, which may be a diffuse reflective coating or a light absorbing coating.
The size of the light guide channel 21 is preferably a size that does not cause significant diffraction, i.e., the size of the light guide channel 21 is preferably 800nm or more.
In some embodiments, the dimensions of the light guide channel 21 are instead chosen so that light passing through the light guide channel 21 is diffracted, i.e., only specific wavelengths are diffracted, thereby achieving a color filtering function.
Specifically, the light guide channel 21 is sized to diffract specific wavelengths in the incident light to achieve light splitting, thereby distributing the light of each wavelength band over pre-arranged photoelectric converters, i.e., so that light of a desired band reaches the photoelectric converter and light of an undesired band reaches a non-photosensitive region. After the photoelectric converters receive the light of their corresponding bands, the electrical signals they provide can be processed by an algorithm to synthesize a color image. This process realizes a function similar to a Bayer array, so the Bayer array on the photoelectric converter can be eliminated in embodiments according to the present invention, further reducing the size.
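As one way to picture this wavelength-selective behaviour, the standard single-slit relation sin θ = λ/d gives the angle of the first diffraction minimum for a channel of width d. This is a sketch of textbook diffraction physics, not the patent's specific channel design, and the function name is invented for illustration.

```python
import math

def first_minimum_angle(wavelength_nm, channel_width_nm):
    """Angle (degrees) of the first single-slit diffraction minimum,
    from sin(theta) = wavelength / width. Returns None when the channel
    is not wider than the wavelength (no such minimum exists)."""
    ratio = wavelength_nm / channel_width_nm
    if ratio >= 1.0:
        return None
    return math.degrees(math.asin(ratio))

# A roughly 700 nm wide channel bends red light (~650 nm) more strongly
# than green (~530 nm), so different bands can land on different
# photoelectric converters.
print(first_minimum_angle(650, 700))
print(first_minimum_angle(530, 700))
```

This also motivates the 800 nm threshold quoted above: channels well above visible wavelengths keep the diffraction angle small, while channels near the wavelength of interest diffract strongly.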
As shown in fig. 2, the light guide channel 21 has a height h and a width d, and the maximum angle of the range of the light on the object side passing through the light guide channel 21 is defined as 2α.
Wherein tan α = d/h, and thus α = arctan(d/h).
As shown, defining the collection angle of the light guide channel 21 as 2α, the light guide channel 21 with height h and width d restricts part of the object-side light. This restricted range is defined in the present invention as the collection angle of the light guide channel 21: object-side light can be transmitted to the image side through the light guide channel 21 only within the region of the collection angle. Object-side rays outside this range are blocked by the spacer. In addition, the object-side region is divided into an acquisition region and a non-acquisition region. The relationship between the acquisition region and the image-side receiving region is constrained on the one hand by the light guide channel 21 and on the other hand by the size of the photoelectric converter 3.
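The collection half-angle α from the relation tan α = d/h can be computed directly. The helper below is an illustrative sketch, not part of the patent; its name is invented for this example.

```python
import math

def collection_half_angle(d, h):
    """Half-angle alpha (degrees) of a light guide channel's collection cone,
    from tan(alpha) = d/h, where d is the channel width and h its height."""
    return math.degrees(math.atan(d / h))

# A square channel (d == h) gives alpha = 45 degrees,
# i.e. a full collection angle of 2*alpha = 90 degrees.
alpha = collection_half_angle(1.0, 1.0)
print(alpha, 2 * alpha)
```

Making the channel taller (larger h) or narrower (smaller d) shrinks the collection cone, which is the design lever the following paragraphs use.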
A photoelectric converter 3 is provided in the image-side receiving area so as to receive the object-side light. On this basis, the photosensitive surface on the image side is formed by one or more imaging assemblies 1. The object-side light is transmitted through the light guide channel 21 to the photosensitive surface and finally received by the photoelectric converter 3.
In the schematic diagrams shown in fig. 1a to 1d, only one cross section of an embodiment of an imaging assembly 1 according to the invention is shown. As can be seen from this cross section, the spacer 2 has a plurality of light guide channels 21 arranged uniformly therein. In this embodiment, the imaging assembly 1 may have a plurality of cross sections similar to the cross section, and thus, the light guide channels 21 may form an array of the light guide channels 21 in the spacer 2, and correspondingly, the photoelectric converters 3 respectively correspond to the positions of the light guide channels 21, and thus, the array of the photoelectric converters 3 is also formed.
Fig. 1a to 1d also show the relation between the position at which the photoelectric converter 3 according to the invention is arranged and the acquisition region on the object side. In this arrangement, the photoelectric converter 3 of the present invention has no lens to confine the received light; instead, the photoelectric converter 3 receives light from all directions.
The side of the photoelectric converter 3 facing the spacer 2 defines a photosensitive surface, which is located at an imaginary first dividing receiving surface formed on the image side by the collection range of each light guide channel 21.
The relationship between the position at which the photoelectric converter 3 is disposed and the acquisition region on the object side shown in fig. 1a to 1d includes the following:
in figs. 1a to 1c, the photosensitive surfaces are respectively located above, coincident with, and below the first dividing receiving surface, but in each case a photoelectric converter 3 receives light from only one of the light guide channels 21; and
in fig. 1d, the light passing through the light guide channels 21 partially overlaps, and one photoelectric converter 3 receives light from a plurality of light guide channels 21. In this case, the received overlapping ray information needs to be reconstructed into a complete image by a software algorithm.
In fig. 1b and 1c, some of the light information is not received by the photoelectric converter 3.
In fig. 1a, there is no overlapping area among the light received by the photoelectric converters 3 from the object side through the light guide channels 21, and the area of the photosensitive surface is the largest. In this case, each photoelectric converter receives all light from its corresponding light guide channel, and the light of that channel irradiates the entire light receiving surface of the corresponding photoelectric converter. Therefore, it is preferable to arrange the photoelectric converter 3 at this position; in the drawings, both the black bars and the striped bars represent photoelectric converters 3.
It is noted that the collection angle α is calculated as described above. In the actual design process, several basic parameters may be set, such as the height h and width d of the light guide channels 21, the collection angle α, and the distance between the light guide channels 21; these basic parameters are then used as design reference values, from which the positional relationship between the image-side first dividing receiving surface and the light guide channels 21 can be calculated step by step.
In this case, in order for the photoelectric converter to receive all the light from the corresponding light guide channel, and for the light of that channel to irradiate the entire light receiving surface of the corresponding photoelectric converter, the vertical distance H between the image-side first dividing receiving surface and the lower surface of the light guide channel 21 (i.e., the lower surface of the spacer) may be determined from the size D1 of the corresponding photoelectric converter by the following formula:
H = 0.5*D1/tan α - 0.5*h
For example, in fig. 1a, the collection angle α is 45°, i.e., the height h and the width d of the light guide channel 21 are equal, so H = 0.5*D1 - 0.5*h.
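A hypothetical helper can check the formula for H numerically, including the 45° special case above; the function name and the numeric values are illustrative assumptions only.

```python
def converter_offset(D1, d, h):
    """Vertical distance H between the image-side first dividing receiving
    surface and the lower surface of the light guide channel, given the
    photoelectric converter size D1 and the channel width d and height h,
    using tan(alpha) = d/h and H = 0.5*D1/tan(alpha) - 0.5*h."""
    tan_alpha = d / h
    return 0.5 * D1 / tan_alpha - 0.5 * h

# 45 degree case (d == h): H reduces to 0.5*D1 - 0.5*h.
print(converter_offset(D1=4.0, d=1.0, h=1.0))  # 1.5
```

Halving d (a narrower channel) doubles 0.5*D1/tan α, so the converter must sit further from the spacer for its whole surface to stay illuminated.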
In addition, for example, after the size of the photoelectric converter 3 is selected, the size of the light guide channel 21 is further determined, preferably a size at which no diffraction occurs. Once the position on the second dividing interface is satisfied, the distance between the photoelectric converters 3, and hence the spacing between the light guide channels 21, is determined; the photoelectric converters 3 are preferably arranged so as not to receive light from neighboring light guide channels 21.
Fig. 3 shows a flow chart of a method of manufacturing an imaging assembly 1 according to the invention.
The manufacturing method of the imaging assembly 1 includes the steps of:
S1: at least one light guide channel 21 is formed in the light-impermeable spacer 2;
S2: at least one photoelectric converter 3 is disposed parallel to and spaced apart from the spacer 2, in one-to-one correspondence with the light guide channels 21, so that light emitted from an object to be imaged passes through the light guide channels 21 and reaches the photoelectric converter 3.
In this method, the light guide channel 21 is surrounded by the spacer, which serves to block light irradiated onto it; that is, the light guide channel 21 restricts the range through which light passes. The spacer may be made of a light-absorbing material, such as a ferrous metal. In other embodiments, the spacer 2 may instead be coated with a light-blocking layer, which may be a diffuse-reflective coating or a light-absorbing coating. The size of the light guide channel 21 is preferably one at which no light diffraction occurs, that is, preferably 800 nm or more.
In other embodiments, the dimensions of the light guide channel 21 may instead be chosen such that light passing through it is diffracted, i.e. only certain wavelengths are diffracted, thereby achieving a color filtering function.
Fig. 4a to 4b show schematic views of an embodiment of a touch screen 4 according to the present invention, wherein fig. 4b is an enlarged schematic view of the portion A-A in fig. 4 a.
As shown in fig. 4a to 4b, the touch screen 4 includes at least one spacer 2, at least one photoelectric converter 3, and a streamer 5. The spacer 2 is opaque and at least one light guide channel 21 is formed therein. The photoelectric converters 3 are parallel to and spaced apart from the spacers 2 and may be disposed in one-to-one correspondence with the light guide channels 21, respectively, so that light emitted from an object to be imaged may reach the photoelectric converters 3 after passing through the light guide channels 21. The streamer 5 is located above the spacer 2.
In this embodiment, the light guide channel 21 is surrounded by the spacer, which serves to block light irradiated onto it; that is, the light guide channel 21 restricts the range through which light passes. The spacer may be made of a light-absorbing material, such as a ferrous metal. In other embodiments, the spacer 2 may instead be coated with a light-blocking layer, which may be a diffuse-reflective coating or a light-absorbing coating. The size of the light guide channel 21 is preferably one at which no light diffraction occurs, that is, preferably 800 nm or more.
In other embodiments, the dimensions of the light guide channel 21 may instead be chosen such that light passing through it is diffracted, i.e. only certain wavelengths are diffracted, thereby achieving a color filtering function.
A detailed schematic of the streamer 5 according to the invention is shown in fig. 5. As shown in fig. 5, the streamer 5 includes a streamer body 6, a light input section 7, and a light output section 8. The streamer body 6 comprises a total reflection plate 9. The light input section is located within the streamer body 6 and outputs light at an angle to the total reflection plate. The light emitted from the light input section is totally reflected within the streamer body 6 and is output from the light output section.
In the present embodiment, the streamer body 6 is implemented as a total reflection panel, defined here as a panel capable of totally reflecting light internally. In this way, light is reflected continuously inside the streamer 5, achieving the light-flowing effect; the streamer 5 thus has a light-flowing region inside.
In addition, in the present embodiment, the light input portion of the streamer 5, i.e. the light source, is located at one side of the streamer 5 and serves as the light input end. The other side of the streamer 5 serves as the light output end.
The light in the streamer 5 may be invisible light, such as near-infrared light, or visible light; as long as the incident angle is controlled, total reflection is not affected.
In this embodiment, the outside of the streamer 5 is preferably the external environment, i.e. ambient air. The refractive index of the streamer 5 is therefore preferably larger than that of air, so that the condition for total reflection is satisfied.
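The total-reflection condition above can be quantified by the critical angle: rays striking the surface at an angle (from the normal) larger than arcsin(n_outside/n_streamer) stay inside the streamer. The sketch below assumes illustrative refractive indices (glass about 1.5, water/sweat about 1.33); the function name is an assumption, not from the patent.

```python
import math

def critical_angle_deg(n_medium, n_outside=1.0):
    """Critical angle for total internal reflection at the interface
    between the streamer material and its surroundings; rays beyond
    this angle (measured from the surface normal) are totally reflected."""
    if n_medium <= n_outside:
        raise ValueError("total reflection requires n_medium > n_outside")
    return math.degrees(math.asin(n_outside / n_medium))

# Glass streamer (n ~ 1.5) against air: rays beyond ~41.8 deg stay inside.
print(round(critical_angle_deg(1.5), 1))
# Against sweat/water (n ~ 1.33) the critical angle rises, so many rays
# that were trapped now escape toward the finger, which is the sensing effect
# described for the input area.
print(round(critical_angle_deg(1.5, 1.33), 1))
```

This is why replacing air with a higher-index substance (sweat, skin contact) defeats total reflection at the touched spots.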
On the input area of the streamer 5, i.e. the upper surface of the streamer 5 shown in fig. 5, when a substance replaces the original external environment, the condition for total reflection in the streamer 5 no longer holds, so that light passes through the upper surface (input area) of the streamer 5 and reaches the user's finger. This occurs, for example, because of sweat on the surface of the user's finger, or even the skin texture of the finger itself, the refractive index of sweat being higher than that of air.
The finger surface itself is not uniform in height. Generally, it is divided into ridges and valleys, the ridges being skin textures that stand higher than the valleys. While the finger is in contact with the touch screen 4, the ridges on the finger surface contact the streamer 5 surface while the valleys do not; the streamer 5 surface is preferably a transparent medium, such as glass. Light irradiating the glass where a fingerprint ridge makes contact is therefore diffusely reflected, while light irradiating the glass corresponding to a fingerprint valley is totally reflected: because the valley does not contact the glass surface, air remains there and total reflection still occurs. Consequently, in the information captured by the photoelectric converter 3, the light intensity corresponding to fingerprint ridges is high, and that corresponding to fingerprint valleys is low.
In an embodiment, the photoelectric converter 3 is preferably a CCD or CMOS sensor. Taking the output image as an example, after the light signal is received, the portions corresponding to fingerprint ridges appear darker and the portions corresponding to fingerprint valleys appear lighter.
In this embodiment, when the user presses with a finger, the photoelectric converter 3 receives a changed light-intensity signal and outputs an excited signal at the corresponding position, measured relative to a reference signal. The intensity of natural ambient light may serve as the reference signal, which can be set according to different usage scenarios.
Based on the change of the excited signal of the photoelectric converter 3 relative to the reference signal, it can be determined that an object is pressing on the streamer body 6.
Further, the magnitude of the pressing force can be obtained from the change in the magnitude of the signal. Specifically, taking a fingerprint unit divided into relatively small areas on a finger as an example, the size of the contact area indicates the contact force: when the pressing force increases, the finger muscles deform, and the contact area between the skin texture and the upper surface increases.
In addition, since blood vessels are present in the finger and pulse over time, the contact between the finger and the touch screen 4 varies slightly; for example, the ridges of the finger texture affect the light differently as the blood vessels pulse. In this way, the touch screen 4 can be used for living-body detection.
Further, sweat pores in the finger texture behave like valleys: they do not affect total reflection when the finger contacts the touch screen 4, and therefore appear dark; this too can serve as a living-body detection method.
Fig. 6a to 6c show schematic diagrams of another embodiment of a touch screen 4 according to the invention. In this embodiment of the touch screen 4, the pressure is measured by means of an elastic structure 9 arranged above the corresponding light guide channel 21. In particular, as shown in fig. 6a, the elastic structure 9 is preferably a transparent material, such as a film.
As shown in fig. 6b to 6c, after the user presses the film with a finger, the position and shape of the film change accordingly; that is, pressing the elastic structure 9 changes its optical properties. In this way, the light information also changes correspondingly when pressing with different forces. In the present embodiment, the elastic structure 9 is preferably disposed above the imaging assembly 1.
As can be seen from fig. 6b to 6c, the light emitting units 10 are arranged alternately with the unit imaging assemblies 1, in a manner similar to the arrangement of light-emitting elements in an OLED screen. When the user's finger presses the elastic film, the film bends or deforms, changing its optical properties. In that case, part of the light originally emitted from the light emitting unit toward the elastic film is reflected to the photoelectric converter 3. As the pressing force increases, the amount of light received by the photoelectric converter 3 also increases, so the change in pressing force can be detected. Further, the pressing force can be measured by relating it to the amount of light information.
Fig. 7a to 7b show schematic diagrams of a further embodiment of a touch screen 4 according to the invention. In this embodiment, the pressing force against the touch panel 4 is measured in another way. This embodiment is similar to the embodiment shown in fig. 6b to 6c, also with a resilient structure 9 above the imaging assembly 1. In contrast, as shown in fig. 7a, the light emitting unit is not included in this embodiment, but a filling stop 11 is provided in the elastic structure 9.
Specifically, in an embodiment, a filling stop 11 is provided in the elastic film, and a single imaging assembly 1 receives an image of the stop. The size of the image formed by the stop varies with the pressing force. The size of the stop in the image formed in each unit imaging assembly 1 is correlated with the pressure, enabling measurement of the force application point and force area. In particular, when used in conjunction with an OLED screen, a partial region may be dedicated to measuring pressure.
The application also comprises a camera module. The camera module comprises the imaging assembly 1 and a display screen. Wherein the imaging assembly 1 is located below the display screen.
In this embodiment, a plurality of imaging assemblies 1, combined with an existing OLED, LCD, or LED screen, form a device capable of both photographing and displaying.
This embodiment suits the current trend toward full screens in mobile terminals such as mobile phones: the front camera is eliminated, so the screen-to-body ratio of the display can be further increased.
Taking an OLED screen as an example, it includes a substrate, a cathode layer, an anode layer, and a light emitting layer (organic light emitting diode, OLED). A newer technology based on OLED, flexible organic light emitting display technology (FOLED), may in the future enable highly portable, foldable display technology that can be adapted to the present invention.
In this embodiment, the imaging assembly 1 is used in combination with an OLED screen, and includes a substrate structure for carrying and packaging, a protective cover plate, a photoelectric converter 3 for receiving light and transmitting information, and the above-described OLED screen. The spacer is used for blocking light.
When used in combination with a FOLED, the structure again comprises a substrate, a cathode, an anode, and a light-emitting layer; because FOLED technology can produce a flexible body, the substrate is manufactured from a soft material, making the package portable and bendable. For example, a metal foil may be used as the FOLED substrate, the ITO film layer in the anode may be replaced by a flexible conductive polymer, and the whole FOLED structure may be packaged with multi-layer films.
In addition, in a practical structure, the substrate of the package may be a transparent or an opaque material, and an opaque substrate may preferably serve as the spacer.
In addition, it is noted that the anode layer is generally implemented as a light-transmitting ITO film, serving as the light-emitting side. In this configuration, the cathode layer may also act as the spacer.
In addition, in the inverted structure (IOLED), the cathode layer serves as the light-emitting side, so the anode layer may instead serve as the spacer.
By bending the flexible body, the photographing range can be increased, enabling shooting at multiple angles. The imaging assembly 1 may be disposed on the substrate, and in the circuit the anode and cathode of the OLED may be used as power sources. An OLED light-emitting layer may be disposed between every two light guide channels 21.
It should be understood that although in the present embodiment, the imaging assembly 1 is combined with an OLED screen, in a similar scheme, for example, the imaging assembly 1 may be combined with an LED screen, an LCD screen, without limiting the present invention.
Fig. 8 shows a schematic diagram of an embodiment of a camera module according to the invention. Specifically, when combined with an LED screen or an LCD screen, the liquid crystal layer 12 in the screen can act as a propagation channel that turns light on or off, so that the imaging assembly 1 can photograph the outside. The liquid crystal layer functions like the aperture of a camera: it controls the amount of light entering the light guide channel 21, and thereby the circle of confusion of the light on the image plane, so this combination can adjust the light input to blur the background. When the aperture becomes large (the amount of light input increases), the diameter of the circle of confusion becomes large, reducing the depth of field, i.e. the background does not image clearly; when the aperture becomes small (the amount of light input decreases), the diameter of the circle of confusion becomes small, increasing the depth of field, so the background images clearly.
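The aperture/blur relation described above can be illustrated with simple pinhole geometry: a point source at object distance s, imaged through an aperture of width d onto a sensor at distance v behind it, forms a geometric blur circle of diameter c = d*(s + v)/s. This model and all numbers below are illustrative assumptions, not parameters from the patent.

```python
def blur_circle_diameter(aperture, object_dist, image_dist):
    """Geometric blur (circle of confusion) of a point source imaged
    through a pinhole-like aperture: c = d * (s + v) / s, with aperture
    width d, object distance s, and aperture-to-sensor distance v (all
    in the same units). A larger aperture means a larger blur circle,
    i.e. a shallower depth of field."""
    return aperture * (object_dist + image_dist) / object_dist

# Doubling the liquid-crystal 'aperture' doubles the blur circle for the
# same geometry, matching the background-blurring behavior in the text.
print(blur_circle_diameter(0.001, 200.0, 1.0))   # small aperture
print(blur_circle_diameter(0.002, 200.0, 1.0))   # larger aperture, more blur
```

For distant objects (s much larger than v) the blur circle approaches the aperture width itself, which is why a small channel keeps everything acceptably sharp.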
Fig. 8 shows the adjustment of the depth of field range by controlling the liquid crystal layer. In this way, different effects of background blurring can be achieved according to different depth ranges.
In addition, in this background-blurring mode, the receiving range can be further expanded by controlling the spacing of the active photoelectric converters 3. For example, by operating the first and third photoelectric converters while turning off the second and fourth, intersecting light is no longer received, expanding the operating range of the photoelectric converters; that is, the object distance at which imaging is clear becomes larger.
In this embodiment, the light guide channel 21 is preferably a circular hole. Of course, since the light guide channel 21 may be extremely small compared to the object, other shapes of holes may be selected in cases where pinhole imaging is satisfactory. However, a symmetrical pattern is preferable, so that the light information collected by the photoelectric converter 3 through the light guide channel 21 can be symmetrical for later calculation and processing.
Light from a minute detail of the object on the object side passes through the light guide channel 21 after diffuse reflection. Since the light guide channel 21 restricts the range through which light passes, the diffusely reflected light forms a channel-shaped beam on the image side, and the image is effectively a superposition of the light from each minute detail passing through the light guide channel 21. Under this theory, selecting an aperture with an axisymmetric shape increases the resolution of the image formed by this superposition of fine details.
When the object is square and the light guide channel 21 is circular, the boundary of the image is circular, and the resolution is not high.
When the object is square and the light guide channel 21 is square, the boundary of the image is square, and the resolution is high.
In this embodiment, the image-side light of a single minute object-side detail passing through the light guide channel 21 is taken as the minimum resolution element.
In addition, by utilizing the light emitting function of the OLED screen itself, light irradiated onto an object is diffusely reflected by it and then received by the photoelectric converter 3 through the light guide channel 21. In this way, fingerprint recognition and self-timer shooting, for example, can be achieved.
A fingerprint recognition function may also be implemented in this embodiment, similarly to the imaging assembly 1 described above; specifically, the determination is made from the blocked image, and the photographed object and the size of the contact surface can be determined.
Fig. 9a to 9b show schematic views of another embodiment of a camera module according to the invention. In particular, fig. 9a to 9b show the imaging process of the camera module. As shown in fig. 9a to 9b, in this embodiment the object itself emits light or reflects light diffusely; the light travels in straight lines through the light guide channel 21, and each unit imaging assembly 1 performs pinhole imaging.
The light is received by the photoelectric converter 3 after passing through the light guide channel 21; the photoelectric converter 3 then processes the received signal and outputs an image of the object.
Referring to fig. 9b, when the object is at the parting line, it lies between the first imaging assembly 1 and the second imaging assembly 1, so that together they acquire complete information about the object. Therefore, the information received by the first and second imaging assemblies 1 only needs to be superimposed to obtain a complete picture of the object.
Referring to fig. 9a, when the object is outside the boundary, for example within the acquisition angles of the first through sixth imaging assemblies 1, the information acquired by these assemblies overlaps multiple times; by merging the overlapping parts of the images and appending the non-overlapping information, an image of the complete object can be output. This method uses a large number of photoelectric converters 3, but the imaging range is large.
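The merge-overlapping-and-append step can be sketched in one dimension. The fixed overlap width, the averaging rule for overlapping samples, and the function name are simplifying assumptions for illustration; a real module would register the overlap from the image content itself.

```python
def stitch_strips(strips, overlap):
    """Combine 1-D image strips from adjacent imaging assemblies whose
    fields of view overlap by a fixed number of samples: overlapping
    samples are averaged, non-overlapping samples are appended.
    A toy stand-in for the superposition described in the text."""
    result = list(strips[0])
    for strip in strips[1:]:
        for i in range(overlap):
            # average the repeated information from the two assemblies
            result[-overlap + i] = (result[-overlap + i] + strip[i]) / 2
        result.extend(strip[overlap:])
    return result

a = [1, 2, 3, 4]        # strip from the first imaging assembly
b = [3, 4, 5, 6]        # second assembly; its first two samples repeat a's last two
print(stitch_strips([a, b], overlap=2))   # -> [1, 2, 3.0, 4.0, 5, 6]
```

The same idea extends to 2-D pixel arrays and to the six-assembly case, at the cost of registering each pairwise overlap.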
The camera module photographs an object in this manner. For example, the screen can be moved across the surface of a business card or another object to scan it, so the surface information of the object can be photographed at close range with high accuracy.
Fig. 10 shows a schematic diagram of another embodiment of a camera module according to the invention. Unlike the above-described embodiments, in this embodiment an optical element that condenses parallel light rays is provided above each imaging assembly 1. The optical element is, for example, a convex lens 13, which further constrains the acquisition angle of the imaging assembly 1.
As shown in fig. 10, pinhole imaging is performed such that each imaging assembly 1 receives only parallel light rays. The collection angle of the imaging assembly 1 is therefore fixed, determined by the width of the collected parallel rays. Since the problem of overlapping information from objects outside the boundary line in the above embodiment is reduced, the image can be formed with a simple superimposing process after acquisition. In this way, a distant object can be photographed with high accuracy.
Fig. 10 shows the ideal case, in which the optical design lets the photoelectric converter 3 collect parallel rays of fixed width passing through the light guide channel 21, improving the utilization of the photoelectric converter 3. Alternatively, a different convex lens 13 may be used; although the overlapping area then increases, the collection angle becomes larger, so the photographing range increases.
In addition, it should also be appreciated that light rays may be concentrated by either the convex lens 13 corresponding to the photoelectric converter 3 or the concave lens facing away from the photoelectric converter 3.
As shown in fig. 11, this embodiment collects only the light from the parallel region on part of the object side, eliminating interference from excess stray light. For the same reason, concatenating the information of each photoelectric converter 3 during later operations, such as calculating the resulting image, is easier, so the calculation is simpler.
Fig. 12 shows a schematic view of another embodiment of an imaging module according to the invention. Unlike the above-described embodiments of the image pickup module, each of the imaging assemblies 1 in this embodiment is provided with a superlens 14 on the side close to the photoelectric converter 3 to converge light.
The superlens 14 concentrates light using nanostructures smaller than the wavelength of light. These structures may have different shapes, sizes and arrangements, so as to block, absorb, enhance, or refract photons, allowing the superlens 14 to focus light. Such a superlens 14 is arranged in the single imaging assembly 1, preferably above the photoelectric converter 3. The advantage is that light passing through the light guide channel 21 can be converged onto a smaller area, increasing its brightness. This approach is particularly suitable when the number of photoelectric converters 3 in the imaging assembly 1 is small, ensuring that each photoelectric converter 3 receives sufficient light information, so that photographing is brighter than without the superlens 14. The superlens 14 also exploits the cancellation of part of the light after diffraction to improve convergence and remove some stray light.
The superlens 14 may also implement optical filtering. Light of different wavelengths is diffracted according to the dimensions of the superlens 14, so that light can be collected while only light within a given wavelength range is received. With this arrangement, the Bayer filter in the camera module can be eliminated. The later RGB algorithm is defined by the designed diffraction locations of the different wavelengths, and removing the filter further reduces the module size.
In this embodiment, the photoelectric converter 3 in the imaging assembly 1 can perform color photographing when RGB pixels are selected; with monochrome pixels it is easier to manufacture and suitable for low-requirement photographing modes such as fingerprint recognition.
Of course, it should be understood that variations in this manner may also be combined with an LCD.
Fig. 13 shows a schematic view of another embodiment of an imaging module according to the invention. Unlike the previous embodiment of the camera module, each of the imaging assemblies 1 in this embodiment is provided with an optical path turning element 15 on the light guide channel 21 to turn the optical path.
In particular, adding a mirror by means of a MEMS device is preferred. The MEMS device can move the reflecting surface, enabling wide-angle shooting and a scanning mode that does not require moving the whole imaging device. As the shooting distance increases, the images shot by the single imaging assemblies 1 overlap; the partially overlapping images contain repeated information suitable for processing between the two images. Therefore, compared with rotating a mobile phone to take a panorama, the image shot by this camera module is more stable, and the pictures show no stitching traces.
In addition, in this embodiment the overlapping central region information is superimposed, so the resolution of the image center is high. Meanwhile, the peripheral range of the image is relatively unclear because only a few imaging assemblies 1 receive it, which produces edge blurring.
The present application also provides a method of distance sensing, i.e. long-range imaging. Fig. 14 shows a flow chart of a method of distance measurement according to the invention.
The method comprises the following steps:
S1: a plurality of light guide channels 21 are formed in the light-impermeable spacer 2;
S2: a plurality of photoelectric converters 3 are arranged parallel to and spaced apart from the spacer 2, in one-to-one correspondence with the light guide channels 21, so that light emitted from an object to be imaged passes through the light guide channels 21 and reaches the photoelectric converters 3;
S3: a plurality of images of the object to be imaged, formed through the plurality of light guide channels 21, are obtained from the electric signals output by the photoelectric converters 3; and
S4: the distance to the object to be imaged is calculated from the repetition degree of the plurality of images.
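Step S4 can be illustrated with a toy geometric model: two adjacent assemblies spaced `pitch` apart, each with collection half-angle α, see a field of width w = 2*z*tan(α) at distance z, so the repeated fraction between their images is r = (w - pitch)/w, which inverts to z = pitch / (2*tan(α)*(1 - r)). This model, the function name, and all numbers are illustrative assumptions, not the patent's calibration procedure.

```python
import math

def distance_from_overlap(repetition, pitch, alpha_deg):
    """Invert the simple pinhole-geometry model above: given the
    repetition degree r (fraction of repeated pixels, 0 <= r < 1),
    the assembly pitch, and the collection half-angle, return the
    estimated object distance z = pitch / (2*tan(alpha)*(1 - r))."""
    t = math.tan(math.radians(alpha_deg))
    return pitch / (2.0 * t * (1.0 - repetition))

# A higher repetition degree between the two output images means the
# object is farther away; with a 45 degree half-angle and 1 mm pitch:
print(round(distance_from_overlap(0.5, 1.0, 45.0), 3))   # nearer object
print(round(distance_from_overlap(0.9, 1.0, 45.0), 3))   # farther object
```

In practice the boundary distances would come from the pre-calibration step described below rather than from an analytic model.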
Specifically, when the imaging assembly 1 is combined with the OLED screen as in the above embodiment, the OLED screen emits light; the light is diffusely reflected by the photographed object and then received by the imaging device, which outputs an image of the object. In the imaging mode described in this application, the images of the object repeat to different degrees at different boundaries. The repetition degree refers to the repeated pixel area of all or part of the object, and from this repeated pixel area the distance between the object and the camera module can be determined.
Fig. 15a to 15d show schematic diagrams of embodiments of a method of distance measurement according to the invention.
In this embodiment, a single imaging assembly 1 captures an object and outputs an image. Then, the degree of repetition of the object or the local repetition of the object in the images output by the different individual imaging assemblies 1 is determined.
Referring to fig. 15a to 15d, after the different images are identified, the degree of repetition is determined with respect to the different boundaries, and the repetition degree of the images is calculated by calibrating the boundaries.
For example, as shown in fig. 15a, the object or an object part is photographed only between the first and second boundary lines. In actual use, a user may photograph an object near the imaging assembly 1 for pre-calibration; for example, the object may be placed at a predetermined distance, say 20 cm, which is taken as the first dividing line, and photographed there.
For this, the object surface must have a color different from the external environment, serving as a feature by which the object is recognized, so that the distance can later be judged from the repetition degree of the object's information across the image information.
Since the repetition rate differs between the different demarcations, this too is preset by calibration. The benefit of pre-calibration is that the distance can be detected in real time after the object moves. When shooting, the corresponding focal length can be changed once the distance is identified, so the object is photographed in time; this enables dynamic or even real-time focusing. Likewise, when photographing common everyday objects, or pre-stored subjects such as the user during live broadcast or video shooting, a fast and clear output can be achieved as the distance changes continuously.
Fig. 16 shows a schematic view of another embodiment of a camera module according to the invention. As mentioned above, OLED screens can be implemented as flexible screens. As shown in fig. 16, in this embodiment a driving member 16 is provided on the substrate of the OLED screen to adjust the distance between, and the curvature of, the photoelectric converter 3 and the light guide channel 21 in a single imaging assembly 1. The driving member 16 is preferably provided on the substrate below the photoelectric converter 3.
In this embodiment, the distance of the photosurface from the light guide channels 21 may be adjusted or the depth of field of different individual imaging assemblies 1 may be made different.
In addition, curving the photosensitive surface allows part of it to be brought into focus.
As shown, a background-blurred image can be captured in this way.
In another embodiment, similarly to the OLED approach, the color filter in an LCD structure may be shared with the imaging device according to the present application; that is, the display screen and the camera module share one color filter structure. Color stitching may also be implemented in this manner. Of course, lenses may be used instead of filters.
In another embodiment, the aperture of the light guide channel 21 is controlled. When the aperture is close to a certain wavelength, light of that wavelength is diffracted as it passes through the light guide channel 21. By exploiting the selectivity of the diffraction holes toward light in a specific wavelength band, a color filtering function over a certain wavelength range can be achieved, so that the color filter can be omitted.
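The wavelength selectivity of a channel whose aperture approaches the wavelength can be illustrated with the standard single-slit relation sin(θ) = λ/d for the first diffraction minimum; when λ > d there is no propagating minimum and that wavelength is strongly suppressed. This model and the 600 nm example aperture are illustrative assumptions, not the patent's design values.

```python
import math

def first_minimum_angle_deg(wavelength_nm, aperture_nm):
    """First diffraction minimum of a slit-like channel, from
    sin(theta) = lambda/d; returns None when lambda > d, i.e. the
    channel strongly filters that wavelength (illustrative model)."""
    ratio = wavelength_nm / aperture_nm
    if ratio > 1.0:
        return None
    return math.degrees(math.asin(ratio))

# A 600 nm channel suppresses red light (~650 nm) while blue light
# (~450 nm) still passes, with a finite diffraction angle.
print(first_minimum_angle_deg(650, 600))             # prints None
print(round(first_minimum_angle_deg(450, 600), 1))   # prints 48.6
```

This is also consistent with the earlier statement that channels of 800 nm or more avoid visible-light diffraction: all visible wavelengths are then well below the aperture size.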
The application also provides a light field camera. The light field camera may have a microlens array, and may further include: a spacer 2, the spacer 2 being light-impermeable and having at least one light-guiding channel 21 formed therein; and at least one photoelectric converter 3, the photoelectric converters 3 may be parallel to and spaced apart from the spacers 2, and may be in one-to-one correspondence with the light guide channels 21, respectively.
The microlens array may be located between the spacer 2 and the photoelectric converter 3, and light emitted from the object to be imaged passes through the light guide channel 21 and the microlens array to reach the photoelectric converter 3.
Fig. 17 shows a schematic diagram of a prior art light field camera. As shown, the conventional light field camera records light rays by adding a microlens array at the focal plane of an ordinary lens; digital refocusing is then realized by a post-processing algorithm.
The light from the photographed object is imaged after passing through the main lens, then passes through the microlens array and is imaged again on the pixels of the photoelectric converter 3 behind the array. After passing through the main lens, the object can be imaged onto different photosensitive-chip pixel regions through each microlens in the microlens array.
Any light ray passes, along its optical path, through a surface element of the main lens, a surface element of the microlens array, and a photosensitive pixel, and these form a conjugate relation. Through this relation, the direction information of the ray can be obtained. Fig. 17 takes a planar space as an example; the same holds in three-dimensional space.
Fig. 18a to 18b show schematic diagrams of prior art light field cameras.
Since the focal length of the microlenses is much smaller than that of the main lens, the main lens can be considered to be located at an infinite distance from the microlenses. Thus, for example, one vertical-bar region on the main lens in fig. 18a can be considered to be focused, after passing through one microlens, onto a pixel just behind that microlens; and since a microlens is only a few percent the size of the main lens, that one photosensitive-chip pixel can be considered to collect all the light information within the corresponding colored line. In this way one light ray inside the camera is recorded. Similarly, each of the other pixels also corresponds to one light ray.
Similarly, as shown in fig. 18b, the photosensitive-chip pixels behind each microlens can be regarded as receiving light transmitted from different regions of the main lens. Since the position of each pixel is fixed, the position of the microlens corresponding to each pixel is also fixed, and since light propagates in straight lines, the direction information of each ray can be obtained.
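The pixel-to-microlens conjugate relation can be sketched in a 2-D cross-section: given a pixel position and the center of its microlens, the recorded ray direction is the line joining them. The coordinate convention and gap value here are assumptions for illustration.

```python
import math

def ray_direction(pixel_x, lens_x, gap):
    """Unit direction of the ray recorded by a pixel at pixel_x sitting a distance
    `gap` behind a microlens centered at lens_x (2-D cross-section)."""
    dx = lens_x - pixel_x
    norm = math.hypot(dx, gap)
    return (dx / norm, gap / norm)
```

A pixel directly under its lens center records an on-axis ray; offset pixels record correspondingly tilted rays.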
Light field cameras are also often required to have refocusing functionality. Fig. 19 shows a refocusing schematic diagram of a prior art light field camera.
Since the direction and intensity information of all rays has already been obtained through the microlens array and the photoelectric converter 3, focusing of the image on different planes can be achieved by a simple similar-triangle transformation of all rays, using an algorithm matched to the arrangement of the microlens array 17 and the photosensitive-chip pixels.
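The similar-triangle refocusing can be sketched as a shift-and-add over a two-plane light field parameterization; the nearest-sample resampling and the toy array shape are assumptions for illustration.

```python
def refocus(lightfield, alpha):
    """lightfield[u][x]: intensity of the ray through main-lens position u and sensor
    position x. Refocusing to a plane scaled by alpha sums, for each output pixel x',
    the samples L[u][u + (x' - u)/alpha] (nearest-neighbour resampling)."""
    n_u, n_x = len(lightfield), len(lightfield[0])
    image = [0.0] * n_x
    for xp in range(n_x):
        for u in range(n_u):
            x = int(round(u + (xp - u) / alpha))
            if 0 <= x < n_x:
                image[xp] += lightfield[u][x]
    return image
```

With alpha = 1 the sum degenerates to integrating over the lens plane, i.e. the ordinary in-focus image.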
Fig. 20 shows a refocusing effect diagram of a prior art light field camera. As shown, during shooting the camera was focused on the subject at the rear, and the focus is then shifted within the image by refocusing.
Fig. 21 shows a schematic diagram of an embodiment of a light field camera according to the invention. In this embodiment, an array of light guide channels 21 is employed instead of the main lens of a conventional light field camera.
Because the light field camera has a refocusing function, the camera module does not need to be precisely calibrated during assembly; refocusing of a defocused image can be realized by the algorithm.
Carrying the light field camera thus lowers the camera module's requirement on assembly precision, reducing production cost.
Fig. 22 shows a refocusing effect diagram of an embodiment of a light field camera according to the invention. An f/4 aperture is used in the image shown on the left of fig. 22. Because the depth of field is small, when the people in the middle of the picture are in focus, the people in the lower part of the picture cannot be imaged clearly.
With the smaller f/22 aperture in the middle image of fig. 22, the depth of field becomes large and most people are imaged clearly. However, due to the insufficient amount of light, more noise appears and the imaging quality deteriorates.
In the image shown on the right of fig. 22, sufficient image depth information is acquired at shooting time. A set of refocused images at different focal depths is then obtained by the refocusing algorithm; the sub-image received by each photosensitive-chip pixel region is traversed over these refocused images, the depth of the refocused image in which that sub-image is sharpest is taken as the depth of the sub-image, and the sub-image is refocused accordingly. The refocused sub-images are then stitched together, giving the whole image a better imaging effect: a large depth of field is achieved without sacrificing brightness or introducing noise.
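The per-sub-image depth selection can be sketched as follows; the variance sharpness metric and the toy stacks are assumptions for illustration.

```python
def variance_sharpness(img):
    """Simple contrast metric: variance of the pixel values of a flattened sub-image."""
    m = sum(img) / len(img)
    return sum((v - m) ** 2 for v in img) / len(img)

def best_depth(refocused_subimages, sharpness=variance_sharpness):
    """refocused_subimages: {depth: this sub-image as refocused at that depth}.
    Return the depth at which the sub-image scores highest under the metric."""
    return max(refocused_subimages, key=lambda d: sharpness(refocused_subimages[d]))
```

The flat sub-image loses to the high-contrast one, so the depth that produced the contrasty refocus wins.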
The above describes the situation when the light field camera is combined with a traditional lens imaging system, in which differences in depth (object distance) affect imaging sharpness. A sub-pixel obtains its depth by traversing the refocused images for the sharpest one, so an exact depth cannot be obtained.
The depth accuracy of a sub-pixel is affected by the depth distribution of the refocused image sequence and also by the sharpness evaluation algorithm.
In this embodiment, the main lens is replaced by an array of light guide channels 21. Changing the imaging depth (object distance) of the array only changes the image size and the image acquisition range, so images at different depths are all sharp. Algorithmically, depth information can be obtained by taking one image with a larger acquisition range (that is, a smaller magnification, preferably an image containing the depth to be acquired) and comparing each sub-pixel image with the corresponding region in that image to obtain the size ratio.
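The size-ratio comparison can be sketched with a pinhole model, under which the image of a fixed-size object shrinks in inverse proportion to its distance; the reference depth and pixel sizes below are assumptions for illustration.

```python
def depth_from_scale(ref_size_px, sub_size_px, ref_depth_mm):
    """Pinhole model: apparent size is inversely proportional to object distance,
    so sub_depth = ref_depth * ref_size / sub_size for the same object feature
    measured in a reference image and in a sub-image."""
    return ref_depth_mm * ref_size_px / sub_size_px
```

A feature appearing twice as large as in the reference image is half as far away, which is why comparing magnifications yields depth directly rather than by searching a refocus stack.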
Compared with light field cameras employing a main lens, embodiments according to the invention have the following advantage:
with the main lens replaced by an array of light guide channels 21, all images have consistent sharpness, and all photosensitive-pixel sub-images can be traversed by taking only one image at a certain depth, so the amount of computation is small. In contrast, the depth of each sub-pixel obtained with a main lens is a depth from the pre-computed refocused image sequence; its accuracy depends on the step length of the refocused depth values, and the more depths acquired, the larger the amount of computation.
In other words, the main-lens scheme first prepares a set of answers and then compares each photosensitive-chip sub-image against them, taking the depth of the closest answer; whether the depth calculation is accurate depends on whether the answer library is complete. The scheme using the array of light guide channels 21 instead derives all answers from a single reference with the largest information content, against which every sub-pixel image of the photosensitive chip can solve its depth.
In this embodiment the depth is calculated by comparing image size magnifications: compared with the sharpness comparison used by a traditional light field camera, this basis for judgment is more accurate, and so is the obtained depth.
In addition, if the object distance is smaller than one focal length of the main lens, a virtual image is formed, and its rays pass through the microlens array and fall on the chip together with the rays that form real images of other parts of the scene. Because virtual and real images require different refocusing algorithms (the virtual image needs additional processing), it is difficult for the chip to distinguish the two kinds of rays and process them differently, so a light field camera using a traditional lens imaging system can hardly achieve macro shooting.
In this embodiment, the lens in front of the traditional light field camera is replaced by light-transmitting channels. Regardless of the relation between object distance and focal length, a real image is formed, and a unified image algorithm can be used for refocusing, thereby realizing shooting with both strong macro capability and large depth of field.
In addition, a disadvantage of light field cameras is insufficient spatial resolution. With the same number of pixels, a conventional camera records a two-dimensional image and makes full use of every pixel. A light field camera records a four-dimensional light field and then integrates it into a two-dimensional image; information is lost in the integration (a planar lattice is reduced to a line lattice), the effective pixel count of the two-dimensional image drops, and the result is insufficient spatial resolution.
The spatial resolution is proportional to the number of microlenses. If a conventional mobile phone camera module carried a light field imaging system, the maximum number of microlenses would be limited by the light flux, that is, by the diaphragm aperture, which has very limited room for improvement in lens optical design. The light flux of an optical system imaged through the light guide channels 21, by contrast, can be greatly increased by expanding the distribution of the channels, which largely remedies the spatial-resolution deficiency of the light field camera.
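The spatial-resolution penalty described above can be illustrated numerically; the sensor size and angular sampling figures are assumptions for illustration.

```python
def lightfield_spatial_resolution(total_pixels, angular_samples_per_lens):
    """Each microlens spends angular_samples_per_lens sensor pixels on ray
    directions, leaving one spatial sample per microlens after integration."""
    return total_pixels // angular_samples_per_lens

# e.g. a 12 MP sensor with 10x10 angular samples per microlens keeps only
# 120 000 spatial samples in the integrated 2-D image
```

This is why increasing the channel distribution (and hence the light flux and usable microlens count) directly buys back spatial resolution.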
Fig. 23 shows a schematic diagram of another embodiment of a light field camera according to the invention. In this embodiment, a high-end lens is paired with a large-format photosensitive chip. This embodiment differs from the one above in that the above embodiment replaces the main lens of a conventional light field camera with an array of light guide channels 21, whereas the present embodiment replaces the microlens array with an array of light guide channels 21.
After the microlens array of the light field camera is replaced by the array of light guide channels 21, the extensibility of matching the array with the screen allows a larger photosensitive chip to be carried to suit the performance of the high-end lens, so that lens design is no longer limited by the photosensitive area of the chip.
The invention further provides a multi-view depth camera, which achieves depth recognition of objects within an overlapping range by correspondingly overlapping the acquired images, using light guide channels 21 that are staggered with respect to one another. Fig. 24 shows a schematic diagram of an embodiment of a multi-view depth camera according to the present invention. As shown, the multi-view depth camera includes: a spacer 2, the spacer 2 being opaque and having a plurality of light guide channels 21 formed therein; and a plurality of photoelectric converters 3, wherein the photoelectric converters 3 may be parallel to and spaced apart from the spacer 2 and may be in one-to-one correspondence with the light guide channels 21, so that light emitted from the object to be imaged reaches the photoelectric converters 3 after passing through the light guide channels 21. The central axes of the light guide channels 21 may be staggered with respect to each other.
Because included angles are provided between the light guide channels 21, images of the object at different angles are superimposed on each other during imaging to form a depth image of the object. The triangle theorem can be used to measure the distance of the object, and further to measure feature information of the object's surface. This approach requires that the object's surface differ from the external environment, so that the difference can serve as a basis for judgment.
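The triangle-theorem ranging over staggered channels can be sketched as a stereo baseline; the geometry values below are assumptions for illustration.

```python
def depth_from_disparity(baseline_mm, gap_mm, disparity_px, pixel_pitch_mm):
    """Two channels separated by baseline_mm, each a distance gap_mm above its
    photoelectric converter, see the same object point shifted by disparity_px
    pixels between their images; similar triangles give
    z = baseline * gap / (disparity * pixel_pitch)."""
    return baseline_mm * gap_mm / (disparity_px * pixel_pitch_mm)
```

Smaller disparities correspond to farther points, which is the usual stereo relation and explains why distinguishable surface texture is needed to find the correspondence at all.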
The application also provides a pixel color filter array element matched with the light guide channel for use. The pixel color filter array may include: a substrate; a dielectric layer attached to the substrate; and a plurality of pixel color filters attached to the dielectric layer and forming an array. The dielectric layer is one of the photoelectric converter 3 and the display screen.
The application also provides a method for forming the pixel color filter array. The method of forming a pixel color filter array device may include the steps of: setting a substrate; attaching a dielectric layer to a substrate; transferring the three-color filters to a carrier plate according to an RGB array in a pad printing mode to form a color filter array; coating transparent adhesive materials on the substrate; and integrally bonding the color filter array on the carrier plate to the dielectric layer. Wherein the dielectric layer is one of the photoelectric converter 3 and the display screen.
Fig. 24 shows a flow chart of a prior art photolithography process.
It is known in the art that the fabrication of photomasks is a critical part of the photolithography process: it is the most expensive part of the process and one of the bottlenecks limiting the minimum line width. Conventional photolithography requires large-area masks when fabricating large-area chip arrays.
The imaging principle of the photosensitive chip requires a Bayer color filter array over the pixels, and since there are three kinds of color filters, RGB (taking RGB channels as an example), the color filters must be applied to the photosensitive-chip pixels through three photolithography passes. As shown in fig. 24, after six process steps, one color filter material is applied into the photoresist gaps of a specific pattern and the photoresist is then stripped; after repeating the patterning process, a second color filter material is applied, and finally the third color filter material is processed.
Because three similar sets of processes are required, there are certain limitations on the choice of color filter materials and processing techniques: a subsequent photolithography pass must not affect the previously formed color filter regions. For example, thermoplastic (thermally reversible curing) materials cannot be selected, nor can the color filter be obtained by a solvent-evaporation process, since the solute deposited by an earlier solvent-evaporation step could be redissolved by a later solvent.
In addition, the utilization rate of the RGB color filter material is low in the traditional process. A layer of color filter material must be evaporated or sputtered over the whole surface of a photosensitive chip covered with patterned photoresist; the photoresist is then removed by a stripping process, which carries away the color filter material attached to its surface, so that only the material deposited in the photoresist grooves remains. The color filter material attached to the photoresist surface is therefore wasted.
In this embodiment of the present invention, a pad printing method is adopted instead. The color filter material is first made into a whole plate by vapor deposition or the like; in the pad printing process, this plate is placed on an elastic carrier with a certain elasticity and cut into the required units by laser cutting or the like. The pad printing head is pressed down, the color filter units attach to it, and the spacing between the units is increased. The head is then further expanded by inflation or mechanical support, so that the spacing between the filter units increases further. Without this expansion, the color filter units transferred from the head to the intermediate transfer plate would simply retain the spacing they had on the elastic carrier plate.
A specific flow of the pad printing process is shown in fig. 25. In this flow, the first color filter array 31 is first disposed on the intermediate carrier plate 30; then, through three sets of transfer steps using the transfer heads 32, the three color filters are transferred from the intermediate carrier plate 30 to the elastic carrier plate 33 in the desired RGB arrangement to form the second color filter array 34, the transfer heads 32 being inflated during transfer so that the gaps between the color filters of the first color filter array 31 are adapted to the gaps between the color filters of the second color filter array 34. Transparent adhesive is then spin-coated on the photosensitive chip and the whole RGB color filter array on the carrier plate is bonded to the photosensitive chip array. No complex photolithography process, auxiliary components, or equipment is needed, and the material and forming process of the color filter can be chosen freely.
In this embodiment, the degree and shape of expansion of the pad printing head are set by appropriate mechanical support or inflation, so that the arrangement of the color filter units when transferred onto the transfer carrier plate equals the arrangement of the same-color filter units required on the photosensitive chip.
In addition, by appropriately setting the regions of the pad printing head that adhere to the color filter material, all of the whole-plate color filter units can be utilized; the only material loss is the laser-cutting loss during dicing.
The three color filters are transferred to a carrier plate in the required RGB arrangement through three sets of pad printing steps; transparent adhesive is then spin-coated on the photosensitive chip, and the RGB filter array on the carrier plate is bonded to the photosensitive chip array as a whole. Transfer is not performed directly onto the photosensitive chip because the adhesive must be spin-coated in order to distribute it uniformly.
The adhesion of the color filter material to each surface satisfies: transfer carrier plate > pad printing head > elastic carrier plate.
Those skilled in the art will readily conceive that similar processes can be used wherever an array arrangement of color filter materials or chips is required and a photolithography process would otherwise be needed, such as for LEDs.
The above description is only illustrative of the preferred embodiments of the present application and of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of features described above; it is also intended to cover other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example embodiments in which the above features are interchanged with technical features of similar function disclosed in this application (but not limited thereto).

Claims (57)

1. An imaging assembly, comprising:
a spacer that is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter, the photoelectric converters are parallel to the spacer and spaced apart from the spacer, and are respectively arranged in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the corresponding photoelectric converters;
the spacer forms a plurality of light guide channels, and the light guide channels form a light guide channel array in the spacer; the size of the light guide channels is set so that specific wavelengths in the light passing through the light guide channels are diffracted so as to split the light, so that light of a specific waveband reaches a predetermined photoelectric converter to realize a color filtering function similar to a Bayer array; each photoelectric converter receives the light from one light guide channel, and the distance h between the light guide channel and the photoelectric converter, the size D1 of the photoelectric converter, the height H of the light guide channel and the maximum angle 2α of the light emitted by an object passing through the light guide channel satisfy: h = 0.5×D1/tanα - 0.5×H.
2. The imaging assembly of claim 1, wherein the spacer is made of a light absorbing material.
3. The imaging assembly of claim 1, wherein the photoelectric converter receives all light from the corresponding light guide channel, and the corresponding light guide channel illuminates the entire light receiving surface of the corresponding photoelectric converter.
4. The imaging assembly of claim 1, wherein the spacer is coated with a light blocking layer.
5. The imaging assembly of claim 4, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
6. A method of making an imaging assembly, comprising the steps of:
forming at least one light guide channel in the opaque spacer;
at least one photoelectric converter is arranged in parallel with the spacer and at intervals, and corresponds to the light guide channels one by one respectively, so that light emitted by an object to be imaged reaches the photoelectric converter after passing through the light guide channels;
the spacer forms a plurality of light guide channels, and the light guide channels form a light guide channel array in the spacer; the size of the light guide channels is set so that specific wavelengths in the light passing through the light guide channels are diffracted so as to split the light, so that light of a specific waveband reaches a predetermined photoelectric converter to realize a color filtering function similar to a Bayer array; each photoelectric converter receives the light from one light guide channel, and the distance h between the light guide channel and the photoelectric converter, the size D1 of the photoelectric converter, the height H of the light guide channel and the maximum angle 2α of the light emitted by an object passing through the light guide channel satisfy: h = 0.5×D1/tanα - 0.5×H.
7. The method of claim 6, wherein the spacer is made of a light absorbing material.
8. The method of claim 6, wherein the photoelectric converter receives all light from the corresponding light guide channel, and the corresponding light guide channel illuminates the entire light receiving surface of the corresponding photoelectric converter.
9. The method of claim 6, wherein the spacer is coated with a light blocking layer.
10. The method of claim 9, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
11. A touch screen, comprising:
a spacer that is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter, the photoelectric converter is parallel to the spacer and spaced apart from the spacer, and is respectively arranged in one-to-one correspondence with the light guide channels, so that the light emitted by the object to be imaged reaches the photoelectric converter after passing through the light guide channels; and
a streamer located above the spacer, comprising:
a fluid body including a total reflection plate;
a light input part which is positioned in the streamer and outputs light which is angled with the total reflection plate; and
A light output section, wherein the light emitted from the light input section is totally reflected within the fluid body and is output from the light output section;
the size of the light guide channels is set so that specific wavelengths in light passing through the light guide channels are diffracted so as to split the light, so that light of a specific waveband reaches a predetermined photoelectric converter to realize a color filtering function similar to a Bayer array; each photoelectric converter receives light from one light guide channel, and the distance h between the light guide channel and the photoelectric converter, the size D1 of the photoelectric converter, the height H of the light guide channel and the maximum angle 2α of the light emitted by an object passing through the light guide channel satisfy: h = 0.5×D1/tanα - 0.5×H.
12. The touch screen of claim 11, wherein the spacers are made of a light absorbing material.
13. The touch screen of claim 11, wherein the photoelectric converter receives all light from the corresponding light guide channel, and the corresponding light guide channel irradiates the entire light receiving surface of the corresponding photoelectric converter.
14. The touch screen of claim 11, wherein the spacer is coated with a light blocking layer.
15. The touch screen of claim 14, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
16. A touch screen, comprising:
a spacer that is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter, the photoelectric converter is parallel to the spacer and spaced apart from the spacer, and is respectively arranged in one-to-one correspondence with the light guide channels, so that the light emitted by the object to be imaged reaches the photoelectric converter after passing through the light guide channels;
a transparent elastic mechanism located above the spacer; and
a light source on a side of the spacer facing the transparent elastic mechanism and emitting light to the transparent elastic mechanism;
the size of the light guide channels is set so that specific wavelengths in light passing through the light guide channels are diffracted so as to split the light, so that light of a specific waveband reaches a predetermined photoelectric converter to realize a color filtering function similar to a Bayer array; each photoelectric converter receives light from one light guide channel, and the distance h between the light guide channel and the photoelectric converter, the size D1 of the photoelectric converter, the height H of the light guide channel and the maximum angle 2α of the light emitted by an object passing through the light guide channel satisfy: h = 0.5×D1/tanα - 0.5×H.
17. The touch screen of claim 16, wherein the spacers are made of a light absorbing material.
18. The touch screen of claim 16, wherein the photoelectric converter receives all light from the corresponding light guide channel, and the corresponding light guide channel illuminates the entire light receiving surface of the corresponding photoelectric converter.
19. The touch screen of claim 16, wherein the spacer is coated with a light blocking layer.
20. The touch screen of claim 19, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
21. The touch screen of claim 16, wherein the transparent elastic mechanism is a transparent film.
22. A touch screen, comprising:
a spacer that is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter, the photoelectric converter is parallel to the spacer and spaced apart from the spacer, and is respectively arranged in one-to-one correspondence with the light guide channels, so that the light emitted by the object to be imaged reaches the photoelectric converter after passing through the light guide channels;
the transparent elastic mechanism is positioned above the spacer, and an opaque blocking piece is arranged in the transparent elastic mechanism;
The size of the light guide channels is set so that specific wavelengths in light passing through the light guide channels are diffracted so as to split the light, so that light of a specific waveband reaches a predetermined photoelectric converter to realize a color filtering function similar to a Bayer array; each photoelectric converter receives light from one light guide channel, and the distance h between the light guide channel and the photoelectric converter, the size D1 of the photoelectric converter, the height H of the light guide channel and the maximum angle 2α of the light emitted by an object passing through the light guide channel satisfy: h = 0.5×D1/tanα - 0.5×H.
23. The touch screen of claim 22, wherein the spacers are made of a light absorbing material.
24. The touch screen of claim 22, wherein the photoelectric converter receives all light from the corresponding light guide channel, and the corresponding light guide channel illuminates the entire light receiving surface of the corresponding photoelectric converter.
25. The touch screen of claim 22, wherein the spacer is coated with a light blocking layer.
26. The touch screen of claim 25, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
27. A camera module, comprising:
the imaging assembly of any of claims 1-5; and
the display screen is provided with a display screen,
wherein the imaging assembly is located below the display screen.
28. The camera module of claim 27, wherein the display screen is one of an OLED screen, an LCD screen, and an LED screen.
29. The camera module of claim 28, wherein the substrate in the OLED screen forms a spacer.
30. The camera module of claim 28, wherein the cathode layer in the OLED screen forms a spacer.
31. The camera module of claim 28, wherein the anode layer in the OLED screen forms a spacer.
32. The camera module of claim 27, wherein an optical element that concentrates light is disposed over each light guide channel in the spacer of the imaging assembly.
33. The camera module of claim 32, wherein the optical element is a convex lens.
34. The camera module of claim 28, wherein a superlens that concentrates light is disposed over each of the photoelectric converters of the imaging assembly.
35. The camera module of claim 34, wherein an optical path turning element is disposed above the light guide channel.
36. The camera module of claim 35, wherein the optical path turning element comprises a MEMS device and a mirror.
37. The camera module of claim 28, wherein the camera module is located on a substrate in the OLED screen, the substrate having a driver disposed thereon, the driver adjusting a distance between a photoelectric converter of the imaging assembly and the spacer.
38. The camera module of claim 28, wherein the color filters in the LCD screen are integrated as color filters of the imaging assembly.
39. The camera module of claim 27, wherein the aperture of the light guide channel is set close to a specific wavelength of light.
40. An intelligent terminal comprising a camera module as claimed in any one of claims 27 to 39.
41. A method of distance measurement, comprising:
forming a plurality of light guide channels in an opaque spacer;
arranging a plurality of photoelectric converters parallel to and spaced apart from the spacer, the photoelectric converters being in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the photoelectric converters;
obtaining a plurality of images of the object to be imaged, formed by the plurality of light guide channels, according to the electric signals output by the photoelectric converters; and
calculating a distance to the object to be imaged according to the degree of repetition among the plurality of images;
wherein the light guide channels form a light guide channel array in the spacer; the size of the light guide channels is set to diffract specific wavelengths in the passing light so as to split the light, so that light of a specific waveband reaches a predetermined photoelectric converter to realize a color filtering function similar to that of a Bayer array; each photoelectric converter receives light from one light guide channel; and the distance h between the light guide channel and the photoelectric converter, the size D1 of the photoelectric converter, the height H of the light guide channel, and the maximum angle 2α at which light emitted by the object passes through the light guide channel satisfy: h = 0.5 × D1/tan α − 0.5 × H.
42. The method of distance measurement according to claim 41, wherein the degree of repetition is the repeated pixel area of all or part of the object to be imaged.
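The constraint in claim 41 ties the spacer-to-converter gap h to the converter size D1, the channel height H, and the half-angle α, so that light diverging from the channel just covers the converter's receiving surface. A minimal illustrative sketch of that relation (function and variable names are not from the patent):

```python
import math

def spacer_gap(d1: float, channel_height: float, alpha_rad: float) -> float:
    """Gap h between light guide channel and photoelectric converter
    satisfying h = 0.5 * D1 / tan(alpha) - 0.5 * H (claim 41)."""
    return 0.5 * d1 / math.tan(alpha_rad) - 0.5 * channel_height

# Example: a 2 um converter, a 1 um channel height, and a 30 degree half-angle.
h = spacer_gap(2.0, 1.0, math.radians(30))
```

Equivalently, D1 = 2 × (h + 0.5 × H) × tan α: the cone of half-angle α, taken from the mid-height of the channel, spans exactly the converter size D1 at the converter plane.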
43. A light field camera having a microlens array, further comprising:
a spacer that is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter parallel to and spaced apart from the spacer, each in one-to-one correspondence with a light guide channel,
wherein the microlens array is positioned between the spacer and the photoelectric converter, and light emitted by an object to be imaged passes through the light guide channel and the microlens array before reaching the photoelectric converter;
and wherein the spacer forms a plurality of light guide channels that form a light guide channel array in the spacer; the size of the light guide channels is set to diffract specific wavelengths in the passing light so as to split the light, so that light of a specific waveband reaches a predetermined photoelectric converter to realize a color filtering function similar to that of a Bayer array; each photoelectric converter receives light from one light guide channel; and the distance h between the light guide channel and the photoelectric converter, the size D1 of the photoelectric converter, the height H of the light guide channel, and the maximum angle 2α at which light emitted by the object passes through the light guide channel satisfy: h = 0.5 × D1/tan α − 0.5 × H.
44. A light field camera as recited in claim 43 wherein the spacer is made of a light absorbing material.
45. A light field camera according to claim 43 wherein the photoelectric converter receives all light from the corresponding light guide channel and the corresponding light guide channel illuminates the entire light receiving surface of the corresponding photoelectric converter.
46. A light field camera as recited in claim 43, wherein the spacer is coated with a light blocking layer.
47. The light field camera of claim 46 wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
48. A light field camera having a main lens, further comprising:
a spacer that is opaque and in which at least one light guide channel is formed; and
at least one photoelectric converter parallel to and spaced apart from the spacer, each in one-to-one correspondence with a light guide channel,
wherein the spacer is positioned between the main lens and the photoelectric converter, and light emitted by an object to be imaged passes through the main lens and the light guide channel before reaching the photoelectric converter;
and wherein the spacer forms a plurality of light guide channels that form a light guide channel array in the spacer; the size of the light guide channels is set to diffract specific wavelengths in the passing light so as to split the light, so that light of a specific waveband reaches a predetermined photoelectric converter to realize a color filtering function similar to that of a Bayer array; each photoelectric converter receives light from one light guide channel; and the distance h between the light guide channel and the photoelectric converter, the size D1 of the photoelectric converter, the height H of the light guide channel, and the maximum angle 2α at which light emitted by the object passes through the light guide channel satisfy: h = 0.5 × D1/tan α − 0.5 × H.
49. A light field camera as recited in claim 48 wherein the spacer is made of a light absorbing material.
50. A light field camera according to claim 48 wherein the photoelectric converter receives all light from the corresponding light guide channel and the corresponding light guide channel illuminates the entire light receiving surface of the corresponding photoelectric converter.
51. A light field camera as recited in claim 48, wherein the spacer is coated with a light blocking layer.
52. A light field camera according to claim 51 wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
53. A multi-view depth camera, comprising:
a spacer that is opaque to light and in which a plurality of light guide channels are formed; and
a plurality of photoelectric converters parallel to and spaced apart from the spacer, the photoelectric converters being in one-to-one correspondence with the light guide channels, so that light emitted by an object to be imaged passes through the light guide channels and then reaches the photoelectric converters;
wherein the central axes of the light guide channels are staggered with respect to each other; and
wherein the size of the light guide channels is set to diffract specific wavelengths in the passing light so as to split the light, so that light of a specific waveband reaches a predetermined photoelectric converter to realize a color filtering function similar to that of a Bayer array; each photoelectric converter receives light from one light guide channel; and the distance h between the light guide channel and the photoelectric converter, the size D1 of the photoelectric converter, the height H of the light guide channel, and the maximum angle 2α at which light emitted by the object passes through the light guide channel satisfy: h = 0.5 × D1/tan α − 0.5 × H.
54. The multiple view depth camera of claim 53, wherein the spacer is made of a light absorbing material.
55. The multi-view depth camera of claim 53, wherein the photoelectric converter receives all light from the corresponding light guide channel, and the corresponding light guide channel illuminates the entire light receiving surface of the corresponding photoelectric converter.
56. The multiple view depth camera of claim 53, wherein the spacer is coated with a light blocking layer.
57. The multi-view depth camera of claim 56, wherein the light blocking layer is a diffuse reflective coating or a light absorbing coating.
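Claims 41, 43, 48 and 53 all rely on sizing the channel aperture so that diffraction steers specific wavebands toward predetermined converters, giving a Bayer-like color split without separate filters. The patent does not specify a diffraction model; as a rough illustration only, a single-slit first-order relation (sin θ = λ/d) shows how different wavelengths exit the same aperture at different angles:

```python
import math

def first_order_angle_deg(wavelength_nm: float, aperture_nm: float) -> float:
    """First-order diffraction angle for a slit of width d: sin(theta) = lambda / d.
    Illustrative model only; not taken from the patent."""
    ratio = wavelength_nm / aperture_nm
    if ratio > 1.0:
        raise ValueError("aperture smaller than wavelength: no propagating first order")
    return math.degrees(math.asin(ratio))

# Through the same 1500 nm aperture, longer wavelengths diffract at larger
# angles, so blue, green and red can land on different converters.
angles = {
    name: first_order_angle_deg(lam, 1500.0)
    for name, lam in [("blue", 450.0), ("green", 550.0), ("red", 650.0)]
}
```

Under this assumed model the angular separation between the color bands, combined with the gap h from the claims, determines which converter each waveband reaches.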
CN201880095111.1A 2018-08-24 2018-08-24 Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measurement method Active CN112335049B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/102244 WO2020037650A1 (en) 2018-08-24 2018-08-24 Imaging assembly, touch screen, camera module, smart terminal, cameras, and distance measuring method

Publications (2)

Publication Number Publication Date
CN112335049A CN112335049A (en) 2021-02-05
CN112335049B true CN112335049B (en) 2024-03-22

Family

ID=69592183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880095111.1A Active CN112335049B (en) 2018-08-24 2018-08-24 Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measurement method

Country Status (2)

Country Link
CN (1) CN112335049B (en)
WO (1) WO2020037650A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115914804A (en) * 2021-09-29 2023-04-04 宁波舜宇光电信息有限公司 Imaging assembly, manufacturing method thereof, camera module and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014039096A (en) * 2012-08-13 2014-02-27 Fujifilm Corp Multi-eye camera photographing system and control method of the same
CN104182727A (en) * 2014-05-16 2014-12-03 深圳印象认知技术有限公司 Ultra-thin fingerprint and palm print collection device, and fingerprint and palm print collection method
CN105760808A (en) * 2014-11-14 2016-07-13 深圳印象认知技术有限公司 Imaging plate, image collector and terminal
CN107515435A (en) * 2017-09-11 2017-12-26 京东方科技集团股份有限公司 Display panel and display device
WO2018110570A1 (en) * 2016-12-13 2018-06-21 Sony Semiconductor Solutions Corporation Imaging element, manufacturing method of imaging element, metal thin film filter, and electronic device
CN108369135A (en) * 2015-12-03 2018-08-03 辛纳普蒂克斯公司 Optical sensor for being integrated in display

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07114018A (en) * 1993-10-15 1995-05-02 Rohm Co Ltd Color liquid crystal display device
EP1248121A1 (en) * 2000-10-12 2002-10-09 Sanyo Electric Co., Ltd. Method for forming color filter, method for forming light emitting element layer, method for manufacturing color display device comprising them, or color display device
TWI425629B (en) * 2009-03-30 2014-02-01 Sony Corp Solid state image pickup device, method of manufacturing the same, image pickup device, and electronic device
CN104303302A (en) * 2012-03-16 2015-01-21 株式会社尼康 Imaging element and imaging device


Also Published As

Publication number Publication date
WO2020037650A1 (en) 2020-02-27
CN112335049A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
EP2380345B1 (en) Improving the depth of field in an imaging system
CN103037180B (en) Imageing sensor and picture pick-up device
KR101721455B1 (en) Multi-spectral imaging
EP2315448B1 (en) Thin camera having sub-pixel resolution
JP6016396B2 (en) Imaging device and imaging apparatus
US9531963B2 (en) Image capturing device and image capturing system
US20070081200A1 (en) Lensless imaging with controllable apertures
JP5435996B2 (en) Proximity imaging device and imaging filter
CN108513047B (en) Image sensor and image pickup apparatus
CN110636277B (en) Detection apparatus, detection method, and image pickup apparatus
JPWO2005081020A1 (en) Optics and beam splitters
US20130188026A1 (en) Depth estimating image capture device and image sensor
JP2009225064A (en) Image input device, authentication device, and electronic apparatus having them mounted thereon
WO2014129968A1 (en) Optical imaging apparatus, in particular for computational imaging, having further functionality
WO2006046396A1 (en) Camera module
US20140071322A1 (en) Image pickup apparatus with image pickup device and control method for image pickup apparatus
US8654224B2 (en) Composite imaging element and imaging device equipped with same
CN110312957B (en) Focus detection apparatus, focus detection method, and computer-readable storage medium
JP4532968B2 (en) Focus detection device
CN112335049B (en) Imaging assembly, touch screen, camera module, intelligent terminal, camera and distance measurement method
WO2010119447A1 (en) Imaging system and method
CN212160750U (en) Sensor module for fingerprint authentication and fingerprint authentication device
EP3550348A1 (en) Image sensor and method of manufacturing image sensor
US11889186B2 (en) Focus detection device, focus detection method, and image capture apparatus
US20240125591A1 (en) Wide field-of-view metasurface optics, sensors, cameras and projectors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant