WO2022267645A1 - Imaging device, method, electronic device and storage medium - Google Patents

Imaging device, method, electronic device and storage medium

Info

Publication number
WO2022267645A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
infrared
path
reflected
receiver
Prior art date
Application number
PCT/CN2022/087239
Other languages
English (en)
French (fr)
Inventor
张永亮
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Priority to EP22827147.4A (published as EP4344186A4)
Priority to JP2023566723A (published as JP2024524813A)
Publication of WO2022267645A1
Priority to US18/393,437 (published as US20240127566A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145 Illumination specially adapted for pattern recognition, e.g. using gratings
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147 Details of sensors, e.g. sensor lenses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/254 Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56 Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/10 Beam splitting or combining systems
    • G02B27/1006 Beam splitting or combining systems for splitting or combining different wavelengths
    • G02B27/1013 Beam splitting or combining systems for splitting or combining different wavelengths for colour or multispectral image sensors, e.g. splitting an image into monochromatic image components on respective sensors

Definitions

  • The embodiments of the present application relate to the technical field of imaging, and in particular to an imaging device and method, an electronic device, and a storage medium.
  • Three-dimensional imaging technologies such as structured-light measurement, laser scanning, and Time of Flight (ToF) are maturing, and mobile terminals are increasingly equipped with 3D recognition functions, for example to realize face recognition with higher security.
  • The terminal display screen needs to reserve space for the three-dimensional recognition device. Since the device includes at least three lenses, the non-display area of the screen is at least the projection area of those three lenses, which causes the display to form a "notch screen", affecting the appearance and degrading the user experience.
  • Alternatively, the 3D recognition device can be installed under the terminal display screen.
  • In that case, the display screen needs special light-transmitting treatment at the location of the 3D recognition device.
  • The specially treated area looks different from the normal display areas; when the number of lenses is large, the treated area is larger and stands out more from the rest of the screen, degrading the user experience.
  • In either case, the abnormal display area of the terminal screen (the non-display area plus any area requiring light-transmitting treatment) is at least the projection area of three lenses.
  • This area is relatively large and affects the user experience.
  • An embodiment of the present application provides an imaging device, including: an optical emitting lens for emitting dot matrix projection light and compensation infrared light to a target object; and an optical receiving lens for receiving the first reflected infrared light and visible light reflected by the target object, where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object.
  • The optical receiving lens includes a first infrared receiver, a visible light receiver, and an optical filter.
  • The optical filter is arranged in the light incident path of the optical receiving lens and filters and separates the incident light to obtain the first reflected infrared light and visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path.
  • The first infrared receiver receives the first reflected infrared light, and the visible light receiver receives the visible light.
  • The optical emitting lens includes a first flood illuminator, an infrared dot matrix projector, and a reflector.
  • An embodiment of the present application provides another imaging device, including: an optical emitting lens for emitting infrared light to a target object; and an optical receiving lens for receiving the first reflected infrared light and visible light reflected by the target object, where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object. The optical receiving lens includes a first infrared receiver, a visible light receiver, and an optical filter; the optical filter is arranged in the light incident path of the optical receiving lens and filters and separates the incident light to obtain the first reflected infrared light and visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path; the first infrared receiver receives the first reflected infrared light, and the visible light receiver receives the visible light. The infrared light includes infrared continuously modulated pulsed light, and the optical emitting lens includes a second flood illuminator for emitting the infrared continuously modulated pulsed light.
  • An embodiment of the present application also provides an imaging method, including: controlling the optical emitting lens to emit dot matrix projection light and compensation infrared light to the target object; and controlling the optical receiving lens to receive the first reflected infrared light and visible light reflected by the target object, where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object.
  • The optical receiving lens includes a first infrared receiver, a visible light receiver, and a filter; the filter is arranged in the light incident path of the optical receiving lens and filters and separates the incident light to obtain the first reflected infrared light and visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path.
  • The first infrared receiver receives the first reflected infrared light, and the visible light receiver receives the visible light.
  • The optical emitting lens includes a first flood illuminator, an infrared dot matrix projector, and a reflector.
  • An embodiment of the present application also provides an imaging method, including: controlling the optical emitting lens to emit infrared light to the target object; and controlling the optical receiving lens to receive the first reflected infrared light and visible light reflected by the target object, where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object. The optical receiving lens includes a first infrared receiver, a visible light receiver, and an optical filter; the optical filter is arranged in the light incident path of the optical receiving lens and filters and separates the incident light to obtain the first reflected infrared light and visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path; the first infrared receiver receives the first reflected infrared light, and the visible light receiver receives the visible light. The infrared light includes infrared continuously modulated pulsed light, and the optical emitting lens includes a second flood illuminator for emitting the infrared continuously modulated pulsed light.
  • An embodiment of the present application also provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the above imaging method.
  • An embodiment of the present application also provides a computer-readable storage medium storing a computer program, which implements the above imaging method when executed by a processor.
  • Fig. 1 is a schematic structural diagram of an imaging device implementing structured light provided according to an embodiment of the present application;
  • Fig. 2 is a schematic diagram of the layout of a three-dimensional recognition device on a terminal in the related art;
  • Figs. 3a to 3c are schematic diagrams of device arrangement schemes for three-dimensional recognition in the related art;
  • Fig. 4 is a schematic structural diagram of an imaging device realizing monocular structured light according to another embodiment of the present application;
  • Fig. 5 is a schematic structural diagram of an imaging device realizing binocular structured light provided according to an embodiment of the present application;
  • Fig. 6 is a schematic diagram of the location of the photosensitive device provided according to an embodiment of the present application;
  • Fig. 7 is a schematic structural diagram of an imaging device implementing TOF according to an embodiment of the present application;
  • Fig. 8 is a flowchart of an imaging method provided according to an embodiment of the present application;
  • Fig. 9 is a flowchart of an imaging method provided according to another embodiment of the present application;
  • Fig. 10 is a schematic diagram of an electronic device provided according to an embodiment of the present application.
  • The main purpose of the embodiments of the present application is to provide an imaging device and method, an electronic device, and a storage medium that can reduce the abnormally displayed area on the terminal display screen.
  • An embodiment of the present application relates to an imaging device, as shown in Figure 1, which may include, but is not limited to:
  • an optical emitting lens 1100 for emitting dot matrix projection light and compensation infrared light to the target object;
  • an optical receiving lens 1200 for receiving the first reflected infrared light and visible light reflected by the target object.
  • The visible light and the first reflected infrared light are used for three-dimensional recognition of the target object: the first reflected infrared light is used to construct the depth information of the target object, the visible light is used to construct a two-dimensional image of the target object, and the depth information and the two-dimensional image are used to construct a three-dimensional image of the target object.
  • The optical receiving lens 1200 includes a first infrared receiver 1210, a visible light receiver 1220, and a filter 1230.
  • The optical filter 1230 is arranged in the light incident path of the optical receiving lens 1200 and filters and separates the incident light to obtain the first reflected infrared light and visible light propagating along the light incident path and along the separation path, the separation path being perpendicular to the light incident path.
  • The first infrared receiver 1210 receives the first reflected infrared light.
  • The visible light receiver 1220 receives the visible light.
  • The optical emitting lens 1100 includes a first flood illuminator 1110, an infrared dot matrix projector 1120, and a reflector 1130; the infrared dot matrix projector 1120 emits the dot matrix projection light, and the first flood illuminator 1110 emits the compensation infrared light.
  • The reflector 1130 is arranged in the light exit path of the optical emitting lens 1100; the light exit path includes a merged path, an initial exit path of the dot matrix projection light, and an initial exit path of the compensation infrared light. The two initial exit paths form an intersection angle, and the dot matrix projection light and the compensation infrared light exit along the merged path after passing through the reflector.
  • The imaging device of this embodiment can be applied to mobile terminals such as mobile phones, tablet computers, Customer Premise Equipment ("CPE"), and smart home devices, and serves various three-dimensional recognition scenarios, for example face recognition, AR, VR, 3D modeling, somatosensory games, holographic image interaction, 3D beautification, remote videophone, and cloud real-time video.
  • The optical receiving lens 1200 can serve not only as a device realizing the three-dimensional recognition function but also as a device realizing the front camera function.
  • The visible light receiver 1220 in the optical receiving lens 1200 can receive visible light for the mobile terminal to generate a two-dimensional image when realizing the three-dimensional recognition function, and can also receive the visible light incident on the optical receiving lens 1200 when it is used as an ordinary front camera, for the mobile terminal to generate a two-dimensional camera image.
  • The mobile terminal can control the imaging device to emit infrared light from the optical emitting lens 1100 to the target object to be recognized, such as a human face, and to receive the first reflected infrared light and visible light reflected by the target object with the optical receiving lens 1200. The mobile terminal can then construct the depth information of the target object from the received first reflected infrared light, construct the two-dimensional image from the received visible light, and construct the three-dimensional image from the depth information and the two-dimensional image.
  • In the related art, front-facing 3D recognition on mobile terminals mainly includes (a) the monocular structured light solution, (b) the Time of Flight (TOF) solution, and (c) the binocular structured light solution.
  • In the monocular structured light solution, the infrared dot matrix projector a301 projects speckle or coded structured light onto the target object, the flood illuminator a302 (a low-power flood illuminator) provides supplementary infrared illumination, the infrared camera a303 receives the reflected infrared light from the target object, and the RGB camera a304 receives the visible light reflected by the target object; the four devices are arranged in sequence along the baseline a305 in the three-dimensional recognition area of the terminal.
  • The processor constructs the depth information of the target object from the reflected infrared light received by the infrared camera a303, constructs a two-dimensional image of the target object from the visible light received by the RGB camera a304, and constructs a three-dimensional image of the target object from the depth information and the two-dimensional image.
  • In the TOF solution, the flood illuminator b301 (a high-power flood illuminator) emits infrared continuously modulated pulsed light to the target object, the infrared camera b302 receives the reflected pulsed light from the target object, and the RGB camera b303 receives the visible light reflected by the target object; the three devices are arranged in sequence along the baseline b304 in the three-dimensional recognition area of the terminal.
  • The processor constructs the depth information of the target object from the reflected pulsed light received by the infrared camera b302, constructs a two-dimensional image of the target object from the visible light received by the RGB camera b303, and constructs a three-dimensional image of the target object from the depth information and the two-dimensional image.
  • In the binocular structured light solution, the infrared dot matrix projector c301 projects speckle or coded structured light onto the target object, the flood illuminator c302 (a low-power flood illuminator) provides supplementary infrared illumination, the infrared cameras c303 and c304 receive the reflected infrared light from the target object, and the RGB camera c305 receives the visible light reflected by the target object.
  • The five devices are arranged in sequence along the baseline c306 in the three-dimensional recognition area of the terminal. The processor constructs the depth information of the target object from the reflected infrared light received by the two infrared cameras, constructs a two-dimensional image of the target object from the visible light received by the RGB camera c305, and constructs a three-dimensional image of the target object from the depth information and the two-dimensional image.
  • The projection area of the lenses of the above 3D recognition devices is relatively large, and they are usually arranged in the "notch" area at the top of the display screen.
  • This area cannot serve as a screen display area, so the effective display area of the screen is reduced, which also affects the overall appearance to a certain extent.
  • For under-screen placement, the corresponding area of the terminal screen (mainly a light-transmissive OLED screen) requires special processing to enhance light transmittance, i.e., increasing the amount of light passing through by reducing or shrinking RGB pixels, so this area always shows a display effect that differs more or less from the rest of the screen.
  • Applying 3D recognition under the screen greatly enlarges the specially treated area needed to improve the screen's light transmittance, so reducing that area helps reduce the visibly different display region.
  • The most direct simplification is to reduce the number of 3D recognition devices installed under the screen.
  • The imaging device can be arranged as an under-screen camera.
  • An under-screen camera means the camera is placed under the display screen of the mobile terminal.
  • Light enters the sensor only through the light-transmitting area of the mobile terminal's screen.
  • Alternatively, the imaging device can be arranged as an ordinary front camera, with the display screen reserving a non-display area for it, forming screen styles such as the "notch screen" and "water drop screen".
  • The imaging device of this embodiment uses the optical emitting lens to emit infrared light to the target object and the optical receiving lens to receive the reflected infrared light and visible light from the target object, thereby obtaining the depth information and two-dimensional image used to construct a three-dimensional image of the target object.
  • The optical receiving lens receives the first reflected infrared light through the internal first infrared receiver and receives the visible light through the internal visible light receiver; a filter placed in the light incident path separates the first reflected infrared light and the visible light from the incident light, so one optical receiving lens receives both kinds of light simultaneously. The optical emitting lens is provided with the first flood illuminator, the infrared dot matrix projector, and the reflector, and the reflector directs both the dot matrix projection light and the compensation infrared light, whose initial exit paths form an intersection angle, out along the merged path, so one optical emitting lens emits both the dot matrix projection light and the compensation infrared light.
  • The structured light solution for 3D recognition is thus realized with fewer lenses: the lenses used for structured-light three-dimensional recognition on the display screen are reduced from at least four to two. Only the area occupied by the optical emitting lens and the optical receiving lens needs special processing, which reduces the abnormal display area of the terminal screen, enlarges the area used for normal display, and improves the user experience.
  • The optical emitting lens 1100 and the optical receiving lens 1200 can be arranged under the terminal display screen 1010, where the terminal display screen 1010 may consist of a touch panel (Touch Panel, "TP") 1011 and a liquid crystal display (Liquid Crystal Display, "LCD") 1012.
  • The infrared light emitted by the optical emitting lens 1100 penetrates the emitting light-transmitting area of the terminal display 1010 and reaches the target object, while the optical receiving lens 1200 receives the incident light that penetrates the receiving light-transmitting area of the terminal display 1010 and separates it to obtain the reflected infrared light and visible light.
  • The optical receiving lens 1200 includes the first infrared receiver 1210, the visible light receiver 1220, and the optical filter 1230. The optical filter 1230 is arranged in the light incident path of the optical receiving lens 1200 at an angle of 45 degrees to the incident light.
  • The filter 1230 can be a visible/infrared dichroic filter using a special filter film: when the incident angle between the incident light and the filter 1230 is 45 degrees, the reflectance in the visible band from 0.3 μm to 0.6 μm is greater than 90%, and the near-infrared transmittance from 0.75 μm to 2.5 μm is greater than 90%, so the reflected infrared light passes through the filter 1230 and reaches the first infrared receiver 1210 below it, while the visible light is reflected into the separation path toward the visible light receiver 1220.
  • The first infrared receiver 1210 includes an infrared photosensitive substrate 1211, an infrared low-pass filter 1212, and a lens group 1213. The lens group 1213 gathers the first reflected infrared light together with the small portion of visible light reflected by the filter 1230, and the infrared low-pass filter 1212 filters out that small portion of visible light and passes only the first reflected infrared light, so that the infrared photosensitive substrate 1211 receives only the first reflected infrared light.
  • The infrared photosensitive substrate 1211 carries a dedicated infrared CMOS image sensor (Infrared CMOS Image Sensor, "CIS") chip.
  • The visible light receiver 1220 includes a visible light photosensitive substrate 1221, an infrared cut filter 1222, and a lens group 1223. The lens group 1223 gathers the visible light reflected by the optical filter 1230 together with the small portion of the first reflected infrared light, and the infrared cut filter 1222 filters out that small portion of infrared light so that only visible light penetrates and the visible light photosensitive substrate 1221 receives only visible light; the visible light photosensitive substrate 1221 also carries a dedicated CIS chip.
  • The lens group 1213 and the lens group 1223 can be composed of wafer-level optical lenses (Wafer Level Optics, "WLO").
  • The reflected and transmitted wavelength bands of the filter can also be swapped, reversing the propagation paths of the first reflected infrared light and the visible light, so that the visible light receiver 1220 and the first infrared receiver 1210 exchange positions.
  • The infrared dot matrix projector and the first flood illuminator can emit infrared light of different bands, provided both bands lie within the near-infrared range.
  • For example, one of the infrared dot matrix projector and the first flood illuminator works in the near-infrared short wave (780-1100 nm) and the other in the near-infrared long wave (1100-2526 nm), as the sketch below illustrates.
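  • As an illustration, the band split described above can be expressed as a small routing sketch. This is illustrative only: the band limits are taken from the description above (visible 0.3-0.6 μm reflected, near-infrared 0.75-2.5 μm transmitted, short wave 780-1100 nm, long wave 1100-2526 nm), and which emitter occupies which band is an assumption, since the text only says one uses the short wave and the other the long wave.

```python
# Illustrative sketch of the dichroic filter's routing; band limits come
# from the description above, and the emitter-to-band assignment is an
# assumption made for this example.

def route_wavelength(wavelength_nm: float) -> str:
    """Which receiver a ray of this wavelength reaches after the
    45-degree dichroic filter."""
    if 300 <= wavelength_nm <= 600:
        return "reflected -> visible light receiver 1220 (separation path)"
    if 750 <= wavelength_nm <= 2500:
        band = ("NIR short wave, e.g. the dot matrix projector"
                if wavelength_nm <= 1100
                else "NIR long wave, e.g. the first flood illuminator")
        return f"transmitted -> first infrared receiver 1210 ({band})"
    return "outside the filter's specified bands"

print(route_wavelength(550))   # visible light
print(route_wavelength(940))   # near-infrared short wave
print(route_wavelength(1350))  # near-infrared long wave
```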
  • The distance between the optical centers of the optical emitting lens and the optical receiving lens is the baseline, which can be a conventional baseline or a micro baseline.
  • Monocular structured light performs active three-dimensional measurement: it estimates the spatial position of each pixel through the baseline and thereby measures the distance between the object and the lens, i.e., obtains depth information.
  • The depth range measured by a binocular camera is related to the baseline: the larger the baseline distance, the farther it can measure.
  • A micro baseline means the distance between the two lenses is very short. Although the measurable distance is short, this matches the close-range use of structured light and is also more conducive to the overall internal layout of an already compact mobile terminal, as the sketch below illustrates.
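  • The trade-off just described follows from the standard triangulation relation Z = f·B/d (depth Z, focal length f in pixels, baseline B, disparity d in pixels). The relation itself is textbook stereo geometry rather than something stated in this patent, and the numbers below are illustrative only:

```python
# Minimal triangulation sketch, assuming the standard pinhole-stereo
# relation Z = f * B / d; all numeric values are illustrative.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth in metres for a given disparity in pixels."""
    return focal_px * baseline_m / disparity_px

FOCAL_PX = 1200.0        # assumed focal length in pixels
MIN_DISPARITY_PX = 1.0   # smallest resolvable disparity

# The maximum measurable depth grows with the baseline, which is why a
# micro baseline suits close-range uses such as face recognition.
for baseline_m in (0.005, 0.02, 0.08):   # micro vs. conventional baselines
    z_max = depth_from_disparity(FOCAL_PX, baseline_m, MIN_DISPARITY_PX)
    print(f"baseline {baseline_m * 1000:.0f} mm -> max depth {z_max:.1f} m")
```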
  • In this embodiment, the reflector directs the dot matrix projection light and the compensation infrared light out along the merged path, so a single optical emitting lens emits both lights, realizing the structured light solution for three-dimensional recognition.
  • For convenience of illustration, the intersection angles formed by the dot matrix projection light and the compensation infrared light are all drawn as right angles in the figures.
  • The imaging device can be used to implement the monocular structured light solution in 3D recognition.
  • When the intersection angle formed by the initial exit paths of the dot matrix projection light and the compensation infrared light satisfies the total reflection condition of the reflector, the reflector 1130 can be an infrared total reflection lens. That is, the optical emitting lens 1100 can include the first flood illuminator 1110, the infrared dot matrix projector 1120, and an infrared total reflection lens: the infrared dot matrix projector 1120 emits the dot matrix projection light, the first flood illuminator 1110 emits the compensation infrared light, and the infrared total reflection lens, arranged in the light exit path of the optical emitting lens 1100, simultaneously reflects one light and transmits the other so that both exit along the merged path.
  • The first flood illuminator 1110 may be disposed below the infrared total reflection lens, i.e., the initial exit path of the first flood illuminator 1110 may lie on the same straight line as the merged path.
  • The infrared total reflection lens reflects the dot matrix projection light emitted by the infrared dot matrix projector 1120 outward while letting the compensation infrared light emitted by the first flood illuminator 1110 pass straight through.
  • The first flood illuminator 1110 may include a diffuser 1111 and a low-power Vertical-Cavity Surface-Emitting Laser (VCSEL) 1112, and the infrared dot matrix projector 1120 may include a diffractive optical element (Diffractive Optical Elements, "DOE") 1121, a high-power VCSEL 1122, and a WLO 1123.
  • The infrared dot matrix projector 1120 and the visible light receiver 1220 located in the separation path of the optical receiving lens 1200 are both located below the baseline of the optical emitting lens 1100 and the optical receiving lens 1200.
  • The baseline 1020 can be of conventional length or micro-baseline length.
  • The optical emitting lens 1100 and the optical receiving lens 1200, each internally a composite device, allow the corresponding local LCD display area to be treated for improved light transmittance in the micro-baseline state.
  • Under the micro baseline, the composite devices of the two lenses are closely interleaved, which enables integration and standardization: technical parameters such as dual-camera calibration can be adjusted to a basically stable state before integrated delivery, effectively reducing separate calibration work.
  • The infrared dot matrix projection light can use speckle structured light or coded structured light, and the baseline length can be very small, i.e., a micro baseline.
  • An imaging system's ability to image and accurately track long-distance targets is otherwise limited.
  • Using micro-baseline laser active illumination and composite illumination to light up distant, small, and dark targets or their local areas can reduce the influence of background radiation and improve the system's ability to accurately track and clearly image such targets.
  • The working principle of a laser active illumination surveillance system is basically the same as that of lidar: by adjusting the focus state (divergence angle) of the emitted laser beam, the entire target or its key feature parts are illuminated to meet the detection requirements of the receiving system and achieve imaging and precise tracking of the target.
  • When the first flood illuminator 1110 and the infrared dot matrix projector 1120 work simultaneously, each can use a relatively narrow spectrum while the first infrared receiver 1210 uses a relatively wide spectrum.
  • Under dark light at night or strong light during the day, the first flood illuminator 1110 provides a supplementary light effect, and the first infrared receiver 1210 can simultaneously receive the infrared light reflected by the target object.
  • After fusion calculation, the structured light of the dot matrix projector and the unstructured light of the first flood illuminator enhance the target recognition ability, and the flood light effectively compensates for the fact that the diffracted dot matrix projection light dissipates quickly with increasing distance.
  • The infrared total reflection lens has both total-reflection and transmission characteristics: it directly transmits the infrared unstructured light emitted by the first flood illuminator 1110 while totally reflecting the infrared structured light emitted by the infrared dot matrix projector 1120.
  • The condition for total reflection is that light travels from an optically denser medium to an optically rarer medium with an incident angle greater than or equal to the critical angle, the critical state being a refraction angle of 90 degrees.
  • Therefore, in general the initial exit path of the light before total reflection and the merged path after total reflection are not perpendicular but form an obtuse angle, so the infrared dot matrix projector in Fig. 1 usually cannot be placed horizontally and is instead tilted with the left side high and the right side low; the illustration in Fig. 1 only indicates the relative positions (see the sketch below).
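  • The geometry above follows from Snell's law: the critical angle satisfies sin(θc) = n2/n1, and the bend between the initial exit path and the merged path equals twice the incidence angle, so it is obtuse exactly when the incidence angle exceeds 45 degrees. A small sketch, assuming an ordinary glass-to-air interface (n1 = 1.5, n2 = 1.0 — indices the patent does not specify):

```python
import math

# Sketch assuming a glass prism (n1 = 1.5) against air (n2 = 1.0); the
# patent does not give the reflector's refractive indices.
n1, n2 = 1.5, 1.0
theta_c = math.degrees(math.asin(n2 / n1))
print(f"critical angle: {theta_c:.1f} deg")  # about 41.8 deg

# The angle between the initial exit path and the merged path is twice
# the incidence angle, so it becomes obtuse once incidence exceeds 45 deg.
for theta_i in (theta_c, 45.0, 50.0):
    bend = 2 * theta_i
    kind = "obtuse" if bend > 90 else ("right angle" if bend == 90 else "acute")
    print(f"incidence {theta_i:.1f} deg -> path angle {bend:.1f} deg ({kind})")
```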
  • In this way, the imaging device provides the first flood illuminator, the infrared dot matrix projector, and an infrared total reflection lens on the optical emitting lens, with the intersection angle formed by the dot matrix projection light and the compensation infrared light satisfying the reflector's total reflection condition.
  • The infrared total reflection lens can emit the dot matrix projection light and the compensation infrared light along the merged path at the same time, so a single optical emitting lens emits both lights and realizes the structured light solution for three-dimensional recognition.
  • The reflector can also be a movable infrared reflector 1132. That is, the optical emitting lens 1100 can include the first flood illuminator 1110, the infrared dot matrix projector 1120, and a movable infrared reflector 1132, where the infrared dot matrix projector 1120 emits the dot matrix projection light and the first flood illuminator 1110 emits the compensation infrared light.
  • The movable infrared reflector 1132 is arranged in the light exit path of the optical emitting lens 1100 and controls the dot matrix projection light and the compensation infrared light to exit along the merged path in a time-division manner.
  • The initial exit paths of the dot matrix projection light and the compensation infrared light can be perpendicular to each other or at another angle, as long as both lights can exit along the merged path after passing the infrared reflector 1132.
  • The first flood illuminator 1110 may be arranged under the movable infrared reflector 1132.
  • Since the movable infrared reflector 1132 is opaque to infrared light, the infrared dot matrix projector and the flood illuminator cannot work simultaneously; the movable infrared reflector 1132 works time-divided in two states, tilted and vertical, under the control of the system processor.
  • When the movable infrared reflector 1132 is tilted, it blocks the compensation infrared light emitted by the first flood illuminator 1110 and reflects the light projected by the infrared dot matrix projector 1120; when it is vertical, it blocks the optical path of the infrared dot matrix projector 1120 and leaves the path of the first flood illuminator 1110 unobstructed, as summarized in the sketch below.
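  • The two-state, time-division behaviour can be summarized in a small state sketch; the state names and the control interface are hypothetical, since the patent specifies only the two mirror positions and their effect:

```python
from enum import Enum

class MirrorState(Enum):
    TILTED = "tilted"      # reflects the dot matrix projector, blocks the flood light
    VERTICAL = "vertical"  # clears the flood light path, blocks the projector

def active_emitter(state: MirrorState) -> str:
    """Which emitter's light leaves along the merged path in each state."""
    if state is MirrorState.TILTED:
        return "infrared dot matrix projector 1120 (dot matrix projection light)"
    return "first flood illuminator 1110 (compensation infrared light)"

# Time-division operation: the processor alternates the two states, so the
# two emitters never work simultaneously with a movable (opaque) mirror.
for state in (MirrorState.VERTICAL, MirrorState.TILTED):
    print(state.value, "->", active_emitter(state))
```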
  • A more typical working mode is to start the first flood illuminator 1110 under the optical emitting lens 1100 in advance, mainly to pre-detect the target object with flood illumination using a light source with a larger illumination angle.
  • The infrared dot matrix projector 1120 then emits many light points (for example, thousands to tens of thousands) onto the human face, and in cooperation with the optical receiving lens 1200, which receives the changes of the reflected light points, the virtual surface contour of the face is calculated to finely judge whether the detected face is the terminal's user or another authenticated person. It can be understood that the positions of the infrared dot matrix projector 1120 and the first flood illuminator 1110 can be exchanged.
  • In this way, the imaging device provides the first flood illuminator, the infrared dot matrix projector, and a movable infrared reflector on the optical emitting lens; the reflector directs the mutually perpendicular dot matrix projection light and compensation infrared light along the merged path in a time-division manner, so a single optical emitting lens emits both lights and realizes the structured light solution for three-dimensional recognition.
  • The imaging device can also be used to implement the binocular structured light solution in 3D recognition.
  • The optical emitting lens 1100 realizing the binocular structured light solution additionally includes a second infrared receiver 1140 and a rotating device 1150. The second infrared receiver 1140 receives the second reflected infrared light reflected by the target object. One side of the rotating device 1150 carries the second infrared receiver 1140 and the other side carries the infrared dot matrix projector 1120 or the first flood illuminator 1110; after the infrared dot matrix projector 1120 or the first flood illuminator 1110 emits the dot matrix projection light or the compensation infrared light, the rotating device 1150 rotates to the other side so that the second infrared receiver 1140 can receive the second reflected infrared light.
  • After the infrared light is emitted to the target object, the processor controls the rotating device 1150 to rotate, turning the originally upward first flood illuminator 1110 downward and the downward second infrared receiver 1140 upward; the first infrared receiver 1210 and the second infrared receiver 1140 then simultaneously receive the first reflected infrared light and the second reflected infrared light from the target object, and the system processor calculates and fuses the data received by the two infrared receivers to realize three-dimensional recognition.
  • The infrared rays emitted and received by the first flood illuminator 1110 and the second infrared receiver 1140 can pass through the infrared total reflection lens 1131; since the incident angles in both directions are far below the critical angle of total reflection, both show an ideal infrared penetration effect, so the binocular structured light solution can also be applied with the imaging device shown in FIG. 1.
  • Binocular structured light can use structured light to measure depth information in indoor environments and switch to a pure binocular mode when outdoor lighting causes the structured light to fail, improving reliability and anti-interference capability.
  • In this way, the imaging device provides the first flood illuminator, the infrared dot matrix projector, and a rotating device on the optical emitting lens; after the infrared dot matrix projector or the first flood illuminator on the rotating device emits the dot matrix projection light or the compensation infrared light, the device rotates to its other side so that the second infrared receiver can receive the second reflected infrared light.
  • Both the optical emitting lens and the optical receiving lens can then receive, respectively, the second and first reflected infrared light from the target object, so the binocular structured light solution for three-dimensional recognition is realized.
  • The imaging device further includes a photosensitive device located in the first non-display area 601 between the light-transmitting area of the optical emitting lens and the edge of the terminal screen, as shown in Figure 6, or in the second non-display area 602 between the light-transmitting area of the optical receiving lens and the edge of the terminal screen. The extremely narrow non-display area 600 at the top of the display screen is thus used to arrange proximity light sensors, ambient light sensors, and distance sensors for structured-light pre-detection, such as small one-dimensional TOF sensors.
  • By disposing distance sensors for pre-detecting targets in the first and second non-display areas, the imaging device of this embodiment can further improve the accuracy of three-dimensional recognition and enhance the user experience.
  • An embodiment of the present application relates to an imaging device realizing the TOF solution in three-dimensional recognition, as shown in Figure 7, including:
  • an optical emitting lens 7100 for emitting infrared light to the target object;
  • an optical receiving lens 7200 for receiving the first reflected infrared light and visible light reflected by the target object.
  • The visible light and the first reflected infrared light are used for three-dimensional recognition of the target object: the first reflected infrared light is used to construct the depth information of the target object, the visible light is used to construct a two-dimensional image of the target object, and the depth information and the two-dimensional image are used to construct a three-dimensional image of the target object.
  • The optical receiving lens 7200 includes a first infrared receiver 7210, a visible light receiver 7220, and an optical filter 7230.
  • The optical filter 7230 is arranged in the light incident path of the optical receiving lens 7200 and filters and separates the incident light to obtain the first reflected infrared light and visible light propagating along the light incident path and along the separation path, the separation path being perpendicular to the light incident path.
  • The first infrared receiver 7210 receives the first reflected infrared light, and the visible light receiver 7220 receives the visible light.
  • The infrared light can be infrared continuously modulated pulsed light, and the optical emitting lens 7100 can be a second flood illuminator 7110 that emits the infrared continuously modulated pulsed light.
  • The second flood illuminator 7110 is located below the terminal screen, and the first infrared receiver 7210 or the visible light receiver 7220 is located below the second flood illuminator 7110.
  • If the optical filter 7230 reflects the visible light into the separation path, the visible light receiver 7220 is the one located under the second flood illuminator 7110; in the opposite arrangement, the first infrared receiver 7210 is located below the second flood illuminator 7110.
  • The second flood illuminator 7110 includes a diffuser 7111 and a high-power VCSEL 7112: the high-power VCSEL 7112 emits the infrared continuously modulated pulsed light, and the diffuser 7111 controls its diffusion toward the target object.
  • The interior of the first infrared receiver 7210 in the TOF solution is more complex and demands higher performance.
  • TOF uses surface-emitted light and can form a three-dimensional image in one shot; the baseline between the optical emitting lens and the optical receiving lens can be zero, so the overall structure is more compact and the specially treated light-transmitting area of the terminal display can be minimized.
  • TOF is divided into iTOF (indirect TOF) and dTOF (direct TOF).
  • The former uses a VCSEL to emit infrared continuously modulated pulsed light, receives the infrared light reflected by the target, and performs homodyne demodulation to measure the phase shift of the reflected infrared light.
  • The latter works like Light Detection and Ranging (lidar); its core components include a VCSEL, a Single Photon Avalanche Diode ("SPAD"), and a Time-to-Digital Converter ("TDC").
  • The VCSEL emits pulsed waves into the scene, the SPAD receives the pulsed waves reflected from the target object, and the TDC records the time of flight of each received optical signal, i.e., the time interval between emitting a pulse and receiving it back. The two range equations are sketched below.
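  • The two principles reduce to simple range equations: iTOF recovers distance from the demodulated phase shift, d = c·Δφ/(4π·f_mod), while dTOF uses the recorded round-trip time, d = c·Δt/2. A sketch with illustrative values (the modulation frequency and timings are not from the patent):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def itof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """iTOF: distance from the homodyne-demodulated phase shift."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def dtof_distance(round_trip_s: float) -> float:
    """dTOF: distance from the TDC-recorded pulse round-trip time."""
    return C * round_trip_s / 2

# Illustrative numbers only:
print(f"iTOF: {itof_distance(math.pi / 2, 100e6):.3f} m")  # 100 MHz modulation
print(f"dTOF: {dtof_distance(4e-9):.3f} m")                # 4 ns round trip
```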
  • The long detection distance and high-precision imaging of dToF lidar can provide a better night shooting, video, and AR experience.
  • The imaging device of this embodiment emits infrared light to the target object through the optical emitting lens and receives the reflected infrared light and visible light from the target object through the optical receiving lens, obtaining the depth information and two-dimensional image used to construct a three-dimensional image of the target object.
  • The optical receiving lens receives the first reflected infrared light through the internal first infrared receiver and the visible light through the internal visible light receiver, and a filter placed in the light incident path separates the first reflected infrared light and visible light from the incident light, so one optical receiving lens receives both kinds of light simultaneously.
  • The target object reflects the infrared continuously modulated pulsed light and other light; the first infrared receiver receives the reflected infrared light and the visible light receiver receives the reflected visible light, so the mobile terminal obtains both, constructs a three-dimensional image of the target object with the TOF solution, and reduces the number of lenses required for three-dimensional recognition by multiplexing the optical receiving lens.
  • The lenses used for TOF three-dimensional recognition on the display screen are thus reduced from at least three to two.
  • The second flood illuminator is located below the terminal screen and the first infrared receiver or visible light receiver is located below the second flood illuminator, i.e., the illuminator and receiver are stacked vertically, increasing the utilization of the under-screen space.
  • An embodiment of the present application also relates to an imaging method, as shown in FIG. 8, including the following steps.
  • Step 801: control the optical emitting lens to emit dot matrix projection light and compensation infrared light to the target object.
  • Step 802: control the optical receiving lens to receive the first reflected infrared light and visible light reflected by the target object.
  • The visible light and the first reflected infrared light are used for three-dimensional recognition of the target object: the first reflected infrared light is used to construct the depth information of the target object, the visible light is used to construct a two-dimensional image of the target object, and the depth information and the two-dimensional image are used to construct a three-dimensional image of the target object (see the control-flow sketch after these steps).
  • The optical receiving lens includes a first infrared receiver, a visible light receiver, and a filter.
  • The optical filter is arranged in the light incident path of the optical receiving lens and filters and separates the incident light to obtain the first reflected infrared light and visible light propagating along the light incident path and along the separation path, the separation path being perpendicular to the light incident path.
  • The first infrared receiver receives the first reflected infrared light, and the visible light receiver receives the visible light.
  • The optical emitting lens includes a first flood illuminator, an infrared dot matrix projector, and a reflector; the first flood illuminator emits the compensation infrared light, and the infrared dot matrix projector emits the dot matrix projection light.
  • The reflector is arranged in the light exit path of the optical emitting lens; the light exit path includes a merged path, an initial exit path of the dot matrix projection light, and an initial exit path of the compensation infrared light. The two initial exit paths form an intersection angle, and the dot matrix projection light and the compensation infrared light exit along the merged path after passing through the reflector.
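  • The steps above can be expressed as a control-flow sketch. Every class and helper name here is a hypothetical stand-in for hardware control and reconstruction routines that the patent leaves unspecified:

```python
# Hypothetical control-flow sketch of the method in Fig. 8; none of these
# names come from the patent.

class EmitLens:
    def emit(self, target):
        """Step 801: emit dot matrix projection light and compensation IR."""

class RecvLens:
    def receive(self, target):
        """Step 802: receive the first reflected infrared and visible light."""
        return "first_reflected_ir", "visible_light"

def build_depth_information(reflected_ir):
    return {"depth_from": reflected_ir}       # placeholder reconstruction

def build_two_dimensional_image(visible):
    return {"image_2d_from": visible}         # placeholder 2D image

def build_three_dimensional_image(depth, image_2d):
    return {**depth, **image_2d}              # placeholder fusion

def imaging_method(emit_lens, recv_lens, target):
    emit_lens.emit(target)                              # Step 801
    reflected_ir, visible = recv_lens.receive(target)   # Step 802
    depth = build_depth_information(reflected_ir)
    image_2d = build_two_dimensional_image(visible)
    return build_three_dimensional_image(depth, image_2d)

print(imaging_method(EmitLens(), RecvLens(), target="face"))
```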
  • An embodiment of the present application also relates to another imaging method, as shown in FIG. 9, including the following steps.
  • Step 901: control the optical emitting lens to emit infrared light to the target object.
  • Step 902: control the optical receiving lens to receive the first reflected infrared light and visible light reflected by the target object.
  • The visible light and the first reflected infrared light are used for three-dimensional recognition of the target object: the first reflected infrared light is used to construct the depth information of the target object, the visible light is used to construct a two-dimensional image of the target object, and the depth information and the two-dimensional image are used to construct a three-dimensional image of the target object.
  • The optical receiving lens includes a first infrared receiver, a visible light receiver, and a filter.
  • The optical filter is arranged in the light incident path of the optical receiving lens and filters and separates the incident light to obtain the first reflected infrared light and visible light propagating along the light incident path and along the separation path, the separation path being perpendicular to the light incident path.
  • The first infrared receiver receives the first reflected infrared light, and the visible light receiver receives the visible light.
  • The infrared light includes infrared continuously modulated pulsed light.
  • The optical emitting lens includes a second flood illuminator for emitting the infrared continuously modulated pulsed light.
  • The second flood illuminator is located under the terminal screen, and the first infrared receiver or visible light receiver is located under the second flood illuminator.
  • the camera method is applied to three-dimensional recognition, and the first infrared receiver adopts a relatively wide spectrum to receive two relatively narrow spectrum lights of different bands simultaneously sent by the floodlight illuminator and the dot matrix projector to realize dual optical paths
  • the light is effectively decomposed and transformed into 3D and 2D point clouds with coordinate registration, and finally the specific process of fusion to achieve 3D image enhancement is as follows:
  • the processor separates the photoelectrically converted signal of the first infrared band light of the flood illuminator reflected by the target object received by the infrared receiver and the photoelectrically converted signal of the second infrared band light of the dot matrix projector.
  • Load AI engine for clustering analysis
  • the separated two types of photoelectric data are preprocessed respectively to realize data filtering and compression;
  • the third step is to perform stereo registration on the two-dimensional image data of the reflected light from the flood illuminator and the three-dimensional image data of the reflected light from the dot matrix projector (the pixel coordinates of the two images, the image coordinates, the camera Coordinates and world coordinates are calibrated to realize the two-dimensional and three-dimensional superimposed points in the real three-dimensional space are mapped to the two-dimensional imaging plane), find out the key feature points, match the two into the same coordinate system, and determine the correspondence between the two images
  • the spatial coordinate relationship between points is to perform stereo registration on the two-dimensional image data of the reflected light from the flood illuminator and the three-dimensional image data of the reflected light from the dot matrix projector (the pixel coordinates of the two images, the image coordinates, the camera Coordinates and world coordinates are calibrated to realize the two-dimensional and three-dimensional superimposed points in the real three-dimensional space are mapped to the two-dimensional imaging plane), find out the key feature points, match the two into the same
  • in the fourth step, point cloud data that is easy to store and process is formed, used to extend high-dimensional feature information;
  • in the fifth step, the AI engine is loaded again and deep learning is used to classify and segment the 3D point cloud (a cross transformation is learned from the input points and then used to simultaneously weight the input features associated with the points and rearrange them into a latent, implicitly canonical order, after which element-wise product and sum operations are applied);
  • in the sixth step, 3D image recognition or reconstruction is realized.
  • the two-dimensional image data of the flood illuminator can effectively compensate for the insufficiency of the three-dimensional image data of the dot-matrix projection, enhance the three-dimensional recognition effect, and better support secure unlocking and payment under strong or dim light, as well as game modeling, virtual reality and augmented reality (a minimal fusion sketch is given after this list).
  • an embodiment of the present application also relates to an electronic device, as shown in FIG. 10, including: at least one processor 1001; and a memory 1002 communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor 1001, and the instructions are executed by the at least one processor 1001 to perform the above photography method.
  • the memory 1002 and the processor 1001 are connected by a bus; the bus may include any number of interconnected buses and bridges, and links together the various circuits of the one or more processors 1001 and the memory 1002.
  • the bus may also connect together various other circuits such as peripherals, voltage regulators, and power management circuits, all of which are well known in the art and therefore will not be further described herein.
  • the bus interface provides an interface between the bus and the transceivers.
  • a transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing means for communicating with various other devices over a transmission medium.
  • the information processed by the processor 1001 is transmitted on the wireless medium through the antenna, and further, the antenna also receives the information and transmits the information to the processor 1001 .
  • the processor 1001 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management and other control functions; the memory 1002 may be used to store information used by the processor when performing operations.
  • Embodiments of the present application also relate to a computer-readable storage medium storing a computer program.
  • the above method embodiments are implemented when the computer program is executed by the processor.
  • the program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
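For ease of understanding only, here is a minimal sketch of the registration-and-fusion idea referenced in the list above: a 3D point cloud (standing in for the dot-matrix projector's depth data) is projected into a 2D image (standing in for the flood illuminator's data) through a pinhole intrinsic matrix, and each point picks up a 2D intensity value as an extra feature channel. The intrinsic matrix, image size and all data below are illustrative assumptions, not values from this application.

```python
import numpy as np

# Sketch: fuse a depth point cloud with a 2D infrared image by pinhole
# projection. K and all data are illustrative assumptions.
K = np.array([[500.0, 0.0, 64.0],
              [0.0, 500.0, 64.0],
              [0.0, 0.0, 1.0]])  # fx, fy in pixels; principal point (64, 64)

rng = np.random.default_rng(1)
points = rng.uniform([-0.1, -0.1, 0.4], [0.1, 0.1, 0.6], size=(500, 3))  # metres
ir_image = rng.random((128, 128)).astype(np.float32)  # flood-illuminator frame

uvw = (K @ points.T).T               # u = fx*X/Z + cx, v = fy*Y/Z + cy
uv = uvw[:, :2] / uvw[:, 2:3]
cols = np.clip(uv[:, 0].round().astype(int), 0, 127)
rows = np.clip(uv[:, 1].round().astype(int), 0, 127)

# Fused cloud columns: X, Y, Z, intensity; the 2D data enriches the 3D data.
fused = np.hstack([points, ir_image[rows, cols][:, None]])
print(fused.shape, fused[0])
```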

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

Embodiments of the present application relate to the technical field of photography, and disclose a photography apparatus and method, an electronic device, and a storage medium. The photography apparatus of the present application includes: an optical emission lens (1100) for emitting infrared light toward a target object; and an optical receiving lens (1200) for receiving first reflected infrared light and visible light reflected by the target object, where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object. The optical emission lens (1100) includes a first flood illuminator (1110), an infrared dot-matrix projector (1120), and a reflector (1130); the first flood illuminator (1110) is used to emit compensating infrared light, and the infrared dot-matrix projector (1120) is used to emit dot-matrix projection light; the reflector (1130) is arranged in the light exit path of the optical emission lens, the initial exit paths of the dot-matrix projection light and the compensating infrared light form an included angle, and the dot-matrix projection light and the compensating infrared light exit along a merged path after passing through the reflector (1130).

Description

Photography apparatus and method, electronic device, and storage medium
Cross-reference
This application is based on, and claims priority to, the Chinese patent application with application number 202110686374.3 filed on June 21, 2021, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present application relate to the technical field of photography, and in particular to a photography apparatus and method, an electronic device, and a storage medium.
Background
With the development of three-dimensional imaging technology, techniques such as structured-light measurement, laser scanning and ToF have matured, and three-dimensional recognition functions are gradually being built into mobile terminals, for example to implement face recognition and make face-based unlocking more secure.
Since the three-dimensional recognition devices and the front-facing camera are both arranged on the terminal display, the display needs to reserve space for the three-dimensional recognition devices. Because three-dimensional recognition involves at least three lenses, the non-display area of the display is at least the projection area of three lenses, which produces a notched screen, spoils the appearance, and degrades the user experience. Alternatively, the three-dimensional recognition devices may be arranged under the display, in which case the display needs special light-transmitting treatment where those devices sit; the specially treated region shows a display difference from the other, normally displaying regions, and when the number of recognition devices is large, the specially treated region is large and the difference more obvious, again degrading the user experience.
Thus, to meet three-dimensional recognition requirements, the abnormally displaying region of the terminal display (including the non-display area and the special region needing light-transmitting treatment) is at least the projection area of three lenses, a considerable area that harms the user experience.
Summary
An embodiment of the present application provides a photography apparatus, including: an optical emission lens for emitting dot-matrix projection light and compensating infrared light toward a target object; and an optical receiving lens for receiving first reflected infrared light and visible light reflected by the target object, where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object. The optical receiving lens includes a first infrared receiver, a visible light receiver and an optical filter; the optical filter is arranged in the light incident path of the optical receiving lens and is used to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path; the first infrared receiver is used to receive the first reflected infrared light, and the visible light receiver is used to receive the visible light. The optical emission lens includes a first flood illuminator, an infrared dot-matrix projector and a reflector; the first flood illuminator is used to emit the compensating infrared light, and the infrared dot-matrix projector is used to emit the dot-matrix projection light; the reflector is arranged in the light exit path of the optical emission lens, the light exit path including a merged path, the initial exit path of the dot-matrix projection light and the initial exit path of the compensating infrared light; the initial exit paths of the dot-matrix projection light and the compensating infrared light form an included angle, and the two exit along the merged path after passing through the reflector.
An embodiment of the present application provides another photography apparatus, including: an optical emission lens for emitting infrared light toward a target object; and an optical receiving lens for receiving first reflected infrared light and visible light reflected by the target object, where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object. The optical receiving lens includes a first infrared receiver, a visible light receiver and an optical filter; the optical filter is arranged in the light incident path of the optical receiving lens and is used to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path; the first infrared receiver is used to receive the first reflected infrared light, and the visible light receiver is used to receive the visible light. The infrared light includes infrared continuously modulated pulsed light; the optical emission lens includes a second flood illuminator for emitting the infrared continuously modulated pulsed light; the second flood illuminator is located under the terminal screen, and the first infrared receiver or the visible light receiver is located under the second flood illuminator.
An embodiment of the present application further provides a photography method, including: controlling an optical emission lens to emit dot-matrix projection light and compensating infrared light toward a target object; and controlling an optical receiving lens to receive first reflected infrared light and visible light reflected by the target object, where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object. The optical receiving lens includes a first infrared receiver, a visible light receiver and an optical filter; the optical filter is arranged in the light incident path of the optical receiving lens and is used to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path; the first infrared receiver is used to receive the first reflected infrared light, and the visible light receiver is used to receive the visible light. The optical emission lens includes a first flood illuminator, an infrared dot-matrix projector and a reflector; the first flood illuminator is used to emit the compensating infrared light, and the infrared dot-matrix projector is used to emit the dot-matrix projection light; the reflector is arranged in the light exit path of the optical emission lens, the light exit path including a merged path, the initial exit path of the dot-matrix projection light and the initial exit path of the compensating infrared light; the initial exit paths form an included angle, and the two exit along the merged path after passing through the reflector.
An embodiment of the present application further provides another photography method, including: controlling an optical emission lens to emit infrared light toward a target object; and controlling an optical receiving lens to receive first reflected infrared light and visible light reflected by the target object, where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object. The optical receiving lens includes a first infrared receiver, a visible light receiver and an optical filter; the optical filter is arranged in the light incident path of the optical receiving lens and is used to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path; the first infrared receiver is used to receive the first reflected infrared light, and the visible light receiver is used to receive the visible light. The infrared light includes infrared continuously modulated pulsed light; the optical emission lens includes a second flood illuminator for emitting the infrared continuously modulated pulsed light; the second flood illuminator is located under the terminal screen, and the first infrared receiver or the visible light receiver is located under the second flood illuminator.
An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the above photography method.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above photography method.
Brief description of the drawings
FIG. 1 is a schematic structural diagram of a photography apparatus implementing structured light according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the layout of three-dimensional recognition devices on a terminal in the related art;
FIG. 3a to FIG. 3c are schematic diagrams of three device-arrangement schemes involved in three-dimensional recognition in the related art;
FIG. 4 is a schematic structural diagram of a photography apparatus implementing monocular structured light according to another embodiment of the present application;
FIG. 5 is a schematic structural diagram of a photography apparatus implementing binocular structured light according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the placement of a light-sensing device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a photography apparatus implementing TOF according to an embodiment of the present application;
FIG. 8 is a flowchart of a photography method according to an embodiment of the present application;
FIG. 9 is a flowchart of a photography method according to another embodiment of the present application;
FIG. 10 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed description
To make the objectives, technical solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in detail below with reference to the drawings. However, those of ordinary skill in the art will appreciate that many technical details are set forth in the embodiments so that readers can better understand the present application; the technical solutions claimed in the present application can nevertheless be implemented even without these technical details and with various changes and modifications based on the following embodiments. The division into the following embodiments is for convenience of description and does not limit the specific implementation of the present application in any way; the embodiments may be combined with and refer to one another where they do not contradict.
The main purpose of the embodiments of the present application is to provide a photography apparatus and method, an electronic device, and a storage medium that can reduce the abnormally displaying area of a terminal display.
An embodiment of the present application relates to a photography apparatus which, as shown in FIG. 1, may include, but is not limited to:
an optical emission lens 1100 for emitting dot-matrix projection light and compensating infrared light toward a target object; and
an optical receiving lens 1200 for receiving first reflected infrared light and visible light reflected by the target object;
where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object: the first reflected infrared light is used to construct depth information of the target object, the visible light is used to construct a two-dimensional image of the target object, and the depth information and the two-dimensional image are used to construct a three-dimensional image of the target object;
the optical receiving lens 1200 includes a first infrared receiver 1210, a visible light receiver 1220 and an optical filter 1230;
the optical filter 1230 is arranged in the light incident path of the optical receiving lens 1200 and is used to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path;
the first infrared receiver 1210 is used to receive the first reflected infrared light, and the visible light receiver 1220 is used to receive the visible light;
the optical emission lens 1100 includes a first flood illuminator 1110, an infrared dot-matrix projector 1120 and a reflector 1130; the infrared dot-matrix projector 1120 is used to emit the dot-matrix projection light, and the first flood illuminator 1110 is used to emit the compensating infrared light;
the reflector 1130 is arranged in the light exit path of the optical emission lens 1100; the light exit path includes a merged path, the initial exit path of the dot-matrix projection light and the initial exit path of the compensating infrared light; the two initial exit paths form an included angle, and the dot-matrix projection light and the compensating infrared light exit along the merged path after passing through the reflector.
The photography apparatus of this embodiment can be applied to mobile terminals such as mobile phones, tablet computers, customer premise equipment (CPE) and smart home devices, and serves various three-dimensional recognition scenarios, for example face recognition, AR, VR, three-dimensional modeling, motion-sensing games, holographic interaction, 3D beautification, remote video telephony, real-time cloud video and other applications. The optical receiving lens 1200 can serve not only as a device implementing the three-dimensional recognition function but also as a device implementing the front-camera function. The visible light receiver 1220 in the optical receiving lens 1200 can receive visible light for the mobile terminal to generate a two-dimensional image when three-dimensional recognition is performed, and can also, when the optical receiving lens 1200 acts as an ordinary front camera, receive the visible light entering the optical receiving lens 1200 for the mobile terminal to generate a captured two-dimensional image. When performing three-dimensional recognition, the mobile terminal can control the photography apparatus so that the optical emission lens 1100 emits infrared light toward the target object to be recognized, for example a face, and the optical receiving lens 1200 receives the first reflected infrared light and visible light reflected by the target object; the mobile terminal can then construct depth information of the target object from the received first reflected infrared light, construct a two-dimensional image of the target object from the received visible light, and construct a three-dimensional image of the target object from the depth information and the two-dimensional image.
As shown in FIG. 2, most manufacturers lay out all the devices involved in forward three-dimensional recognition on a mobile terminal in a preset non-display area of the display, such as the "notch" area; since many devices are involved, the notch area is large and becomes an obstacle to realizing a full-screen display. Even if the photography apparatus implementing three-dimensional recognition is placed under the screen, the many devices involved require a correspondingly large light-transmitting region on the screen, causing a large display difference across the display and degrading the user experience. Different three-dimensional recognition solutions involve different devices. As shown in FIG. 3, forward three-dimensional recognition on mobile terminals mainly uses (a) monocular structured light, (b) time of flight (TOF), or (c) binocular structured light. In the monocular structured-light scheme, an infrared dot-matrix projector a301 projects speckle or coded structured light onto the target object, complemented by a flood illuminator a302 (a low-power flood illuminator); an infrared camera a303 receives the reflected infrared light from the target object, and an RGB camera a304 receives the visible light reflected by the target object. The four devices are arranged in sequence along baseline a305 in the three-dimensional recognition area of the terminal. The processor constructs depth information of the target object from the reflected infrared light received by the infrared camera a303, constructs a two-dimensional image from the visible light received by the RGB camera a304, and constructs a three-dimensional image from the depth information and the two-dimensional image. In the TOF scheme, a flood illuminator b301 (a high-power flood illuminator) emits infrared continuously modulated pulsed light toward the target object; an infrared camera b302 receives the reflected pulsed light and an RGB camera b303 receives the reflected visible light, the three devices being arranged in sequence along baseline b304 in the three-dimensional recognition area of the terminal. The processor constructs depth information of the target object from the reflected pulsed light received by the infrared camera b302, constructs a two-dimensional image from the visible light received by the RGB camera b303, and constructs a three-dimensional image from the two. In the binocular structured-light scheme, an infrared dot-matrix projector c301 projects speckle or coded structured light onto the target object, complemented by a flood illuminator c302 (a low-power flood illuminator); infrared cameras c303 and c304 receive the reflected infrared light and an RGB camera c305 receives the reflected visible light, the five devices being arranged in sequence along baseline c306 in the three-dimensional recognition area of the terminal. The processor constructs depth information of the target object from the reflected infrared light received by infrared cameras c303 and c304, constructs a two-dimensional image from the visible light received by the RGB camera c305, and constructs a three-dimensional image from the two.
The lens projection areas of the above three-dimensional recognition devices are relatively large and usually have to be laid out in the "notch" region at the top of the display. When these devices cannot be placed under the screen, that region cannot serve as display area, so the effective display area of the screen shrinks and the overall appearance suffers to some degree. Even though mobile terminals already use under-screen cameras, that usage requires special treatment of the corresponding display region (mainly on light-transmissive OLED screens) to enhance light transmission, i.e., removing or shrinking RGB pixels to increase the amount of light passing through, so this display region always shows a somewhat different display effect from the rest. Because three-dimensional recognition involves more devices than a plain under-screen camera, under-screen three-dimensional recognition greatly enlarges the specially treated, transmission-enhanced area of the display; reducing that area therefore helps shrink the region of display difference. Under these circumstances it becomes necessary to simplify the under-screen camera and under-screen three-dimensional recognition devices, and the most direct simplification is to reduce the number of three-dimensional recognition devices placed under the screen.
In this embodiment, the photography apparatus may be configured as an under-screen camera, where the camera is arranged below the display of the mobile terminal and, during shooting, light enters the sensor only through a light-transmitting region of the terminal screen. The photography apparatus may also be configured as an ordinary front camera, with the display reserving a non-display area for it and forming a "notch" screen, "waterdrop" screen or similar screen style. The photography apparatus of this embodiment emits infrared light toward the target object with the optical emission lens and receives the reflected infrared light and visible light reflected by the target object with the optical receiving lens, obtaining the depth information and two-dimensional image used to construct the target's three-dimensional image. The optical receiving lens receives the first reflected infrared light through its internal first infrared receiver and the visible light through its internal visible light receiver, and separates the two from the incident light through the optical filter arranged in the light incident path, so that a single receiving lens receives both kinds of light simultaneously. The optical emission lens is provided with the first flood illuminator, the infrared dot-matrix projector and the reflector, and the reflector directs the mutually angled dot-matrix projection light and compensating infrared light out along the merged path, so that a single emission lens emits both beams. By multiplexing the optical receiving lens and the optical emission lens, the structured-light scheme of three-dimensional recognition is realized with fewer lenses, reducing the lenses on the display used for structured-light three-dimensional recognition from at least four to two. The display therefore only needs treatment over the area occupied by the optical emission lens and the optical receiving lens, reducing the area of the terminal display that requires abnormal-display processing, enlarging the area used for normal display, and improving the user experience.
Implementation details of the photography apparatus of this embodiment are described below. The following details are provided only for ease of understanding and are not required for implementing this solution.
As shown in FIG. 1, the optical emission lens 1100 and the optical receiving lens 1200 may be arranged below the terminal display 1010, which may consist of a touch panel (TP) 1011 and a liquid crystal display (LCD) 1012. The infrared light emitted by the optical emission lens 1100 passes through the emission light-transmitting area of the terminal display 1010 toward the target object, and the optical receiving lens 1200 receives the incident light passing through the receiving light-transmitting area of the terminal display 1010 and separates it into reflected infrared light and visible light. The optical receiving lens 1200 includes the first infrared receiver 1210, the visible light receiver 1220 and the optical filter 1230. The filter 1230 is arranged in the light incident path of the optical receiving lens 1200 at a 45-degree angle to the incident light and may be a visible/infrared dichroic filter with a special filter coating: when the incident light meets the filter 1230 at a 45-degree angle of incidence, the reflectance for the 0.3 μm to 0.6 μm visible band exceeds 90% and the transmittance for 0.75 μm to 2.5 μm near-infrared light exceeds 90%, so the reflected infrared light can penetrate the filter 1230 and reach the first infrared receiver 1210 below it, while the visible light is reflected to the visible light receiver 1220 on the separation path. The first infrared receiver 1210 includes an infrared photosensitive substrate 1211, an infrared low-pass filter 1212 and a lens group 1213: the lens group 1213 gathers the first reflected infrared light transmitted by the filter 1230 together with the small amount of visible light that leaks through, and the infrared low-pass filter 1212 blocks that small amount of visible light so that only the first reflected infrared light passes and the infrared photosensitive substrate 1211 acquires only the first reflected infrared light. The infrared photosensitive substrate 1211 carries a dedicated infrared CMOS image sensor (CIS) chip. The visible light receiver 1220 includes a visible-light photosensitive substrate 1221, an infrared cut filter 1222 and a lens group 1223: the lens group 1223 gathers the visible light reflected by the filter 1230 together with the small amount of first reflected infrared light that leaks along, and the infrared cut filter 1222 blocks that small amount of infrared light so that only visible light passes and the visible-light photosensitive substrate 1221, which also carries a dedicated CIS chip, acquires only visible light. The lens groups 1213 and 1223 may consist of wafer-level optics (WLO) lenses.
In practical applications, by changing the filtering characteristics of the visible/infrared dichroic filter, the reflected and transmitted wavebands can be swapped, which swaps the propagation paths of the first reflected infrared light and the visible light, so the visible light receiver 1220 and the infrared receiver 1210 can exchange positions.
In this embodiment, the infrared dot-matrix projector and the first flood illuminator may emit infrared light of different wavebands, limited however to different near-infrared bands: for example, one of the infrared dot-matrix projector and the first flood illuminator operates in the short-wave near infrared (780 to 1100 nm) and the other in the long-wave near infrared (1100 to 2526 nm). The distance between the optical centers of the optical emission lens and the optical receiving lens is the baseline, which may be a conventional baseline or a micro-baseline. Monocular structured light performs active three-dimensional measurement: through the baseline, the spatial position of each pixel is estimated and the distance between the object and the lens is measured, i.e., depth information is obtained. The depth range a binocular camera can measure is related to the baseline; the larger the baseline distance, the farther it can measure. A micro-baseline means a very short distance between the two lenses; although the measurable distance is short, this matches the short-range application advantage of structured light and also benefits the overall internal layout of an already space-constrained mobile terminal. The photography apparatus provides the first flood illuminator, the infrared dot-matrix projector and the reflector on the optical emission lens, and the reflector directs both the dot-matrix projection light and the compensating infrared light out along the merged path, so a single optical emission lens emits both beams, realizing the structured-light scheme of three-dimensional recognition. For convenience of drawing, the included angle formed by the dot-matrix projection light and the compensating infrared light is drawn as a right angle throughout this application.
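As a numeric aside on the baseline discussion above, the following is a minimal sketch of the standard rectified-stereo relation, depth = focal length × baseline / disparity; the focal length, baselines and disparity are illustrative assumptions, since the real device parameters are not disclosed in this application.

```python
# Sketch: depth from disparity under a rectified pinhole stereo model.
# Z = f * B / d, with f in pixels, B in metres, d in pixels.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in metres; a shorter baseline shrinks the usable range."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative comparison: a micro-baseline (5 mm) vs. a conventional one (25 mm).
for baseline_m in (0.005, 0.025):
    z = depth_from_disparity(focal_px=600.0, baseline_m=baseline_m, disparity_px=10.0)
    print(f"baseline {baseline_m * 1000:.0f} mm -> depth {z:.2f} m at 10 px disparity")
```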
Specifically, the photography apparatus can be used to implement the monocular structured-light scheme of three-dimensional recognition. The included angle formed by the initial exit paths of the dot-matrix projection light and the compensating infrared light satisfies the total-reflection condition of the reflector, and the reflector 1130 may be an infrared total-reflection lens. That is, the optical emission lens 1100 may include the first flood illuminator 1110, the infrared dot-matrix projector 1120 and an infrared total-reflection lens; the infrared dot-matrix projector 1120 emits the dot-matrix projection light and the first flood illuminator 1110 emits the compensating infrared light, and the infrared total-reflection lens, arranged in the light exit path of the optical emission lens 1100, is used to reflect and transmit the dot-matrix projection light and the compensating infrared light simultaneously so that both exit along the merged path.
In one example, the first flood illuminator 1110 may be arranged below the infrared total-reflection lens, i.e., the initial exit path of the first flood illuminator 1110 may lie on the same straight line as the merged path. The infrared total-reflection lens reflects out the dot-matrix projection light emitted by the infrared dot-matrix projector 1120 and lets the compensating infrared light emitted by the first flood illuminator 1110 pass through. The first flood illuminator 1110 may include a diffuser 1111 and a low-power vertical-cavity surface-emitting laser (VCSEL) 1112, and the infrared dot-matrix projector 1120 may include diffractive optical elements (DOE) 1121, a high-power VCSEL 1122 and a WLO 1123. To save under-screen space in the mobile terminal, the infrared dot-matrix projector 1120 and the visible light receiver 1220 on the separation path of the optical receiving lens 1200 are both located below the baseline of the optical emission lens 1100 and the optical receiving lens 1200. The baseline 1020 may be of conventional length or of micro-baseline length. With the optical emission lens 1100 and the optical receiving lens 1200, whose interiors are composite devices, in the micro-baseline state, the locally treated region of the LCD display area for enhanced light transmission can be minimized; at the same time, the composite devices of the two lenses are tightly interleaved under the micro-baseline and can be integrated and standardized, and technical parameters such as dual-camera calibration can be tuned to a basically stable state before integrated production, effectively reducing the repeated parameter-calibration workload after discrete devices are assembled into the whole machine. It can be understood that, in this embodiment, the infrared dot-matrix projector 1120 and the first flood illuminator 1110 may also exchange positions.
The infrared dot-matrix projection light may use speckle structured light or coded structured light, and the baseline length can be very small, i.e., a micro-baseline. Under passive imaging with natural illumination, the influence of various kinds of background radiation limits the imaging system's ability to measure and precisely track distant targets. Using micro-baseline active laser illumination and composite illumination to light distant, small or dark targets, or parts of them, reduces the influence of background radiation and improves the system's ability to precisely track and clearly image such targets. The working principle of an active laser illumination surveillance system is essentially the same as that of lidar: by adjusting the focus state (divergence angle) of the emitted laser beam, the whole target or its key feature parts are illuminated to satisfy the detection requirements of the receiving system, achieving imaging and precise tracking of the target.
In addition, when the first flood illuminator 1110 and the infrared dot-matrix projector 1120 work simultaneously, each may use a relatively narrow spectrum while the first infrared receiver 1210 uses a relatively wide spectrum. In that case the first flood illuminator 1110 provides fill light both in dark night conditions and under strong daylight, and through the wide-spectrum first infrared receiver 1210 the processor can simultaneously receive the structured light of the infrared dot-matrix projector and the unstructured light of the first flood illuminator reflected by the target object; after fusion computation, target recognition capability is enhanced, and the rapid dissipation of the dot-matrix diffracted light with increasing distance is effectively compensated.
The infrared total-reflection lens combines total-reflection and transmission characteristics: it lets the infrared unstructured light emitted by the first flood illuminator 1110 pass straight through while totally reflecting the infrared structured light emitted by the infrared dot-matrix projector 1120. Total reflection requires that light travel from an optically denser medium into an optically thinner one with an angle of incidence greater than or equal to the critical angle; this embodiment uniformly requires the angle to be greater than the critical angle, the critical state being a refraction angle of 90 degrees. Hence, in general, the initial exit path before total reflection and the merged path after it are not perpendicular but form an obtuse angle. In that case the infrared dot-matrix projector in FIG. 1 usually cannot lie flat horizontally but is placed obliquely, higher on the left and lower on the right; the depiction in FIG. 1 is given only for positional tidiness.
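As a quick numeric illustration of the total-reflection condition just described, the following sketch computes the critical angle from Snell's law; the refractive indices are illustrative assumptions, since the lens material is not specified in this application.

```python
import math

# Sketch: critical angle for total internal reflection, sin(theta_c) = n_thin / n_dense.
# Light must travel from the denser medium (n_dense) into the thinner one (n_thin),
# with the angle of incidence strictly greater than theta_c in this embodiment.
def critical_angle_deg(n_dense: float, n_thin: float) -> float:
    if n_dense <= n_thin:
        raise ValueError("total reflection requires n_dense > n_thin")
    return math.degrees(math.asin(n_thin / n_dense))

# Illustrative assumption: glass with n = 1.5 against air with n = 1.0.
print(f"critical angle ~ {critical_angle_deg(1.5, 1.0):.1f} degrees")  # ~41.8
```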
In this embodiment, the photography apparatus provides the first flood illuminator, the infrared dot-matrix projector and the infrared total-reflection lens on the optical emission lens, with the included angle formed by the dot-matrix projection light and the compensating infrared light satisfying the total-reflection condition of the reflector, so that the infrared total-reflection lens can send both beams out along the merged path simultaneously; a single optical emission lens thus emits both the dot-matrix projection light and the compensating infrared light, realizing the structured-light scheme of three-dimensional recognition.
In another example, as shown in FIG. 4, the reflector may instead be a movable infrared mirror 1132. That is, the optical emission lens 1100 may include the first flood illuminator 1110, the infrared dot-matrix projector 1120 and the movable infrared mirror 1132; the infrared dot-matrix projector 1120 emits the dot-matrix projection light and the first flood illuminator 1110 emits the compensating infrared light, and the movable infrared mirror 1132, arranged in the light exit path of the optical emission lens 1100, is used to control the dot-matrix projection light and the compensating infrared light to exit along the merged path in a time-division manner. The initial exit paths of the dot-matrix projection light and the compensating infrared light may be mutually perpendicular or at other angles, as long as both beams can exit along the merged path after passing the infrared mirror 1132. The first flood illuminator 1110 may be arranged below the movable infrared mirror 1132.
Since the movable infrared mirror 1132 does not transmit infrared light, the infrared dot-matrix projector and the flood illuminator cannot work at the same time; under control of the system processor, the movable infrared mirror 1132 works in time division between a tilted state and a vertical state. When the movable infrared mirror 1132 is tilted, the compensating infrared light emitted by the first flood illuminator 1110 is blocked and the infrared light projected by the infrared dot-matrix projector 1120 is reflected out; when the mirror is vertical, the light path of the infrared dot-matrix projector 1120 is blocked and the light path of the first flood illuminator 1110 is clear, so it can emit infrared light. A fairly general working mode can then be adopted: the first flood illuminator 1110 under the optical emission lens 1100 starts first, mainly for flood illumination to pre-judge the target object, projecting a light source with a relatively wide illumination angle (for example infrared) onto the surface of an object (for example a face); the optical receiving lens 1200 then receives the light reflected back from the object, and after computation by the processor and other elements it is roughly determined whether the object is a face. Once the object is determined to be a face, the infrared dot-matrix projector 1120 emits a large number of light dots (for example thousands to tens of thousands) projected onto the face, and with the optical receiving lens 1200 receiving the variations of the reflected dots, a virtual contour of the facial surface is computed to finely judge whether the detected face is the terminal's user or another authenticated person. It can be understood that the infrared dot-matrix projector 1120 and the first flood illuminator 1110 may exchange positions.
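The time-division workflow above can be summarized as a short control-flow sketch; every function name below (set_mirror, flood_capture, coarse_face_check, dot_capture, fine_face_verify) is a hypothetical stub standing in for real hardware and recognition calls, not an interface defined by this application.

```python
# Sketch of the time-division unlock flow with a movable infrared mirror.
# All helpers are hypothetical stubs for illustration only.

def set_mirror(state: str) -> None:
    # "vertical": flood illuminator path clear; "tilted": projector path reflected out
    print(f"mirror -> {state}")

def flood_capture() -> float:
    return 0.8  # stub: statistic of one wide-angle compensating-IR frame

def coarse_face_check(frame: float) -> bool:
    return frame > 0.5  # stub: rough face-presence decision

def dot_capture() -> float:
    return 0.9  # stub: statistic of the reflected dot pattern

def fine_face_verify(dots: float) -> bool:
    return dots > 0.7  # stub: fine identity verification against the enrolled user

def unlock_attempt() -> bool:
    set_mirror("vertical")                 # flood illuminator first, to pre-judge
    if not coarse_face_check(flood_capture()):
        return False                       # no face: skip structured light entirely
    set_mirror("tilted")                   # switch paths: dot-matrix projector emits
    return fine_face_verify(dot_capture())

print(unlock_attempt())
```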
In this embodiment, the photography apparatus provides the first flood illuminator, the infrared dot-matrix projector and the movable infrared mirror on the optical emission lens, and the mirror controls the mutually angled dot-matrix projection light and compensating infrared light to exit along the merged path in time division; a single optical emission lens thus emits both beams, realizing the structured-light scheme of three-dimensional recognition.
Further, as shown in FIG. 5, the photography apparatus can be used to implement the binocular structured-light scheme of three-dimensional recognition. Compared with the photography apparatus in FIG. 4, the optical emission lens 1100 implementing the binocular structured-light scheme further includes a second infrared receiver 1140 and a rotating device 1150. The second infrared receiver 1140 is used to receive second reflected infrared light reflected by the target object; the second infrared receiver 1140 is arranged on one side of the rotating device 1150, and the infrared dot-matrix projector 1120 or the first flood illuminator 1110 on the other side. After the infrared dot-matrix projector 1120 or the first flood illuminator 1110 emits the dot-matrix projection light or the compensating infrared light, the rotating device 1150 rotates to the other side so that the second infrared receiver 1140 receives the second reflected infrared light; infrared light is thus emitted toward the target object and the second reflected infrared light reflected by the target object is received in a time-division manner.
When the first flood illuminator 1110 works, only the first infrared receiver 1210 of the optical receiving lens 1200 receives infrared light. After the infrared dot-matrix projector 1120 emits the dot-matrix projection light, the processor controls the rotating device 1150 to rotate, turning the originally upward-facing first flood illuminator 1110 downward and the originally downward-facing second infrared receiver 1140 upward, so that the first infrared receiver 1210 and the second infrared receiver 1140 simultaneously receive the first reflected infrared light and the second reflected infrared light from the target object; the system processor then computes and fuses the target data received by the two infrared receivers to achieve three-dimensional recognition.
The infrared light emitted by the first flood illuminator 1110 and received by the second infrared receiver 1140 can both pass through the infrared total-reflection lens 1131, because in both directions the angle of incidence falls far short of the total-reflection critical angle, so ideal infrared transmission results. The binocular structured-light scheme can therefore also be applied on the basis of the photography apparatus shown in FIG. 1.
Mobile terminals use binocular structured light mainly because the coded speckle emitted by the dot-matrix projector of monocular structured light is easily drowned out by sunlight. Binocular structured light can measure depth information with structured light indoors and, when outdoor illumination defeats the structured light, switch to a purely binocular mode, improving reliability and interference resistance.
In this embodiment, the photography apparatus provides the first flood illuminator, the infrared dot-matrix projector and the rotating device on the optical emission lens; after the infrared dot-matrix projector or the first flood illuminator emits the dot-matrix projection light or the compensating infrared light, the rotating device turns to the other side so that the second infrared receiver receives the second reflected infrared light. Infrared light can thus be emitted toward the target object and the second reflected infrared light received in time division; that is, both the optical emission lens and the optical receiving lens can be used to receive the first and second reflected infrared light reflected by the target object, so the binocular structured-light scheme of three-dimensional recognition can be realized.
In one example, the photography apparatus further includes a light-sensing device located in a first non-display region 601 between the light-transmitting area of the optical emission lens and the edge of the terminal screen, as shown in FIG. 6, or in a second non-display region 602 between the light-transmitting area of the optical receiving lens and the edge of the terminal screen. The extremely narrow non-display strip 600 at the top of the display can thus host a proximity sensor, an ambient light sensor, and a distance sensor used for structured-light target pre-judgment, for example a small one-dimensional TOF sensor.
By placing distance sensors for target pre-judgment in the first non-display region and the second non-display region, the photography apparatus of this embodiment can further improve the accuracy of three-dimensional recognition and enhance the user experience.
An embodiment of the present application relates to a photography apparatus for implementing the TOF scheme of three-dimensional recognition, as shown in FIG. 7, including:
an optical emission lens 7100 for emitting infrared light toward a target object; and
an optical receiving lens 7200 for receiving first reflected infrared light and visible light reflected by the target object;
where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object: the first reflected infrared light is used to construct depth information of the target object, the visible light is used to construct a two-dimensional image of the target object, and the depth information and the two-dimensional image are used to construct a three-dimensional image of the target object;
the optical receiving lens 7200 includes a first infrared receiver 7210, a visible light receiver 7220 and an optical filter 7230;
the optical filter 7230 is arranged in the light incident path of the optical receiving lens 7200 and is used to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path;
the first infrared receiver 7210 is used to receive the first reflected infrared light, and the visible light receiver 7220 is used to receive the visible light.
The infrared light may be infrared continuously modulated pulsed light, and the optical emission lens 7100 may be a second flood illuminator 7110 for emitting the infrared continuously modulated pulsed light. The second flood illuminator 7110 is located under the terminal screen, and the first infrared receiver 7210 or the visible light receiver 7220 is located under the second flood illuminator 7110. Specifically, when the optical filter 7230 reflects the visible light into the separation path, the visible light receiver 7220 is located under the second flood illuminator 7110; when the optical filter 7230 reflects the first reflected infrared light into the separation path, the first infrared receiver 7210 is located under the second flood illuminator 7110. The second flood illuminator 7110 includes a diffuser 7111 and a high-power VCSEL 7112; the high-power VCSEL 7112 is used to emit the infrared continuously modulated pulsed light, and the diffuser 7111 is used to spread that light toward the target object. The first infrared receiver 7210 of the TOF scheme is internally more complex and has higher performance requirements. TOF uses area illumination and can form a three-dimensional image in a single shot, and the baseline between the optical emission lens and the optical receiving lens can be zero, so the overall structure is more compact and the specially treated region of the terminal display for enhanced light transmission can be minimized.
Specifically, TOF divides into iTOF (indirect TOF) and dTOF (direct TOF). The former uses a VCSEL to emit infrared continuously modulated pulsed light and receive the infrared light reflected by the target, performing homodyne demodulation to measure the phase shift of the reflected infrared light, from which the time of flight of the light is computed indirectly and the target depth pre-judged. The latter, also called Light Detection and Ranging (LiDAR), has as core components a VCSEL, a single-photon avalanche diode (SPAD) and a time-to-digital converter (TDC): the VCSEL emits pulse waves into the scene, the SPAD receives the pulse waves reflected back from the target object, and the TDC records the flight time of each received optical signal, i.e., the time interval between the emitted and received pulses. The long detection range and high-precision imaging characteristics of dToF lidar enable better night photography, video and AR experiences. Under-screen forward TOF applications on ordinary flat-panel mobile terminals are still mainly iTOF, but for irregular form factors such as folding screens, where under-screen use switches back and forth between front-facing and rear-facing scenarios, dTOF achieves more ideal results.
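The indirect measurement described above can be made concrete with a minimal sketch of the common four-phase (0/90/180/270 degree) demodulation; the modulation frequency, the sample values and the sign convention are illustrative assumptions, and real sensors differ in convention and calibration.

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Sketch: iTOF depth from four correlation samples c0, c90, c180, c270.
# phase = atan2(c270 - c90, c0 - c180); depth = C * phase / (4 * pi * f_mod).
def itof_depth(c0: float, c90: float, c180: float, c270: float, f_mod_hz: float) -> float:
    phase = math.atan2(c270 - c90, c0 - c180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod_hz)

# Illustrative assumption: 100 MHz modulation; samples chosen so the recovered
# phase is 120 degrees, i.e., a target at roughly 0.5 m.
print(f"{itof_depth(0.25, 0.067, 0.75, 0.933, 100e6):.3f} m")
```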
The photography apparatus of this embodiment emits infrared light toward the target object through the optical emission lens and receives the reflected infrared light and visible light reflected by the target object through the optical receiving lens, thereby obtaining the depth information and two-dimensional image used to construct the target's three-dimensional image. The optical receiving lens receives the first reflected infrared light with its internal first infrared receiver and the visible light with its internal visible light receiver, and separates the first reflected infrared light and the visible light from the incident light with the optical filter arranged in the light incident path, so that a single optical receiving lens receives both kinds of light simultaneously. The second flood illuminator emits infrared continuously modulated pulsed light; the target object reflects that light and other light; the first infrared receiver receives the reflected infrared light and the visible light receiver receives the reflected visible light, so the mobile terminal obtains both and constructs the target's three-dimensional image with the TOF scheme. By multiplexing the optical receiving lens, the number of lenses needed for three-dimensional recognition is reduced: the lenses on the display used for TOF three-dimensional recognition shrink from at least three to two, so the display only needs treatment over the area occupied by the optical emission lens and the optical receiving lens, reducing the area of the terminal display that requires abnormal-display processing, enlarging the area used for normal display, and improving the user experience. Meanwhile, since the second flood illuminator is located under the terminal screen and the first infrared receiver or the visible light receiver is located under the second flood illuminator, i.e., they are stacked vertically, the utilization of under-screen space increases.
An embodiment of the present application further relates to a photography method which, as shown in FIG. 8, includes the following steps:
Step 801: controlling an optical emission lens to emit dot-matrix projection light and compensating infrared light toward a target object;
Step 802: controlling an optical receiving lens to receive first reflected infrared light and visible light reflected by the target object;
where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object: the first reflected infrared light is used to construct depth information of the target object, the visible light is used to construct a two-dimensional image of the target object, and the depth information and the two-dimensional image are used to construct a three-dimensional image of the target object;
the optical receiving lens includes a first infrared receiver, a visible light receiver and an optical filter;
the optical filter is arranged in the light incident path of the optical receiving lens and is used to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path;
the first infrared receiver is used to receive the first reflected infrared light, and the visible light receiver is used to receive the visible light;
the optical emission lens includes a first flood illuminator, an infrared dot-matrix projector and a reflector; the first flood illuminator is used to emit the compensating infrared light, and the infrared dot-matrix projector is used to emit the dot-matrix projection light;
the reflector is arranged in the light exit path of the optical emission lens; the light exit path includes a merged path, the initial exit path of the dot-matrix projection light and the initial exit path of the compensating infrared light; the initial exit paths form an included angle, and the dot-matrix projection light and the compensating infrared light exit along the merged path after passing through the reflector.
An embodiment of the present application further relates to another photography method which, as shown in FIG. 9, includes the following steps:
Step 901: controlling an optical emission lens to emit infrared light toward a target object;
Step 902: controlling an optical receiving lens to receive first reflected infrared light and visible light reflected by the target object;
where the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object: the first reflected infrared light is used to construct depth information of the target object, the visible light is used to construct a two-dimensional image of the target object, and the depth information and the two-dimensional image are used to construct a three-dimensional image of the target object;
the optical receiving lens includes a first infrared receiver, a visible light receiver and an optical filter;
the optical filter is arranged in the light incident path of the optical receiving lens and is used to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path;
the first infrared receiver is used to receive the first reflected infrared light, and the visible light receiver is used to receive the visible light;
the infrared light includes infrared continuously modulated pulsed light;
the optical emission lens includes a second flood illuminator for emitting the infrared continuously modulated pulsed light;
the second flood illuminator is located under the terminal screen, and the first infrared receiver or the visible light receiver is located under the second flood illuminator.
In one example, the photography method is applied to three-dimensional recognition: the first infrared receiver uses a relatively wide spectrum to receive the two relatively narrow-spectrum beams of different bands sent simultaneously by the flood illuminator and the dot-matrix projector, so that the dual-path light is effectively decomposed and transformed into coordinate-registered three-dimensional and two-dimensional point clouds that are finally fused to achieve three-dimensional image enhancement. The specific flow is as follows:
In the first step, the processor separates the photoelectrically converted signal of the flood illuminator's first infrared band light, reflected by the target object and received by the infrared receiver, from the photoelectrically converted signal of the dot-matrix projector's second infrared band light; this process can be realized by loading an AI engine for clustering analysis (a minimal clustering sketch is given after these steps);
In the second step, the two separated kinds of photoelectric data are preprocessed respectively to achieve data filtering and compression;
In the third step, stereo registration is performed on the two-dimensional image data of the reflected light originating from the flood illuminator and the three-dimensional image data of the reflected light originating from the dot-matrix projector (the pixel coordinates, image coordinates, camera coordinates and world coordinates of the two images are calibrated, so that the superimposed two-dimensional and three-dimensional points in real three-dimensional space are mapped onto the two-dimensional imaging plane); key feature points are found, the two data sets are matched into the same coordinate system, and the spatial coordinate relationship between corresponding points in the two kinds of images is determined;
In the fourth step, point cloud data that is easy to store and process is formed, used to extend high-dimensional feature information.
In the fifth step, the AI engine is loaded again and deep learning is used to classify and segment the three-dimensional point cloud (a cross transformation is learned from the input points and then used to simultaneously weight the input features associated with the points and rearrange them into a latent, implicitly canonical order, after which element-wise product and sum operations are applied);
In the sixth step, three-dimensional image recognition or reconstruction is realized.
As the above steps show, the two-dimensional image data of the flood illuminator can effectively compensate for the insufficiency of the three-dimensional image data of the dot-matrix projection, enhancing the three-dimensional recognition effect and better supporting secure user unlocking and payment under strong or dim light, as well as game modeling, virtual reality and augmented reality.
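As flagged in the first step above, the following is a minimal sketch of how the two band responses might be separated by clustering: a tiny k-means (k = 2) over synthetic per-pixel intensity pairs. The data and the choice of plain k-means are assumptions for illustration; the application does not specify how its "AI engine" performs the split.

```python
import numpy as np

# Sketch: split per-pixel responses into two wavelength clusters with k-means.
# The synthetic samples stand in for photoelectrically converted signals.
rng = np.random.default_rng(0)
band_a = rng.normal(loc=[0.8, 0.2], scale=0.05, size=(200, 2))  # flood-like
band_b = rng.normal(loc=[0.2, 0.9], scale=0.05, size=(200, 2))  # dot-matrix-like
samples = np.vstack([band_a, band_b])

centers = samples[rng.choice(len(samples), size=2, replace=False)]
for _ in range(20):  # Lloyd iterations
    labels = np.argmin(((samples[:, None, :] - centers) ** 2).sum(axis=-1), axis=1)
    centers = np.array([samples[labels == k].mean(axis=0) for k in range(2)])

print("cluster sizes:", np.bincount(labels))
print("cluster centers:\n", centers)
```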
An embodiment of the present application further relates to an electronic device which, as shown in FIG. 10, includes: at least one processor 1001; and a memory 1002 communicatively connected to the at least one processor; the memory 1002 stores instructions executable by the at least one processor 1001, and the instructions are executed by the at least one processor 1001 so that the at least one processor can perform the above photography method.
The memory 1002 and the processor 1001 are connected by a bus. The bus may include any number of interconnected buses and bridges and links together the various circuits of the one or more processors 1001 and the memory 1002. The bus may also link together various other circuits such as peripherals, voltage regulators and power-management circuits, all of which are well known in the art and therefore not described further here. A bus interface provides an interface between the bus and the transceiver. The transceiver may be a single element or multiple elements, for example multiple receivers and transmitters, providing a unit for communicating with various other apparatuses over a transmission medium. Information processed by the processor 1001 is transmitted over a wireless medium through an antenna; the antenna also receives information and passes it to the processor 1001.
The processor 1001 is responsible for managing the bus and general processing and may also provide various functions, including timing, peripheral interfacing, voltage regulation, power management and other control functions, while the memory 1002 may be used to store information used by the processor in performing operations.
An embodiment of the present application further relates to a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the above method embodiments are implemented.
That is, those skilled in the art will understand that all or part of the steps of the methods of the above embodiments can be completed by a program instructing the relevant hardware; the program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those of ordinary skill in the art will understand that the above embodiments are specific embodiments for realizing the present application, and that in practical application various changes in form and detail may be made to them without departing from the spirit and scope of the present application.

Claims (10)

  1. A photography apparatus, comprising:
    an optical emission lens, configured to emit dot-matrix projection light and compensating infrared light toward a target object; and
    an optical receiving lens, configured to receive first reflected infrared light and visible light reflected by the target object;
    wherein the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object;
    the optical receiving lens comprises a first infrared receiver, a visible light receiver and an optical filter;
    the optical filter is arranged in the light incident path of the optical receiving lens and is configured to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path;
    the first infrared receiver is configured to receive the first reflected infrared light, and the visible light receiver is configured to receive the visible light;
    the optical emission lens comprises a first flood illuminator, an infrared dot-matrix projector and a reflector; the first flood illuminator is configured to emit the compensating infrared light, and the infrared dot-matrix projector is configured to emit the dot-matrix projection light; and
    the reflector is arranged in the light exit path of the optical emission lens, the light exit path comprising a merged path, the initial exit path of the dot-matrix projection light and the initial exit path of the compensating infrared light; the initial exit paths of the dot-matrix projection light and the compensating infrared light form an included angle, and the dot-matrix projection light and the compensating infrared light exit along the merged path after passing through the reflector.
  2. The photography apparatus according to claim 1, wherein the included angle satisfies a total-reflection condition of the reflector;
    the reflector comprises an infrared total-reflection lens, and the infrared total-reflection lens is configured to simultaneously reflect and transmit the dot-matrix projection light and the compensating infrared light so that they exit along the merged path.
  3. The photography apparatus according to claim 1 or 2, wherein the reflector comprises a movable infrared mirror, and the movable infrared mirror is configured to control the dot-matrix projection light and the compensating infrared light to exit along the merged path in a time-division manner.
  4. The photography apparatus according to any one of claims 1 to 3, wherein the optical emission lens further comprises: a second infrared receiver and a rotating device; the second infrared receiver is configured to receive second reflected infrared light reflected by the target object;
    the second infrared receiver is arranged on one side of the rotating device, and the infrared dot-matrix projector or the first flood illuminator is arranged on the other side; the rotating device is configured to rotate to the other side after the infrared dot-matrix projector or the first flood illuminator emits the dot-matrix projection light or the compensating infrared light, so that the second infrared receiver receives the second reflected infrared light.
  5. The photography apparatus according to any one of claims 1 to 4, wherein the photography apparatus further comprises: a light-sensing device located in a first non-display region between the light-transmitting area of the optical emission lens and the edge of the terminal screen, or in a second non-display region between the light-transmitting area of the optical receiving lens and the edge of the terminal screen.
  6. A photography apparatus, comprising:
    an optical emission lens, configured to emit infrared light toward a target object; and
    an optical receiving lens, configured to receive first reflected infrared light and visible light reflected by the target object;
    wherein the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object;
    the optical receiving lens comprises a first infrared receiver, a visible light receiver and an optical filter;
    the optical filter is arranged in the light incident path of the optical receiving lens and is configured to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path;
    the first infrared receiver is configured to receive the first reflected infrared light, and the visible light receiver is configured to receive the visible light;
    the infrared light comprises infrared continuously modulated pulsed light;
    the optical emission lens comprises a second flood illuminator configured to emit the infrared continuously modulated pulsed light; and
    the second flood illuminator is located under a terminal screen, and the first infrared receiver or the visible light receiver is located under the second flood illuminator.
  7. A photography method, comprising:
    controlling an optical emission lens to emit dot-matrix projection light and compensating infrared light toward a target object; and
    controlling an optical receiving lens to receive first reflected infrared light and visible light reflected by the target object;
    wherein the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object;
    the optical receiving lens comprises a first infrared receiver, a visible light receiver and an optical filter;
    the optical filter is arranged in the light incident path of the optical receiving lens and is configured to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path;
    the first infrared receiver is configured to receive the first reflected infrared light, and the visible light receiver is configured to receive the visible light;
    the optical emission lens comprises a first flood illuminator, an infrared dot-matrix projector and a reflector; the first flood illuminator is configured to emit the compensating infrared light, and the infrared dot-matrix projector is configured to emit the dot-matrix projection light; and
    the reflector is arranged in the light exit path of the optical emission lens, the light exit path comprising a merged path, the initial exit path of the dot-matrix projection light and the initial exit path of the compensating infrared light; the initial exit paths of the dot-matrix projection light and the compensating infrared light form an included angle, and the dot-matrix projection light and the compensating infrared light exit along the merged path after passing through the reflector.
  8. A photography method, comprising:
    controlling an optical emission lens to emit infrared light toward a target object; and
    controlling an optical receiving lens to receive first reflected infrared light and visible light reflected by the target object;
    wherein the visible light and the first reflected infrared light are used for three-dimensional recognition of the target object;
    the optical receiving lens comprises a first infrared receiver, a visible light receiver and an optical filter;
    the optical filter is arranged in the light incident path of the optical receiving lens and is configured to filter and separate the incident light into the first reflected infrared light and the visible light propagating along the light incident path and along a separation path, the separation path being perpendicular to the light incident path;
    the first infrared receiver is configured to receive the first reflected infrared light, and the visible light receiver is configured to receive the visible light;
    the infrared light comprises infrared continuously modulated pulsed light;
    the optical emission lens comprises a second flood illuminator configured to emit the infrared continuously modulated pulsed light; and
    the second flood illuminator is located under a terminal screen, and the first infrared receiver or the visible light receiver is located under the second flood illuminator.
  9. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor;
    wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the photography method according to claim 7 or 8.
  10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the photography method according to claim 7 or 8.
PCT/CN2022/087239 2021-06-21 2022-04-15 Photography apparatus and method, electronic device and storage medium WO2022267645A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP22827147.4A EP4344186A4 (en) 2021-06-21 2022-04-15 PHOTOGRAPHY APPARATUS AND METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM
JP2023566723A JP2024524813A (ja) 2021-06-21 2022-04-15 Imaging apparatus, method, electronic device, and computer program
US18/393,437 US20240127566A1 (en) 2021-06-21 2023-12-21 Photography apparatus and method, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110686374.3 2021-06-21
CN202110686374.3A CN115580766A (zh) 2021-06-21 2021-06-21 Photography apparatus and method, electronic device and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/393,437 Continuation US20240127566A1 (en) 2021-06-21 2023-12-21 Photography apparatus and method, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022267645A1 true WO2022267645A1 (zh) 2022-12-29

Family

ID=84545208

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/087239 WO2022267645A1 (zh) 2021-06-21 2022-04-15 Photography apparatus and method, electronic device and storage medium

Country Status (5)

Country Link
US (1) US20240127566A1 (zh)
EP (1) EP4344186A4 (zh)
JP (1) JP2024524813A (zh)
CN (1) CN115580766A (zh)
WO (1) WO2022267645A1 (zh)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10204262B2 (en) * 2017-01-11 2019-02-12 Microsoft Technology Licensing, Llc Infrared imaging recognition enhanced by 3D verification
CN111083453B (zh) * 2018-10-18 2023-01-31 中兴通讯股份有限公司 Projection apparatus and method, and computer-readable storage medium
CN111190323B (zh) * 2018-11-15 2022-05-13 中兴通讯股份有限公司 Projector and terminal

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6556791B1 (en) * 1999-12-21 2003-04-29 Eastman Kodak Company Dual channel optical imaging system
JP2006301149A (ja) * 2005-04-19 2006-11-02 Nikon Corp Single-lens reflex electronic camera
US20100328780A1 (en) * 2008-03-28 2010-12-30 Contrast Optical Design And Engineering, Inc. Whole Beam Image Splitting System
CN104748721A (zh) * 2015-03-22 2015-07-01 上海砺晟光电技术有限公司 Monocular vision sensor with coaxial ranging function
CN108040243A (zh) * 2017-12-04 2018-05-15 南京航空航天大学 Multispectral stereoscopic-vision endoscope apparatus and image fusion method
CN207751449U (zh) * 2018-01-11 2018-08-21 苏州江奥光电科技有限公司 Monocular depth camera based on field-of-view matching

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4344186A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117354409A (zh) * 2023-12-04 2024-01-05 深圳市华维诺电子有限公司 Under-screen camera assembly and mobile phone
CN117354409B (zh) * 2023-12-04 2024-02-02 深圳市华维诺电子有限公司 Under-screen camera assembly and mobile phone

Also Published As

Publication number Publication date
EP4344186A1 (en) 2024-03-27
US20240127566A1 (en) 2024-04-18
CN115580766A (zh) 2023-01-06
EP4344186A4 (en) 2024-08-21
JP2024524813A (ja) 2024-07-09

Similar Documents

Publication Publication Date Title
WO2020057205A1 (zh) Under-screen optical system, design method of diffractive optical element, and electronic device
WO2020057208A1 (zh) Electronic device
EP3660575B1 (en) Eye tracking system and eye tracking method
US20200409163A1 (en) Compensating display screen, under-screen optical system and electronic device
US10877281B2 (en) Compact optical system with MEMS scanners for image generation and object tracking
US10983340B2 (en) Holographic waveguide optical tracker
CN105629474B (zh) Near-eye display system and head-mounted display device
WO2020057207A1 (zh) Electronic device
US20080198459A1 (en) Conjugate optics projection display system and method having improved resolution
CN111083453B (zh) Projection apparatus and method, and computer-readable storage medium
US10832052B2 (en) IR illumination module for MEMS-based eye tracking
KR20150086388A (ko) Human-triggered holographic reminder
US20240127566A1 (en) Photography apparatus and method, electronic device, and storage medium
WO2020057206A1 (zh) Under-screen optical system and electronic device
EP3935437B1 (en) Ir illumination module for mems-based eye tracking
US10838489B2 (en) IR illumination module for MEMS-based eye tracking
WO2021196976A1 (zh) Light emitting apparatus and electronic device
EP4443379A1 (en) Three-dimensional recognition apparatus, terminal, image enhancement method and storage medium
CN111024626B (zh) Light source module, imaging apparatus, and electronic device
CN215499049U (zh) Display apparatus with 3D camera module, and electronic device
CN215499363U (zh) Display apparatus with 3D camera module, and electronic device
EP4053588A1 (en) Optical sensing system
EP4047387A1 (en) Optical sensing system
KR20230079618A (ko) Method and apparatus for three-dimensional modeling of a human body

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22827147

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023566723

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2022827147

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022827147

Country of ref document: EP

Effective date: 20231221

NENP Non-entry into the national phase

Ref country code: DE