WO2024016478A1 - 3D sensing module, 3D sensing method, and electronic device - Google Patents

3D sensing module, 3D sensing method, and electronic device

Info

Publication number
WO2024016478A1
Authority
WO
WIPO (PCT)
Prior art keywords
light source
light
outgoing
modulation
sensing module
Prior art date
Application number
PCT/CN2022/122365
Other languages
English (en)
French (fr)
Inventor
刘欣
郑德金
黄泽铗
Original Assignee
奥比中光科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奥比中光科技集团股份有限公司
Publication of WO2024016478A1 publication Critical patent/WO2024016478A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481 Constructional features, e.g. arrangements of optical elements
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481 Constructional features, e.g. arrangements of optical elements
    • G01S7/4814 Constructional features, e.g. arrangements of optical elements of transmitters alone
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481 Constructional features, e.g. arrangements of optical elements
    • G01S7/4816 Constructional features, e.g. arrangements of optical elements of receivers alone

Definitions

  • This application relates to the field of optical imaging technology, and in particular to a 3D sensing module, a 3D sensing method, and an electronic device.
  • The proximity sensor is an indispensable part of an electronic device. It is mainly used to detect the distance between the electronic device and a target object and, based on the detection result, to conclude that the target object is approaching, moving away, appearing, or disappearing. This can effectively refine the overall operating logic of the electronic device and thus greatly improve the user experience.
  • In recent years, with the growing popularity of a new payment method, namely face-scan payment, traditional 2D face ID (i.e., two-dimensional face authentication) can no longer meet users' needs, so 3D face ID (i.e., three-dimensional face authentication) has come into wide use in electronic devices, its most common embodiment being the 3D sensing module.
  • In practice, an electronic device is provided with a camera device that offers functions such as photographing, video recording, and face recognition, and both the proximity sensor and the 3D sensing module mentioned above are parts of that camera device.
  • In the related art, although the proximity sensor and the 3D sensing module are both indispensable components of the camera device (which is located inside the electronic device and offers functions such as photographing, video recording, and face recognition), they are relatively independent of each other; that is, the proximity sensor and the 3D sensing module are arranged separately, which results in a camera device with a larger volume and mass and a higher manufacturing cost, and also runs counter to the trend toward thinner and lighter electronic devices.
  • This application provides a 3D sensing module, a 3D sensing method, and an electronic device, aiming to solve the problems in the related art that the camera device has a large volume and mass and a high manufacturing cost.
  • A first aspect of the embodiments of the present application provides a 3D sensing module, including a transmitting end and a receiving end. The transmitting end is used to emit a first outgoing beam in a depth imaging mode, or to emit a second outgoing beam in a proximity sensing mode, where the first outgoing beam, after being reflected by a target object, produces a corresponding first reflected beam, the second outgoing beam, after being reflected by the target object, produces a corresponding second reflected beam, and the field of view of the first outgoing beam is larger than that of the second outgoing beam. The receiving end is used to turn on all pixels in the depth imaging mode to form a first detection channel with the transmitting end, so as to receive the first reflected beam and generate a depth image of the target object, or to turn on some pixels in the proximity sensing mode to form a second detection channel with the transmitting end, so as to receive the second reflected beam and generate distance information between the 3D sensing module and the target object.
  • A second aspect of the embodiments of the present application provides a 3D sensing method, applied to a 3D sensing module. The 3D sensing method includes: controlling the transmitting end to emit a first outgoing beam in a depth imaging mode, or to emit a second outgoing beam in a proximity sensing mode, where the first outgoing beam, after being reflected by a target object, produces a corresponding first reflected beam, the second outgoing beam, after being reflected by the target object, produces a corresponding second reflected beam, and the field of view of the first outgoing beam is larger than that of the second outgoing beam; and controlling the receiving end to turn on all pixels in the depth imaging mode to form a first detection channel with the transmitting end, so as to receive the first reflected beam and generate a depth image of the target object, or to turn on some pixels in the proximity sensing mode to form a second detection channel with the transmitting end, so as to receive the second reflected beam and generate distance information between the 3D sensing module and the target object.
  • A third aspect of the embodiments of the present application provides an electronic device, including the 3D sensing module described in the first aspect of the embodiments of the present application.
  • The beneficial effects of this application are as follows: a 3D sensing module is formed from a transmitting end and a receiving end, and two working modes are provided for it, namely a depth imaging mode and a proximity sensing mode. On this basis, when a depth image of the target object is needed, the 3D sensing module can be switched to the depth imaging mode; in this mode, the transmitting end emits the first outgoing beam toward the target object, and the receiving end then turns on all pixels to form a first detection channel with the transmitting end, so as to receive the first reflected beam produced by the target object reflecting the first outgoing beam and to generate a depth image of the target object. When distance information between the 3D sensing module and the target object is needed, the module can be switched to the proximity sensing mode; in this mode, the transmitting end emits the second outgoing beam toward the target object, and the receiving end then turns on some pixels to form a second detection channel with the transmitting end, so as to receive the second reflected beam produced by the target object reflecting the second outgoing beam and to generate the distance information between the 3D sensing module and the target object.
  • The 3D sensing module provided by this application can therefore obtain both the depth image of the target object and the distance information between the 3D sensing module and the target object, which is equivalent to integrating the 3D sensing module and the proximity sensor of a traditional camera device so that they share the same transmitting end and the same receiving end. A camera device then no longer needs to be built from a separate proximity sensor and a separate 3D sensing module; it suffices to provide in the camera device a single 3D sensing module as provided by this application, which effectively reduces the volume and mass of the camera device and greatly saves manufacturing cost.
  • Figure 1 is a schematic structural diagram of a 3D sensing module provided by an embodiment of the present application.
  • Figure 2 is a first structural schematic diagram of a transmitting end provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of the modulation areas of a light modulation device provided by an embodiment of the present application.
  • Figure 4 is a first schematic diagram of the first modulation area provided by an embodiment of the present application.
  • Figure 5 is a second schematic diagram of the first modulation area provided by an embodiment of the present application.
  • Figure 6 is a second structural schematic diagram of a transmitting end provided by an embodiment of the present application.
  • Figure 7 is a third structural schematic diagram of a transmitting end provided by an embodiment of the present application.
  • Figure 8 is a schematic flowchart of a 3D sensing method provided by an embodiment of the present application.
  • The proximity sensor is an indispensable part of an electronic device. It is mainly used to detect the distance between the electronic device and a target object and, based on the detection result, to conclude that the target object is approaching, moving away, appearing, or disappearing. This can effectively refine the overall operating logic of the electronic device and thus greatly improve the user experience.
  • In recent years, with the growing popularity of a new payment method, namely face-scan payment, traditional 2D face ID (i.e., two-dimensional face authentication) can no longer meet users' needs, so 3D face ID (i.e., three-dimensional face authentication) has come into wide use in electronic devices, its most common embodiment being the 3D sensing module.
  • In practice, an electronic device is provided with a camera device that offers functions such as photographing, video recording, and face recognition, and both the proximity sensor and the 3D sensing module mentioned above are parts of that camera device.
  • In the related art, although the proximity sensor and the 3D sensing module are both indispensable components of the camera device, they are relatively independent of each other; that is, the proximity sensor and the 3D sensing module are arranged separately, which results in a camera device with a larger volume and mass and a higher manufacturing cost, and also runs counter to the trend toward thinner and lighter electronic devices.
  • To this end, the embodiments of the present application provide a 3D sensing module for use in electronic devices, where the electronic devices may include, but are not limited to, mobile phones, notebooks, tablet computers, POS terminals, vehicle-mounted computers, and smart wearable devices.
  • FIG. 1 is a schematic structural diagram of a 3D sensing module provided by an embodiment of the present application.
  • The 3D sensing module provided by the embodiment of the present application includes a transmitting end 10 and a receiving end 20.
  • The transmitting end 10 is used to emit a first outgoing beam toward the target object in the depth imaging mode (a in Figure 1), or to emit a second outgoing beam toward the target object in the proximity sensing mode (c in Figure 1). After being reflected by the target object, the first outgoing beam produces a corresponding first reflected beam (b in Figure 1), and the second outgoing beam produces a corresponding second reflected beam (d in Figure 1). The field of view of the first outgoing beam is larger than that of the second outgoing beam (in Figure 1, the field of view of a is clearly larger than that of c).
  • The receiving end 20 is used to turn on all pixels in the depth imaging mode to form a first detection channel with the transmitting end 10, so as to receive the first reflected beam and generate a depth image of the target object; or to turn on some pixels in the proximity sensing mode to form a second detection channel with the transmitting end 10, so as to receive the second reflected beam and generate distance information between the 3D sensing module and the target object.
  • In some embodiments, the 3D sensing module also includes a control and processing circuit, which is used to control the transmitting end 10 to emit the first or second outgoing beam in the depth imaging mode or the proximity sensing mode, and to control the receiving end 20 to turn on all or some of the pixels to collect the first or second reflected beam reflected by the target object and generate a depth image or distance information.
  • Since the first outgoing beam is used for 3D perception of the target object while the second outgoing beam is used for close-range sensing, the field of view of the first outgoing beam should be larger than that of the second outgoing beam.
  • "The transmitting end 10 emits the first outgoing beam" and "the transmitting end 10 emits the second outgoing beam" are alternatives, as are "the receiving end 20 generates a depth image of the target object" and "the receiving end 20 generates distance information between the 3D sensing module and the target object". This means that the depth imaging mode and the proximity sensing mode of the 3D sensing module in this embodiment do not overlap in time: the module is either in the depth imaging mode, performing 3D perception of the target object, or in the proximity sensing mode, performing distance sensing of the target object, and it cannot be in both modes at the same time.
  • In practical applications, when a depth image of the target object is needed, the 3D sensing module can be switched to the depth imaging mode; in this mode, the transmitting end 10 emits the first outgoing beam toward the target object, and the receiving end 20 then turns on all pixels to form a first detection channel with the transmitting end 10, so as to receive the first reflected beam produced by the target object reflecting the first outgoing beam and to generate a depth image of the target object. When distance information between the 3D sensing module and the target object is needed, the module can be switched to the proximity sensing mode; in this mode, the transmitting end 10 emits the second outgoing beam toward the target object, and the receiving end 20 then turns on some pixels to form a second detection channel with the transmitting end 10, so as to receive the second reflected beam produced by the target object reflecting the second outgoing beam and to generate the distance information between the 3D sensing module and the target object.
  • As an example, when the 3D sensing module is applied to a mobile phone, the module can first be switched to the proximity sensing mode to sense the distance to the phone user (i.e., the target object); once the user is sensed to be close enough (i.e., the distance between the 3D sensing module and the user is less than or equal to a preset distance), the module is switched to the depth imaging mode to perform 3D perception of the user and obtain a depth image of the user (for example, a depth image of the user's face for face recognition).
  • As can be seen from the above, this embodiment forms a 3D sensing module from the transmitting end 10 and the receiving end 20 and provides two working modes for it, namely the depth imaging mode and the proximity sensing mode; the depth imaging mode is used for 3D perception of the target object, and the proximity sensing mode is used for close-range sensing of the target object.
  • This embodiment can obtain both the depth image of the target object and the distance information between the 3D sensing module and the target object, which is equivalent to integrating the 3D sensing module and the proximity sensor of a traditional camera device so that they share the same transmitting end 10 and the same receiving end 20.
  • FIG. 2 is a first structural schematic diagram of a transmitting end provided by an embodiment of the present application.
  • The transmitting end 10 may include a first light source 11, a second light source 12, a light modulation device 13, and a substrate 14, where the first light source 11 and the second light source 12 are both disposed on the substrate 14 and spaced apart from each other, and the light modulation device 13 is mounted over the substrate 14 on the optical paths of the first light source 11 and the second light source 12.
  • The substrate 14 carries and protects the optical components disposed on it, such as the first light source 11, the second light source 12, and the light modulation device 13, and it may be a ceramic substrate.
  • The first light source 11 is used to emit a first incident beam toward the light modulation device 13 in the depth imaging mode.
  • The second light source 12 is used to emit a second incident beam toward the light modulation device 13 in the proximity sensing mode.
  • The light modulation device 13 is used to modulate the first incident beam and project a corresponding first outgoing beam, or to modulate the second incident beam and project a corresponding second outgoing beam.
  • It can be understood that, in the depth imaging mode, the first light source 11 first emits the first incident beam, and the light modulation device 13 then modulates it and projects the corresponding first outgoing beam, which is used for 3D perception of the target object; in the proximity sensing mode, the second light source 12 first emits the second incident beam, and the light modulation device 13 then modulates it and projects the corresponding second outgoing beam, which is used for distance sensing of the target object.
  • The first light source 11 is used for 3D perception of the target object; that is, the first outgoing beam produced by the light modulation device 13 modulating the first incident beam emitted by the first light source 11 has a larger field of view. The first light source 11 may therefore include one or more sub-light sources, and when it includes multiple sub-light sources, they are distributed in an array.
  • The second light source 12 is used for distance sensing of the target object; that is, the second outgoing beam produced by the light modulation device 13 modulating the second incident beam emitted by the second light source 12 has a smaller field of view. The second light source 12 may therefore include only one sub-light source, in which case the second incident beam and the corresponding second outgoing beam are point beams.
  • The types of the first light source 11 and the second light source 12 may include, but are not limited to, LEDs (light-emitting diodes) and VCSELs (vertical-cavity surface-emitting lasers), which broadly refer to active optical devices commonly used in the art that can emit incident beams in the infrared band (or other bands).
  • the transmitting end 10 of this embodiment may also include a control circuit, which is disposed on the substrate 14 and is electrically connected to the first light source 11 and the second light source 12 .
  • The control circuit is used to turn on the first light source 11 and turn off the second light source 12 in the depth imaging mode, or to turn on the second light source 12 and turn off the first light source 11 in the proximity sensing mode. When the first light source 11 is turned on by the control circuit, it can emit the first incident beam toward the light modulation device 13; when it is turned off, it cannot. Likewise, when the second light source 12 is turned on by the control circuit, it can emit the second incident beam toward the light modulation device 13; when it is turned off, it cannot.
  • It can be understood that, by providing on the substrate 14 a control circuit that controls the on/off switching of the first light source 11 and the second light source 12, this embodiment achieves timing control over "the first light source 11 emits the first incident beam" and "the second light source 12 emits the second incident beam", i.e., timing control over the depth imaging mode and the proximity sensing mode.
  • Support portions 141 extending along the optical paths of the first light source 11 and the second light source 12 may be formed on both opposite sides of the substrate 14, with the opposite sides of the light modulation device 13 resting on the two support portions 141.
  • It can be understood that this implementation mounts the light modulation device 13 over the substrate 14 by forming the support portions 141 on both opposite sides of the substrate 14. Further, holes, grooves, or similar structures may be opened on the inner sides of the support portions 141, and the opposite sides of the light modulation device 13 may be seated in them to reinforce the light modulation device 13.
  • The light modulation device 13 in this embodiment can be a liquid crystal panel, which has two working states: a diffusion state and a transparent state.
  • On this basis, the control circuit provided on the substrate 14 can also be used to adjust the liquid crystal panel to the diffusion state in the depth imaging mode, or to the transparent state in the proximity sensing mode. It can be understood that the panel's modulating effect on the incident beams (i.e., the first incident beam and the second incident beam) differs between its working states.
  • In the depth imaging mode, therefore, this implementation not only requires the control circuit to turn on the first light source 11 and turn off the second light source 12, but also to adjust the liquid crystal panel to the diffusion state, so that the panel in the diffusion state correctly modulates the first incident beam and produces a first outgoing beam suitable for 3D perception of the target object; in the proximity sensing mode, the control circuit must not only turn on the second light source 12 and turn off the first light source 11, but also adjust the panel to the transparent state, so that a second outgoing beam suitable for close-range sensing of the target object is produced.
  • It should be noted that, in the transparent state, the liquid crystal panel does not actually modulate the second incident beam at all: the second incident beam passes straight through the panel and irradiates the target object, or in other words, the second outgoing beam projected by the panel from the incident second incident beam and the second incident beam are the same beam of light.
  • FIG. 3 is a schematic diagram of the modulation areas of the light modulation device provided by an embodiment of the present application.
  • The light modulation device 13 may have a first modulation area A and a second modulation area B connected to each other, where the first modulation area A corresponds to the first light source 11 and the second modulation area B corresponds to the second light source 12.
  • The first modulation area A is used to modulate the first incident beam emitted by the first light source 11 and project a corresponding first outgoing beam, where the illumination range of the first incident beam on the first modulation area A is confined within that area, i.e., it is smaller than the area of the first modulation area A.
  • The second modulation area B is used to modulate the second incident beam emitted by the second light source 12 and project a corresponding second outgoing beam, where the illumination range of the second incident beam on the second modulation area B is confined within that area, i.e., it is smaller than the area of the second modulation area B.
  • This implementation divides the light modulation device 13 into two modulation areas, the first modulation area A and the second modulation area B. In the depth imaging mode, the first light source 11 first emits the first incident beam, which the first modulation area A then modulates and projects as the corresponding first outgoing beam, used for 3D perception of the target object; in the proximity sensing mode, the second light source 12 first emits the second incident beam, which the second modulation area B then modulates and projects as the corresponding second outgoing beam, used for close-range sensing of the target object.
  • Confining the illumination range of the first incident beam within the first modulation area A, and that of the second incident beam within the second modulation area B, serves to prevent interference between the first and second incident beams, i.e., to keep them from affecting each other.
  • The first modulation area A may include multiple interconnected sub-modulation areas, the number of which matches the sub-light sources included in the first light source 11; that is, the sub-modulation areas correspond one-to-one with the sub-light sources of the first light source 11.
  • Each sub-modulation area is used to modulate the first incident beam emitted by its corresponding sub-light source in the first light source 11 and project a corresponding first outgoing beam, and the first outgoing beams projected by different sub-modulation areas have different fields of view, meaning that each sub-modulation area modulates the incident beam differently.
  • This specific implementation adjusts the field of view of the first outgoing beam by dividing the first modulation area A into several sub-modulation areas with different modulating effects: to change the current field of view of the first outgoing beam to a target value, one only needs to turn off the currently active sub-light source and turn on the sub-light source whose corresponding sub-modulation area yields the target field of view, and the on/off switching of all sub-light sources in the first light source 11 can be handled by the control circuit provided on the substrate 14.
  • FIG. 4 is a first schematic diagram of the first modulation region provided by an embodiment of the present application.
  • Figure 4 shows a case where the first modulation area A includes two sub-modulation areas (a first sub-modulation area A1 and a second sub-modulation area A2), where the first sub-modulation area A1 suits cases where the first outgoing beam needs a larger field of view and the second sub-modulation area A2 suits cases where it needs a smaller one.
  • The first modulation area A may include n sub-modulation areas, with the n-th sub-modulation area obtained by extending a preset distance outward from the periphery of the (n-1)-th sub-modulation area; here, n is a positive integer greater than 1.
  • Each sub-modulation area is used to modulate the first incident beam emitted by the first light source 11 and project a corresponding first outgoing beam. Counting from the first sub-modulation area to the n-th, the number of sub-modulation areas in the open state is positively correlated with the field of view of the first outgoing beam: when only the first sub-modulation area is open and the others are closed, the first incident beam emitted by the first light source 11 is modulated by the first sub-modulation area alone, and the resulting first outgoing beam has the smallest field of view; when all sub-modulation areas in the first modulation area A are open, the first incident beam is modulated by the whole first modulation area A (equivalent to all of its sub-modulation areas), and the resulting first outgoing beam has the largest field of view.
  • This specific implementation thus enables adjustment of the field of view of the first outgoing beam: to reduce it, one only needs to close a given number of sub-modulation areas in order from the n-th to the first; to enlarge it, one only needs to open a given number of additional sub-modulation areas in order from the first to the n-th.
  • FIG. 5 is a second schematic diagram of the first modulation region provided by an embodiment of the present application.
  • Figure 5 shows a case where the first modulation area A includes two sub-modulation areas (a first sub-modulation area A1 and a second sub-modulation area A2), where the second sub-modulation area A2 is obtained by extending a preset distance outward from the periphery of the first sub-modulation area A1. On this basis, when the first sub-modulation area A1 is open and the second sub-modulation area A2 is closed, the first incident beam emitted by the first light source 11 is modulated by the first sub-modulation area A1 and the resulting first outgoing beam has a smaller field of view; when both A1 and A2 are open, the first incident beam is modulated by the whole first modulation area A and the resulting first outgoing beam has a larger field of view.
  • FIG. 6 is a second structural schematic diagram of a transmitting end provided by an embodiment of the present application.
  • The light modulation device 13 may include a diffuser 131 and a transparent glass 132, where the diffuser 131 corresponds to the first light source 11 and the transparent glass 132 corresponds to the second light source 12; in effect, this implementation amounts to configuring the first modulation area A of the foregoing implementation as the diffuser 131 and the second modulation area B as the transparent glass 132.
  • The diffuser 131 is used to modulate the first incident beam emitted by the first light source 11 and project a corresponding first outgoing beam, which in this case is a flood beam. The transparent glass 132 is used to let the second incident beam emitted by the second light source 12 pass through and project a corresponding second outgoing beam, and in this case the second incident beam and the second outgoing beam are identical: the transparent glass 132 does not actually modulate the second incident beam at all, i.e., the second incident beam passes straight through the glass and irradiates the target object, or in other words, the second outgoing beam projected by the transparent glass 132 from the incident second incident beam and the second incident beam are the same beam of light.
  • The relative space occupied by the diffuser 131 and the transparent glass 132 depends on the divergence angles of the first and second incident beams, the illuminated area of the first incident beam on the diffuser 131, the illuminated area of the second incident beam on the transparent glass 132, and so on; but whatever that relationship is, it must be ensured that the first and second incident beams do not interfere with each other.
  • FIG. 7 is a third structural schematic diagram of a transmitting end provided by an embodiment of the present application.
  • The light modulation device 13 may include a diffractive optical element (DOE) 133 and an optical lens 134, where the diffractive optical element 133 corresponds to the first light source 11 and the optical lens 134 corresponds to the second light source 12; in effect, this implementation amounts to configuring the first modulation area A of the foregoing implementation as the diffractive optical element 133 and the second modulation area B as the optical lens 134.
  • The diffractive optical element 133 is used to modulate the first incident beam emitted by the first light source 11 and project a corresponding first outgoing beam, which in this case is a speckle beam. The optical lens 134 is used to converge the second incident beam emitted by the second light source 12 and project a corresponding second outgoing beam; here, the purpose of converging the second incident beam is to increase the energy density of the projected second outgoing beam.
  • The relative space occupied by the diffractive optical element 133 and the optical lens 134 depends on the divergence angles of the first and second incident beams, the illuminated area of the first incident beam on the diffractive optical element 133, the illuminated area of the second incident beam on the optical lens 134, and so on; but whatever that relationship is, it must be ensured that the first and second incident beams do not interfere with each other.
  • In other implementations, the optical lens 134 can be replaced with the transparent glass 132 of the previous implementation; for details of the transparent glass 132, refer to the previous implementation, which will not be repeated here.
  • It should be understood that the foregoing implementations are only preferred realizations of the embodiments of the present application and do not uniquely limit the structural form in which "the light modulation device 13 modulates the first incident beam and the second incident beam"; on this basis, those skilled in the art can make flexible arrangements according to the actual application scenario.
  • Although the foregoing implementations all adjust the field of view of the first outgoing beam by dividing the first modulation area A into multiple sub-modulation areas, those skilled in the art should know that these sub-modulation areas are not limited to adjusting that field of view: they can also be used to generate light fields with different distributions, such as pre-designed stripes, speckle arrangements, and flood illumination of different shapes, which are not enumerated one by one here.
  • The receiving end 20 may include an image sensor, and the image sensor may include a pixel array composed of a plurality of pixels.
  • The image sensor is used, in the depth imaging mode, to receive through the first detection channel formed by the pixel array and the transmitting end 10 the first reflected beam produced by the target object reflecting the first outgoing beam, and to generate a depth image of the target object from the received first reflected beam; or, in the proximity sensing mode, to receive through the second detection channel formed by the pixel array and the transmitting end 10 the second reflected beam produced by the target object reflecting the second outgoing beam, and to generate the distance information between the 3D sensing module and the target object from the received second reflected beam. In the depth imaging mode, all pixels in the pixel array are in the on state; in the proximity sensing mode, the pixels in the pixel array corresponding to the second reflected beam are in the on state.
  • It can be understood that when the 3D sensing module performs 3D perception of the target object, i.e., when it is in the depth imaging mode, all pixels in the image sensor's pixel array need to be on to ensure the detail and accuracy of the depth image generated from the first reflected beam. When the module performs close-range sensing of the target object, i.e., when it is in the proximity sensing mode, the accuracy requirement for the distance information generated from the second reflected beam is not high, so to reduce power consumption only the pixels in the image sensor's pixel array corresponding to the second reflected beam need to be on.
  • FIG. 8 is a schematic flowchart of a 3D sensing method provided by an embodiment of the present application.
  • The embodiment of the present application also provides a 3D sensing method, applied to the 3D sensing module provided by the embodiments of the present application, and the 3D sensing method includes the following steps 801 to 802.
  • Step 801: Control the transmitting end to emit the first outgoing beam in the depth imaging mode, or to emit the second outgoing beam in the proximity sensing mode.
  • In this embodiment, the transmitting end 10 and the receiving end 20 form the 3D sensing module, which also includes a control and processing circuit (not shown).
  • Two working modes are provided for the 3D sensing module, namely a depth imaging mode and a proximity sensing mode; the depth imaging mode is used for 3D perception of the target object, and the proximity sensing mode is used for close-range sensing of the target object.
  • When a depth image of the target object is needed, the module can be switched to the depth imaging mode; in this mode, the control and processing circuit controls the transmitting end 10 to emit the first outgoing beam toward the target object, which reflects it and thereby produces the corresponding first reflected beam. When distance information between the 3D sensing module and the target object is needed, the module can be switched to the proximity sensing mode; in this mode, the control and processing circuit controls the transmitting end 10 to emit the second outgoing beam toward the target object, which reflects it and thereby produces the corresponding second reflected beam.
  • Step 802: Control the receiving end to turn on all pixels in the depth imaging mode to form a first detection channel with the transmitting end, so as to receive the first reflected beam and generate a depth image of the target object; or to turn on some pixels in the proximity sensing mode to form a second detection channel with the transmitting end, so as to receive the second reflected beam and generate distance information between the 3D sensing module and the target object.
  • When the 3D sensing module is in the depth imaging mode, the control and processing circuit controls the receiving end 20 to turn on all pixels to form a first detection channel with the transmitting end 10, so as to receive the first reflected beam produced by the target object reflecting the first outgoing beam and to generate a depth image of the target object; when the module is in the proximity sensing mode, the control and processing circuit controls the receiving end 20 to turn on some pixels to form a second detection channel with the transmitting end 10, so as to receive the second reflected beam produced by the target object reflecting the second outgoing beam and to generate the distance information between the 3D sensing module and the target object.
  • Since the first outgoing beam is used for 3D perception of the target object while the second outgoing beam is used for distance sensing, the field of view of the first outgoing beam should be larger than that of the second outgoing beam.
  • "The transmitting end 10 emits the first outgoing beam" and "the transmitting end 10 emits the second outgoing beam" are alternatives, as are "the receiving end 20 generates a depth image of the target object" and "the receiving end 20 generates distance information between the 3D sensing module and the target object". This means that the depth imaging mode and the proximity sensing mode of the 3D sensing module in this embodiment do not overlap in time: the module is either in the depth imaging mode, performing 3D perception of the target object, or in the proximity sensing mode, performing close-range sensing of the target object, and it cannot be in both modes at the same time.
  • The 3D sensing method provided by this embodiment can obtain both the depth image of the target object and the distance information between the 3D sensing module and the target object. This is equivalent to integrating the 3D sensing module and the proximity sensor of a traditional camera device, so that they share the same transmitting end 10 and the same receiving end 20. Once this 3D sensing method is applied to a camera device, the device no longer needs to be built from a separate proximity sensor and a separate 3D sensing module; it suffices to provide in the camera device a single 3D sensing module as described in this embodiment, which effectively reduces the volume and mass of the camera device and greatly saves manufacturing cost.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

This application provides a 3D sensing module, a 3D sensing method, and an electronic device. The 3D sensing module includes a transmitting end 10 and a receiving end 20. The transmitting end 10 is used to emit a first outgoing beam a in a depth imaging mode, or to emit a second outgoing beam c in a proximity sensing mode. The receiving end 20 is used to turn on all pixels in the depth imaging mode to form a first detection channel with the transmitting end 10, so as to receive a first reflected beam (produced by reflection of the first outgoing beam a) and generate a depth image of the target object; or to turn on some pixels in the proximity sensing mode to form a second detection channel with the transmitting end, so as to receive a second reflected beam (produced by reflection of the second outgoing beam c) and generate distance information between the 3D sensing module and the target object. The field of view of the first outgoing beam a is larger than that of the second outgoing beam c. With this application, a camera device no longer needs to be built from a separate proximity sensor and a separate 3D sensing module, which reduces the volume, mass, and manufacturing cost of the camera device.

Description

3D sensing module, 3D sensing method, and electronic device
This application claims priority to the Chinese patent application No. 202210843030.3, entitled "3D sensing module, 3D sensing method, and electronic device" and filed with the Chinese Patent Office on July 18, 2022, the entire contents of which are incorporated herein by reference.
[Technical Field]
This application relates to the field of optical imaging technology, and in particular to a 3D sensing module, a 3D sensing method, and an electronic device.
[Background]
As users' demand for thinner and lighter electronic devices such as mobile phones and tablet computers keeps growing, the components inside these devices are gradually being miniaturized as well. The proximity sensor is an indispensable part of an electronic device. It is mainly used to detect the distance between the electronic device and a target object and, based on the detection result, to conclude that the target object is approaching, moving away, appearing, or disappearing. This can effectively refine the overall operating logic of the electronic device and thus greatly improve the user experience. In recent years, with the growing popularity of a new payment method, namely face-scan payment, traditional 2D face ID (i.e., two-dimensional face authentication) can no longer meet users' needs, so 3D face ID (i.e., three-dimensional face authentication) has come into wide use in electronic devices, its most common embodiment being the 3D sensing module in the device. In practice, an electronic device is provided with a camera device that offers functions such as photographing, video recording, and face recognition, and both the proximity sensor and the 3D sensing module mentioned above are parts of that camera device.
In the related art, although the proximity sensor and the 3D sensing module are both indispensable components of the camera device (which is located inside the electronic device and offers functions such as photographing, video recording, and face recognition), they are relatively independent of each other; that is, the proximity sensor and the 3D sensing module are arranged separately. This results in a camera device with a larger volume and mass and a higher manufacturing cost, and it also runs counter to the trend toward thinner and lighter electronic devices.
It is therefore necessary to improve the structure of the above 3D sensing module.
[Technical Solution]
This application provides a 3D sensing module, a 3D sensing method, and an electronic device, aiming to solve the problems in the related art that the camera device has a large volume and mass and a high manufacturing cost.
To solve the above technical problem, a first aspect of the embodiments of this application provides a 3D sensing module, including a transmitting end and a receiving end. The transmitting end is used to emit a first outgoing beam in a depth imaging mode, or to emit a second outgoing beam in a proximity sensing mode, where the first outgoing beam, after being reflected by a target object, produces a corresponding first reflected beam, the second outgoing beam, after being reflected by the target object, produces a corresponding second reflected beam, and the field of view of the first outgoing beam is larger than that of the second outgoing beam. The receiving end is used to turn on all pixels in the depth imaging mode to form a first detection channel with the transmitting end, so as to receive the first reflected beam and generate a depth image of the target object, or to turn on some pixels in the proximity sensing mode to form a second detection channel with the transmitting end, so as to receive the second reflected beam and generate distance information between the 3D sensing module and the target object.
A second aspect of the embodiments of this application provides a 3D sensing method, applied to a 3D sensing module. The 3D sensing method includes: controlling a transmitting end to emit a first outgoing beam in a depth imaging mode, or to emit a second outgoing beam in a proximity sensing mode, where the first outgoing beam, after being reflected by a target object, produces a corresponding first reflected beam, the second outgoing beam, after being reflected by the target object, produces a corresponding second reflected beam, and the field of view of the first outgoing beam is larger than that of the second outgoing beam; and controlling a receiving end to turn on all pixels in the depth imaging mode to form a first detection channel with the transmitting end, so as to receive the first reflected beam and generate a depth image of the target object, or to turn on some pixels in the proximity sensing mode to form a second detection channel with the transmitting end, so as to receive the second reflected beam and generate distance information between the 3D sensing module and the target object.
A third aspect of the embodiments of this application provides an electronic device, including the 3D sensing module described in the first aspect of the embodiments of this application.
[Beneficial Effects]
As can be seen from the above description, compared with the related art, the beneficial effects of this application are as follows: a 3D sensing module is formed from a transmitting end and a receiving end, and two working modes are provided for the module, namely a depth imaging mode and a proximity sensing mode. On this basis, when a depth image of a target object is needed, the 3D sensing module can be switched to the depth imaging mode; in this mode, the transmitting end emits the first outgoing beam toward the target object, and the receiving end then turns on all pixels to form a first detection channel with the transmitting end, so as to receive the first reflected beam produced by the target object reflecting the first outgoing beam and to generate a depth image of the target object. When distance information between the 3D sensing module and the target object is needed, the module can be switched to the proximity sensing mode; in this mode, the transmitting end emits the second outgoing beam toward the target object, and the receiving end then turns on some pixels to form a second detection channel with the transmitting end, so as to receive the second reflected beam produced by the target object reflecting the second outgoing beam and to generate the distance information between the 3D sensing module and the target object. It can thus be seen that the 3D sensing module provided by this application can obtain both the depth image of the target object and the distance information between the module and the target object. This is equivalent to integrating the 3D sensing module and the proximity sensor of a traditional camera device, so that they share the same transmitting end and the same receiving end. A camera device then no longer needs to be built from a separate proximity sensor and a separate 3D sensing module; it suffices to provide in the camera device a single 3D sensing module as provided by this application, which effectively reduces the volume and mass of the camera device and greatly saves manufacturing cost.
[Description of the Drawings]
To explain the technical solutions in the related art or in the embodiments of this application more clearly, the drawings needed in the description of the related art or the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application rather than all of them, and a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Figure 1 is a schematic structural diagram of a 3D sensing module provided by an embodiment of this application;
Figure 2 is a first schematic structural diagram of a transmitting end provided by an embodiment of this application;
Figure 3 is a schematic diagram of the modulation areas of a light modulation device provided by an embodiment of this application;
Figure 4 is a first schematic diagram of the first modulation area provided by an embodiment of this application;
Figure 5 is a second schematic diagram of the first modulation area provided by an embodiment of this application;
Figure 6 is a second schematic structural diagram of a transmitting end provided by an embodiment of this application;
Figure 7 is a third schematic structural diagram of a transmitting end provided by an embodiment of this application;
Figure 8 is a schematic flowchart of a 3D sensing method provided by an embodiment of this application.
[Embodiments of the Invention]
To make the purposes, technical solutions, and advantages of this application clearer and easier to understand, this application is described clearly and completely below with reference to its embodiments and the corresponding drawings, in which the same or similar reference numerals denote the same or similar elements, or elements with the same or similar functions, throughout. It should be understood that the embodiments described below are intended only to explain this application and not to limit it; that is, all other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the scope of protection of this application. In addition, the technical features involved in the embodiments described below may be combined with one another as long as they do not conflict.
As users' demand for thinner and lighter electronic devices such as mobile phones and tablet computers keeps growing, the components inside these devices are gradually being miniaturized as well. The proximity sensor is an indispensable part of an electronic device. It is mainly used to detect the distance between the electronic device and a target object and, based on the detection result, to conclude that the target object is approaching, moving away, appearing, or disappearing. This can effectively refine the overall operating logic of the electronic device and thus greatly improve the user experience. In recent years, with the growing popularity of a new payment method, namely face-scan payment, traditional 2D face ID (i.e., two-dimensional face authentication) can no longer meet users' needs, so 3D face ID (i.e., three-dimensional face authentication) has come into wide use in electronic devices, its most common embodiment being the 3D sensing module in the device. In practice, an electronic device is provided with a camera device that offers functions such as photographing, video recording, and face recognition, and both the proximity sensor and the 3D sensing module mentioned above are parts of that camera device.
In the related art, although the proximity sensor and the 3D sensing module are both indispensable components of the camera device, they are relatively independent of each other; that is, the proximity sensor and the 3D sensing module are arranged separately. This results in a camera device with a larger volume and mass and a higher manufacturing cost, and it also runs counter to the trend toward thinner and lighter electronic devices. For this reason, the embodiments of this application provide a 3D sensing module for use in electronic devices, where the electronic devices may include, but are not limited to, mobile phones, notebooks, tablet computers, POS terminals, vehicle-mounted computers, and smart wearable devices.
Please refer to Figure 1, which is a schematic structural diagram of a 3D sensing module provided by an embodiment of this application. The 3D sensing module provided by this embodiment includes a transmitting end 10 and a receiving end 20.
Specifically, the transmitting end 10 is used to emit a first outgoing beam toward a target object in the depth imaging mode (a in Figure 1), or to emit a second outgoing beam toward the target object in the proximity sensing mode (c in Figure 1). After being reflected by the target object, the first outgoing beam produces a corresponding first reflected beam (b in Figure 1), and the second outgoing beam produces a corresponding second reflected beam (d in Figure 1). The field of view of the first outgoing beam is larger than that of the second outgoing beam (in Figure 1, the field of view of a is clearly larger than that of c). The receiving end 20 is used to turn on all pixels in the depth imaging mode to form a first detection channel with the transmitting end 10, so as to receive the first reflected beam and generate a depth image of the target object; or to turn on some pixels in the proximity sensing mode to form a second detection channel with the transmitting end 10, so as to receive the second reflected beam and generate distance information between the 3D sensing module and the target object. In some embodiments, the 3D sensing module further includes a control and processing circuit, which is used to control the transmitting end 10 to emit the first or second outgoing beam in the depth imaging mode or the proximity sensing mode, and to control the receiving end 20 to turn on all or some of the pixels to collect the first or second reflected beam reflected by the target object and generate the depth image or the distance information.
It can be understood that, since the first outgoing beam is used for 3D perception of the target object (i.e., obtaining its depth image) while the second outgoing beam is used for close-range sensing of the target object (i.e., obtaining the distance information between the target object and the 3D sensing module), the field of view of the first outgoing beam in this embodiment should be larger than that of the second outgoing beam. In addition, "the transmitting end 10 emits the first outgoing beam" and "the transmitting end 10 emits the second outgoing beam" are alternatives, as are "the receiving end 20 generates a depth image of the target object" and "the receiving end 20 generates distance information between the 3D sensing module and the target object". This means that, in this embodiment, the depth imaging mode and the proximity sensing mode of the 3D sensing module do not overlap in time: the module is either in the depth imaging mode, performing 3D perception of the target object, or in the proximity sensing mode, performing distance sensing of the target object, and it cannot be in both modes at the same time.
In practical applications, when a depth image of the target object is needed, the 3D sensing module can be switched to the depth imaging mode; in this mode, the transmitting end 10 emits the first outgoing beam toward the target object, and the receiving end 20 then turns on all pixels to form a first detection channel with the transmitting end 10, so as to receive the first reflected beam produced by the target object reflecting the first outgoing beam and to generate a depth image of the target object. When distance information between the 3D sensing module and the target object is needed, the module can be switched to the proximity sensing mode; in this mode, the transmitting end 10 emits the second outgoing beam toward the target object, and the receiving end 20 then turns on some pixels to form a second detection channel with the transmitting end 10, so as to receive the second reflected beam produced by the target object reflecting the second outgoing beam and to generate the distance information between the 3D sensing module and the target object. As an example, when the 3D sensing module provided by this embodiment is applied to a mobile phone, the module can first be switched to the proximity sensing mode to sense the distance to the phone user (i.e., the target object); once the user is sensed to be close enough (i.e., the distance between the 3D sensing module and the user is less than or equal to a preset distance), the module is switched to the depth imaging mode to perform 3D perception of the user and obtain a depth image of the user (for example, a depth image of the user's face for face recognition).
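The mode sequencing and the mobile-phone example above can be summarized in a short piece of control code. The following C sketch is purely illustrative and not part of the original disclosure: module_set_mode, proximity_read_distance_mm, depth_capture_image, and the 400 mm threshold are all assumed names and values.

    #include <stdbool.h>
    #include <stdint.h>

    /* The two working modes never overlap in time. */
    typedef enum { MODE_PROXIMITY, MODE_DEPTH } sensing_mode_t;

    /* Hypothetical driver hooks; the text fixes their behaviour, not their API. */
    void     module_set_mode(sensing_mode_t mode);
    uint32_t proximity_read_distance_mm(void);          /* second detection channel */
    bool     depth_capture_image(uint16_t *depth_map);  /* first detection channel  */

    #define UNLOCK_DISTANCE_MM 400u  /* "preset distance"; the value is illustrative */

    bool face_unlock(uint16_t *depth_map)
    {
        module_set_mode(MODE_PROXIMITY);      /* narrow-FoV beam c, some pixels on */
        while (proximity_read_distance_mm() > UNLOCK_DISTANCE_MM) {
            /* target object still too far away; keep distance sensing */
        }
        module_set_mode(MODE_DEPTH);          /* wide-FoV beam a, all pixels on    */
        return depth_capture_image(depth_map);
    }

The point of the sketch is the ordering: the second detection channel gates entry into the first, and the two modes are never active simultaneously.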
As can be seen from the above, this embodiment forms a 3D sensing module from the transmitting end 10 and the receiving end 20 and provides two working modes for it, namely the depth imaging mode and the proximity sensing mode, where the depth imaging mode is used for 3D perception of the target object and the proximity sensing mode is used for close-range sensing of the target object. Importantly, this embodiment can obtain both the depth image of the target object and the distance information between the 3D sensing module and the target object, which is equivalent to integrating the 3D sensing module and the proximity sensor of a traditional camera device so that they share the same transmitting end 10 and the same receiving end 20. A camera device then no longer needs to be built from a separate proximity sensor and a separate 3D sensing module; it suffices to provide in the camera device a single 3D sensing module as described in this embodiment, which effectively reduces the volume and mass of the camera device and greatly saves manufacturing cost.
In some embodiments, please further refer to Figure 2, which is a first schematic structural diagram of a transmitting end provided by an embodiment of this application. The transmitting end 10 may include a first light source 11, a second light source 12, a light modulation device 13, and a substrate 14, where the first light source 11 and the second light source 12 are both disposed on the substrate 14 and spaced apart from each other, and the light modulation device 13 is mounted over the substrate 14 on the optical paths of the first light source 11 and the second light source 12. In this embodiment, the substrate 14 carries and protects the optical components disposed on it, such as the first light source 11, the second light source 12, and the light modulation device 13, and it may be a ceramic substrate.
Specifically, the first light source 11 is used to emit a first incident beam toward the light modulation device 13 in the depth imaging mode. The second light source 12 is used to emit a second incident beam toward the light modulation device 13 in the proximity sensing mode. The light modulation device 13 is used to modulate the first incident beam and project a corresponding first outgoing beam, or to modulate the second incident beam and project a corresponding second outgoing beam. It can be understood that, in the depth imaging mode, the first light source 11 first emits the first incident beam, and the light modulation device 13 then modulates it and projects the corresponding first outgoing beam, which is used for 3D perception of the target object; in the proximity sensing mode, the second light source 12 first emits the second incident beam, and the light modulation device 13 then modulates it and projects the corresponding second outgoing beam, which is used for distance sensing of the target object.
In this embodiment, the first light source 11 is used for 3D perception of the target object; that is, the first outgoing beam produced by the light modulation device 13 modulating the first incident beam emitted by the first light source 11 has a larger field of view. The first light source 11 may therefore include one or more sub-light sources, and when it includes multiple sub-light sources, they are distributed in an array. The second light source 12 is used for distance sensing of the target object; that is, the second outgoing beam produced by the light modulation device 13 modulating the second incident beam emitted by the second light source 12 has a smaller field of view. The second light source 12 may therefore include only one sub-light source, in which case the second incident beam and the corresponding second outgoing beam are point beams. In addition, the types of the first light source 11 and the second light source 12 may include, but are not limited to, LEDs (light-emitting diodes) and VCSELs (vertical-cavity surface-emitting lasers), which broadly refer to active optical devices commonly used in the art that can emit incident beams in the infrared band (or other bands).
Further, the transmitting end 10 of this embodiment may also include a control circuit, which is disposed on the substrate 14 and electrically connected to the first light source 11 and the second light source 12. Specifically, the control circuit is used to turn on the first light source 11 and turn off the second light source 12 in the depth imaging mode, or to turn on the second light source 12 and turn off the first light source 11 in the proximity sensing mode. When the first light source 11 is turned on by the control circuit, it can emit the first incident beam toward the light modulation device 13; when it is turned off, it cannot. Likewise, when the second light source 12 is turned on by the control circuit, it can emit the second incident beam toward the light modulation device 13; when it is turned off, it cannot. It can be understood that, by providing on the substrate 14 a control circuit that controls the on/off switching of the first light source 11 and the second light source 12, this embodiment achieves timing control over "the first light source 11 emits the first incident beam" and "the second light source 12 emits the second incident beam", i.e., timing control over the depth imaging mode and the proximity sensing mode.
As one implementation, still referring to Figure 2, support portions 141 extending along the optical paths of the first light source 11 and the second light source 12 may be formed on both opposite sides of the substrate 14, with the opposite sides of the light modulation device 13 resting on the two support portions 141. It can be understood that this implementation mounts the light modulation device 13 over the substrate 14 by forming the support portions 141 on both of its opposite sides. Further, holes, grooves, or similar structures may be opened on the inner sides of the support portions 141, and the opposite sides of the light modulation device 13 may be seated in them to reinforce the device; alternatively, clamping or fixing structures commonly used in the art may be added at the ends of the support portions 141 for the same purpose. This implementation imposes no unique limitation in this regard.
As one implementation, the light modulation device 13 in this embodiment may be a liquid crystal panel, which has two working states: a diffusion state and a transparent state. On this basis, the control circuit provided on the substrate 14 can also be used to adjust the liquid crystal panel to the diffusion state in the depth imaging mode, or to the transparent state in the proximity sensing mode. It can be understood that the panel's modulating effect on the incident beams (i.e., the first incident beam and the second incident beam) differs between its working states. In the depth imaging mode, therefore, this implementation not only requires the control circuit to turn on the first light source 11 and turn off the second light source 12, but also to adjust the liquid crystal panel to the diffusion state, so that the panel in the diffusion state correctly modulates the first incident beam and produces a first outgoing beam suitable for 3D perception of the target object; in the proximity sensing mode, the control circuit must not only turn on the second light source 12 and turn off the first light source 11, but also adjust the liquid crystal panel to the transparent state, so that a second outgoing beam suitable for close-range sensing of the target object is produced. It should be noted here that when the liquid crystal panel is in the transparent state, it does not actually modulate the second incident beam at all: the second incident beam passes straight through the panel and irradiates the target object, or in other words, the second outgoing beam projected by the panel from the incident second incident beam and the second incident beam are the same beam of light.
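As a hedged illustration of the transmitter-side control just described (the liquid-crystal-panel variant), the sketch below pairs the light-source switching with the panel state. The function names are invented for illustration; only the pairing of states is taken from the text above.

    #include <stdbool.h>

    typedef enum { MODE_PROXIMITY, MODE_DEPTH } sensing_mode_t;   /* as above */
    typedef enum { LCD_DIFFUSION, LCD_TRANSPARENT } lcd_state_t;

    /* Hypothetical helpers for the control circuit on substrate 14. */
    void light_source_enable(int source, bool on);  /* source 1 = first, 2 = second */
    void lcd_set_state(lcd_state_t state);

    void transmitter_set_mode(sensing_mode_t mode)
    {
        if (mode == MODE_DEPTH) {
            light_source_enable(1, true);     /* first light source 11 on             */
            light_source_enable(2, false);    /* second light source 12 off           */
            lcd_set_state(LCD_DIFFUSION);     /* diffusion state: modulate first beam */
        } else {
            light_source_enable(1, false);
            light_source_enable(2, true);
            lcd_set_state(LCD_TRANSPARENT);   /* second beam passes through unchanged */
        }
    }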
As another implementation, please further refer to Figure 3, which is a schematic diagram of the modulation areas of a light modulation device provided by an embodiment of this application. The light modulation device 13 may have a first modulation area A and a second modulation area B connected to each other, where the first modulation area A corresponds to the first light source 11 and the second modulation area B corresponds to the second light source 12.
Specifically, the first modulation area A is used to modulate the first incident beam emitted by the first light source 11 and project a corresponding first outgoing beam, where the illumination range of the first incident beam on the first modulation area A is confined within that area, i.e., the illumination range of the first incident beam on the first modulation area A is smaller than the area of the first modulation area A. The second modulation area B is used to modulate the second incident beam emitted by the second light source 12 and project a corresponding second outgoing beam, where the illumination range of the second incident beam on the second modulation area B is confined within that area, i.e., the illumination range of the second incident beam on the second modulation area B is smaller than the area of the second modulation area B.
It can be understood that this implementation divides the light modulation device 13 into two modulation areas, the first modulation area A and the second modulation area B. In the depth imaging mode, the first light source 11 first emits the first incident beam, which the first modulation area A then modulates and projects as the corresponding first outgoing beam, used for 3D perception of the target object; in the proximity sensing mode, the second light source 12 first emits the second incident beam, which the second modulation area B then modulates and projects as the corresponding second outgoing beam, used for close-range sensing of the target object. In addition, confining the illumination range of the first incident beam within the first modulation area A, and that of the second incident beam within the second modulation area B, serves to prevent interference between the first and second incident beams, i.e., to keep them from affecting each other.
As a specific implementation of this embodiment, the first modulation area A may include multiple interconnected sub-modulation areas, the number of which matches the sub-light sources included in the first light source 11; that is, the sub-modulation areas correspond one-to-one with the sub-light sources of the first light source 11.
Specifically, each sub-modulation area is used to modulate the first incident beam emitted by its corresponding sub-light source in the first light source 11 and project a corresponding first outgoing beam, and the first outgoing beams projected by different sub-modulation areas have different fields of view, meaning that each sub-modulation area modulates the incident beam differently. It can be understood that this specific implementation adjusts the field of view of the first outgoing beam by dividing the first modulation area A into several sub-modulation areas with different modulating effects: to change the current field of view of the first outgoing beam to a target value, one only needs to turn off the currently active sub-light source and turn on the sub-light source whose corresponding sub-modulation area yields the target field of view, and the on/off switching of all sub-light sources in the first light source 11 can be handled by the control circuit provided on the substrate 14.
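A minimal sketch of this one-to-one mapping, assuming two sub-light sources (cf. the Figure 4 example below); the table values and the sub_light_source_enable helper are hypothetical, not part of the disclosure.

    #include <stdbool.h>

    /* One entry per sub-light source of the first light source 11; each sub-light
     * source maps to one sub-modulation area with a fixed resulting field of view. */
    typedef struct {
        int   sub_source;  /* sub-light source index                     */
        float fov_deg;     /* FoV after the matching sub-modulation area */
    } fov_entry_t;

    static const fov_entry_t fov_table[] = {
        { 0, 75.0f },  /* e.g. via sub-modulation area A1: larger FoV  */
        { 1, 45.0f },  /* e.g. via sub-modulation area A2: smaller FoV */
    };

    void sub_light_source_enable(int sub_source, bool on);  /* hypothetical driver call */

    /* Select a target FoV by enabling exactly one sub-light source; the control
     * circuit on substrate 14 is what would carry out the switching. */
    void select_fov(unsigned entry)
    {
        for (unsigned i = 0; i < sizeof fov_table / sizeof fov_table[0]; i++)
            sub_light_source_enable(fov_table[i].sub_source, i == entry);
    }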
In one example, please further refer to Figure 4, which is a first schematic diagram of the first modulation area provided by an embodiment of this application. Figure 4 shows a case where the first modulation area A includes two sub-modulation areas (a first sub-modulation area A1 and a second sub-modulation area A2), where the first sub-modulation area A1 suits cases where the first outgoing beam needs a larger field of view and the second sub-modulation area A2 suits cases where it needs a smaller one. On this basis, when the first outgoing beam needs a larger field of view, it suffices to turn on the sub-light source of the first light source 11 corresponding to the first sub-modulation area A1 and turn off the sub-light source corresponding to the second sub-modulation area A2; likewise, when the first outgoing beam needs a smaller field of view, it suffices to turn on the sub-light source corresponding to the second sub-modulation area A2 and turn off the sub-light source corresponding to the first sub-modulation area A1.
As another specific implementation of this embodiment, the first modulation area A may include n sub-modulation areas, with the n-th sub-modulation area obtained by extending a preset distance outward from the periphery of the (n-1)-th sub-modulation area; here, n is a positive integer greater than 1.
Specifically, each sub-modulation area is used to modulate the first incident beam emitted by the first light source 11 and project a corresponding first outgoing beam. Counting from the first sub-modulation area to the n-th, the number of sub-modulation areas in the open state is positively correlated with the field of view of the first outgoing beam: when only the first sub-modulation area is open and the others are closed, the first incident beam emitted by the first light source 11 is modulated by the first sub-modulation area alone, and the resulting first outgoing beam has the smallest field of view; when all sub-modulation areas in the first modulation area A are open, the first incident beam is modulated by the whole first modulation area A (equivalent to all of its sub-modulation areas), and the resulting first outgoing beam has the largest field of view. It can be understood that this specific implementation divides the first modulation area A into n sub-modulation areas, with the n-th obtained by extending a preset distance outward from the periphery of the (n-1)-th, thereby enabling adjustment of the field of view of the first outgoing beam: to reduce the field of view, one only needs to close a given number of sub-modulation areas in order from the n-th to the first; to enlarge it, one only needs to open a given number of additional sub-modulation areas in order from the first to the n-th.
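The nested-area variant reduces to a simple rule: the field of view grows with the number of areas opened from the innermost outward. A sketch under the same caveats (sub_area_enable and the area count are assumptions):

    #include <stdbool.h>

    #define N_SUB_AREAS 4u   /* n > 1; the value is illustrative */

    void sub_area_enable(unsigned k, bool open);  /* hypothetical driver call */

    /* Open the innermost `open_count` sub-modulation areas and close the rest;
     * a larger `open_count` gives the first outgoing beam a larger FoV. */
    void set_fov_level(unsigned open_count)
    {
        for (unsigned k = 0; k < N_SUB_AREAS; k++)
            sub_area_enable(k, k < open_count);
    }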
In one example, please further refer to Figure 5, which is a second schematic diagram of the first modulation area provided by an embodiment of this application. Figure 5 shows a case where the first modulation area A includes two sub-modulation areas (a first sub-modulation area A1 and a second sub-modulation area A2), where the second sub-modulation area A2 is obtained by extending a preset distance outward from the periphery of the first sub-modulation area A1. On this basis, when the first sub-modulation area A1 is open and the second sub-modulation area A2 is closed, the first incident beam emitted by the first light source 11 is modulated by the first sub-modulation area A1 and the resulting first outgoing beam has a smaller field of view; when both A1 and A2 are open, the first incident beam emitted by the first light source 11 is modulated by the whole first modulation area A and the resulting first outgoing beam has a larger field of view.
As yet another implementation, please further refer to Figure 6, which is a second schematic structural diagram of a transmitting end provided by an embodiment of this application. The light modulation device 13 may include a diffuser 131 and a transparent glass 132, where the diffuser 131 corresponds to the first light source 11 and the transparent glass 132 corresponds to the second light source 12; in effect, this implementation amounts to configuring the first modulation area A of the foregoing implementation as the diffuser 131 and the second modulation area B as the transparent glass 132.
Specifically, the diffuser 131 is used to modulate the first incident beam emitted by the first light source 11 and project a corresponding first outgoing beam, which in this case is a flood beam. The transparent glass 132 is used to let the second incident beam emitted by the second light source 12 pass through and project a corresponding second outgoing beam, and in this case the second incident beam and the second outgoing beam are identical, which means the transparent glass 132 does not actually modulate the second incident beam at all: the second incident beam passes straight through the glass and irradiates the target object, or in other words, the second outgoing beam projected by the transparent glass 132 from the incident second incident beam and the second incident beam are the same beam of light. It can be understood that the relative space occupied by the diffuser 131 and the transparent glass 132 depends on the divergence angles of the first and second incident beams, the illuminated area of the first incident beam on the diffuser 131, the illuminated area of the second incident beam on the transparent glass 132, and so on; but whatever that relationship is, it must be ensured that the first and second incident beams do not interfere with each other.
As still another implementation, please further refer to Figure 7, which is a third schematic structural diagram of a transmitting end provided by an embodiment of this application. The light modulation device 13 may include a diffractive optical element (DOE) 133 and an optical lens 134, where the diffractive optical element 133 corresponds to the first light source 11 and the optical lens 134 corresponds to the second light source 12; in effect, this implementation amounts to configuring the first modulation area A of the foregoing implementation as the diffractive optical element 133 and the second modulation area B as the optical lens 134.
Specifically, the diffractive optical element 133 is used to modulate the first incident beam emitted by the first light source 11 and project a corresponding first outgoing beam, which in this case is a speckle beam. The optical lens 134 is used to converge the second incident beam emitted by the second light source 12 and project a corresponding second outgoing beam; here, the purpose of converging the second incident beam with the optical lens 134 is to increase the energy density of the projected second outgoing beam. It can be understood that the relative space occupied by the diffractive optical element 133 and the optical lens 134 depends on the divergence angles of the first and second incident beams, the illuminated area of the first incident beam on the diffractive optical element 133, the illuminated area of the second incident beam on the optical lens 134, and so on; but whatever that relationship is, it must be ensured that the first and second incident beams do not interfere with each other.
Of course, this is not the only option: in other implementations, the optical lens 134 can be replaced with the transparent glass 132 of the previous implementation; for the related description, refer to the previous implementation, which will not be repeated here.
It should be understood that the foregoing implementations are only preferred realizations of the embodiments of this application and do not uniquely limit the structural form in which "the light modulation device 13 modulates the first incident beam and the second incident beam"; on this basis, those skilled in the art can make flexible arrangements according to the actual application scenario. In addition, although the foregoing implementations all adjust the field of view of the first outgoing beam by dividing the first modulation area A into multiple sub-modulation areas, those skilled in the art should know that these sub-modulation areas are not limited to adjusting that field of view: they can also be used to generate light fields with different distributions, such as pre-designed stripes, speckle arrangements, and flood illumination of different shapes, which are not enumerated one by one here.
In some embodiments, the receiving end 20 may include an image sensor, and the image sensor may include a pixel array composed of a plurality of pixels.
Specifically, in the depth imaging mode, the image sensor receives, through a first detection channel formed by the pixel array together with the transmitting end 10, the first reflected beam produced by the target object reflecting the first outgoing beam, and generates a depth image of the target object from the received first reflected beam. Alternatively, in the proximity sensing mode, the image sensor receives, through a second detection channel formed by the pixel array together with the transmitting end 10, the second reflected beam produced by the target object reflecting the second outgoing beam, and generates distance information between the 3D sensing module and the target object from the received second reflected beam. In the depth imaging mode, all pixels of the pixel array are enabled; in the proximity sensing mode, only the subset of pixels corresponding to the second reflected beam is enabled.
It will be appreciated that when the 3D sensing module performs 3D sensing of the target object, i.e., when it is in the depth imaging mode, all pixels of the image sensor's pixel array must be enabled to guarantee the detail and accuracy of the depth image generated from the first reflected beam. When the module performs close-range sensing of the target object, i.e., when it is in the proximity sensing mode, the precision required of the distance information generated from the second reflected beam is not high, so to reduce power consumption it suffices to enable only the subset of pixels of the pixel array corresponding to the second reflected beam.
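The pixel gating just described can be pictured as a small receiver-configuration routine. A minimal sketch in C, reusing the mode enum of the first sketch and assuming a hypothetical sensor_enable_window driver call and a caller-supplied region of interest; the disclosure specifies none of these.

    typedef enum { MODE_DEPTH_IMAGING, MODE_PROXIMITY_SENSING } sense_mode_t;  /* as before */
    typedef struct { int x, y, w, h; } roi_t;

    void sensor_enable_window(roi_t roi);  /* hypothetical driver: only pixels inside
                                              the window stay enabled */

    void configure_receiver(sense_mode_t mode, int width, int height, roi_t proximity_roi)
    {
        if (mode == MODE_DEPTH_IMAGING) {
            roi_t full = { 0, 0, width, height };
            sensor_enable_window(full);           /* every pixel on: detailed depth image */
        } else {
            sensor_enable_window(proximity_roi);  /* only the pixels hit by the second
                                                     reflected beam: lower power */
        }
    }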
Referring to FIG. 8, a schematic flowchart of the 3D sensing method provided by an embodiment of the present application: an embodiment of the present application further provides a 3D sensing method, applied to the 3D sensing module provided by the embodiments of the present application, the method including the following steps 801 to 802.
Step 801: control the transmitting end to emit a first outgoing beam in the depth imaging mode, or to emit a second outgoing beam in the proximity sensing mode.
In this embodiment, the transmitting end 10 and the receiving end 20 constitute the 3D sensing module, which further includes a control and processing circuit (not shown). Two operating modes are provided for the 3D sensing module: a depth imaging mode, used for 3D sensing of the target object, and a proximity sensing mode, used for close-range sensing of the target object. Accordingly, when a depth image of the target object is needed, the 3D sensing module can be switched to the depth imaging mode, in which the control and processing circuit directs the transmitting end 10 to emit the first outgoing beam toward the target object; the target object reflects the first outgoing beam, producing a corresponding first reflected beam. When distance information between the 3D sensing module and the target object is needed, the module can be switched to the proximity sensing mode, in which the control and processing circuit directs the transmitting end 10 to emit the second outgoing beam toward the target object; the target object reflects the second outgoing beam, producing a corresponding second reflected beam.
Step 802: control the receiving end to enable all pixels in the depth imaging mode, forming a first detection channel with the transmitting end, so as to receive the first reflected beam and generate a depth image of the target object; or to enable a subset of pixels in the proximity sensing mode, forming a second detection channel with the transmitting end, so as to receive the second reflected beam and generate distance information between the 3D sensing module and the target object.
In this embodiment, when the 3D sensing module is in the depth imaging mode, the control and processing circuit directs the receiving end 20 to enable all pixels and form the first detection channel with the transmitting end 10, so as to receive the first reflected beam produced by the target object reflecting the first outgoing beam and generate the depth image of the target object. When the module is in the proximity sensing mode, the circuit directs the receiving end 20 to enable a subset of pixels and form the second detection channel with the transmitting end 10, so as to receive the second reflected beam produced by the target object reflecting the second outgoing beam and generate the distance information between the 3D sensing module and the target object.
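Taken together, steps 801 and 802 form one acquisition cycle per mode. A minimal end-to-end sketch in C, composed from the hypothetical helpers of the earlier sketches plus two placeholder processing functions that stand in for the unspecified depth and distance computations.

    typedef enum { MODE_DEPTH_IMAGING, MODE_PROXIMITY_SENSING } sense_mode_t;
    typedef struct { int x, y, w, h; } roi_t;

    void set_mode(sense_mode_t mode);                        /* emitter setup, first sketch     */
    void configure_receiver(sense_mode_t, int, int, roi_t);  /* pixel gating, previous sketch   */
    void compute_depth_image(void);                          /* placeholder: depth from beam 1  */
    void compute_distance(void);                             /* placeholder: distance, beam 2   */

    void run_sensing_cycle(sense_mode_t mode, int w, int h, roi_t near_roi)
    {
        set_mode(mode);                            /* step 801: configure the emitter  */
        configure_receiver(mode, w, h, near_roi);  /* step 802: gate the pixel array   */
        if (mode == MODE_DEPTH_IMAGING)
            compute_depth_image();                 /* depth image of the target object */
        else
            compute_distance();                    /* module-to-target distance        */
    }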
It will be appreciated that, since the first outgoing beam is used for 3D sensing of the target object (i.e., acquiring its depth image) and the second outgoing beam is used for distance sensing (i.e., acquiring the distance between the target object and the 3D sensing module), the field of view of the first outgoing beam in this embodiment should be larger than that of the second outgoing beam. Moreover, "the transmitting end 10 emits the first outgoing beam" and "the transmitting end 10 emits the second outgoing beam" are alternatives, as are "the receiving end 20 generates the depth image of the target object" and "the receiving end 20 generates the distance information between the 3D sensing module and the target object". This means that the depth imaging mode and the proximity sensing mode of the 3D sensing module do not overlap in time: the module is either in the depth imaging mode, performing 3D sensing of the target object, or in the proximity sensing mode, performing close-range sensing of it, and never in both at once.
As can be seen from the above, the 3D sensing method provided by this embodiment can acquire both the depth image of the target object and the distance information between the 3D sensing module and the target object. This effectively integrates the 3D sensing module and the proximity sensor of a conventional camera device, letting them share a single transmitting end 10 and a single receiving end 20. When the method is applied to a camera device, the device no longer needs a separate proximity sensor alongside a 3D sensing module; a single 3D sensing module as described in this embodiment suffices, which effectively reduces the volume and weight of the camera device and substantially lowers manufacturing cost.
It should be noted that the embodiments in this disclosure are described in a progressive manner, each focusing on its differences from the others; for the parts that the embodiments share, cross-reference among them suffices. Since the product embodiments are similar to the method embodiments, their description is relatively brief, and the relevant points can be found in the corresponding parts of the method embodiments.
It should also be noted that, in this disclosure, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between them. Furthermore, the terms "comprise", "include", and their variants are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Absent further limitation, an element qualified by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes it.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the disclosure. Accordingly, the disclosure is not to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (17)

  1. A 3D sensing module, comprising a transmitting end and a receiving end;
    the transmitting end is configured to emit a first outgoing beam in a depth imaging mode, or to emit a second outgoing beam in a proximity sensing mode; wherein the first outgoing beam, upon reflection by a target object, produces a corresponding first reflected beam, the second outgoing beam, upon reflection by the target object, produces a corresponding second reflected beam, and a field of view of the first outgoing beam is larger than a field of view of the second outgoing beam;
    the receiving end is configured, in the depth imaging mode, to enable all pixels and form a first detection channel with the transmitting end, so as to receive the first reflected beam and generate a depth image of the target object, or, in the proximity sensing mode, to enable a subset of pixels and form a second detection channel with the transmitting end, so as to receive the second reflected beam and generate distance information between the 3D sensing module and the target object.
  2. The 3D sensing module of claim 1, wherein the transmitting end comprises a first light source, a second light source, a light modulation device, and a substrate; wherein the first light source and the second light source are both disposed on the substrate and spaced apart from each other, and the light modulation device is mounted over the substrate in the optical path of the first light source and the second light source;
    the first light source is configured to emit a first incident beam in the depth imaging mode;
    the second light source is configured to emit a second incident beam in the proximity sensing mode;
    the light modulation device is configured to modulate the first incident beam and project the corresponding first outgoing beam, or to modulate the second incident beam and project the corresponding second outgoing beam.
  3. The 3D sensing module of claim 2, wherein support portions extending in the direction of the optical path are formed on two opposite sides of the substrate, and two opposite sides of the light modulation device are respectively disposed on the two support portions.
  4. The 3D sensing module of claim 2, wherein the transmitting end further comprises a control circuit, the control circuit being disposed on the substrate and electrically connected to the first light source and the second light source;
    the control circuit is configured to turn on the first light source and turn off the second light source in the depth imaging mode, or to turn on the second light source and turn off the first light source in the proximity sensing mode.
  5. The 3D sensing module of claim 4, wherein the light modulation device is a liquid crystal panel, the liquid crystal panel having a diffusion state and a transparent state;
    the control circuit is further configured to switch the liquid crystal panel to the diffusion state in the depth imaging mode, or to the transparent state in the proximity sensing mode.
  6. The 3D sensing module of claim 2, wherein the first light source comprises at least one sub-light-source; and when the first light source comprises a plurality of sub-light-sources, the plurality of sub-light-sources are arranged in an array.
  7. The 3D sensing module of claim 6, wherein the light modulation device has a first modulation region and a second modulation region connected to each other; wherein the first modulation region corresponds to the first light source, and the second modulation region corresponds to the second light source;
    the first modulation region is configured to modulate the first incident beam and project the corresponding first outgoing beam, a footprint of the first incident beam on the first modulation region being confined within the first modulation region;
    the second modulation region is configured to modulate the second incident beam and project the corresponding second outgoing beam, a footprint of the second incident beam on the second modulation region being confined within the second modulation region.
  8. The 3D sensing module of claim 7, wherein the first modulation region comprises a plurality of sub-modulation regions connected to one another, the number of sub-modulation regions matching the number of sub-light-sources, with one sub-modulation region corresponding to one sub-light-source;
    each sub-modulation region is configured to modulate the first incident beam emitted by its corresponding sub-light-source and project the corresponding first outgoing beam, the first outgoing beams projected by different sub-modulation regions having different fields of view.
  9. The 3D sensing module of claim 7, wherein the first modulation region comprises n sub-modulation regions, the n-th sub-modulation region being obtained by extending outward a preset distance from the periphery of the (n-1)-th sub-modulation region, where n is an integer greater than 1;
    the sub-modulation regions are configured to modulate the first incident beam and project the corresponding first outgoing beam, wherein, counting from the first sub-modulation region to the n-th sub-modulation region, the number of sub-modulation regions in an open state is positively correlated with the field of view of the first outgoing beam.
  10. The 3D sensing module of claim 2, wherein the light modulation device comprises a diffuser and a transparent glass; wherein the diffuser corresponds to the first light source, and the transparent glass corresponds to the second light source;
    the diffuser is configured to modulate the first incident beam and project the corresponding first outgoing beam, the first outgoing beam being a flood beam;
    the transparent glass is configured to admit the second incident beam and project the corresponding second outgoing beam, the second incident beam and the second outgoing beam being identical.
  11. The 3D sensing module of claim 2, wherein the light modulation device comprises a diffractive optical element and a transparent glass; wherein the diffractive optical element corresponds to the first light source, and the transparent glass corresponds to the second light source;
    the diffractive optical element is configured to modulate the first incident beam and project the corresponding first outgoing beam, the first outgoing beam being a speckle beam;
    the transparent glass is configured to admit the second incident beam and project the corresponding second outgoing beam, the second incident beam and the second outgoing beam being identical.
  12. The 3D sensing module of claim 2, wherein the light modulation device comprises a diffractive optical element and an optical lens; wherein the diffractive optical element corresponds to the first light source, and the optical lens corresponds to the second light source;
    the diffractive optical element is configured to modulate the first incident beam and project the corresponding first outgoing beam, the first outgoing beam being a speckle beam;
    the optical lens is configured to converge the second incident beam and project the corresponding second outgoing beam.
  13. The 3D sensing module of claim 2, wherein the substrate is a ceramic substrate.
  14. The 3D sensing module of claim 2, wherein each of the first light source and the second light source is any one of an LED and a VCSEL.
  15. The 3D sensing module of claim 1, wherein the receiving end comprises an image sensor, the image sensor comprising a pixel array;
    the image sensor is configured to receive the first reflected beam through a first detection channel formed by the pixel array together with the transmitting end and generate the depth image of the target object from the first reflected beam, or to receive the second reflected beam through a second detection channel formed by the pixel array together with the transmitting end and generate the distance information between the 3D sensing module and the target object from the second reflected beam; wherein, in the depth imaging mode, all pixels of the pixel array are in an enabled state, and in the proximity sensing mode, a subset of pixels of the pixel array corresponding to the second reflected beam is in the enabled state.
  16. A 3D sensing method, applied to a 3D sensing module, the method comprising:
    controlling a transmitting end to emit a first outgoing beam in a depth imaging mode, or to emit a second outgoing beam in a proximity sensing mode; wherein the first outgoing beam, upon reflection by a target object, produces a corresponding first reflected beam, the second outgoing beam, upon reflection by the target object, produces a corresponding second reflected beam, and a field of view of the first outgoing beam is larger than a field of view of the second outgoing beam; and
    controlling a receiving end, in the depth imaging mode, to enable all pixels and form a first detection channel with the transmitting end, so as to receive the first reflected beam and generate a depth image of the target object, or, in the proximity sensing mode, to enable a subset of pixels and form a second detection channel with the transmitting end, so as to receive the second reflected beam and generate distance information between the 3D sensing module and the target object.
  17. An electronic device, comprising the 3D sensing module of any one of claims 1-15.
PCT/CN2022/122365 2022-07-18 2022-09-29 3D sensing module, 3D sensing method, and electronic device WO2024016478A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210843030.3 2022-07-18
CN202210843030.3A 3D sensing module, 3D sensing method, and electronic device

Publications (1)

Publication Number Publication Date
WO2024016478A1 true WO2024016478A1 (zh) 2024-01-25

Family

ID=84157285

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/122365 WO2024016478A1 (zh) 3D sensing module, 3D sensing method, and electronic device

Country Status (2)

Country Link
CN (1) CN115407308A (zh)
WO (1) WO2024016478A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116359945B * 2023-05-16 2023-10-20 Honor Device Co., Ltd. TOF sensing module and electronic device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140346361A1 (en) * 2013-05-23 2014-11-27 Yibing M. WANG Time-of-flight pixels also sensing proximity and/or detecting motion in imaging devices & methods
CN107884066A * 2017-09-29 2018-04-06 Shenzhen Orbbec Co., Ltd. Optical sensor based on a flood illumination function and 3D imaging device thereof
CN107968865A * 2017-12-26 2018-04-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Output module and electronic device
CN208781240U * 2018-06-26 2019-04-23 Shenzhen Fushi Technology Co., Ltd. 3D chip module, identity recognition device, and electronic device
CN110456379A * 2019-07-12 2019-11-15 Shenzhen Orbbec Co., Ltd. Fused depth measurement device and distance measurement method
CN111142088A * 2019-12-26 2020-05-12 Shenzhen Orbbec Co., Ltd. Light emitting unit, depth measurement device, and method

Also Published As

Publication number Publication date
CN115407308A (zh) 2022-11-29

Similar Documents

Publication Publication Date Title
WO2020057208A1 (zh) Electronic device
WO2020057207A1 (zh) Electronic device
US20210352198A1 (en) Camera Assembly and Electronic Device
US11546453B2 (en) Projection module and terminal
JP2017502466A (ja) Backlight module and liquid crystal display device using the same
WO2024016478A1 (zh) 3D sensing module, 3D sensing method, and electronic device
WO2021109374A1 (zh) Fill-light device, control method for fill-light device, and computer storage medium
WO2019213865A1 (zh) Light source module, image acquisition device, identity recognition device, and electronic device
CN111965895A (zh) Display device
US20230154359A1 (en) Display assembly and display device
WO2020020125A1 (zh) Mobile terminal
CN211603554U (zh) TOF module, camera device, and electronic device
WO2022017438A1 (zh) Display assembly and display device
WO2015016048A1 (ja) Light source device, illumination device, and liquid crystal display device
CN209448840U (zh) Light source module, 3D imaging system, identity recognition device, and electronic device
WO2021164455A1 (zh) Liquid crystal module, electronic device, and screen interaction system
CN109581746A (zh) Backlight module and display device
US20230324776A1 (en) Optical system comprising hybrid light source, and projector device comprising same
US20210264625A1 (en) Structured light code overlay
CN209803502U (zh) Electronic device
CN208907945U (zh) Projection module, imaging device, and electronic device
TWI438547B (zh) Multi-directional transmissive color image projection device and method thereof
US20210124181A1 (en) Projection module, structured light three-dimensional imaging device and electronic apparatus
CN113747140A (zh) TOF camera module, electronic device, and method for generating a 3D image
EP4369695A1 (en) Electronic device comprising cameras

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22951742

Country of ref document: EP

Kind code of ref document: A1