CN113452983A - Image capturing device and method and computing system - Google Patents


Info

Publication number
CN113452983A
CN113452983A (application CN202010217801.9A)
Authority
CN
China
Prior art keywords
sub
light sources
light source
light
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010217801.9A
Other languages
Chinese (zh)
Inventor
王志
毛信贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Oumaisi Microelectronics Co Ltd
Original Assignee
Jiangxi Oumaisi Microelectronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Oumaisi Microelectronics Co Ltd filed Critical Jiangxi Oumaisi Microelectronics Co Ltd
Priority to CN202010217801.9A
Publication of CN113452983A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/128: Adjusting depth or disparity
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/194: Transmission of image signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30: Image reproducers
    • H04N 13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N 13/322: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays, using varifocal lenses or mirrors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Measurement Of Optical Distance (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to an image capturing device, an image capturing method, and a computing system. The image capturing device comprises: an emitter for emitting light beams to a field of view, comprising at least one group of light source units, each light source unit comprising a plurality of independently controlled sub-light sources arranged in an array, each light source unit corresponding to one partition of the field of view, with the sub-light sources corresponding one-to-one to the sub-areas of that partition; a receiver for receiving the light beams reflected back from the field of view and outputting response signals, comprising at least one group of receiving units, each receiving unit being a single-photon sensing unit that corresponds to one partition and hence to all sub-areas of that partition; and a processor, electrically connected to the emitter and the receiver, for generating an emission signal that turns on the sub-light sources of a light source unit in sequence and for obtaining depth information from the emission signal and the received response signals. The final imaging pixel count is a multiple of that of the prior art, so the resulting depth image is more accurate.

Description

Image capturing device and method and computing system
Technical Field
The invention relates to the technical field of 3D imaging systems, in particular to an image capturing device and method and a computing system.
Background
In a 3D imaging system, the time-of-flight (TOF) method measures the distance to a target to obtain a depth image containing target depth values, thereby measuring the three-dimensional structure or contour of a target object (or a target detection area). It is widely applied in fields such as motion-sensing control, behavior analysis, monitoring, automatic driving, artificial intelligence, machine vision, and automatic 3D modeling.
Existing time-of-flight methods comprise the indirect time-of-flight method (I-ToF) and the direct time-of-flight method (D-ToF). The indirect method, currently the mainstream technical scheme, obtains a depth value by measuring the phase shift accumulated while laser light emitted by a vertical-cavity surface-emitting laser (VCSEL) makes one round trip to the target object. It is simple to apply and operate: the VCSEL is synchronized with the camera's image sensor, laser pulses are emitted in phase with the camera shutter, and the photon time of flight is calculated from the phase offset between the emitted and received pulses, from which the distance between the emitting point and the object is deduced. The direct time-of-flight method, a complementary technical scheme, obtains a depth value by directly measuring the time interval between emission and reception of a pulse signal emitted by the VCSEL.
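As an illustrative sketch (not part of the patent disclosure), the two time-of-flight variants reduce to simple distance formulas: D-ToF converts a directly measured round-trip time, while I-ToF recovers the round trip from the phase shift of a modulated beam. The function names are hypothetical.

```python
# Illustrative sketch of the two ToF depth formulas (not from the patent text).
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def direct_tof_depth(round_trip_time_s: float) -> float:
    """D-ToF: depth from a directly measured pulse round-trip time."""
    return C * round_trip_time_s / 2.0

def indirect_tof_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """I-ToF: depth from the phase shift of an amplitude-modulated beam.

    The result is unambiguous only within half the modulation wavelength.
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)
```

For example, a 10 ns round trip corresponds to roughly 1.5 m of depth under either formula.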
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image capturing device and method, and a computing system, that address the problem of poor imaging performance.
An image capture device comprising:
an emitter for emitting a light beam to a field of view, comprising at least one set of light source units, each of the light source units comprising a plurality of sub-light sources arranged in an array and controlled independently, each of the light source units corresponding to a partition of the field of view, the plurality of sub-light sources corresponding to a plurality of sub-areas of the partition, wherein the field of view comprises at least one of the partitions, and each of the partitions comprises a plurality of the sub-areas;
a receiver for receiving the light beam reflected back from the field of view and outputting a response signal, comprising at least one set of receiving units, each receiving unit being a single-photon sensing unit, each receiving unit corresponding to one of the partitions, and one single-photon sensing unit corresponding to all sub-areas of one of the partitions;
and a processor, electrically connected to the transmitter and the receiver, for generating a transmission signal to control the plurality of sub-light sources of the light source unit to turn on in sequence, and for obtaining depth information from the transmission signal and the received response signal.
This technical scheme has at least the following technical effects. In the above image capturing apparatus, the light source units, the field-of-view partitions, and the receiving units are in one-to-one correspondence; the sub-light sources and the sub-areas are in one-to-one correspondence; and one single-photon sensing unit corresponds to all sub-areas in one partition. The processor generates an emission signal and transmits it to the emitter. The emitter turns on the sub-light sources according to the received signal; each sub-light source emits a light beam toward the field of view, illuminating its corresponding sub-area, and the beam is reflected by the field of view. The receiver receives the reflected beam and transmits a response signal to the processor, which obtains depth information from the emission signal and the received response signal. Because the processor turns on the sub-light sources of the emitter in sequence, the corresponding sub-areas are illuminated in sequence; the receiver receives the reflected beams of the sub-light sources in turn and outputs a sequence of response signals, from which the processor obtains a sequence of depth values. Each depth value belongs to a different sub-area, so the values are mutually independent and non-overlapping. The processor therefore accumulates their sum, the final imaging pixel count is a multiple of that of the prior art, and the depth image containing target depth values has high accuracy.
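The pixel-multiplication effect described above comes from time-sharing each single-photon sensing unit across all sub-areas of its partition, one sub-light source at a time. A minimal sketch with illustrative numbers (not from the patent; the function name is hypothetical):

```python
# Sketch of the resolution-multiplication effect: each single-photon sensing
# unit yields one depth sample per sequentially lit sub-area, instead of one
# sample total. Numbers below are illustrative, not from the patent.

def effective_depth_pixels(num_sensing_units: int,
                           sub_areas_per_partition: int) -> int:
    """Each sensing unit produces one depth value per sub-area in sequence."""
    return num_sensing_units * sub_areas_per_partition

# Prior art: one sensing unit senses one region -> pixels equal the SPAD count.
# This scheme: a 4-sub-area partition multiplies the pixel count by 4.
assert effective_depth_pixels(100, 4) == 4 * effective_depth_pixels(100, 1)
```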
In one embodiment, the transmitter includes an array of VCSEL lasers integrated on the same semiconductor die.
In this scheme, the emitter is an array of VCSEL lasers integrated on the same semiconductor die, which facilitates arranging the sub-light sources in an array and controlling them independently.
In one embodiment, the receiver comprises an array of Single Photon Avalanche Diodes (SPADs).
In this scheme, the receiver includes an array of single-photon avalanche diodes (SPADs) so as to output response signals according to the received light beams reflected back from the field of view.
In one embodiment, each of the sub-light sources in the light source unit has the same shape.
By making all sub-light sources in a light source unit the same shape, this scheme keeps the structure of the light source unit simple and facilitates dividing the field of view into partitions.
In one embodiment, the number of the sub-light sources of all the light source units is the same.
By making the number of sub-light sources the same in all light source units, this scheme keeps the structure of the emitter simple and facilitates the division into light source units.
In one embodiment, the processor is further configured to focus the receiver according to the field of view.
By enabling the processor to adjust the focus of the receiver according to the size of the field of view, the receiver can adapt to different light fields of the transmitter and accommodate fields of view at different distances.
In addition, the present invention also provides an image capturing method comprising the steps of:
step S401: sequentially turning on a plurality of sub-light sources in a light source unit according to an emission signal, the sub-light sources emitting light beams to a field of view, wherein the plurality of sub-light sources are arranged in an array and independently controlled, each light source unit corresponds to one partition of the field of view, and the sub-light sources correspond one-to-one to the sub-areas of that partition, the field of view comprising at least one partition and each partition comprising a plurality of sub-areas;
step S402: receiving the emitted light beam reflected by the field of view and outputting a response signal;
step S403: analyzing the emission signal and the received response signal to obtain depth information.
This technical scheme has at least the following technical effects. In step S401, the processor generates an emission signal and transmits it to the emitter; the emitter turns on the sub-light sources in sequence according to the received signal, each sub-light source emits a beam toward the field of view to illuminate its corresponding sub-area, and the beam is reflected by the field of view. In step S402, the receiver receives the beam reflected by the field of view, outputs a response signal, and transmits it to the processor. In step S403, the processor obtains one piece of depth information from each corresponding pair of emission and response signals; by accumulating the depth information obtained from the sequence of signal pairs, the final imaging pixel count becomes a multiple of that of the prior art, so the depth image containing target depth values is highly accurate. The method thus obtains a pixel-multiplied imaging effect conveniently and quickly.
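Steps S401 to S403 can be sketched as a single capture loop over the sub-light sources of one partition. This is an illustrative sketch, not the patent's implementation; the helper names and the `emit`/`receive` interfaces are assumptions.

```python
# Illustrative sketch of steps S401-S403 (helper names are hypothetical).
from typing import Callable, List

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def capture_partition_depths(num_sub_sources: int,
                             emit: Callable[[int], float],
                             receive: Callable[[], float]) -> List[float]:
    """Sequentially light each sub-source (S401), sense the echo (S402),
    and convert each round trip into one depth value (S403)."""
    depths = []
    for i in range(num_sub_sources):
        t_emit = emit(i)        # S401: turn on sub-light source i
        t_recv = receive()      # S402: the single-photon unit senses the echo
        depths.append(SPEED_OF_LIGHT * (t_recv - t_emit) / 2.0)  # S403
    return depths
```

In use, `emit` would trigger one sub-light source and return its emission timestamp, and `receive` would return the timestamp at which the shared single-photon sensing unit registers the reflected beam.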
In one embodiment, the sequentially turning on a plurality of sub-light sources in a light source unit according to the emission signal, where the sub-light sources emit light beams to the field of view, specifically includes:
in each of the light source units, the sub-light sources fixedly emit light beams to a sub-area of one of the zones.
By fixing the correspondence between sub-light sources and sub-areas, so that the beam emitted by each sub-light source always illuminates the same sub-area, this image capturing method has simple logic and simplifies the configuration of the processor.
In one embodiment, the sequentially turning on a plurality of sub-light sources in a light source unit according to the emission signal, where the sub-light sources emit light beams to the field of view, specifically includes:
in each light source unit, the sub-light sources randomly emit light beams to the sub-areas of the partition, with the emitted beams of the sub-light sources still corresponding one-to-one to the sub-areas.
By making the correspondence between sub-light sources and sub-areas random while keeping it one-to-one, so that each beam illuminates a randomly assigned sub-area, this image capturing method can capture richer image information.
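The fixed and random assignment modes differ only in how sub-light sources are mapped to sub-areas: random assignment that remains one-to-one is a permutation. A minimal sketch (illustrative only; the patent does not specify an implementation, and the function names are hypothetical):

```python
# Sketch of the two sub-source-to-sub-area assignment modes described above
# (illustrative; not the patent's control logic).
import random

def fixed_assignment(num_sub_areas: int) -> list:
    """Fixed mode: sub-source i always illuminates sub-area i."""
    return list(range(num_sub_areas))

def random_assignment(num_sub_areas: int, rng: random.Random) -> list:
    """Random mode: still one-to-one, so it is a permutation of the sub-areas."""
    areas = list(range(num_sub_areas))
    rng.shuffle(areas)
    return areas

# Both modes cover every sub-area exactly once.
assert sorted(random_assignment(4, random.Random(0))) == fixed_assignment(4)
```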
In one embodiment, the analyzing the transmission signal and the received response signal to obtain depth information specifically includes:
analyzing the emission signal of one sub-light source and the corresponding received response signal to obtain one piece of depth information;
and superposing the pieces of depth information corresponding to the plurality of sub-light sources to obtain the depth information of the field of view covered by those sub-light sources.
This image capturing method obtains the corresponding depth information from each successive pair of emission and response signals and then superposes the pieces of depth information, so a pixel-multiplied imaging effect is obtained conveniently and quickly.
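The superposition step above amounts to arranging the sequentially acquired per-sub-area depth values into one depth map for the partition. A minimal sketch under the assumption (not stated in the patent) that the sub-areas form a rows-by-columns grid scanned in row-major order; the function name is hypothetical:

```python
# Illustrative sketch of superposing per-sub-area depth values into one
# partition depth map (grid layout and scan order are assumptions).

def superpose_depths(depths: list, rows: int, cols: int) -> list:
    """Arrange sequentially acquired depth values (one per sub-area,
    row-major order) into a 2-D depth map for the partition."""
    assert len(depths) == rows * cols, "one depth value per sub-area"
    return [depths[r * cols:(r + 1) * cols] for r in range(rows)]

# Four sub-areas in a 2x2 partition yield a 2x2 depth map from one SPAD.
depth_map = superpose_depths([1.2, 1.3, 1.1, 1.4], rows=2, cols=2)
```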
In one embodiment, the image capture method further comprises focusing the receiver according to the field of view.
In this image capturing method, the processor can adjust the focal length of the receiver according to the size of the field of view, so that the receiver adapts to different light fields of the transmitter and fields of view at different distances can be accommodated.
In addition, the invention also provides a computing system comprising the image capturing device according to any one of the above technical solutions.
This technical scheme has at least the following technical effects. The computing system includes the image capturing apparatus described above, in which the light source units, field-of-view partitions, and receiving units are in one-to-one correspondence, the sub-light sources and sub-areas are in one-to-one correspondence, and one single-photon sensing unit corresponds to all sub-areas of one partition. It therefore inherits the advantages described for that apparatus: the sequentially obtained pieces of depth information are mutually independent and non-overlapping, their sum multiplies the imaging pixel count relative to the prior art, and the depth image containing target depth values is highly accurate. The computing system with the image capturing device is thus able to obtain a pixel-multiplied imaging effect.
In one embodiment, the computing system is a mobile computer, such as a tablet computer or a smart phone.
Drawings
FIG. 1 is a schematic diagram of an image capture device according to the prior art;
FIG. 2 is a schematic diagram of an image capture device according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating operation of an image capture device according to an embodiment of the present invention;
FIG. 4 is a flowchart of an image capturing method according to an embodiment of the invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 1 is a schematic diagram of an image capturing apparatus in the prior art. A light beam 02 emitted from an emitter 01 is projected onto the whole field of view 03, and the light beam 02 reflected from the field of view 03 travels to a receiver 04. After a light-sensing point on the receiver 04 senses the beam, the time difference between emission and sensing is calculated directly, yielding depth information. However, because a sub-light source in the existing emitter 01, a sub-area of the field of view 03, and a light-sensing point of the receiver 04 are in one-to-one correspondence, the obtained depth information is limited and a good imaging pixel effect is difficult to achieve, which reduces the accuracy of the depth image containing target depth values. To solve these problems, the present invention improves the depth information by changing the correspondence among sub-light sources, sub-areas, and light-sensing points, obtaining a pixel-multiplied imaging effect.
In a first embodiment, as shown in fig. 2 (F, G, and M in fig. 3 are corresponding enlarged views), the present invention provides an image capturing apparatus 100 for calculating depth information of a target object (field of view). The image capturing apparatus 100 comprises three parts: a transmitter 110, a receiver 120, and a processor 140. The processor 140 is electrically connected to the transmitter 110 and the receiver 120, with electrical signals transmitted between the processor 140 and the transmitter 110 and between the processor 140 and the receiver 120, respectively, wherein:
the emitter 110 is configured to emit a light beam 111 to the field of view 130 upon receiving an emission signal sent by the processor 140. The emitter 110 includes at least one group of light source units 112; the number of light source units 112 may be one, two, three, or more. Each light source unit 112 includes a plurality of sub-light sources 113, for example two, three, four, or more. The sub-light sources 113 are arranged in an array, either a regular matrix with rows and columns or an irregular array, and each sub-light source 113 is independently controlled by the processor 140. The field of view 130 includes at least one partition 131, and each partition 131 includes a plurality of sub-areas 132. Each light source unit 112 corresponds to one partition 131 of the field of view 130, the plurality of light source units 112 correspond one-to-one to the plurality of partitions 131, the plurality of sub-light sources 113 correspond one-to-one to the plurality of sub-areas 132 of one partition 131, and one sub-light source 113 emits the light beam 111 to one sub-area 132;
the receiver 120 is configured to receive the light beam 111 reflected by the field of view 130; the beam acting on the receiver 120 produces a response signal, which is transmitted to the processor 140. The receiver 120 includes at least one group of receiving units 121; the number of receiving units 121 may be one, two, three, or more. Each receiving unit 121 is a single-photon sensing unit 122 and corresponds to one partition 131; the single-photon sensing unit 122 corresponds to all sub-areas 132 of that partition 131, so light reflected from all sub-areas 132 of the same partition 131 is received by the same single-photon sensing unit 122;
the processor 140 is configured to generate an emission signal that controls the plurality of sub-light sources 113 of the light source unit 112 to turn on in sequence. The emission signal is transmitted to the transmitter 110, which turns on the corresponding sub-light sources 113 accordingly, and the processor 140 analyzes the emission signal and the received response signal to calculate the depth information.
In the image capturing apparatus 100, the light source units 112, the partitions 131 of the field of view 130, and the receiving units 121 are in one-to-one correspondence; the sub-light sources 113 and the sub-areas 132 are in one-to-one correspondence; and one single-photon sensing unit 122 corresponds to all sub-areas 132 in one partition 131. The processor 140 generates an emission signal and transmits it to the transmitter 110, which turns on the sub-light sources 113 according to the received signal. Each sub-light source 113 emits a light beam 111 toward the field of view 130, illuminating its corresponding sub-area 132, and the beam is reflected by the field of view 130. The receiver 120 receives the reflected beam 111 and transmits a response signal to the processor 140, which obtains depth information from the emission signal and the received response signal. Because the processor 140 turns on the sub-light sources 113 of the transmitter 110 in sequence, the corresponding sub-areas 132 are illuminated in sequence; the receiver 120 receives the reflected beams 111 of the sub-light sources 113 in turn and outputs response signals in sequence, from which a plurality of pieces of depth information are obtained. Each piece belongs to a different sub-area 132 of the partition 131, so the pieces are mutually independent and non-overlapping. The processor 140 therefore calculates their sum, the final imaging pixel count is a multiple of that of the prior art, and the depth image containing target depth values has high accuracy.
The emitter 110 can take various structural forms. To facilitate the array arrangement and independent control of the plurality of sub-light sources 113, in a preferred embodiment the emitter 110 may include an array of VCSEL (vertical-cavity surface-emitting laser) lasers integrated on the same semiconductor die.
In the image capturing apparatus 100, the VCSEL laser has advantages such as low cost and easy integration into a large-area array, along with good beam quality, fiber-coupling efficiency, and cavity-facet reflectivity; the emitter 110 is therefore defined as an array of VCSEL lasers integrated on the same semiconductor die, so that the sub-light sources 113 can be arranged in an array and controlled independently. In a specific arrangement, the emitter 110 may be a VCSEL array light source chip formed by generating a plurality of VCSEL light sources on a single semiconductor substrate, and the light beam 111 emitted by the emitter 110 may be visible, infrared, or ultraviolet light. The emitter 110 is not limited to VCSEL lasers; it may also use light-emitting diodes (LEDs), edge-emitting lasers (EELs), or a combination of LEDs, EELs, and VCSEL lasers. The specific structural form of the emitter 110 is determined by the actual situation of the image capturing apparatus 100.
The receiver 120 can likewise be configured in various ways. To facilitate outputting a response signal based on the received light beam 111 reflected back from the field of view 130, in a preferred embodiment the receiver 120 may include an array of single-photon avalanche diodes (SPADs): each single-photon sensing unit 122 is a single-photon avalanche diode, and the array of diodes forms the receiver 120.
In the image capturing apparatus 100, the single-photon avalanche diode (SPAD) has advantages such as high sensitivity and fast response, enabling long-distance, high-precision measurement. Compared with light-integrating image sensors such as CCD/CMOS devices, a SPAD can count single photons, for example collecting weak light signals by time-correlated single photon counting (TCSPC); the receiver 120 is therefore defined to include an array of SPADs so as to output response signals according to the received light beams 111 reflected back from the field of view 130. In a specific arrangement, the receiver 120 is not limited to a SPAD array and may take other structural forms that meet the requirements; its specific form is determined by the actual situation of the image capturing apparatus 100.
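The TCSPC approach mentioned above accumulates photon arrival times over many laser pulses into a histogram; the peak bin then gives the round-trip time even in the presence of stray counts. A minimal sketch (illustrative only; not the patent's circuit, and the function name is hypothetical):

```python
# Minimal sketch of time-correlated single photon counting (TCSPC) with a
# SPAD: arrival times are histogrammed over many pulses, and the peak bin's
# centre time is taken as the round trip. Illustrative, not the patent's design.

C = 299_792_458.0  # speed of light, m/s

def tcspc_depth(arrival_times_s: list, bin_width_s: float) -> float:
    """Histogram photon arrivals and convert the peak bin's time to a depth."""
    bins = {}
    for t in arrival_times_s:
        b = int(t / bin_width_s)
        bins[b] = bins.get(b, 0) + 1
    peak_bin = max(bins, key=bins.get)       # most-populated time bin
    round_trip = (peak_bin + 0.5) * bin_width_s  # bin-centre estimate
    return C * round_trip / 2.0

# Most photons arrive near 10 ns; the lone stray count is outvoted.
d = tcspc_depth([10.1e-9, 10.2e-9, 10.4e-9, 3.5e-9], bin_width_s=1e-9)
```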
Since the size of a light source unit 112 is proportional to the size of the corresponding portion of the field of view 130 (the smaller the field of view 130, the smaller the light source unit 112 required to illuminate it), the field of view 130 can be adjusted by changing the number and shape of the sub-light sources 113 in the light source unit 112. To facilitate the division of the field of view 130, in a preferred embodiment, as shown in fig. 3, each sub-light source 113 in the light source unit 112 has the same shape, which matches the shape of the corresponding region of the field of view 130.
In the image capturing apparatus 100, defining all sub-light sources 113 in a light source unit 112 to have the same shape keeps the structure of the light source unit 112 simple and facilitates the division of the field of view 130. In a specific setting, each sub-light source 113 may be square, rectangular, triangular, circular, and so on, with each corresponding partition 131 having the matching shape. The sub-light sources 113 of different light source units 112 of the emitter 110 may have different shapes (for example, square in one group and circular in another) to adapt to different image characteristics, or the same shape (for example, square in both groups), which keeps the structure of the entire emitter 110 simple and further facilitates the division of the field of view 130.
To facilitate division of the field of view 130, as shown in fig. 3, specifically, the number of sub-light sources 113 is the same in every light source unit 112, matching the number of sub-areas 132 in the partition 131 of the field of view 130 corresponding to that light source unit 112.
In the above image capturing apparatus 100, giving all light source units 112 the same number of sub-light sources 113 keeps the structure of the transmitter 110 simple and facilitates the arrangement of the plurality of light source units 112. In a specific arrangement, each light source unit 112 may contain two, three, four, five or more sub-light sources 113, with the corresponding partition 131 divided into the matching number of sub-areas 132. The numbers of sub-light sources 113 in different light source units 112 of the emitter 110 may differ (for example, three in one group of light source units 112 and six in another) to adapt to different image characteristics. Alternatively, the numbers may be the same (for example, four in each group), which keeps the structure of the entire emitter 110 simple and further facilitates division of the field of view 130.
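The constraint described above, a shared shape and a sub-light-source count that matches the sub-area count, can be sketched as a small data model. All names here (LightSourceUnit, rows, cols, shape) are illustrative assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass

# Illustrative sketch only: the patent states that the sub-light sources in a
# unit share one shape and that their count matches the sub-areas of the
# unit's partition.  Every name below is an assumption for illustration.

@dataclass(frozen=True)
class LightSourceUnit:
    rows: int              # sub-light sources per column of the array
    cols: int              # sub-light sources per row of the array
    shape: str = "square"  # single shape shared by every sub-light source

    @property
    def num_sub_sources(self) -> int:
        return self.rows * self.cols

    def num_sub_areas(self) -> int:
        # One sub-area per sub-light source: the corresponding partition is
        # divided into exactly as many sub-areas, each of the same shape.
        return self.num_sub_sources

unit = LightSourceUnit(rows=2, cols=2)
print(unit.num_sub_sources, unit.num_sub_areas(), unit.shape)
```

Changing `rows`/`cols` adjusts how finely the partition is subdivided, mirroring the adjustability described in the text.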
To accommodate fields of view 130 at different distances, in a preferred embodiment, the processor 140 is further configured to focus the receiver 120 according to the field of view 130.
In the image capturing apparatus 100, the processor 140 performs focus adjustment on the receiver 120 according to the size of the field of view 130, so that the receiver 120 can adapt to the different light fields of the transmitter 110 and thus accommodate fields of view 130 at different distances. In a specific arrangement, when the distance between the transmitter 110 and the field of view 130 is larger, the emission angle of the transmitter 110 is larger and the field of view 130 is larger; when the distance is smaller, the emission angle and the field of view 130 are smaller. As the distance between the transmitter 110 and the field of view 130 gradually increases or decreases, the focal length of the receiver 120 must be adjusted accordingly to the size of the field of view 130.
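As a rough illustration of how the receiver's focus might follow the field-of-view distance: the patent gives no formula, so the thin-lens relation 1/f = 1/d_o + 1/d_i and every name below are assumptions, not the patent's method:

```python
# Rough illustration only: the patent says the receiver's focus must follow
# the size/distance of the field of view but gives no formula.  The thin-lens
# relation 1/f = 1/d_o + 1/d_i and all names here are assumptions.

def required_focal_length(object_distance_m: float,
                          image_distance_m: float = 0.005) -> float:
    """Thin-lens focal length (m) that focuses an object at object_distance_m
    onto a sensor held at a fixed image distance."""
    return 1.0 / (1.0 / object_distance_m + 1.0 / image_distance_m)

# As the field of view recedes, the required focal length grows toward the
# fixed image distance (5 mm in this sketch).
for d in (0.1, 1.0, 10.0):
    print(f"object at {d:5.1f} m -> f = {required_focal_length(d) * 1000:.4f} mm")
```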
Second embodiment:
in addition, as shown in fig. 2, 3 and 4, the present invention also provides an image capturing method comprising the steps of:
step S401, sequentially turning on a plurality of sub-light sources 113 in a light source unit 112 according to emission signals, the sub-light sources 113 emitting light beams 111 toward the field of view 130; the plurality of sub-light sources 113 are arranged in an array and controlled independently, each light source unit 112 corresponds to one partition 131 of the field of view 130, and the plurality of sub-light sources 113 correspond one-to-one to the plurality of sub-areas 132 of the partition 131, wherein the field of view 130 comprises at least one partition 131 and each partition 131 comprises a plurality of sub-areas 132; in a specific arrangement, the plurality of sub-light sources 113 are turned on in sequence according to the emission signals, so that the corresponding plurality of sub-areas 132 in the field of view 130 are illuminated in sequence;
step S402, receiving the light beam 111 reflected back by the field of view 130 and outputting a response signal;
step S403, analyzing the emission signal and the received response signal to obtain depth information.
In the above image capturing method, in step S401 the processor 140 generates an emission signal and transmits it to the emitter 110; the emitter 110 turns on the sub-light sources 113 in sequence according to the received emission signal, each sub-light source 113 emits a light beam 111 toward the field of view 130 to illuminate the sub-area 132 corresponding to that sub-light source 113, and the light beam 111 is then reflected by the field of view 130. In step S402, the receiver 120 receives the light beam 111 reflected by the field of view 130, outputs a response signal, and transmits it to the processor 140. In step S403, the processor 140 obtains depth information from the emission signal and the received response signal: each corresponding pair of emission signal and response signal yields one piece of depth information, and the processor 140 sums the pieces of depth information obtained from the sequentially received pairs. The final imaging pixel effect is a multiple of the imaging effect of the prior art, so a depth image containing the target depth values has high accuracy. The method thus conveniently and quickly obtains an imaging effect with multiplied pixels.
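The three steps above can be sketched as a minimal simulation under an idealized time-of-flight model; the function names, the distance model and the scene depths are illustrative assumptions, not the patent's implementation:

```python
# Minimal simulation of steps S401-S403 under an idealized time-of-flight
# model; everything named here is an assumption for illustration.

C_LIGHT = 3.0e8  # speed of light in m/s

def fire_and_receive(true_depth_m: float) -> float:
    """S401 + S402: emit toward one sub-area, return the round-trip time."""
    return 2.0 * true_depth_m / C_LIGHT

def time_to_depth(round_trip_s: float) -> float:
    """S403: recover depth from one emission/response pair."""
    return round_trip_s * C_LIGHT / 2.0

scene = [1.0, 1.5, 2.0, 2.5]   # true depths of four sub-areas (m)
depths = []
for sub_area_depth in scene:   # sub-light sources turned on in sequence
    response = fire_and_receive(sub_area_depth)
    depths.append(time_to_depth(response))

print(depths)  # one independent depth value per sub-area
```

Each loop iteration stands for one emission/response pair, so the number of recovered depth samples grows with the number of sub-light sources fired.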
To facilitate the configuration of the processor 140, in a preferred embodiment, sequentially turning on a plurality of sub-light sources 113 in a light source unit 112 according to the emission signal, the sub-light sources 113 emitting light beams toward the field of view 130, specifically comprises: in each light source unit 112, each sub-light source 113 emits its light beam to a fixed sub-area 132 of the partition 131.
In the above image capturing method, defining a fixed correspondence between the sub-light sources 113 and the sub-areas 132, so that the light beam emitted by each sub-light source 113 always irradiates the same sub-area 132, keeps the logic simple and facilitates the configuration of the processor 140. In a specific arrangement, the number of sub-light sources 113 in the light source unit 112 equals the number of sub-areas 132 in the partition 131, the light beam generated by each sub-light source 113 irradiates the same sub-area 132 every time it is turned on, and the sub-light sources 113 and sub-areas 132 keep the same mapping relationship throughout.
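The fixed correspondence can be sketched as an identity mapping that never changes between firing cycles (names and the group size are illustrative):

```python
# Sketch of the fixed correspondence (illustrative names): sub-light source i
# always illuminates sub-area i, so the mapping is the identity on every
# firing cycle and the processor's bookkeeping stays trivial.

NUM_SUB = 4

def fixed_mapping(cycle: int) -> list:
    """Sub-area index lit by each sub-light source; the cycle index is
    ignored because the mapping never changes."""
    return list(range(NUM_SUB))

print(fixed_mapping(0))                       # [0, 1, 2, 3]
assert fixed_mapping(0) == fixed_mapping(7)   # identical every cycle
```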
To capture richer image information, in a preferred embodiment, sequentially turning on a plurality of sub-light sources 113 in a light source unit 112 according to the emission signal, the sub-light sources 113 emitting light beams toward the field of view 130, specifically comprises: in each light source unit 112, each sub-light source 113 emits its light beam to a randomly chosen sub-area 132 of the partition 131, and the emitted light beams of the sub-light sources 113 correspond one-to-one to the sub-areas 132 of the partition 131.
In the above image capturing method, defining a random correspondence between the sub-light sources 113 and the sub-areas 132, so that the light beams emitted by the sub-light sources 113 irradiate the sub-areas 132 in a varying pattern while remaining in one-to-one correspondence with them, allows richer image information to be captured. In a specific arrangement, the number of sub-light sources 113 in the light source unit 112 equals the number of sub-areas 132 in the partition 131, but the sub-area 132 irradiated by a given sub-light source 113 is not fixed: one sub-area 132 is irradiated in one turn-on, and a different sub-area 132 in the next. Although the mapping between the plurality of sub-light sources 113 and the sub-areas 132 changes between cycles, in every cycle the emitted light beams of the plurality of sub-light sources 113 still correspond one-to-one to the plurality of sub-areas 132 of the partition 131.
It should be noted that, over multiple turn-on cycles, the plurality of light source units 112 and the plurality of partitions 131 may keep the same mapping relationship, with each light source unit 112 corresponding to one fixed partition 131, which facilitates the configuration of the processor. Alternatively, the mapping between light source units 112 and partitions 131 may change between cycles, with a light source unit 112 corresponding to one partition 131 in one cycle and to another partition 131 in the next, which enriches the captured image information.
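The random correspondence can be sketched with a fresh permutation per firing cycle, which varies the mapping while guaranteeing it stays one-to-one; this is a sketch under assumed names, not the patent's implementation:

```python
import random

# Sketch of the random correspondence (assumed names): each firing cycle
# draws a fresh permutation, so the sub-light-source-to-sub-area mapping
# varies between cycles yet remains one-to-one, and every sub-area is still
# illuminated exactly once per cycle.

def random_mapping(num_sub: int, rng: random.Random) -> list:
    """mapping[i] = sub-area lit by sub-light source i; a permutation."""
    mapping = list(range(num_sub))
    rng.shuffle(mapping)
    return mapping

rng = random.Random(42)                # seeded for reproducibility
for cycle in range(3):
    m = random_mapping(4, rng)
    assert sorted(m) == [0, 1, 2, 3]   # bijective: full coverage each cycle
    print(f"cycle {cycle}: {m}")
```

Using a permutation rather than independent random draws is what enforces the one-to-one requirement stated in the text.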
To obtain the pixel-multiplied imaging effect conveniently and quickly, in a preferred embodiment, analyzing the emission signal and the received response signal to obtain depth information specifically comprises:
analyzing the signal emitted by one sub-light source 113 and the corresponding received response signal to obtain one piece of depth information; and superimposing the depth information corresponding to the plurality of sub-light sources 113 to obtain the depth information of the field of view 130 covered by the plurality of sub-light sources 113.
According to this image capturing method, the corresponding pieces of depth information are obtained in sequence from the corresponding groups of emission and response signals, and the pieces are then superimposed, so that the pixel-multiplied imaging effect is obtained conveniently and quickly.
For ease of illustration, this embodiment of the invention considers one light source unit 112 with four sub-light sources 113 to explain the operating principle of the image capturing apparatus 100. The partition 131 corresponding to the four sub-light sources 113 has four sub-areas 132: light source A corresponds to area a, light source B to area b, light source C to area c, and light source D to area d. The processor 140 controls light sources A, B, C and D to turn on in sequence, so areas a, b, c and d are illuminated in sequence, and through the actions of the receiver 120 and the processor 140, the depth information h(a) of area a, h(b) of area b, h(c) of area c and h(d) of area d are obtained in turn. The depth information h(a), h(b), h(c) and h(d) all belong to the same partition 131 containing areas a, b, c and d; the pieces are independent and do not overlap, so the depth information of the partition 131 obtained by the processor 140 is the sum of h(a), h(b), h(c) and h(d). Compared with the prior art, the final 3D information map contains 4 times the depth information and achieves a 4 times imaging pixel effect; the image capturing apparatus 100 therefore captures more depth information and yields a better imaging pixel effect.
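The four-source example above can be worked through in code; the 2x2 grid coordinates and depth values are invented for illustration:

```python
# Worked version of the four-sub-light-source example (grid coordinates and
# depth values invented for illustration): each firing yields independent,
# non-overlapping depth information h(a)..h(d), and the partition's depth map
# is simply their union, four times the samples of a single exposure.

h = {
    "a": {(0, 0): 1.0},   # depth info from light source A over area a
    "b": {(0, 1): 1.2},   # from light source B over area b
    "c": {(1, 0): 1.4},   # from light source C over area c
    "d": {(1, 1): 1.6},   # from light source D over area d
}

partition_depth = {}
for source in ("a", "b", "c", "d"):            # fired in sequence
    assert not (partition_depth.keys() & h[source].keys())  # no overlap
    partition_depth.update(h[source])          # superpose independent pieces

print(len(partition_depth))  # 4 depth samples vs 1 from a single exposure
```

The overlap check makes the independence claim explicit: because the sub-areas are disjoint, superposition is a plain union with no averaging or conflict resolution needed.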
To accommodate fields of view 130 at different distances, in a preferred embodiment, the image capturing method further comprises focusing the receiver 120 according to the field of view 130.
In the above image capturing method, the processor 140 can perform focus adjustment on the receiver 120 according to the size of the field of view 130, so that the receiver 120 can adapt to the different light fields of the transmitter 110 and thus accommodate fields of view 130 at different distances. When the distance between the transmitter 110 and the field of view 130 is large, the emission angle of the transmitter 110 is large and the field of view 130 is large; when the distance is small, the emission angle and the field of view 130 are small. As the distance between the transmitter 110 and the field of view 130 gradually increases or decreases, the focal length of the receiver 120 must be adjusted accordingly to the size of the field of view 130.
Third embodiment:
In addition, the invention also provides a computing system comprising the image capturing apparatus 100 of any of the above technical solutions. In a preferred embodiment, the computing system may be a mobile computer, such as a tablet computer, a smartphone or a notebook computer, or may take other structural forms as needed.
In the above computing system, in the image capturing apparatus 100 described above, the light source units 112, the partitions of the field of view 130 and the receiving units 121 correspond one-to-one, the sub-light sources 113 and the sub-areas 132 correspond one-to-one, and one single-photon sensing unit 122 corresponds to all the sub-areas 132 in one partition 131. The processor 140 generates an emission signal and transmits it to the transmitter 110; the transmitter 110 turns on the sub-light sources 113 according to the received emission signal, and each sub-light source 113 emits a light beam 111 toward the field of view 130 to illuminate its corresponding sub-area 132, whereupon the light beam 111 is reflected by the field of view 130. The receiver 120 receives the reflected light beams 111 and transmits response signals to the processor 140, which obtains depth information from the emission signal and the received response signals. Because the processor 140 turns on the sub-light sources 113 of the transmitter 110 in sequence, the corresponding sub-areas 132 of the partition 131 are guaranteed to be illuminated in sequence; the receiver 120 receives the reflected light beams 111 of the sub-light sources 113 in sequence and outputs the response signals in sequence, and a plurality of pieces of depth information are obtained under the action of the processor 140. These pieces of depth information all belong to one partition 131 and are independent of each other without overlap, so the processor 140 computes their sum; the final imaging pixel effect is a multiple of the imaging effect of the prior art, and the depth image containing the target depth values therefore has high accuracy.
The computing system with the image capturing apparatus 100 can obtain an imaging effect of pixel multiplication.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, any combination of them that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. An image capturing apparatus, characterized by comprising:
an emitter for emitting a light beam to a field of view, comprising at least one set of light source units, each of the light source units comprising a plurality of sub-light sources arranged in an array and controlled independently, each of the light source units corresponding to a partition of the field of view, the plurality of sub-light sources corresponding to a plurality of sub-areas of the partition, wherein the field of view comprises at least one of the partitions, and each of the partitions comprises a plurality of the sub-areas;
a receiver for receiving the light beam reflected back by the field of view and outputting a response signal, comprising at least one set of receiving units, each receiving unit being a single photon sensing unit, each receiving unit corresponding to one of the partitions, and one single photon sensing unit corresponding to all the sub-areas of one of the partitions;
and the processor is electrically connected with the transmitter and the receiver and used for generating a transmitting signal to control the plurality of sub light sources of the light source unit to be sequentially lightened and obtaining depth information according to the transmitting signal and the received response signal.
2. The image capture device of claim 1, wherein the transmitter comprises an array of VCSEL lasers integrated on the same semiconductor die.
3. The image capture device of claim 1, wherein the receiver comprises an array of Single Photon Avalanche Diodes (SPADs).
4. The image capturing apparatus according to claim 1, wherein each of the sub light sources in the light source unit has the same shape.
5. The image capturing apparatus according to claim 4, wherein the number of the sub light sources of all the light source units is the same.
6. An image capturing method characterized by comprising the steps of:
sequentially turning on a plurality of sub-light sources in a light source unit according to emission signals, wherein the sub-light sources emit light beams to a field of view, the plurality of sub-light sources are arranged in an array and are independently controlled, each light source unit corresponds to a partition of the field of view, and the plurality of sub-light sources correspond to a plurality of sub-areas of the partition one by one, wherein the field of view comprises at least one partition, and each partition comprises a plurality of sub-areas;
receiving the light beam reflected back by the field of view and outputting a response signal;
and analyzing the transmitting signal and the received response signal to obtain depth information.
7. The image capturing method as claimed in claim 6, wherein the sequentially turning on a plurality of sub-light sources in a light source unit according to the emission signal, the sub-light sources emitting light beams to the field of view, comprises:
in each of the light source units, the sub-light sources fixedly emit light beams to a sub-area of one of the partitions.
8. The image capturing method as claimed in claim 6, wherein the sequentially turning on a plurality of sub-light sources in a light source unit according to the emission signal, the sub-light sources emitting light beams to the field of view, comprises:
in each light source unit, the sub-light sources randomly emit light beams to a sub-area of the partition, and the emitted light beams of the sub-light sources correspond to the sub-areas of the partition one by one.
9. The image capturing method according to any one of claims 6 to 8, wherein the analyzing the transmitted signal and the received response signal to obtain depth information specifically comprises:
analyzing the emission signal of one sub-light source and the corresponding received response signal to obtain depth information;
and superposing the depth information corresponding to the plurality of sub-light sources to obtain the depth information of the fields of view corresponding to the plurality of sub-light sources.
10. A computing system comprising the image capture device of any of claims 1-5.
11. The computing system of claim 10, wherein the computing system is a mobile computer.
CN202010217801.9A 2020-03-25 2020-03-25 Image capturing device and method and computing system Pending CN113452983A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010217801.9A CN113452983A (en) 2020-03-25 2020-03-25 Image capturing device and method and computing system

Publications (1)

Publication Number Publication Date
CN113452983A true CN113452983A (en) 2021-09-28

Family

ID=77806891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010217801.9A Pending CN113452983A (en) 2020-03-25 2020-03-25 Image capturing device and method and computing system

Country Status (1)

Country Link
CN (1) CN113452983A (en)


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210928
