WO2023056585A1 - A detection system, terminal device, control detection method and control device - Google Patents
A detection system, terminal device, control detection method and control device
- Publication number: WO2023056585A1 (application PCT/CN2021/122544)
- Authority: WIPO (PCT)
- Prior art keywords: pixels, pixel array, light source, pixel, array
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/55—Optical parts specially adapted for electronic image sensors; Mounting thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/703—SSIS architectures incorporating pixels for producing signals other than image signals
- H04N25/705—Pixels for depth measurement, e.g. RGBZ
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
- G01S13/931—Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
- G01S2013/9327—Sensor installation details
- G01S2013/93271—Sensor installation details in the front of the vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
- G01S13/931—Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
- G01S2013/9327—Sensor installation details
- G01S2013/93275—Sensor installation details in the bumper area
Definitions
- the present application relates to the field of detection technology, and in particular to a detection system, a terminal device, a control detection method and a control device.
- detection systems are playing an increasingly important role on smart terminals: a detection system can perceive the surrounding environment, identify and track moving targets based on the perceived environmental information, recognize static scenes such as lane lines and signs, and perform path planning in combination with navigator and map data.
- Angular resolution is an important parameter used to characterize the performance of the detection system.
- the first way is to increase the focal length of the optical imaging system in the detection system, which increases the overall size of the detection system and is not conducive to its miniaturization.
- the second way is to reduce the field of view of the detection system to improve its angular resolution, which limits the application scenarios of the detection system.
- the present application provides a detection system, terminal equipment, control detection method and control device, which are used to improve the angular resolution of the detection system.
- the present application provides a detection system, the detection system includes a pixel array and a light source array, the pixel array includes a first pixel array, the light source array includes a first light source array, the first pixel array includes M×N pixels, and the first light source array includes M×N light sources corresponding to the M×N pixels, where both M and N are integers greater than 1.
- the pixels in the first pixel array are staggered in the row direction, and the dislocation size of the pixels is smaller than the distance between the centers of two adjacent pixels in the row direction; or, the pixels in the first pixel array are staggered in the column direction, and the dislocation size of the pixels is smaller than the distance between the centers of two adjacent pixels in the column direction; the arrangement of the light sources in the first light source array is coupled or matched to the arrangement of the pixels in the first pixel array.
- the light sources in the first light source array are staggered in the row direction, and the dislocation size of the light sources is smaller than the distance between the centers of two adjacent light sources in the row direction; or, the light sources in the first light source array are staggered in the column direction, and the dislocation size of the light sources is smaller than the distance between the centers of two adjacent light sources in the column direction; the arrangement of the pixels in the first pixel array is coupled or matched with the arrangement of the light sources in the first light source array.
- the pixels in the first pixel array are staggered in the row direction, and the dislocation size of the pixels is smaller than the distance between the centers of two adjacent pixels in the row direction; correspondingly, the light sources in the first light source array are staggered in the row direction, and the dislocation size of the light sources is smaller than the distance between the centers of two adjacent light sources in the row direction.
- the pixels in the first pixel array are staggered in the column direction, and the dislocation size of the pixels is smaller than the distance between the centers of two adjacent pixels in the column direction; correspondingly, the light sources in the first light source array are staggered in the column direction, and the dislocation size of the light sources is smaller than the distance between the centers of two adjacent light sources in the column direction.
- combining the first light source array with staggered light sources and the first pixel array with staggered pixels is equivalent to increasing the number of equivalent lines of the detection system within a unit area of the first pixel array; as the number of equivalent lines increases, the number of light spots of echo signals received per unit area of the first pixel array increases, thereby helping to improve the angular resolution of the detection system.
- the angular resolution of the detection system in the row direction (which may be referred to as the first angular resolution) can be improved.
- the angular resolution of the detection system in the column direction (which may be referred to as the second angular resolution) can be improved.
- the first pixel array is part or all of the pixels of the pixel array, and/or the first light source array is part or all of the light sources of the light source array.
- the pixels in the pixel array are obtained by combining at least one photosensitive unit.
- when a pixel is obtained by combining two or more photosensitive units, it helps to improve the dynamic range of the pixel array.
- the pixels in the first pixel array are staggered at equal intervals in the row direction; or, the pixels in the first pixel array are staggered at non-equal intervals in the row direction; or, the pixels in the first pixel array are staggered partially at equal intervals and partially at non-equal intervals in the row direction.
- the offset arrangement of the pixels in the first pixel array in the row direction helps to increase the number of equivalent lines in the row direction, thereby helping to improve the angular resolution of the detection system in the row direction.
- the pixels in the first pixel array are staggered at equal intervals in the column direction; or, the pixels in the first pixel array are staggered at non-equal intervals in the column direction; or, the pixels in the first pixel array are staggered partially at equal intervals and partially at non-equal intervals in the column direction.
- the offset arrangement of the pixels in the first pixel array in the column direction helps to increase the number of equivalent lines in the column direction, thereby helping to improve the angular resolution of the detection system in the column direction.
- the first pixel array includes m first regions, there are at least two first regions among the m first regions in which the pixels are arranged in different ways, and m is an integer greater than 1.
- the first pixel array includes n second regions, there are at least two second regions among the n second regions in which the pixels are combined from different numbers of photosensitive units, and n is an integer greater than 1.
- the first pixel array includes h third regions, there are at least two third regions among the h third regions in which the pixels have different dislocation sizes, and h is an integer greater than 1.
- in this way, different viewing angles within the entire field of view can correspond to different first angular resolutions or second angular resolutions without changing the binning method of the photosensitive units.
- the light sources in the light source array include an active area, and the active area is used to emit signal light; the light source array includes k areas, there are at least two areas among the k areas in which the relative position of the active area within the light source is different, and k is an integer greater than 1.
- in this way, different viewing angles can correspond to different first angular resolutions or second angular resolutions without changing the structure of the first pixel array.
- the detection system further includes an optical imaging system
- the light source array is located at a focal plane of an image side of the optical imaging system
- the pixel array is located at a focal plane of an object side of the optical imaging system.
- since the light source array is located at the focal plane of the image side of the optical imaging system, and the pixel array is located at the focal plane of the object side of the optical imaging system, the signal light emitted by the light sources in the light source array can be imaged on the corresponding pixels. Further, the arrangement of the light sources in the first light source array and the arrangement of the pixels in the first pixel array may be coupled or matched through the optical imaging system.
- the present application provides a control detection method, which can be applied to the above-mentioned first aspect or any detection system of the first aspect.
- the method includes controlling and gating the first pixels in the first pixel array, where the first pixels are part or all of the pixels in the first pixel array; and controlling and gating the first light sources corresponding to the first pixels in the first light source array.
- the method further includes acquiring a first electrical signal from the first pixel, and determining related information of the target according to the first electrical signal; the first electrical signal is determined from a first echo signal, and the first echo signal is obtained by the target in the detection area reflecting the first signal light emitted by the first light source.
- a first control signal for gating the first pixel and/or the first light source is obtained, and the first control signal is sent to the pixel array and/or the light source array, wherein the first control signal is generated at least according to the target angular resolution.
- the pixels in the pixel array are obtained by combining p×q photosensitive units, both p and q are integers greater than 1; in this way, the amount of point cloud information of one pixel can be expanded to Q, where Q is an integer greater than 1.
- the angular resolution of the detection system can be further improved by increasing the amount of point cloud information.
- the first angular resolution and the second angular resolution of the detection system can be further improved by increasing the amount of point cloud information.
- the a×b photosensitive units in the central area of a pixel in the pixel array may be controlled to be gated, and the photosensitive units adjacent to at least one of the a×b photosensitive units may also be controlled to be gated, wherein the a×b photosensitive units correspond to a first piece of point cloud information, a is smaller than p, b is smaller than q, and the adjacent photosensitive units output a second piece of point cloud information.
- the amount of point cloud information can be increased.
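- As a hedged illustration of this expansion (the array shapes, the Poisson photon counts, and the expand_point_cloud helper are assumptions for the sketch, not details from the application), the following fragment gates the central a×b photosensitive units of one pixel and then their one-unit-shifted neighbours, so a single pixel yields Q > 1 pieces of point cloud information:

```python
import numpy as np

def expand_point_cloud(cell_counts: np.ndarray, a: int, b: int):
    """cell_counts: p x q photon counts of one pixel's photosensitive units."""
    p, q = cell_counts.shape
    r0, c0 = (p - a) // 2, (q - b) // 2          # top-left of the central a x b block
    central = cell_counts[r0:r0 + a, c0:c0 + b]  # -> first piece of point cloud information
    points = [central.sum()]
    # Each neighbouring block shifted by one unit contributes a further piece
    # of point cloud information, so one pixel yields Q > 1 points.
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        block = cell_counts[r0 + dr:r0 + dr + a, c0 + dc:c0 + dc + b]
        if block.shape == (a, b):
            points.append(block.sum())
    return points  # Q point cloud values instead of one

counts = np.random.poisson(5, size=(4, 4))   # p = q = 4, as in the Fig. 4 example
print(expand_point_cloud(counts, a=2, b=2))  # up to Q = 5 points from one pixel
```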
- the present application provides a control device, which is used to implement the second aspect or any one of the methods in the second aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
- the functions may be implemented by hardware, or may be implemented by hardware executing corresponding software.
- Hardware or software includes one or more modules corresponding to the above-mentioned functions.
- the control device is, for example, a chip, a chip system, or a logic circuit.
- the control device may include: a transceiver module and a processing module.
- the processing module may be configured to support the control device to perform the corresponding functions in the method of the second aspect above, and the transceiving module is used to support the interaction between the control device and the detection system or other modules in the detection system.
- the transceiver module may be an independent receiving module, an independent transmitting module, a transceiver module integrated with transceiver functions, and the like.
- the present application provides a control device, which is used to implement the third aspect or any one of the methods in the third aspect, and includes corresponding functional modules, respectively used to implement the steps in the above method.
- the functions may be implemented by hardware, or may be implemented by hardware executing corresponding software.
- Hardware or software includes one or more modules corresponding to the above-mentioned functions.
- the control device is, for example, a chip, a chip system, or a logic circuit.
- the control device may include: an interface circuit and a processor.
- the processor may be configured to support the control device to perform the corresponding functions in the method of the third aspect above, and the interface circuit is used to support the interaction between the control device and the detection system or other structures in the detection system.
- the control device may further include a memory, which may be coupled with the processor, and store necessary program instructions and the like of the control device.
- the present application provides a chip, which includes at least one processor and an interface circuit. Further, optionally, the chip may further include a memory, and the processor is used to execute computer programs or instructions stored in the memory, so that the chip executes the method in the above second aspect or any possible implementation manner of the second aspect.
- the present application provides a terminal device, where the terminal device includes the first aspect or any one of the detection systems in the first aspect.
- the terminal device may further include a processor, and the processor may be used to control the detection system to detect the detection area.
- the present application provides a computer-readable storage medium, in which a computer program or instructions are stored; when the computer program or instructions are executed by the control device, the control device is caused to execute the method in the above second aspect or any possible implementation manner of the second aspect.
- the present application provides a computer program product, the computer program product includes a computer program or instructions; when the computer program or instructions are executed by the control device, the control device is caused to execute the method in the above second aspect or any possible implementation manner of the second aspect.
- Figure 1a is a schematic diagram of combining photosensitive units provided by the present application.
- Figure 1b is a schematic diagram of the number of lines of a laser radar provided by the present application.
- Figure 1c is a schematic diagram of the BSI principle provided by the present application.
- Figure 1d is a schematic diagram of the FSI principle provided by the present application.
- Figure 2a is a schematic diagram of a possible application scenario provided by the present application.
- Figure 2b is a schematic diagram of another possible application scenario provided by the present application.
- Figure 3 is a schematic structural diagram of a detection system provided by the present application.
- Figure 4 is a schematic diagram of the relationship between photosensitive units and a pixel provided by the present application.
- Figure 5a is a schematic structural diagram of pixels staggered in the row direction provided by the present application.
- Figure 5b is a schematic structural diagram of pixels aligned in the row direction provided by the present application.
- Figure 5c is a schematic structural diagram of pixels staggered in the column direction provided by the present application.
- Figure 5d is a schematic structural diagram of pixels aligned in the column direction provided by the present application.
- Figure 5e is a schematic structural diagram of another arrangement of pixels staggered in the row direction provided by the present application.
- Figure 5f is a schematic structural diagram of another arrangement of pixels staggered in the column direction provided by the present application.
- Figure 5g is a schematic structural diagram of yet another arrangement of pixels staggered in the row direction provided by the present application.
- Figure 5h is a schematic structural diagram of yet another arrangement of pixels staggered in the column direction provided by the present application.
- Figure 6a is a schematic structural diagram of a pixel array provided by the present application.
- Figure 6b is a schematic structural diagram of another pixel array provided by the present application.
- Figure 6c is a schematic structural diagram of yet another pixel array provided by the present application.
- Figure 7 is a schematic diagram of the relationship between a first light source array and a first pixel array provided by the present application.
- Figure 8a is a schematic structural diagram of an optical lens provided by the present application.
- Figure 8b is a schematic structural diagram of another optical lens provided by the present application.
- Figure 9 is a schematic structural diagram of a terminal device provided by the present application.
- Figure 10 is a schematic diagram of a detection method provided by the present application.
- Figure 11 is a schematic structural diagram of a control device provided by the present application.
- Figure 12 is a schematic structural diagram of another control device provided by the present application.
- Binning is a readout method in which the signals (such as photons) sensed by the photosensitive units (cells) merged into one pixel are added together and read out in the form of one pixel (Pixel).
- Binning can generally be divided into binning in the row direction and binning in the column direction. Binning in the row direction superimposes the signals of adjacent rows and reads them out in the form of one pixel (see Figure 1a below), and binning in the column direction superimposes the signals of adjacent columns and reads them out in the form of one pixel.
- the detection system can implement only row binning, or only column binning, or both row and column binning.
- the Binning manner may also be other possible manners, such as Binning along a diagonal direction, which is not limited in the present application.
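- A minimal sketch of the binning read-out just described (the 4×4 counts array and the bin_cells helper are illustrative assumptions, not part of the application):

```python
import numpy as np

def bin_cells(cells: np.ndarray, m: int, n: int) -> np.ndarray:
    """Merge blocks of m x n photosensitive units into one pixel by summing their signals."""
    rows, cols = cells.shape
    return cells.reshape(rows // m, m, cols // n, n).sum(axis=(1, 3))

cells = np.arange(16).reshape(4, 4)  # signals sensed by 4 x 4 photosensitive units
print(bin_cells(cells, 2, 2))        # 2 x 2 binning: four merged pixels
print(bin_cells(cells, 2, 1))        # binning in the row direction (adjacent rows summed)
print(bin_cells(cells, 1, 2))        # binning in the column direction (adjacent columns summed)
```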
- the number of lines of the detection system refers to the number of signal lights emitted by the detection system at one time. Different signal lights can detect different positions in the detection area. Referring to Figure 1b, the number of lines of the detection system is 5. It should be understood that the number of lines of the detection system may be greater than 5 or less than 5, and Fig. 1b only uses 5 as an example, which is not limited in the present application.
- Angular resolution, also known as scanning resolution, refers to the ability of the detection system to resolve the minimum distance between two adjacent objects. The smaller the angular resolution, the more light spots are projected into the detection area, that is, the more points on the target in the detection area can be detected, and the higher the detection resolution.
- the angular resolution includes a first angular resolution and a second angular resolution, wherein the first angular resolution is an angular resolution in a row direction, and the second angular resolution is an angular resolution in a column direction.
- the first angular resolution θ1 can be expressed by the following Formula 1: θ1 = a1/f1
- the second angular resolution θ2 can be expressed by the following Formula 2: θ2 = a2/f2
- where a1 represents the side length of the pixels of the pixel array in the row direction, a2 represents the side length of the pixels of the pixel array in the column direction, f1 represents the equivalent focal length of the optical receiving system in the row direction, and f2 represents the equivalent focal length of the optical receiving system in the column direction.
- the angular resolution of the detection system is related to the size of the pixels in the pixel array at the receiving end and the focal length of the optical receiving system. If the row direction is consistent with the horizontal direction, and the column direction is consistent with the vertical direction, the angular resolution in the horizontal direction may also be called the first angular resolution, and the angular resolution in the vertical direction may also be called the second angular resolution.
- the angular resolution of the detection system is the same as the size of the minimum field of view of the pixel.
- the first angular resolution of the detection system is the same as the size of the minimum field of view in the row direction of the pixels
- the second angular resolution of the detection system is the same as the size of the minimum field of view in the column direction of the pixels.
- there is a trade-off between the spatial resolution and the size of the pixels: the smaller the pixel size, the higher the spatial resolution; the larger the pixel size, the lower the spatial resolution.
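- As a worked example of Formulas 1 and 2 (the pixel side length and focal length values below are illustrative assumptions, not taken from the application):

```python
import math

a1 = 40e-6   # assumed pixel side length in the row direction, in metres
f1 = 20e-3   # assumed equivalent focal length of the receiving optics in the row direction, in metres

theta1 = a1 / f1             # Formula 1: first angular resolution, in radians
print(math.degrees(theta1))  # ~0.115 degrees
# Halving the pixel side length halves the angular resolution (finer detection),
# illustrating the pixel-size/spatial-resolution trade-off described above.
print(math.degrees((a1 / 2) / f1))
```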
- BSI means that light enters the pixel array from the back side, see Figure 1c.
- the light is focused on the color filter layer by a microlens with an anti-reflection coating, is divided into three primary color components by the color filter layer, and is introduced into the pixel array.
- the back side corresponds to the front end of line (FEOL) process of the semiconductor manufacturing process.
- FSI means that light enters the pixel array from the front, see Figure 1d.
- the light is focused on the color filter layer by a microlens with an anti-reflection coating, is divided into three primary color components by the color filter layer, and passes through the metal wiring layer, so that parallel light is introduced into the pixel array.
- the front corresponds to the back end of line (BEOL) process of the semiconductor manufacturing process.
- the row address can be the abscissa, and the column address can be the ordinate.
- the rows of the pixel array correspond to the horizontal direction and the columns of the pixel array correspond to the vertical direction as an example.
- the row-column strobe signal can be used to read the data at the specified location in the memory, and the pixel corresponding to the read specified location is the gated pixel.
- the pixels in the pixel array can store the detected signals in corresponding memories.
- the pixels can be enabled to be in an active state by a bias voltage, so that they can respond to echo signals incident on their surfaces.
- Gating a light source refers to turning on the light source and controlling it to emit signal light at the corresponding power.
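- The row-column gating described above can be pictured with the following sketch (the Array2D class and its gate method are hypothetical; only the idea of strobing an element by its row address and column address and then driving the one-to-one corresponding light source follows the text):

```python
from dataclasses import dataclass

@dataclass
class Array2D:
    name: str
    rows: int
    cols: int

    def gate(self, row: int, col: int) -> None:
        # Row-column strobe: the element whose row address (abscissa) and
        # column address (ordinate) are both selected becomes active.
        assert 0 <= row < self.rows and 0 <= col < self.cols
        print(f"{self.name}: gated element at row={row}, col={col}")

pixels = Array2D("first pixel array", rows=4, cols=4)
sources = Array2D("first light source array", rows=4, cols=4)
row, col = 2, 3
pixels.gate(row, col)    # enable the pixel (e.g., via a bias voltage)
sources.gate(row, col)   # turn on the corresponding light source at the required power
```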
- Point cloud information is a collection of points in three-dimensional space. These points are usually represented in the form of X, Y, Z three-dimensional coordinates, and are generally used to represent the associated information of a target. For example, (X, Y, Z) can represent the geometric position of the target, together with intensity, depth (i.e., distance), segmentation results, etc.
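- One possible container for such point cloud information (the field set beyond the X, Y, Z coordinates is an illustrative assumption):

```python
from dataclasses import dataclass

@dataclass
class PointCloudPoint:
    x: float           # geometric position of the target
    y: float
    z: float
    intensity: float   # echo intensity
    depth: float       # distance to the target
    label: int = 0     # e.g., a segmentation result
```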
- the detection system may be a laser radar.
- Lidar can be installed on vehicles (such as unmanned vehicles, smart cars, electric vehicles, or digital cars, etc.) as vehicle-mounted lidar, please refer to Figure 2a.
- Lidar can be deployed in one or more of the directions in front of, behind, to the left of, and to the right of the vehicle to capture information about the surrounding environment of the vehicle.
- Figure 2a is an example where the lidar is deployed in front of the vehicle.
- the area that the lidar can perceive can be called the detection area of the lidar, and the corresponding field of view can be called the full field of view.
- the laser radar can acquire, in real time or periodically, the longitude and latitude, speed, and orientation of the vehicle, or the related information of targets within a certain range (such as other vehicles around), for example the distance of the target, the moving speed of the target, the attitude of the target, the position of the target, or its grayscale.
- the lidar or the vehicle can determine the vehicle's position and/or path planning, etc. based on this associated information. For example, use the latitude and longitude to determine the position of the vehicle, or use the speed and orientation to determine the driving direction and destination of the vehicle in the future, or use the distance of surrounding objects to determine the number and density of obstacles around the vehicle.
- an advanced driving assistance system (ADAS) can also be combined to realize assisted driving or automatic driving of the vehicle.
- the principle by which the laser radar detects the associated information of a target is as follows: the laser radar emits signal light in a certain direction; if there is a target in the detection area of the laser radar, the target can reflect the received signal light back to the laser radar (the reflected signal light can be called the echo signal), and the laser radar determines the related information of the target according to the echo signal.
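- A minimal time-of-flight sketch of this principle (the standard distance = c·delay/2 relation and the 200 ns delay are illustrative; the text above does not give the formula itself):

```python
C = 299_792_458.0   # speed of light in m/s

def distance_from_echo(round_trip_delay_s: float) -> float:
    """Standard time-of-flight relation: distance = c * round-trip delay / 2."""
    return C * round_trip_delay_s / 2.0

# A 200 ns delay between emitting the signal light and receiving the echo
# signal corresponds to a target roughly 30 m away.
print(distance_from_echo(200e-9))
```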
- the detection system may be a camera.
- the camera can also be installed on a vehicle (such as an unmanned vehicle, a smart vehicle, an electric vehicle, or a digital vehicle) as a vehicle-mounted camera, see FIG. 2b.
- the camera can obtain measurement information such as the distance and speed of the target in the detection area in real time or periodically, so as to provide necessary information for lane correction, vehicle distance keeping, reversing and other operations.
- Vehicle-mounted cameras can realize: a) target recognition and classification, such as lane line recognition, traffic light recognition, and traffic sign recognition; b) segmentation, mainly dividing vehicles, ordinary road edges, curb edges, boundaries without visible obstacles, unknown boundaries, etc.; c) detection of laterally moving targets, such as detection and tracking of pedestrians and vehicles crossing intersections; d) positioning and map creation, such as positioning and map creation based on visual simultaneous localization and mapping (SLAM) technology; and so on.
- lidar can also be mounted on drones as airborne radar.
- lidar can also be installed in a roadside unit (RSU), as a roadside traffic lidar, which can realize intelligent vehicle-road collaborative communication.
- lidar can be installed on an automated guided vehicle (AGV).
- an AGV is a transporter equipped with automatic navigation devices, such as electromagnetic or optical devices, that can travel along a prescribed navigation path and has safety protection and various transfer functions. The application scenarios are not listed here one by one.
- the application scenarios can be applied to areas such as unmanned driving, automatic driving, assisted driving, intelligent driving, connected vehicles, security monitoring, remote interaction, surveying and mapping, or artificial intelligence.
- the detection system 300 may include a pixel array 301 and a light source array 302, the pixel array 301 includes a first pixel array 3011, the light source array 302 includes a first light source array 3021, the first pixel array 3011 includes M×N pixels, and the first light source array 3021 includes M×N light sources corresponding to the M×N pixels, where both M and N are integers greater than 1.
- the pixels in the first pixel array are staggered in the row direction, and the dislocation size of the pixels is smaller than the distance between the centers of two adjacent pixels in the row direction; or, the pixels in the first pixel array are staggered in the column direction, and the dislocation size of the pixels is smaller than the distance between the centers of two adjacent pixels in the column direction; the arrangement of the light sources in the first light source array is coupled or matched to the arrangement of the pixels in the first pixel array.
- the misalignment of the pixels in the first pixel array in the row direction means that each pixel in the i-th row and the corresponding pixel in the j-th row are staggered by a certain amount in the row direction (that is, two pixels in two adjacent rows are not aligned, see Fig. 5a or Fig. 5e below), where the i-th row and the j-th row are two adjacent rows.
- the staggered arrangement of the light sources in the row direction in the first light source array means that the light sources in the i-th row and the j-th row are staggered by a certain amount in the row direction, and the i-th row and the j-th row are two adjacent rows.
- the misalignment of the pixels in the first pixel array in the column direction means that the pixels in the i-th column and the j-th column are staggered by a certain amount in the column direction (that is, two pixels in two adjacent columns are not aligned, see Fig. 5c or Fig. 5f below), where the i-th column and the j-th column are two adjacent columns.
- the staggered arrangement of the light sources in the column direction in the first light source array means that the light sources in the i-th column and the j-th column are staggered by a certain amount in the column direction, and the i-th column and the j-th column are two adjacent columns.
- the first light source array with staggered light sources and the first pixel array with staggered pixels are equivalent to increasing the number of equivalent lines of the detection system per unit area of the first pixel array (for details, please refer to the introduction of the four dislocation arrangements given below); as the number of equivalent lines increases, the number of spots of echo signals received per unit area of the first pixel array increases, thereby helping to improve the angular resolution of the detection system.
- the angular resolution of the detection system in the row direction (which may be referred to as the first angular resolution) can be improved.
- the angular resolution of the detection system in the column direction (which may be referred to as the second angular resolution) can be improved.
- the M×N light sources included in the first light source array correspond one-to-one to the M×N pixels included in the first pixel array.
- for example, the first light source array includes 2×2 light sources and the first pixel array includes 2×2 pixels; the 2×2 light sources correspond one-to-one to the 2×2 pixels: the light source 11 corresponds to the pixel 11, the light source 12 corresponds to the pixel 12, the light source 21 corresponds to the pixel 21, and the light source 22 corresponds to the pixel 22.
- the coupling or matching of the arrangement of the light sources in the first light source array and the arrangement of the pixels in the first pixel array includes but is not limited to: the arrangement of the light sources in the first light source array is the same as (or consistent with) the arrangement of the pixels in the first pixel array. Further, optionally, the arrangement of the light sources in the first light source array and the arrangement of the pixels in the first pixel array may be coupled or matched through an optical imaging system. It can also be understood that the signal light emitted by a light source in the light source array can be imaged, through the optical imaging system, on the pixel corresponding to that light source. Regarding the optical imaging system, reference may be made to the related descriptions below, which will not be repeated here.
- each structure shown in FIG. 3 is introduced and described below to give an exemplary specific implementation solution.
- the pixel array and the light source array below are not marked.
- the pixel array may be a two-dimensional (two dimensional, 2D) pixel array.
- the pixels in the pixel array may be obtained by combining (binning) at least one photosensitive unit (cell) (or referred to as a pixel).
- the photosensitive unit may be, for example, a single photon detector, and the single photon detector includes but not limited to a SPAD or a digital silicon photomultiplier (silicon photomultiplier, SiPM).
- the single photon detector can be formed by an FSI process or a BSI process. Adopting a BSI process can usually realize a smaller SPAD size, a higher photon detection efficiency (PDE), and higher energy efficiency.
- the binning method is m×n.
- FIG. 4 is a schematic structural diagram of a pixel provided in this application.
- the pixel is obtained by binning 4×4 photosensitive units, that is, the binning method of the photosensitive units is 4×4, and one pixel includes 4×4 photosensitive units.
- the binning method of the photosensitive units shown in FIG. 4 is only an example. In this application, a pixel may bin the photosensitive units only in the row direction, or only in the column direction.
- the shape of the pixel can be a square, a rectangle (as in the example of FIG. 4 above), or another possible regular shape (such as a hexagon, a circle, or an ellipse, etc.).
- the specific shape of the pixel is related to the photosensitive units that make up the pixel and the binning method of the photosensitive units. If the pixel is a rectangle, the long side of the pixel can be in the row direction, or can also be in the column direction.
- the photosensitive unit is generally a symmetrical regular figure, such as a square (refer to FIG. 4 above), of course, a rectangle may also be used, which is not limited in the present application.
- the first pixel array may be all of the pixel array, that is, the pixel array includes M×N pixels. Based on this, the first pixel array in this application can be replaced with the pixel array.
- the first pixel array may be some pixels of the pixel array, that is, in addition to the first pixel array formed by M×N pixels, the pixel array may also include other pixels. Based on this, the pixel array may form a regular figure (such as a rectangle or a square), or may form an irregular figure, which is not limited in this application.
- the pixels in the first pixel array can be arranged in a dislocation manner.
- the possible dislocation arrangement of the pixels in the first pixel array is exemplarily shown as follows.
- in the following, the pixels in the first pixel array are taken as rectangles, and the dislocation size is measured with reference to the center of the image area.
- the image area refers to the area in the pixel for receiving the echo signal from the detection area (refer to the elliptical area shown in FIG. 5a to FIG. 5d below). It can also be understood that the image corresponding to the echo signal is formed in the image area of the pixel. In other words, the image area is the imaging position of the echo signal.
- Mode 1: staggered at equal intervals in the row direction, aligned in the column direction.
- referring to FIG. 5a, it is a schematic structural diagram of a first pixel array provided by the present application in which pixels are staggered in the row direction.
- the pixels in the first pixel array are staggered at equal intervals in the row direction, with a period of N rows, where N is an integer greater than 1. In FIG. 5a, N is 3.
- the dislocation size of any two adjacent pixels in the row direction is Δ1, and Δ1 is smaller than the distance S1 between the centers of the two adjacent pixels.
- compared with the aligned arrangement of the first pixel array shown in FIG. 5b, the dislocation arrangement of the first pixel array shown in FIG. 5a can improve the angular resolution in the row direction by a factor of 3.
- it should be understood that, based on the arrangement of the first pixel array in FIG. 5b, the minimum field of view corresponding to a pixel is the field of view corresponding to the length S1; based on the arrangement of the first pixel array in FIG. 5a, the minimum field of view corresponding to a pixel is the field of view corresponding to the length S1/3.
- in addition, the angular resolution in the column direction based on the dislocation arrangement of the pixels shown in FIG. 5a is the same as that based on the aligned arrangement of the pixels shown in FIG. 5b.
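- A short sketch of the resolution gain of this mode (the pitch and focal length values are illustrative assumptions; only the factor-of-N relation follows the text):

```python
S1 = 120e-6   # assumed centre-to-centre pixel distance in the row direction, in metres
f1 = 20e-3    # assumed equivalent row-direction focal length, in metres
N = 3         # stagger period in rows, as in Fig. 5a

theta_aligned = S1 / f1            # minimum resolvable angle, aligned arrangement (Fig. 5b)
theta_staggered = (S1 / N) / f1    # staggered arrangement (Fig. 5a): minimum length shrinks to S1/N
print(theta_aligned / theta_staggered)   # -> 3.0, the factor-of-3 improvement
```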
- Mode 2: staggered at equal intervals in the column direction, aligned in the row direction.
- referring to FIG. 5c, it is a schematic structural diagram of a first pixel array provided by the present application in which pixels are staggered in the column direction.
- the pixels in the first pixel array are staggered at equal intervals in the column direction, with a period of M columns, where M is an integer greater than 1. In FIG. 5c, M is 3.
- the dislocation size of any two adjacent pixels in the column direction is Δ2, and Δ2 is smaller than the distance H2 between the centers of the two adjacent pixels.
- assuming that the focal length of the receiving optical system in the column direction is f2: if the pixels in the first pixel array are arranged in the existing aligned manner (as shown in FIG. 5d), within the range H2 in the column direction, the equivalent side length of the pixels of the first pixel array in the column direction is b1, and the second angular resolution is θ2 = b1/f2; if the pixels are staggered as shown in FIG. 5c, the equivalent side length of the pixels in the column direction is b1/3, and the second angular resolution is θ2′ = b1/(3f2). Therefore, compared with the aligned arrangement shown in FIG. 5d, the dislocation arrangement of the first pixel array shown in FIG. 5c can improve the angular resolution in the column direction by a factor of 3.
- it should be understood that, based on the arrangement of the first pixel array in FIG. 5d, within the range H2 in the column direction, the minimum field of view corresponding to a pixel is the field of view corresponding to the length H2; based on the arrangement of the first pixel array in FIG. 5c, the minimum field of view corresponding to a pixel is the field of view corresponding to the length H2/3.
- in addition, the angular resolution in the row direction based on the dislocation arrangement of the pixels shown in FIG. 5c is the same as that based on the aligned arrangement of the pixels shown in FIG. 5d.
- the dislocation size Δ1 in Mode 1 above may be the same as or different from the dislocation size Δ2 in Mode 2 above, which is not limited in the present application.
- Mode 3: staggered at unequal intervals in the row direction, aligned in the column direction.
- referring to FIG. 5e, it is a schematic structural diagram of another first pixel array provided by the present application in which pixels are staggered in the row direction.
- the pixels in the first pixel array have at least two different dislocation sizes in the row direction, and the dislocation arrangement has a period of N rows, where N is an integer greater than 1.
- the dislocation size Δ3 of any two adjacent pixels in the row direction is smaller than the distance S1 between the centers of the two adjacent pixels, and the dislocation size Δ4 of any two adjacent pixels in the row direction is also smaller than the distance S1 between the centers of the two adjacent pixels.
- in this way, the first pixel array has at least two different first angular resolutions in the row direction. For example, if there are two different dislocation sizes in the row direction, there are two different first angular resolutions in the row direction; for another example, if there are three different dislocation sizes in the row direction, there are three different first angular resolutions in the row direction.
- the dislocation sizes in the dislocation arrangement with unequal intervals in the row direction may be different from each other (as shown in FIG. 5e), or may be partially the same and partially different; this application does not limit this.
- compared with the aligned arrangement of pixels shown in FIG. 5b, the staggered arrangement of the first pixel array shown in FIG. 5e can improve the angular resolution in the row direction. It should be understood that, based on the arrangement of the first pixel array shown in FIG. 5e, the minimum field of view corresponding to a pixel in the row direction is the field of view corresponding to Δ3.
- the overall angular resolution of the first pixel array may be obtained based on the two first angular resolutions; for example, a weighted average of the two first angular resolutions may be taken; for another example, the resolution of the central region of the point cloud may be taken as the overall angular resolution of the first pixel array.
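- As one hedged way to write such a combination (the weights w″ and w‴ are illustrative; the text only says a weighted average may be taken):

```latex
\bar{\theta}_1 \;=\; \frac{w''\,\theta_1'' + w'''\,\theta_1'''}{w'' + w'''}
```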
- in addition, the angular resolution in the column direction based on the dislocation arrangement of the pixels shown in FIG. 5e is the same as that based on the arrangement of the pixels shown in FIG. 5b.
- Mode 4: staggered at unequal intervals in the column direction, aligned in the row direction.
- referring to FIG. 5f, it is a schematic structural diagram of another first pixel array provided by the present application in which pixels are staggered in the column direction.
- the pixels in the first pixel array have at least two different dislocation sizes in the column direction, and the dislocation arrangement has a period of M columns, where M is an integer greater than 1.
- the dislocation size of two adjacent pixels in the column direction is Δ5 or Δ6; the dislocation size Δ5 of any two adjacent pixels in the column direction is smaller than the distance H2 between the centers of the two adjacent pixels, and the dislocation size Δ6 of any two adjacent pixels in the column direction is also smaller than the distance H2 between the centers of the two adjacent pixels.
- in this way, the first pixel array has at least two different second angular resolutions in the column direction. For example, if there are two different dislocation sizes in the column direction, there are two different second angular resolutions in the column direction; for another example, if there are three different dislocation sizes in the column direction, there are three different second angular resolutions in the column direction.
- the dislocation sizes may be different from each other (as shown in FIG. 5f), or may be partly the same and partly different, which is not limited in this application.
- the angular resolution of the detection system in the column direction can be improved based on the dislocation arrangement of the first pixel array shown in FIG. 5f.
- it should be understood that, based on the arrangement of the first pixel array in FIG. 5d, the minimum field of view corresponding to a pixel is the field of view corresponding to the length H2; based on the arrangement of the first pixel array in FIG. 5f, the minimum field of view corresponding to a pixel is the field of view corresponding to the length Δ5.
- in addition, the angular resolution in the row direction based on the dislocation arrangement of the pixels shown in FIG. 5f is the same as that based on the arrangement of the pixels shown in FIG. 5d.
- the dislocation size of the pixels in the above first pixel array depends on the application scenario of the detection system. For example, a scene requiring a smaller (finer) angular resolution uses a smaller pixel dislocation size; for another example, a scene that tolerates a larger angular resolution uses a larger pixel dislocation size. It should be understood that the smaller the dislocation size, the smaller the angular resolution of the detection system and the greater the improvement in spatial resolution. In addition, the period (M or N) of the dislocation arrangement of the first pixel array can be designed according to the application requirements of the detection system.
- the pixels in the above four modes are all rectangles as examples. If the pixel is a circle, the side lengths used above to determine the first angular resolution and the second angular resolution may be replaced by the radius. If the pixel is an ellipse, the side length that determines the first angular resolution can be replaced by the length of the major axis (or the minor axis) of the ellipse, and correspondingly, the side length that determines the second angular resolution can be replaced by the length of the minor axis (or the major axis) of the ellipse.
- the above-mentioned side length for determining the first angular resolution may be replaced by the maximum side length in the row direction
- the side length for determining the second angular resolution may be replaced by the maximum side length in the column direction.
- the staggered arrangement of the pixels in the row direction in the first pixel array may also be a combination of Mode 1 and Mode 3 above, that is, some pixels are staggered at equal intervals and some at unequal intervals.
- the dislocation sizes in the row direction are Δ7, Δ8, and Δ7.
- the overall angular resolution of the first pixel array may be obtained based on the two first angular resolutions; for example, a weighted average of the two first angular resolutions may be taken, where the weight of θ1″ may be greater than the weight of θ1‴.
- the staggered arrangement of the pixels in the column direction in the first pixel array may also be a combination of Mode 2 and Mode 4 above, that is, some pixels are staggered at equal intervals and some at unequal intervals.
- the dislocation sizes in the column direction are Δ9, Δ10, and Δ9.
- the overall angular resolution of the first pixel array may be obtained based on the two second angular resolutions; for example, a weighted average of the two second angular resolutions may be taken, where the weight of θ2″ may be greater than the weight of θ2‴.
- the first pixel array may only include one area, that is, the entire first pixel array corresponds to a first angular resolution and a second angular resolution.
- the entire first pixel array adopts the same dislocation arrangement and the same dislocation size, and the binning method of the photosensitive units is also the same.
- the first pixel array may include at least two different areas, and each area may correspond to at least one first angular resolution or at least one second angular resolution. It can also be understood that, within the full field of view of the detection system, at least two different first angular resolutions or at least two different second angular resolutions may apply. It should be understood that when all the pixels in the first pixel array are gated, the corresponding field of view is the full field of view of the detection system.
- the following exemplarily shows a possible situation that the first pixel array includes at least two different regions.
- the first pixel array is divided into multiple first regions based on different dislocation arrangements.
- the first pixel array includes m first regions, there are at least two first regions in the m first regions, and the pixels in the at least two first regions are arranged in different ways, m is an integer greater than 1.
- the first pixel array may adopt a combination of at least two of the above-mentioned four staggered arrangements.
- one dislocation arrangement corresponds to one first region.
- the first regions of the first pixel array are divided based on the dislocation arrangement. Which combination to use can be determined according to the application scenario of the detection system.
- the first pixel array includes three first regions (namely the first region 1 , the first region 2 and the first region 3 ) as an example, and each first region corresponds to a dislocation arrangement of pixels.
- the arrangement manners of the pixels corresponding to the three first regions are different from each other.
- the first region 1 can adopt the dislocation arrangement of the above-mentioned mode 1
- the first region 2 can adopt the dislocation arrangement of the above-mentioned mode 2
- the first region 3 can adopt the dislocation arrangement of the above-mentioned mode 3.
- the field of view range of the first area 1 corresponds to the first angular resolution 1 and the second angular resolution 1
- the field of view of the first area 2 corresponds to the first angular resolution 2 and the second angular resolution 2.
- the field of view of the first area 3 corresponds to the first angular resolution 3 and the second angular resolution 3 .
- the first region 1 can adopt the dislocation arrangement of the above-mentioned method 1
- the first region 2 can adopt the dislocation arrangement of the above-mentioned method 2
- the first region 3 can adopt the dislocation arrangement of the above-mentioned method 4.
- the field of view of the first area 1 corresponds to the first angular resolution 1' and the second angular resolution 1'
- the field of view of the first area 2 corresponds to the first angular resolution 2' and the second angular resolution 2'
- the field of view of the first region 3 corresponds to the first angular resolution 3' and the second angular resolution 3'. Other cases are not listed here.
- the shaded area in FIG. 6a can be regarded as invalid pixels; that is, the photosensitive units in the shaded area are not used by default. Further, these photosensitive units may perform other functions, such as assisting in detecting and collecting the background ambient light intensity.
- the first pixel array is divided into multiple regions based on different binning methods of the photosensitive units.
- the first pixel array includes n second regions, there are at least two second regions among the n second regions, the pixels in the at least two second regions are obtained by binning different numbers of photosensitive units, and n is an integer greater than 1.
- different second regions in the first pixel array have different binning methods of the photosensitive units. Based on this, different second regions of the first pixel array can adopt the same dislocation arrangement, and the dislocation sizes in different second regions can also be the same. It should be noted that the aspect ratios of the pixels in the first pixel array may be the same, or may also be different. When the aspect ratios of the pixels in the first pixel array are different, a rotationally symmetric optical imaging system gains a further degree of freedom for adjusting the angular resolution.
- FIG. 6b is a schematic structural diagram of another first pixel array provided by the present application.
- the first pixel array can be divided into two second regions (ie, the second region a and the second region b).
- both the second area a and the second area b take the dislocation arrangement given in the above manner 1 as an example; the dislocation size in the second area a is the same as that in the second area b, but the binning method of the photosensitive units in the second area a is different from that of the photosensitive units in the second area b, so that the first angular resolution corresponding to the field of view of the second area a is different from the first angular resolution corresponding to the field of view of the second area b, and/or the second angular resolution corresponding to the field of view of the second area a is different from the second angular resolution corresponding to the field of view of the second area b.
- the central field of view of the full field of view of the detection system requires a finer angular resolution, while a slightly coarser angular resolution may be adopted for the peripheral field of view of the full field of view.
- the central area of the first pixel array can be configured to use binning with fewer photosensitive units, and the edge area can use binning with more photosensitive units.
- different second regions of the first pixel array adopt different binning methods of the photosensitive units, so that different first angular resolutions and/or second angular resolutions can be realized for different field-of-view ranges.
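- a minimal sketch of this photosensitive-unit binning, assuming a simple sum readout and Poisson-distributed test counts (an actual readout circuit may differ):

```python
import numpy as np

# Illustrative sketch: bin p x q photosensitive units (e.g. SPADs) into one
# pixel by summing their photon counts; different second regions can simply
# use different (p, q), yielding different angular resolutions.

def bin_photosensitive_units(counts, p, q):
    """counts: 2-D array of per-unit photon counts; returns per-pixel sums."""
    rows, cols = counts.shape
    assert rows % p == 0 and cols % q == 0, "array must tile exactly"
    return counts.reshape(rows // p, p, cols // q, q).sum(axis=(1, 3))

units = np.random.poisson(2.0, size=(8, 8))    # 8 x 8 raw photon counts
print(bin_photosensitive_units(units, 2, 2))   # fine 2x2 binning -> 4x4 pixels
print(bin_photosensitive_units(units, 4, 4))   # coarse 4x4 binning -> 2x2 pixels
```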
- the m first regions in the above case 1 and the n second regions in case 2 may respectively coincide.
- for example, the m first areas include the first area 1, the first area 2, and the first area 3, and the n second areas include the second area A, the second area B, and the second area C; the first area 1 coincides with the second area A, the first area 2 coincides with the second area B, and the first area 3 coincides with the second area C.
- the first pixel array is divided into a plurality of third regions based on different dislocation sizes.
- the first pixel array includes h third regions, there are at least two third regions among the h third regions, the pixels in the at least two third regions have different dislocation sizes, and h is an integer greater than 1.
- different third regions in the first pixel array have different dislocation sizes. Further, optionally, different third regions of the first pixel array may adopt the same dislocation arrangement, and the dislocation sizes in different third regions are different.
- FIG. 6c is a schematic structural diagram of another first pixel array provided by the present application.
- the first pixel array is divided into two third areas (i.e., the third area A and the third area B); the dislocation size in the third area A is smaller than that in the third area B, and both the third area A and the third area B take the dislocation arrangement given in the above manner 1 as an example.
- the first angular resolution corresponding to the field of view of the third area A is smaller than the first angular resolution corresponding to the field of view of the third area B, and the second angular resolution corresponding to the field of view of the third area A is equal to the second angular resolution corresponding to the field of view of the third area B.
- the central field of view of the full field of view of the detection system requires a finer angular resolution, while a slightly coarser angular resolution may be adopted for the peripheral field of view of the full field of view.
- a smaller dislocation size can be set in the central area of the first pixel array, and a larger dislocation size can be set in the edge area.
- the central area of the first pixel array corresponds to the central field of view of the full field of view
- the edge area of the first pixel array corresponds to the edge field of view of the full field of view.
- here, the binning mode of the photosensitive units in the first pixel array is taken to be the same throughout, as an example for illustration.
- the above-mentioned case 1 and case 3 can achieve different angular resolutions corresponding to different viewing angles without changing the binning method of photosensitive units.
- the m first regions in the above case 1 and the h third regions in case 3 may respectively coincide.
- for example, the m first areas include the first area 1, the first area 2, and the first area 3, and the h third areas include the third area A, the third area B, and the third area C; the first area 1 coincides with the third area A, the first area 2 coincides with the third area B, and the first area 3 coincides with the third area C.
- the n second regions in the above-mentioned case 2 may overlap with the h third regions in the case 3 respectively.
- for example, the n second areas include the second area A, the second area B, and the second area C, and the h third areas include the third area A, the third area B, and the third area C; the second area A coincides with the third area A, the second area B coincides with the third area B, and the second area C coincides with the third area C.
- first angular resolutions and/or second angular resolutions corresponding to the full field of view may also be a combination of the above situations.
- taking the case where area 1 is further divided into area 11 and area 12 based on the dislocation size as an example: the field of view of area 11 corresponds to the first angular resolution θ111 and the second angular resolution θ112, and the field of view of area 12 corresponds to the first angular resolution θ121 and the second angular resolution θ122.
- the region 2 and the region 3 can also be further divided into regions based on the size of the dislocation, which will not be listed here.
- area A can be further divided into area A1 and area A2 based on the dislocation arrangement.
- the field of view of area A1 corresponds to the first angular resolution θA11 and the second angular resolution θA12, and the field of view of area A2 corresponds to the first angular resolution θA21 and the second angular resolution θA22.
- the region B can also be further divided into regions based on the size of the dislocation, which will not be listed here.
- case 1 and case 2 are combined.
- area 1 can be further divided into area 1a and area 1b based on the binning method of photosensitive units.
- the field of view of the area 1a corresponds to the first angular resolution θ1a1 and the second angular resolution θ1a2, and the field of view of the area 1b corresponds to the first angular resolution θ1b1 and the second angular resolution θ1b2.
- the region 2 and the region 3 can also be further divided into regions based on the binning method of the photosensitive units, which will not be listed here.
- the area a can be further divided into area a1 and area a2 based on the dislocation arrangement; within the field of view of area a1, the corresponding first angular resolution is θa11 and the second angular resolution is θa12, and within the field of view of area a2, the corresponding first angular resolution is θa21 and the second angular resolution is θa22.
- the region b can also be further divided into regions based on the dislocation arrangement, which will not be listed here.
- area A can be further divided into area Aa and area Ab based on the binning method of photosensitive units.
- the field of view of area Aa corresponds to the first angular resolution θAa1 and the second angular resolution θAa2, and the field of view of area Ab corresponds to the first angular resolution θAb1 and the second angular resolution θAb2.
- the region B can also be further divided into regions based on the binning method of the photosensitive units, which will not be listed here.
- the area a can be further divided into area aA and area aB based on the dislocation size; within the field of view of area aA, the corresponding first angular resolution is θaA1 and the second angular resolution is θaA2.
- the region b can also be further divided into regions based on the size of the dislocation, which will not be listed here.
- the region 1 can be further divided into a region 11 and a region 12 based on the size of the dislocation, and the region 11 can be further divided into a region 111 and a region 112 based on the binning of photosensitive units.
- the field of view of the region 111 corresponds to the first angular resolution θ1111 and the second angular resolution θ1112, and the field of view of the region 112 corresponds to the first angular resolution θ1121 and the second angular resolution θ1122.
- the regions 2 and 3 can also be further divided into regions based on the size of the misalignment, and the further divided regions can also be further divided into regions based on the binning method of the photosensitive units, which will not be listed here.
- the area A can be further divided into the area A11 and the area A12 based on the dislocation arrangement; further, the area A11 can be divided into the area A11a and the area A11b based on the binning method of the photosensitive units, where the field of view of the area A11a corresponds to the first angular resolution θA11a1 and the second angular resolution θA11a2, and the field of view of the area A11b corresponds to the first angular resolution θA11b1 and the second angular resolution θA11b2. It should be understood that other possible combinations are also feasible, which are not listed here.
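- the region divisions enumerated in cases 1 to 3 and their combinations above can be summarized as a per-region configuration; the sketch below shows one assumed data layout (the class, field names, and sample values are illustrative only):

```python
from dataclasses import dataclass

# Illustrative sketch: each region of the first pixel array is described by
# its dislocation arrangement (manner 1-4 above), its dislocation size, and
# its binning of photosensitive units. Combinations of cases 1-3 are simply
# regions that differ in more than one field.

@dataclass
class Region:
    name: str
    arrangement: int          # dislocation arrangement: manner 1, 2, 3 or 4
    dislocation_size: float   # e.g. in units of the photosensitive-unit pitch
    binning: tuple            # (p, q) photosensitive units per pixel

regions = [
    Region("center", arrangement=1, dislocation_size=0.5, binning=(2, 2)),
    Region("edge", arrangement=1, dislocation_size=1.0, binning=(4, 4)),
]
for region in regions:
    print(region)
```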
- the above-mentioned first pixel array may be the same chip. That is to say, different regions of the same chip may correspond to different first angular resolutions and/or second angular resolutions.
- the first pixel array may perform photoelectric conversion on the received echo signal to obtain associated information for determining the target in the detection area. For example, the distance information of the target, the orientation of the target, the speed of the target, and/or the grayscale information of the target, etc.
- the light source array may be a 2D addressable light source array.
- the so-called addressable light source array means that the light sources in the light source array can be independently gated (which may also be referred to as lit, turned on, or energized), and a gated light source can be used to emit signal light.
- the light source array includes a first light source array.
- the first light source array is the entirety of the light source array, that is, the light source array includes M ⁇ N light sources corresponding to M ⁇ N pixels. Based on this, the first light source array in this application can be replaced by a light source array.
- the first light source array may be part of the light sources of the light source array. That is, in addition to the first light source array formed by M ⁇ N light sources, the light source array may also include other light sources. Based on this, the light source array may form a regular pattern, or may also form an irregular pattern, which is not limited in this application.
- the light source in the light source array may be a vertical cavity surface emitting laser (VCSEL), an edge emitting laser (EEL), a diode-pumped solid-state laser (DPSS), or a fiber laser.
- the VCSEL may include an active area (also referred to as an optical area, OA); the active area is the area of the VCSEL used for emitting signal light, and other areas of the VCSEL do not emit light.
- the active area may be located in the central area of the VCSEL, in the edge area of the VCSEL, or in another area of the VCSEL, which is not limited by the present application.
- the active area of the VCSEL corresponds to the image area of the pixel corresponding to that VCSEL; by adjusting the position of the active area of the VCSEL (such as the center coordinates of the active area), the position, within the image area of the corresponding pixel, at which the echo signal obtained by reflection of the signal light emitted by the active area off the target in the detection area is imaged can be changed.
- when the active area is located in the central area of the VCSEL, it is beneficial for the corresponding pixel to receive as much of the echo signal as possible, thereby improving the utilization rate of the echo signal.
- the light source array includes k areas, there are at least two areas among the k areas, the active areas of the light sources in the at least two areas differ in their relative positions on the light sources, and k is an integer greater than 1.
- here, the relative position refers to the position on the light source at which the active region of the light source is set, that is, the spatial arrangement of the active region on the light source.
- the first pixel array and the first light source array are strongly correlated in the design of their arrangement and specifications. Further, optionally, the arrangement of the light sources in the first light source array matches the arrangement of the pixels in the first pixel array. If the arrangement of the pixels in the first pixel array is the above-mentioned manner 1, the arrangement of the light sources in the first light source array can be as shown in FIG. 5a; if the arrangement of the pixels in the first pixel array is the above-mentioned manner 2, the arrangement of the light sources in the first light source array can be as shown in FIG. 5c; if the arrangement of the pixels in the first pixel array is the above-mentioned manner 3, the arrangement of the light sources in the first light source array can be as shown in FIG. 5e; if the arrangement of the pixels in the first pixel array is the above-mentioned manner 4, the arrangement of the light sources in the first light source array can be as shown in FIG. 5g; and if the arrangement of the pixels in the first pixel array is as shown in FIG. 5h, the arrangement of the light sources in the first light source array may also be as shown in FIG. 5h.
- for the specific arrangement of the light sources in the first light source array, refer to the corresponding arrangement of the pixels in the first pixel array.
- the image area in the pixel can be replaced with the active area of the light source, which will not be repeated here.
- the first light source array and the first pixel array adopt the same dislocation arrangement, but the dislocation size of the light sources may be different from that of the pixels.
- one light source corresponds to one pixel, and one pixel can be obtained by binning m ⁇ n photosensitive units.
- the following exemplarily shows a correspondence between a light source and a photosensitive unit.
- the first pixel array includes 2 ⁇ 2 pixels (i.e. pixel 11, pixel 12, pixel 21 and pixel 22), and the first light source array includes 2 ⁇ 2 light sources (i.e. light source 11, light source 12, light source 21 and light source 22), each pixel includes 4 ⁇ 4 SPADs as an example.
- the pixel 11 corresponds to the light source 11, the pixel 12 corresponds to the light source 12, the pixel 21 corresponds to the light source 21, the pixel 22 corresponds to the light source 22, and the active area of each light source corresponds to the image area of the corresponding pixel.
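- a minimal sketch of this one-to-one correspondence (the dictionary representation and the helper function are assumptions for illustration; the names follow the 2×2 example above):

```python
# Illustrative sketch: the one-to-one pixel <-> light source correspondence of
# the 2 x 2 example above; gating a pixel implies gating its light source.

pixel_to_source = {
    "pixel11": "source11", "pixel12": "source12",
    "pixel21": "source21", "pixel22": "source22",
}

def gate(pixel):
    source = pixel_to_source[pixel]          # the matching light source
    print(f"gate {pixel} together with {source}")

gate("pixel12")   # -> gate pixel12 together with source12
```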
- the above-mentioned pixel array after the pixels are arranged in a staggered position can be called a special-shaped structure
- the light source array after the light sources are arranged in a staggered position can also be called a special-shaped structure.
- the detection system may also include an optical imaging system, and the optical imaging system may include a transmitting optical system and a receiving optical system.
- the transmitting optical system and the receiving optical system may be the same, and the transmitting optical system and the receiving optical system may also be different.
- optical imaging systems capable of imaging the echo signal onto the pixels all fall within the protection scope of this application. The following takes the case where the transmitting optical system and the receiving optical system are the same as an example.
- the signal light emitted by the light sources in the light source array can be shaped and/or collimated by the transmitting optical system and directed to the detection area, and is reflected by the target in the detection area to obtain an echo signal; after being shaped and/or collimated by the receiving optical system, the echo signal is received by the corresponding pixels in the pixel array. As shown in FIG. 7, the signal light emitted by the active area of the light source 11 is directed to the detection area after propagating through the transmitting optical system, and the echo signal obtained by reflection from the target in the detection area can be imaged in the image area of the corresponding pixel 11;
- the signal light emitted by the active area of the light source 12 is directed to the detection area after propagating through the transmitting optical system, and the echo signal obtained by reflection from the target in the detection area can be imaged in the image area of the corresponding pixel 12;
- the signal light emitted by the active area of the light source 21 is directed to the detection area after propagating through the transmitting optical system, and the echo signal obtained by reflection from the target in the detection area can be imaged in the image area of the corresponding pixel 21;
- the signal light emitted by the active area of the light source 22 is directed to the detection area after propagating through the transmitting optical system, and the echo signal obtained by reflection from the target in the detection area can be imaged in the image area of the corresponding pixel 22.
- the one-to-one alignment of the emission field of view of each light source in the light source array with the reception field of view of each pixel in the pixel array can be realized based on the optical principle of focal-plane imaging. That is, the emitting fields of view of the light sources in the light source array correspond one-to-one in space to the receiving fields of view of the pixels in the pixel array. In other words, one pixel corresponds to one receiving field of view, one light source corresponds to one emitting field of view, and the receiving fields of view and the emitting fields of view are aligned with each other one by one in space.
- each light source in the light source array is located on the object plane of the imaging optical system, and the photosensitive surface of each pixel in the pixel array is located on the image plane of the imaging optical system.
- the light source in the light source array is located on the object-side focal plane of the transmitting optical system, and the photosensitive surface of the pixel in the pixel array is located on the image-side focal plane of the receiving optical system.
- the signal light emitted by the light source in the light source array propagates to the detection area through the emission optical system, and the echo signal obtained by reflecting the signal light from the target in the detection area can be imaged on the image focal plane through the receiving optical system.
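- for illustration, under the focal-plane imaging just described, the first and second angular resolutions can be related to the pixel side lengths by the standard paraxial relation below, where d1 and d2 are the pixel side lengths in the row and column directions and f is the focal length of the receiving optical system; this is a general optics relation stated here as an aid, not a limitation of this application:

```latex
\theta_1 \approx 2\arctan\!\left(\frac{d_1}{2f}\right) \approx \frac{d_1}{f}\,,\qquad
\theta_2 \approx 2\arctan\!\left(\frac{d_2}{2f}\right) \approx \frac{d_2}{f}.
```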
- the transmitting optical system and the receiving optical system generally use the same optical lens.
- the transmitting optical system and the receiving optical system are relatively simple and can be modularized, so that the detection system can achieve small volume and high integration.
- FIG. 8a is a schematic structural diagram of an optical lens provided by the present application.
- the optical lens includes at least one optical element, which may be, for example, a lens.
- FIG. 8a shows that the optical lens includes 4 lenses as an example.
- the optical axis of the optical lens refers to the straight line passing through the spherical center of each lens shown in FIG. 8a.
- the optical lens may be rotationally symmetrical about the optical axis.
- the lens in the optical lens can be a single spherical lens, or a combination of multiple spherical lenses (such as a combination of concave lenses, a combination of convex lenses, or a combination of convex and concave lenses, etc.).
- the combination of multiple spherical lenses helps to improve the imaging quality of the detection system and reduce the aberration of the optical imaging system.
- convex lenses include biconvex lenses, plano-convex lenses, and meniscus lenses
- concave lenses include biconcave lenses, plano-concave lenses, and meniscus lenses. In this way, it helps to improve the reuse rate of the optical devices of the detection system and facilitates the installation and adjustment of the detection system.
- the horizontal equivalent focal length and the vertical equivalent focal length of an optical lens that is rotationally symmetric about the optical axis are the same.
- the lens in the optical lens may also be a single aspheric lens or a combination of multiple aspheric lenses, which is not limited in this application.
- the material of the lens in the optical lens may be an optical material such as glass, resin, or crystal.
- when the lens material is resin, it helps to reduce the mass of the detection system.
- when the lens material is glass, it helps to further improve the imaging quality of the detection system.
- the optical lens includes at least one lens made of glass material.
- FIG. 8b is a schematic structural diagram of another optical lens provided by the present application.
- the optical lens is a micro lens array (micro lens array, MLA).
- the microlens array can collimate and/or shape the signal light from the light source array, and transmit the collimated and/or shaped signal light to the detection area.
- the microlens array can achieve a signal light collimation of 0.05°-0.1°.
- the structure of the above-mentioned optical lens can be used as a transmitting optical system, or can also be used as a receiving optical system, or both the transmitting optical system and the receiving optical system adopt the structure of the above-mentioned optical lens.
- the transmitting optical system and the receiving optical system may also be other possible structures, such as a micro-optical system pasted on the surface of the light source array and the surface of the pixel array, which is not limited in this application.
- since the focal length of the receiving optical system can change with the field angle of the detection system, different first angular resolutions and/or second angular resolutions corresponding to different fields of view can also be achieved by changing the focal length of the receiving optical system.
- the aspect ratio of the light sources in the first light source array is the same as that of the corresponding pixels in the first pixel array; based on this, the focal lengths of the transmitting optical system and the receiving optical system may be the same.
- if the aspect ratio of the light sources in the light source array is equal to a1:a2, the focal length ratio of the transmitting optical system to the receiving optical system is equal to a2:a1.
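- restated as a formula (an illustrative paraphrase of the preceding item, with f_transmit and f_receive denoting the focal lengths of the transmitting and receiving optical systems):

```latex
\frac{f_{\mathrm{transmit}}}{f_{\mathrm{receive}}} = \frac{a_2}{a_1}.
```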
- the first light source array and the first pixel array are spatially mapped one by one through the receiving optical system and the emitting optical system, or called coupling or matching.
- the detection system may also include a control module.
- the control module can be a central processing unit (central processing unit, CPU), and can also be another general-purpose processor (such as a microprocessor or any conventional processor), a field programmable gate array (field programmable gate array, FPGA), a digital signal processing (digital signal processing, DSP) circuit, an application specific integrated circuit (application specific integrated circuit, ASIC), a transistor logic device, another programmable logic device, or any combination thereof.
- when the detection system is applied to a vehicle, the control module can be used to plan the driving path according to the determined associated information of the detection area, for example, to avoid obstacles on the driving path.
- the detection system in any of the foregoing embodiments may be a laser radar, such as a pure solid-state laser radar.
- the present application may further provide a terminal device.
- FIG. 9 is a schematic structural diagram of a terminal device provided by the present application.
- the terminal device 900 may include the detection system 901 in any of the foregoing embodiments.
- the terminal device may further include a processor 902, and the processor 902 is configured to invoke a program or an instruction to control the detection system 901 to detect the detection area.
- the processor 902 may also receive the electrical signal obtained by photoelectrically converting the echo signal from the detection system 901, and determine the relevant information of the target according to the electrical signal.
- the terminal device may further include a memory 903, and the memory 903 is used to store programs or instructions.
- the terminal device may also include other components, such as a wireless communication device and the like.
- Processor 902 may include one or more processing units.
- the processor 902 may include an application processor (application processor, AP), a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a digital signal processor (digital signal processor, DSP), etc.
- different processing units may be independent devices, or may be integrated in one or more processors.
- the memory 903 includes but is not limited to random access memory (random access memory, RAM), flash memory, read-only memory (read-only memory, ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
- the storage medium may also be a component of the processor.
- the processor and storage medium can be located in the ASIC.
- the processor 902 may also plan the driving route of the terminal device according to the determined associated information of the target, such as avoiding obstacles on the driving route.
- the terminal device can be, for example, a vehicle (such as an unmanned car, a smart car, an electric car, or a digital car), a robot, a surveying and mapping device, a drone, a smart home device (such as a TV, a sweeping robot, a smart desk lamp, an audio system, an intelligent lighting system, an electrical control system, home background music, a home theater system, an intercom system, or video surveillance), an intelligent manufacturing device (such as industrial equipment), an intelligent transportation device (such as an AGV, an unmanned transport vehicle, or a truck), or a smart terminal (such as a mobile phone, a computer, a tablet, a handheld computer, a desktop, headphones, a speaker, a wearable device, a vehicle-mounted device, a virtual reality device, or an augmented reality device).
- for the control detection method provided by the present application, refer to the description of FIG. 10 below.
- the control detection method can be applied to the detection system shown in any one of the embodiments in Fig. 3 to Fig. 8b above. It can also be understood that the following detection methods can be implemented based on the detection system shown in any one of the embodiments in FIG. 3 to FIG. 8 b. Alternatively, the detection control method may also be applied to the terminal device shown in FIG. 9 above. It can also be understood that the detection control method can be implemented based on the terminal device shown in FIG. 9 above.
- the control detection method may be executed by a control device, which may belong to the detection system, or may also be a control device independent of the detection system, such as a chip or a chip system.
- the control device may be a domain processor in the vehicle, or may also be an electronic control unit (electronic control unit, ECU) in the vehicle, etc.
- the detection method includes the following steps:
- step 1001 the control device controls to gate the first pixel in the first pixel array.
- the first pixel is part or all of the pixels in the first pixel array.
- Step 1002 the control device controls to gate the first light source corresponding to the first pixel in the first light source array.
- if the first pixel is part of the pixels in the first pixel array, the first light source is likewise the part of the light sources in the first light source array corresponding to the first pixel; if the first pixel is all the pixels in the first pixel array, the first light source is likewise all the light sources in the first light source array.
- step 1001 and step 1002 do not represent a sequence, and generally, step 1001 and step 1002 are executed synchronously.
- the control device may generate a first control instruction according to the target angular resolution, and send the first control instruction to the pixel array, so as to control gating of the first pixel in the first pixel array. And/or, the control device sends a first control instruction to the light source array, so as to control the gate of the first light source corresponding to the first pixel in the first light source array.
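- a minimal sketch of steps 1001 and 1002 driven by the first control instruction (the class, the selection rule, and all names are assumptions for illustration; they are not a required implementation):

```python
# Illustrative sketch of steps 1001/1002: the control device derives a first
# control instruction from the target angular resolution and gates the first
# pixel together with its corresponding first light source.

class ControlDevice:
    def __init__(self, pixels, pixel_to_source):
        self.pixels = pixels                  # [{"name": ..., "resolution": ...}]
        self.pixel_to_source = pixel_to_source

    def detect(self, target_angular_resolution):
        # First control instruction: gate the pixels whose angular resolution
        # meets the target (one possible selection rule, assumed here).
        first_pixels = [p["name"] for p in self.pixels
                        if p["resolution"] <= target_angular_resolution]
        first_sources = [self.pixel_to_source[name] for name in first_pixels]
        # Steps 1001 and 1002 carry no required order; gate both together.
        return {"gate_pixels": first_pixels, "gate_sources": first_sources}

device = ControlDevice(
    pixels=[{"name": "pixel11", "resolution": 0.1},
            {"name": "pixel12", "resolution": 0.2}],
    pixel_to_source={"pixel11": "source11", "pixel12": "source12"},
)
print(device.detect(target_angular_resolution=0.15))
```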
- the target angular resolution may be generated or acquired by the upper layer of the detection system (that is, the layer that can obtain the requirements or application scenarios of the detection system, such as the application layer) according to the requirements (or application scenarios) of the detection system.
- for example, in a scenario requiring finer detection detail, the value of the target angular resolution is small.
- for another example, in a scenario where coarser detection suffices, the value of the target angular resolution is relatively large.
- the gating mode of the light sources in the first light source array can be such that non-adjacent rows or non-adjacent columns work at the same time; for example, the first row is gated at the first moment, the third row is gated at the second moment, and so on. In this way, it helps to reduce optical crosstalk.
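- one assumed gating schedule consistent with this (all odd rows in one time slot, all even rows in the next) could look like the following sketch:

```python
# Illustrative sketch: gate non-adjacent rows simultaneously to reduce
# optical crosstalk, e.g. rows 1, 3, 5, ... in slot 0 and rows 2, 4, 6, ...
# in slot 1 (0-indexed below).

def row_schedule(num_rows):
    odd_rows = list(range(0, num_rows, 2))
    even_rows = list(range(1, num_rows, 2))
    return [odd_rows, even_rows]

print(row_schedule(6))   # [[0, 2, 4], [1, 3, 5]]
```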
- the angular resolution in the column direction can be changed by controlling and setting the equivalent number of beams (lines) in the column direction while the detection system is working; likewise, the angular resolution in the row direction can be changed via the equivalent number of beams in the row direction.
- the control device can expand the point cloud information corresponding to one pixel into multiple pieces during data processing.
- the pixels in the first pixel array are obtained by binning p×q photosensitive units, where p and q are both integers greater than 1; when the control device determines that the detection distance is less than a threshold, the point cloud information corresponding to a pixel is expanded to Q pieces, where Q is an integer greater than 1. It can also be understood that, when the control device determines that the detection distance is smaller than the threshold, it starts the point cloud expansion function.
- the threshold can be preset or calibrated and stored in the detection system. It should be understood that when the detection distance is less than the threshold, it indicates that the detection system is performing short-distance detection, and at this time, a higher first angular resolution and/or a higher second angular resolution is usually required.
- the control device may control gating of a×b photosensitive units in the central area of the pixel, and control gating of the photosensitive units adjacent to at least one of the a×b photosensitive units, wherein the a×b photosensitive units correspond to one piece of first point cloud information, a is smaller than p, b is smaller than q, and the adjacent photosensitive units output second point cloud information.
- the pixels in the first pixel array adopt a 4×4 binning method, and during the working process of the detection system, one pixel can output one piece of point cloud information.
- the control device can gate the 4 SPADs (SPAD11, SPAD12, SPAD21 and SPAD22) in the central area of the pixel to receive echo signals; that is, the signals sensed by the 4 SPADs in the central area of the pixel are added together to output one piece of first point cloud information.
- one piece of point cloud information may be expanded into multiple pieces.
- the SPADs adjacent to the four SPADs in the central area in FIG. 7 are SPAD1-SPAD8, that is, there are eight adjacent SPADs;
- SPAD1-SPAD8 and the four SPADs at the four corners are not gated.
- the point cloud information of at least one SPAD in SPAD1-SPAD8 (which may be referred to as the second point cloud information) may be added.
- the space coordinates of the added second point cloud information can be determined according to the real point cloud information through preset operations, such as taking the average of intensity or distance or some reasonable interpolation calculation.
- the point cloud information includes but not limited to spatial coordinates, intensity, distance and so on.
- the four SPADs in the central area can output one piece of first point cloud information, and the generation strategy of the other four pieces of second point cloud information can be, for example: taking SPAD1, SPAD2, SPAD11 and SPAD21 as one group, the spatial coordinates can be the spatial coordinates of the center point of these four SPADs, and the distance and intensity information can be the average of the data collected by SPAD11 and SPAD21 in the first column of the central area (SPAD1 and SPAD2 are not gated and thus never output valid single-photon count values), thereby obtaining one piece of second point cloud information; similarly, taking another group around SPAD21 and SPAD22 in the second row of the central area, the spatial coordinates can be the spatial coordinates of the center point of the four SPADs in that group, and the distance and intensity information can take the average of the data collected by SPAD21 and SPAD22, thereby obtaining another piece of second point cloud information; the remaining pieces of second point cloud information are generated analogously.
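- the expansion strategy above can be sketched as follows; the coordinate geometry, the averaging used as interpolation, and all names are assumptions for illustration (the application allows other reasonable interpolation calculations):

```python
# Illustrative sketch of point-cloud expansion for one 4 x 4-binned pixel:
# the gated central 2 x 2 SPADs yield one real (first) point cloud; a second
# point cloud is synthesized from the real data of the first central column.

def expand_point_cloud(central):
    """central maps the gated SPAD names (SPAD11, SPAD12, SPAD21, SPAD22)
    to (x, y, distance, intensity) tuples."""
    values = list(central.values())
    first = tuple(sum(v[i] for v in values) / len(values) for i in range(4))
    # Group (SPAD1, SPAD2, SPAD11, SPAD21): coordinates at the group center
    # (assumed one unit above the first column); distance and intensity are
    # averaged from SPAD11 and SPAD21, since SPAD1 and SPAD2 are not gated
    # and output no valid single-photon counts.
    s11, s21 = central["SPAD11"], central["SPAD21"]
    second = ((s11[0] + s21[0]) / 2, (s11[1] + s21[1]) / 2 - 1.0,
              (s11[2] + s21[2]) / 2, (s11[3] + s21[3]) / 2)
    return [first, second]   # the remaining second point clouds are analogous

central = {"SPAD11": (0.0, 0.0, 10.0, 5.0), "SPAD12": (1.0, 0.0, 10.2, 5.1),
           "SPAD21": (0.0, 1.0, 10.1, 4.9), "SPAD22": (1.0, 1.0, 10.3, 5.0)}
print(expand_point_cloud(central))
```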
- the row direction may be consistent with the horizontal direction
- the column direction may be consistent with the vertical direction
- FIG. 11 and FIG. 12 are schematic structural diagrams of a possible control device provided in the present application. These control devices can be used to implement the method shown in FIG. 10 in the above method embodiment, so the beneficial effects of the above method embodiment can also be realized.
- the control device may be the control module in the above-mentioned detection system, or it may also be the processor in the terminal device in FIG. 9 , or it may be other independent control devices (such as chips).
- the control device 1100 includes a processing module 1101 , and may further include a transceiver module 1102 .
- the control device 1100 is used to implement the method in the above method embodiment shown in FIG. 10 .
- the processing module 1101 is used to control, through the transceiver module 1102, gating of the first pixel in the first pixel array and of the first light source corresponding to the first pixel in the first light source array.
- processing module 1101 in the embodiment of the present application may be implemented by a processor or processor-related circuit components, and the transceiver module 1102 may be implemented by an interface circuit and other related circuit components.
- the present application further provides a control device 1200 .
- the control device 1200 may include a processor 1201 , and further, optionally, may also include an interface circuit 1202 .
- the processor 1201 and the interface circuit 1202 are coupled to each other. It can be understood that the interface circuit 1202 may be an input and output interface.
- the control device 1200 may further include a memory 1203 for storing computer programs or instructions executed by the processor 1201 .
- the processor 1201 is used to execute the functions of the above-mentioned processing module 1101
- the interface circuit 1202 is used to execute the functions of the above-mentioned transceiver module 1102 .
- the present application provides a chip.
- the chip may include a processor and an interface circuit. Further, optionally, the chip may also include a memory. The processor is used to execute the computer programs or instructions stored in the memory, so that the chip performs the method in any of the above-mentioned possible implementations of FIG. 10.
- processor in the embodiments of the present application may be a central processing unit (central processing unit, CPU), and may also be other general processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof.
- a general-purpose processor can be a microprocessor, or any conventional processor.
- the method steps in the embodiments of the present application may be implemented by means of hardware, or may be implemented by means of a processor executing software instructions.
- Software instructions can be composed of corresponding software modules, and the software modules can be stored in random access memory (random access memory, RAM), flash memory, read-only memory (read-only memory, ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
- the storage medium may also be a component of the processor.
- the processor and storage medium can be located in the ASIC.
- the ASIC can be located in the control device.
- the processor and the storage medium can also be present in the control device as separate components.
- the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
- when implemented using software, they may be implemented in whole or in part in the form of a computer program product.
- a computer program product consists of one or more computer programs or instructions. When the computer programs or instructions are loaded and executed on the computer, the processes or functions of the embodiments of the present application are executed in whole or in part.
- the computer can be a general purpose computer, special purpose computer, computer network, control device, user equipment or other programmable device.
- Computer programs or instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, computer programs or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means.
- a computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media.
- Available media can be magnetic media, such as floppy disks, hard disks, and magnetic tapes; optical media, such as digital video discs (digital video discs, DVDs); or semiconductor media, such as solid state drives (SSDs).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- General Physics & Mathematics (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
Description
Claims (19)
- A detection system, characterized by comprising a pixel array and a light source array, wherein the pixel array comprises a first pixel array, the first pixel array comprises M×N pixels, the light source array comprises a first light source array, the first light source array comprises M×N light sources corresponding to the M×N pixels, and M and N are both integers greater than 1; the pixels in the first pixel array are arranged with a dislocation in the row direction, and the dislocation size of the pixels is smaller than the distance between the centers of two adjacent pixels in the row direction; or, the pixels in the first pixel array are arranged with a dislocation in the column direction, and the dislocation size of the pixels is smaller than the distance between the centers of two adjacent pixels in the column direction; and the arrangement of the light sources in the first light source array is coupled with or matches the arrangement of the pixels in the first pixel array.
- The detection system according to claim 1, wherein the first pixel array is part or all of the pixels of the pixel array, and/or the first light source array is part or all of the light sources of the light source array.
- The detection system according to claim 1 or 2, wherein the pixels in the pixel array are obtained by binning at least one photosensitive unit.
- The detection system according to any one of claims 1 to 3, wherein the manner in which the pixels in the first pixel array are arranged with a dislocation in the row direction comprises either of the following or a combination of both: the pixels in the first pixel array are arranged with equally spaced dislocations in the row direction; the pixels in the first pixel array are arranged with unequally spaced dislocations in the row direction.
- The detection system according to any one of claims 1 to 3, wherein the manner in which the pixels in the first pixel array are arranged with a dislocation in the column direction comprises either of the following or a combination of both: the pixels in the first pixel array are arranged with equally spaced dislocations in the column direction; the pixels in the first pixel array are arranged with unequally spaced dislocations in the column direction.
- The detection system according to any one of claims 1 to 5, wherein the first pixel array comprises m first regions, at least two first regions exist among the m first regions, the dislocation arrangements of the pixels in the at least two first regions are different, and m is an integer greater than 1.
- The detection system according to any one of claims 1 to 6, wherein the first pixel array comprises n second regions, at least two second regions exist among the n second regions, the pixels in the at least two second regions are obtained by binning different numbers of photosensitive units, and n is an integer greater than 1.
- The detection system according to any one of claims 1 to 7, wherein the first pixel array comprises h third regions, at least two third regions exist among the h third regions, the dislocation sizes of the pixels in the at least two third regions are different, and h is an integer greater than 1.
- The detection system according to any one of claims 1 to 8, wherein the light sources in the light source array comprise active regions, the active regions being used to emit signal light; the light source array comprises k areas, at least two areas exist among the k areas, the active regions of the light sources in the at least two areas differ in their relative positions on the light sources, and k is an integer greater than 1.
- The detection system according to any one of claims 1 to 9, wherein the detection system further comprises an imaging optical system; the light source array is located on the image-side focal plane of the imaging optical system, and the pixel array is located on the object-side focal plane of the imaging optical system.
- A terminal device, characterized by comprising the detection system according to any one of claims 1 to 10.
- A control detection method, characterized by being applied to a detection system, wherein the detection system comprises a pixel array and a light source array, the pixel array comprises a first pixel array, the first pixel array comprises M×N pixels, the light source array comprises a first light source array, the first light source array comprises M×N light sources corresponding to the M×N pixels, and M and N are both integers greater than 1; the pixels in the first pixel array are arranged with a dislocation in the row direction, and the dislocation size of the pixels is smaller than the distance between the centers of two adjacent pixels in the row direction; or, the pixels in the first pixel array are arranged with a dislocation in the column direction, and the dislocation size of the pixels is smaller than the distance between the centers of two adjacent pixels in the column direction; and the arrangement of the light sources in the first light source array is coupled with or matches the arrangement of the pixels in the first pixel array; the method comprising: controlling gating of a first pixel in the first pixel array, the first pixel being part or all of the pixels in the first pixel array; and controlling gating of a first light source corresponding to the first pixel in the first light source array.
- The method according to claim 12, further comprising: acquiring a first electrical signal from the first pixel, the first electrical signal being determined by the first pixel according to a received first echo signal, and the first echo signal being obtained by reflection, by a target in a detection area, of first signal light emitted by the first light source; and determining associated information of the target according to the first electrical signal.
- The method according to claim 12 or 13, wherein the controlling gating of the first pixel in the first pixel array comprises: acquiring a first control signal, the first control signal being used to control gating of the first pixel and/or the first light source and being generated at least according to a target angular resolution; and sending the first control signal to the pixel array and/or the light source array.
- The method according to any one of claims 12 to 14, wherein the pixels in the first pixel array are obtained by binning p×q photosensitive units, p and q both being integers greater than 1; the method further comprising: determining that a detection distance is less than a threshold, and expanding the point cloud information corresponding to a pixel to Q pieces, Q being an integer greater than 1.
- The method according to claim 15, wherein the determining that the detection distance is less than the threshold and expanding the point cloud information corresponding to the pixel to Q pieces comprises: controlling gating of a×b photosensitive units in a central area of the pixel, the a×b photosensitive units corresponding to one piece of first point cloud information, a being smaller than p and b being smaller than q; and controlling gating of a photosensitive unit adjacent to at least one of the a×b photosensitive units, the adjacent photosensitive unit outputting second point cloud information.
- A control apparatus, characterized by comprising modules configured to perform the method according to any one of claims 12 to 16.
- A control apparatus, characterized by comprising at least one processor and an interface circuit, the control apparatus being configured to perform the method according to any one of claims 12 to 16.
- A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program or instructions which, when executed by a communication apparatus, cause the communication apparatus to perform the method according to any one of claims 12 to 16.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21959682.2A EP4400817A1 (en) | 2021-10-08 | 2021-10-08 | Detection system, terminal device, control detection method, and control apparatus |
JP2024520782A JP2024536373A (ja) | 2021-10-08 | 2021-10-08 | Detection system, terminal device, method for controlling detection, control apparatus, and computer program |
PCT/CN2021/122544 WO2023056585A1 (zh) | 2021-10-08 | 2021-10-08 | Detection system, terminal device, control detection method, and control apparatus |
CN202180101959.2A CN117916621A (zh) | 2021-10-08 | 2021-10-08 | Detection system, terminal device, control detection method, and control apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/122544 WO2023056585A1 (zh) | 2021-10-08 | 2021-10-08 | Detection system, terminal device, control detection method, and control apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023056585A1 (zh) | 2023-04-13 |
Family
ID=85803828
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/122544 WO2023056585A1 (zh) | 2021-10-08 | 2021-10-08 | Detection system, terminal device, control detection method, and control apparatus |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4400817A1 (zh) |
JP (1) | JP2024536373A (zh) |
CN (1) | CN117916621A (zh) |
WO (1) | WO2023056585A1 (zh) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050061951A1 (en) * | 2001-04-27 | 2005-03-24 | Campbell Scott P. | Optimization of alignment among elements in an image sensor |
CN105554420A (zh) * | 2014-10-24 | 2016-05-04 | 全视科技有限公司 | 拥有具有交错光电二极管的像素单元的图像传感器 |
CN105789261A (zh) * | 2016-04-29 | 2016-07-20 | 京东方科技集团股份有限公司 | 像素阵列及其制造方法和有机发光二极管阵列基板 |
US20170187936A1 (en) * | 2015-12-23 | 2017-06-29 | Stmicroelectronics (Research & Development) Limite | Image sensor configuration |
CN106973273A (zh) * | 2015-10-27 | 2017-07-21 | 联发科技股份有限公司 | 图像探测方法与图像探测装置 |
CN111787248A (zh) * | 2020-07-14 | 2020-10-16 | 深圳市汇顶科技股份有限公司 | 图像传感器、终端设备以及成像方法 |
CN112729566A (zh) * | 2020-12-15 | 2021-04-30 | 上海集成电路研发中心有限公司 | 一种探测器成像装置 |
CN112912766A (zh) * | 2021-02-02 | 2021-06-04 | 华为技术有限公司 | 一种探测装置、控制方法、融合探测系统及终端 |
- 2021-10-08 WO PCT/CN2021/122544 patent/WO2023056585A1/zh active Application Filing
- 2021-10-08 CN CN202180101959.2A patent/CN117916621A/zh active Pending
- 2021-10-08 EP EP21959682.2A patent/EP4400817A1/en active Pending
- 2021-10-08 JP JP2024520782A patent/JP2024536373A/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
EP4400817A1 (en) | 2024-07-17 |
CN117916621A (zh) | 2024-04-19 |
JP2024536373A (ja) | 2024-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11838689B2 (en) | Rotating LIDAR with co-aligned imager | |
WO2023092859A1 (zh) | 激光雷达发射装置、激光雷达装置及电子设备 | |
US20240210528A1 (en) | Receiving Optical System, Lidar System, and Terminal Device | |
WO2023056585A1 (zh) | 一种探测系统、终端设备、控制探测方法及控制装置 | |
WO2024036582A1 (zh) | 一种发射模组、接收模组、探测装置及终端设备 | |
WO2023015562A1 (zh) | 一种激光雷达及终端设备 | |
WO2023050398A1 (zh) | 激光雷达发射装置、激光雷达装置及电子设备 | |
CN212749253U (zh) | Tof测距装置 | |
US10958846B2 (en) | Method, device and system for configuration of a sensor on a moving object | |
WO2023056848A9 (zh) | 一种控制探测方法、控制装置、激光雷达及终端设备 | |
WO2024044997A1 (zh) | 一种光学接收模组、接收系统、探测装置及终端设备 | |
WO2023123150A1 (zh) | 一种控制方法、激光雷达及终端设备 | |
WO2024044905A1 (zh) | 一种探测装置及终端设备 | |
WO2023201596A1 (zh) | 一种探测装置及终端设备 | |
WO2023019442A1 (zh) | 一种探测系统、终端设备及探测控制方法 | |
WO2023155048A1 (zh) | 一种探测装置、终端设备及分辨率的调控方法 | |
WO2024041034A1 (zh) | 一种显示模组、光学显示系统、终端设备及成像方法 | |
WO2023123447A1 (zh) | 一种扫描模组、探测装置及终端设备 | |
WO2022188185A1 (zh) | 探测系统和可移动平台 | |
TWI820637B (zh) | 一種探測裝置及終端設備 | |
WO2023225902A1 (zh) | 一种发射模组、探测装置及终端设备 | |
Kawamata et al. | Calibration method of the monocular omnidirectional stereo camera | |
TW202349060A (zh) | 感測模組 | |
CN111337948A (zh) | 障碍物检测方法、雷达数据生成方法、装置及存储介质 | |
JPWO2020008684A1 (ja) | 物体認識装置、物体認識システム、及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21959682 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202180101959.2 Country of ref document: CN |
|
ENP | Entry into the national phase |
Ref document number: 2024520782 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2021959682 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2021959682 Country of ref document: EP Effective date: 20240410 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |