CN210572750U - TOF sensor and machine vision system - Google Patents

TOF sensor and machine vision system

Info

Publication number
CN210572750U
CN210572750U (Application CN201920860932.1U)
Authority
CN
China
Prior art keywords
sensing
different
tof sensor
area
pixel
Prior art date
Legal status
Active
Application number
CN201920860932.1U
Other languages
Chinese (zh)
Inventor
梅健
Current Assignee
Ruyu Intelligent Technology Suzhou Co ltd
Original Assignee
Ruyu Intelligent Technology Suzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Ruyu Intelligent Technology Suzhou Co ltd filed Critical Ruyu Intelligent Technology Suzhou Co ltd
Priority to CN201920860932.1U
Application granted
Publication of CN210572750U
Active legal status
Anticipated expiration legal status


Abstract

The utility model relates to a TOF sensor and a machine vision system. The TOF sensor includes a pixel array comprising at least two sensing regions with different sensing accuracies, and a sensing region with higher sensing accuracy outputs a stronger sensing signal. The TOF sensor can reduce cost while still meeting the detection accuracy requirement.

Description

TOF sensor and machine vision system
Technical Field
The utility model relates to the field of sensing technology, and in particular to a TOF sensor and a machine vision system.
Background
Machine vision is a rapidly developing branch of artificial intelligence. In brief, machine vision uses a machine in place of the human eye for measurement and judgment. A machine vision system converts the captured target into an image signal through a sensor and transmits it to a dedicated image processing system, which obtains the form information of the target and converts it into a digital signal according to pixel distribution, brightness, color and other information; the image system then performs various calculations on these signals to extract the features of the target and controls on-site equipment according to the result of the discrimination. For example, sweeping robots, automated driving and map modeling all rely on a vision system for purposes such as object recognition and obstacle avoidance.
Generally, a robot is most concerned with the scene at roughly its own height in the direction of travel, and only needs a rough understanding of the surrounding or overall environment, so the accuracy requirement there is low. For example, in the obstacle-avoidance operation of a sweeping robot, the robot must brake or steer in time when it encounters an obstacle ahead; it only needs to judge whether an obstacle is present in the forward direction, and the required visual accuracy for that obstacle is not high.
Therefore, different areas within the visual range of the robot impose different detection accuracy requirements. In the prior art, sensors are designed to the highest accuracy requirement within the robot's visual range, so the detection accuracy across the entire visual range meets that highest requirement; this satisfies the visual accuracy requirement, but at high cost. Alternatively, algorithmic correction can be applied to local areas afterwards to improve accuracy, which is difficult to implement.
How to further reduce product cost while meeting the machine vision accuracy requirement is a problem that urgently needs to be solved.
SUMMARY OF THE UTILITY MODEL
The technical problem to be solved by the utility model is to provide a TOF sensor and a machine vision system that reduce product cost while meeting the sensing accuracy requirement of the practical application scenario.
To solve this problem, the utility model provides a TOF sensor including a pixel array, where the pixel array comprises at least two sensing regions with different sensing accuracies, and a sensing region with higher sensing accuracy outputs a stronger sensing signal.
Optionally, the size of the pixel unit of the two or more sensing areas with different sensing accuracies is positively correlated with the sensing accuracy of each sensing area.
Optionally, the number of photosensitive elements in the pixel units of the two or more sensing areas with different sensing accuracies is positively correlated with the sensing accuracy of each sensing area.
Optionally, the light sensing performance of the pixel units of the two or more sensing areas with different sensing accuracies is positively correlated with the sensing accuracy of each sensing area.
Optionally, the sensing regions with different sensing accuracies are distributed at intervals.
Optionally, the sensing areas with different sensing accuracies have pixel units of the same structure; the TOF sensor further comprises a processing unit, which processes the sum of the sensing signals of one or more pixel units in a sensing area as an effective sensing signal, and the number of pixel units corresponding to the effective sensing signal is positively correlated with the sensing accuracy of the sensing area.
The technical solution of the utility model further provides a machine vision system, including: the TOF sensor of any of the above; the visual range of the machine vision system comprises two or more visual areas with different accuracy requirements, the different visual areas correspond to different sensing areas of the pixel array of the TOF sensor, and the sensing accuracy of each sensing area corresponds to the accuracy requirement of each visual area.
In the utility model, the pixel array of the TOF sensor has sensing areas with different sensing accuracies; by adjusting the structure and distribution of the pixel units or the energy distribution of the detection light, product cost is reduced while the local sensing accuracy requirement is still met.
Drawings
Fig. 1 is a schematic structural diagram of a pixel array of a TOF sensor according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a pixel array of a TOF sensor according to an embodiment of the invention;
fig. 3 is a schematic diagram of a pixel array of a TOF sensor according to an embodiment of the present invention and a detection light distribution area;
fig. 4 is a schematic structural diagram of a pixel array of a TOF sensor according to an embodiment of the invention;
fig. 5 is a schematic view of a machine vision range of a sweeping robot according to an embodiment of the present invention.
Detailed Description
The TOF sensor and the machine vision system of the utility model are described in detail below with reference to the accompanying drawings.
Please refer to fig. 1, which is a schematic structural diagram of a TOF sensor pixel array according to an embodiment of the present invention.
The TOF sensor comprises a pixel array formed of a plurality of pixel units arranged in an array. In this embodiment, the pixel array includes two sensing regions with different sensing accuracies, namely a first sensing region 101 and a second sensing region 102. Each sensing region comprises a plurality of pixel units arranged in an array. The sensing accuracy refers to the accuracy of the distance detection value obtained from the sensing signal of the pixel units in the corresponding sensing region. Within a given sensing distance range, the higher the sensing accuracy, the more accurate the obtained distance detection value.
The higher the sensing accuracy, the higher the signal-to-noise ratio of the sensing signal output from the sensing region. In this embodiment, during operation of the TOF sensor, the pixel array receives reflected light and outputs a sensing signal. The sensing signal output by each pixel unit in the first sensing region 101 is larger than that output by each pixel unit in the second sensing region 102, or the noise of the sensing signal output by each pixel unit in the first sensing region 101 is smaller than that of the second sensing region 102, so that object distances in the detection range corresponding to the first sensing region 101 are measured with higher accuracy.
The distribution of the sensing regions of each sensing accuracy in the pixel array of the TOF sensor is set mainly according to the accuracy requirement of each area in the detection range, so that reflected light from an area with a high accuracy requirement is received by a sensing region with high sensing accuracy, and reflected light from an area with a low accuracy requirement is received by a sensing region with low sensing accuracy.
In some embodiments, the size of the pixel units of the two or more sensing regions with different sensing accuracies is positively correlated with the sensing accuracy of each sensing region. Each pixel unit includes a photosensitive element, such as a photodiode, for converting an optical signal into an electrical signal. The larger the pixel unit, the larger the photosensitive element area (or the more photosensitive elements) that can be formed, so more photoelectrons are collected and a larger sensing signal is generated. Referring to fig. 2, in this embodiment, the size of the pixel unit 1011 in the first sensing region 101 is twice that of the pixel unit 1021 in the second sensing region 102.
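To make the size-to-signal relationship concrete, the following is a minimal sketch under a simplified shot-noise model, in which the signal grows linearly with the photosensitive area and the noise grows with the square root of the signal; the pixel areas, photon flux and quantum efficiency values are assumptions chosen only for illustration and are not taken from the utility model.
```python
import math

def pixel_snr(area_um2, photon_flux_per_um2=100.0, quantum_efficiency=0.5):
    """Shot-noise-limited SNR of one pixel: signal scales with area,
    noise scales with the square root of the signal (all values assumed)."""
    signal = photon_flux_per_um2 * area_um2 * quantum_efficiency  # photoelectrons
    noise = math.sqrt(signal)                                     # shot noise
    return signal / noise

# A pixel unit with twice the photosensitive area (cf. regions 101 vs. 102 in Fig. 2)
snr_small = pixel_snr(area_um2=25.0)   # hypothetical 5 um x 5 um pixel
snr_large = pixel_snr(area_um2=50.0)   # twice the photosensitive area
print(f"small: {snr_small:.1f}, large: {snr_large:.1f}, ratio: {snr_large / snr_small:.2f}")
# ratio ~ 1.41, i.e. doubling the area improves SNR by about sqrt(2)
```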
In some embodiments, the number of photosensitive elements in a pixel unit of a sensing region is positively correlated with the sensing accuracy of each sensing region. In one embodiment, the photosensitive elements have the same size, and the sensing accuracy can be adjusted by adjusting the number of the photosensitive elements in the pixel units of different sensing areas to adjust the size of the sensing signal output by the pixel unit. For example, in this embodiment, each pixel unit in the first sensing region 101 includes two parallel photodiodes, and each pixel unit in the second sensing region 102 includes only one photodiode.
In order to facilitate the formation of the pixel array, in a specific embodiment, the pixel array is composed of a plurality of standard pixel units with the same structure; the actual pixel unit of each sensing region may include one or more standard pixel units, and the sensing accuracy is positively correlated with the number of standard pixel units in the actual pixel unit, for example, the sensing accuracy may scale with the number N of standard pixel units in the actual pixel unit. The TOF sensor further comprises a processing unit, which processes the sum of the sensing signals of one or more standard pixel units in the sensing region as an effective sensing signal, that is, it processes the effective sensing signal output by the actual pixel unit. The number of standard pixel units corresponding to the effective sensing signal is positively correlated with the sensing accuracy of the sensing region.
For example, in one embodiment, each pixel unit of the first sensing region 101 may include at least two standard pixel units connected in parallel, while each pixel unit of the second sensing region 102 includes only one standard pixel unit. A pixel unit with higher sensing accuracy comprises at least two standard pixel units connected in parallel, and the effective sensing signal it actually outputs is the sum of the sensing signals output by those standard pixel units. Because the signal output by each standard pixel unit contains not only the photoelectric sensing signal but also randomly occurring noise, summing the outputs of two or more standard pixel units cancels part of the noise, which increases the signal-to-noise ratio of the pixel unit's output and thereby improves the sensing accuracy. In general, the effective signal of two parallel standard pixel units is 2 times the effective signal of a single standard pixel unit, while the noise of two parallel standard pixel units is only 2^(1/2) times the noise of a single standard pixel unit; therefore, the signal-to-noise ratio of the output of two parallel standard pixel units is 2^(1/2) times that of a single standard pixel unit.
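The 2^(1/2) relationship above can be checked numerically. The sketch below simulates two standard pixel units whose outputs carry independent, identically distributed noise and sums them into one effective signal; the signal amplitude and noise level are assumed values chosen only to make the ratio visible.
```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 100_000
signal, noise_sigma = 100.0, 10.0   # assumed per-unit signal and noise levels

# Two standard pixel units with independent noise, summed by the processing unit
unit_a = signal + rng.normal(0.0, noise_sigma, n_frames)
unit_b = signal + rng.normal(0.0, noise_sigma, n_frames)
combined = unit_a + unit_b          # effective sensing signal of the parallel pair

snr_single = signal / noise_sigma
snr_combined = combined.mean() / combined.std()
print(f"single-unit SNR ~ {snr_single:.1f}, combined SNR ~ {snr_combined:.1f}, "
      f"gain ~ {snr_combined / snr_single:.2f}")   # gain ~ 1.41 = 2**0.5
```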
In other embodiments, different sensing accuracies may also be achieved by using pixel units with different photosensitive performance in different sensing regions. For example, in a sensing region with lower sensing accuracy, an ordinary silicon photodiode can be used as the photosensitive element of the pixel unit, while in a sensing region with higher sensing accuracy, a photosensitive element with higher photoelectric conversion efficiency, such as a single-photon avalanche diode (SPAD), an avalanche photodiode (APD) or a silicon photomultiplier (SiPM) made of a heterojunction or a special photosensitive material, can be used to increase the output sensing signal under the same luminous flux. In this case, even if the pixel units of different sensing regions have the same size, higher sensing accuracy can be obtained by using pixel units with stronger photosensitive performance.
In the above specific embodiments, the pixel units that actually output effective sensing signals in different sensing regions have different structures, and the pixel units in the sensing region with higher sensing accuracy output a larger sensing signal under the same luminous flux, improving the detection accuracy of the corresponding detection range. Within the detection range of the TOF sensor, different sensing accuracies can thus be obtained for different areas, meeting the accuracy requirements of those areas.
The TOF sensor further comprises a light emitting module for emitting pulsed detection light. In some embodiments, the detection light emitted by the light emitting module has at least two light distribution areas with different light intensities, the detection light of each light distribution area is reflected and then received by the sensing areas with different sensing accuracies in the pixel array, and the light distribution area with higher light intensity corresponds to the sensing area with higher sensing accuracy.
Referring to fig. 3, in an embodiment of the utility model, the pixel array 310 of the TOF sensor includes a first sensing region 311 and a second sensing region 312, and the sensing accuracy of the first sensing region 311 is greater than that of the second sensing region 312. The TOF sensor further includes a light emitting module; the detection beam 320 emitted by the light emitting module includes a first light distribution region 321 and a second light distribution region 322, and the light intensity (light energy per unit area) of the first light distribution region 321 is greater than that of the second light distribution region 322. In this embodiment, the pixel units in the first sensing region 311 and the second sensing region 312 have the same structure. The light intensity distribution of the light emitting module determines the light intensity distribution of the reflected light, so the intensity of the reflected light received by different regions of the pixel array 310 differs, and the sensing accuracy of the different sensing regions differs accordingly. The higher the intensity of the detection light, the higher the intensity of the reflected light received by the TOF sensor and the higher the signal-to-noise ratio, so the detection accuracy of the object distance can be improved. In this embodiment, the detection light of the first light distribution region 321 is reflected and received by the first sensing region 311 of the pixel array 310, and the detection light of the second light distribution region 322 is reflected and received by the second sensing region 312, so the intensity of the reflected light received by the first sensing region 311 is greater than that received by the second sensing region 312, and the first sensing region 311 therefore has higher sensing accuracy.
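For context, a pulsed TOF sensor converts the round-trip time of the detection light into distance, and under a simple model the distance error scales with the timing jitter, which shrinks as the received light intensity (and therefore the signal-to-noise ratio) grows. A minimal sketch under these assumptions, with all numeric values chosen only for illustration:
```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Pulsed TOF: the distance is half the round-trip optical path."""
    return C * round_trip_time_s / 2.0

def distance_sigma(timing_jitter_s: float, snr: float) -> float:
    """Simplified model (assumption): the effective timing jitter, and hence
    the distance error, is inversely proportional to the signal-to-noise ratio."""
    return C * (timing_jitter_s / snr) / 2.0

t = 2.0e-8                                  # 20 ns round trip -> target at ~3 m
print(f"distance: {tof_distance(t):.3f} m")
print(f"error, low-SNR region:  {distance_sigma(1e-9, snr=10.0) * 100:.1f} cm")
print(f"error, high-SNR region: {distance_sigma(1e-9, snr=40.0) * 100:.1f} cm")
```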
In some embodiments, the light source module 320 may include a light emitting array with a plurality of light emitting units, the light emitting array including at least two array regions with different arrangement densities of the light emitting units; an array region with a higher arrangement density corresponds to a light distribution region with a higher light intensity. Referring to fig. 3, the first light distribution region 321 receives detection light emitted by more light emitting units, and thus its light intensity is higher. The light intensity distribution of the detection light can be adjusted by adjusting the arrangement density of the light emitting units in the light emitting array. In other embodiments, the light emitting units of the light source module 320 are arranged in an array, and the light intensity distribution in different areas is formed by interference between the light emitted by the light emitting units.
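One way to picture the density-based light distribution is sketched below: emitters placed more densely in a central block of columns contribute more light per unit area, so the accumulated intensity map is higher there. The grid size, emitter pitches and region boundaries are assumptions used only for illustration.
```python
import numpy as np

def intensity_map(height=40, width=60, dense_cols=(20, 40), dense_pitch=1, sparse_pitch=3):
    """Accumulate unit contributions from individual emitters; a denser pitch in the
    central columns yields a higher-intensity light distribution region (cf. 321 vs. 322)."""
    grid = np.zeros((height, width))
    for row in range(0, height, dense_pitch):           # densely packed central block
        for col in range(dense_cols[0], dense_cols[1], dense_pitch):
            grid[row, col] += 1.0
    for row in range(0, height, sparse_pitch):          # sparsely packed outer block
        for col in range(0, width, sparse_pitch):
            if not (dense_cols[0] <= col < dense_cols[1]):
                grid[row, col] += 1.0
    return grid

m = intensity_map()
print(m[:, 20:40].mean(), m[:, :20].mean())  # central mean intensity >> outer mean intensity
```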
In other embodiments, the light source module 320 may further include at least two different light-gathering areas, where the different light-gathering areas correspond to light distribution regions with different light intensities. Specifically, the detection light energy can be redistributed by a lens with multiple focal lengths, with different focal lengths corresponding to different light distribution regions. For example, the focal length of the lens corresponding to the first light distribution region 321 is greater than the focal length of the lens corresponding to the second light distribution region 322; the longer focal length gives a smaller field angle and concentrates the light better, so the detection light intensity in the first light distribution region 321 is greater than that in the second light distribution region 322.
In other embodiments, the light energy distribution of the detection light can be adjusted by combining the arrangement density of the light emitting units and the focal length of the lens.
In the case where the detection light emitted by the light emitting module has at least two light distribution regions with different light intensities, the sensing regions with different sensing accuracies of the TOF sensor may all use pixel units of the same structure. In other specific embodiments, the sensing accuracy of each sensing region may be further adjusted through the pixel unit structure of the pixel array according to the accuracy requirement, for example by giving the sensing regions with different sensing accuracies different pixel unit sizes, different numbers of photosensitive elements per pixel unit, or different photosensitive performance. In one embodiment, if the increased detection light intensity can improve the sensing accuracy of the corresponding sensing region by a factor of 2, and the pixel unit structure in that sensing region can also improve the sensing accuracy by a factor of 2, then an improvement of 2 times or 4 times can be realized according to the actual requirement; by combining the two means, the sensing accuracy of each sensing region can be tuned over a larger range.
In other embodiments, the sensing accuracy of different regions can also be adjusted by adjusting the exposure time of the sensor and the light emitting time of the light source module 320. For example, the middle area of the pixel array can use a longer exposure time with a synchronized light emitting time, collecting more reflected light energy and achieving higher sensing accuracy, while the edge area uses a shorter exposure time with a synchronized light emitting time, collecting relatively less reflected light energy and achieving lower sensing accuracy; in this way, high accuracy in the middle area and reduced accuracy in the edge area can be realized.
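The exposure-time scheme described above can be written as a simple per-region configuration: under a shot-noise assumption, the collected reflected-light energy grows linearly with the exposure (and synchronized emission) time, so the relative SNR grows with its square root. The exposure values below are assumed for illustration only.
```python
import math

# Assumed per-region exposure / synchronized emission windows, in microseconds
exposure_us = {"middle": 400.0, "edge": 100.0}

reference = exposure_us["edge"]
for region, t_exp in exposure_us.items():
    energy_gain = t_exp / reference        # collected reflected energy ~ exposure time
    snr_gain = math.sqrt(energy_gain)      # shot-noise-limited assumption
    print(f"{region}: energy x{energy_gain:.1f}, relative SNR x{snr_gain:.1f}")
```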
The higher the sensing accuracy of a TOF sensor, the higher its cost. In the utility model, the pixel array of the TOF sensor has sensing regions with different sensing accuracies; by adjusting the structure and distribution of the pixel units or the energy distribution of the detection light, product cost is reduced while the local sensing accuracy requirement is still met.
Fig. 4 is a schematic diagram of a pixel array 400 according to another embodiment of the present invention.
The pixel array 400 includes a plurality of first sensing regions 401 and a plurality of second sensing regions 402, and the sensing accuracy of the first sensing regions 401 is greater than that of the second sensing regions 402. The first sensing regions 401 and the second sensing regions 402 are spaced apart from each other. The TOF sensor of this embodiment is suitable for application scenarios such as obstacle scanning of an entire scene: since the distance information of every part of an obstacle does not need to be acquired precisely, even if high-accuracy distance values are only available at intervals within the detection range, the pixel array of this embodiment can still judge in time whether an obstacle exists and how far away it is.
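The spaced-apart layout of Fig. 4 can be thought of as a precision mask laid over the pixel array. The sketch below builds a striped mask in which high-accuracy regions alternate with low-accuracy regions; the array size and stripe width are assumptions, not values from the utility model.
```python
import numpy as np

def striped_precision_mask(rows=32, cols=48, stripe=8):
    """1 marks a high-accuracy sensing region (cf. 401), 0 a low-accuracy region
    (cf. 402); the two kinds of region alternate along the column direction."""
    mask = np.zeros((rows, cols), dtype=np.uint8)
    for start in range(0, cols, 2 * stripe):
        mask[:, start:start + stripe] = 1
    return mask

mask = striped_precision_mask()
print(mask[0])  # alternating blocks of 1s and 0s across the array width
```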
A specific embodiment of the utility model further provides a method of forming a TOF sensor, including forming a pixel array, where the pixel array includes at least two sensing regions with different sensing accuracies, and a sensing region with higher sensing accuracy outputs a stronger sensing signal.
In some embodiments, pixel units having pixel sizes positively correlated with sensing accuracy of each sensing region are formed in different sensing regions.
In other embodiments, pixel units with the quantity of photosensitive elements positively correlated with the sensing precision of each sensing area are formed in different sensing areas.
In other embodiments, pixel units with photosensitive performance positively correlated to sensing accuracy of each sensing area are formed in different sensing areas.
In some embodiments, the TOF sensor further includes a light emitting module, where the detection light emitted by the light emitting module has at least two light distribution regions with different light intensities, and the detection light of each light distribution region can be reflected and received by the sensing regions with different sensing accuracies in the pixel array, where the light distribution region with the higher light intensity corresponds to the sensing region with the higher sensing accuracy.
In some embodiments, the light emitting module includes a light emitting unit array, and the light emitting unit array includes at least two array regions having different arrangement densities of light emitting units, where the array region with the higher arrangement density corresponds to the light distribution region with the higher light intensity.
In some embodiments, the light emitting module includes light condensing regions having at least two different focal lengths, and the different light condensing regions respectively correspond to light distribution regions having different light intensities.
In some embodiments, the sensing regions of different sensing accuracies are spaced apart.
In some embodiments, pixel units of the same structure are formed in the sensing regions of different sensing accuracies; the TOF sensor further comprises a processing unit, which processes the sum of the sensing signals of one or more pixel units in a sensing region as an effective sensing signal, and the number of pixel units corresponding to the effective sensing signal is positively correlated with the sensing accuracy of the sensing region.
A specific embodiment of the utility model further provides a machine vision system, including the TOF sensor of any of the above specific embodiments.
The machine vision system can be applied to various devices that require machine vision, such as sweeping robots and unmanned aerial vehicles.
The visual field of the machine vision system may include more than two visual zones of different accuracy requirements. The different vision regions correspond to different sensing regions of a pixel array of the TOF sensor, and the sensing accuracy of each sensing region corresponds to the accuracy requirement of each vision region. The visual range of the machine vision system is the spatial range detectable by the TOF sensor.
The sensing precision and the position distribution of each sensing area of the pixel array of the TOF sensor can be reasonably set according to the precision requirement and the position distribution of each visual area.
Please refer to fig. 5, which is a schematic view of a robot vision range of a sweeping robot 500 with machine vision.
In this specific embodiment, the sweeping robot 500 includes a TOF sensor 510, and the detection areas of the TOF sensor 510, in the direction perpendicular to the horizontal plane, are a first area 501, a second area 502 and a third area 503. The first area 501 is at the same height as, or slightly higher than, the sweeping robot 500; the second area 502 is higher than the sweeping robot 500; and the third area 503 is lower than the sweeping robot 500. Because the sweeping robot 500 needs to avoid obstacles that block its advance in time, its vision system cares about whether there are obstacles in the first area 501 and about their accurate distance, while obstacles in the second area 502 and the third area 503 have little influence on the robot. Therefore, the required detection accuracy for the first area 501 is high, and the required detection accuracy for the second area 502 and the third area 503 is low. Accordingly, the pixel array of the TOF sensor 510 may have a high-accuracy sensing region corresponding to the first area 501, which has the higher accuracy requirement, and a low-accuracy sensing region corresponding to the second area 502 and the third area 503, which have lower accuracy requirements. This reduces the cost of the TOF sensor and, in turn, the cost of the sweeping robot.
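As an illustration of how the vertical field of view might be split into the three areas discussed above, the sketch below assigns each pixel row of the array to a precision tier from its elevation angle; the field-of-view, row count and robot-height band are hypothetical values, not parameters of the utility model.
```python
def row_precision_tiers(n_rows=60, vertical_fov_deg=60.0, robot_band_deg=(-5.0, 5.0)):
    """Rows whose elevation angle falls inside the assumed robot-height band
    (cf. first area 501) get 'high'; rows looking above (502) or below (503)
    the robot get 'low'. Angles are measured from the optical axis."""
    tiers = []
    for row in range(n_rows):
        # Top row looks up at +FOV/2, bottom row looks down at -FOV/2
        angle = vertical_fov_deg / 2.0 - row * vertical_fov_deg / (n_rows - 1)
        in_band = robot_band_deg[0] <= angle <= robot_band_deg[1]
        tiers.append("high" if in_band else "low")
    return tiers

tiers = row_precision_tiers()
print(tiers.count("high"), "high-accuracy rows,", tiers.count("low"), "low-accuracy rows")
```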
The foregoing is only a preferred embodiment of the utility model. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the utility model, and these improvements and modifications should also be regarded as falling within the protection scope of the utility model.

Claims (7)

1. A TOF sensor, comprising:
a pixel array comprising at least two sensing areas with different sensing accuracies, wherein a sensing area with higher sensing accuracy outputs a stronger sensing signal.
2. The TOF sensor of claim 1, wherein the size of the pixel cells of the two or more sensing regions having different sensing accuracies is positively correlated to the sensing accuracy of each sensing region.
3. The TOF sensor of claim 1, wherein the number of light sensing elements within a pixel cell of the two or more sensing regions having different sensing accuracies is positively correlated to the sensing accuracy of each sensing region.
4. The TOF sensor of claim 1, wherein the light sensing performance of the pixel cells of the two or more sensing regions with different sensing accuracies is positively correlated to the sensing accuracy of each sensing region.
5. The TOF sensor of claim 1, wherein sensing regions of different sensing accuracy are spaced apart.
6. The TOF sensor of claim 1, wherein sensing regions of different sensing accuracies have pixel cells of the same structure; the TOF sensor further comprises a processing unit, which processes the sum of the sensing signals of one or more pixel cells in a sensing region as an effective sensing signal, and the number of pixel cells corresponding to the effective sensing signal is positively correlated with the sensing accuracy of the sensing region.
7. A machine vision system, comprising:
the TOF sensor of any one of claims 1 to 6;
the visual range of the machine vision system comprises more than two visual areas with different precision requirements, the different visual areas correspond to different sensing areas of a pixel array of the TOF sensor, and the sensing precision of each sensing area corresponds to the precision requirement of each visual area.
CN201920860932.1U 2019-06-10 2019-06-10 TOF sensor and machine vision system Active CN210572750U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201920860932.1U CN210572750U (en) 2019-06-10 2019-06-10 TOF sensor and machine vision system


Publications (1)

Publication Number Publication Date
CN210572750U true CN210572750U (en) 2020-05-19

Family

ID=70635277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201920860932.1U Active CN210572750U (en) 2019-06-10 2019-06-10 TOF sensor and machine vision system

Country Status (1)

Country Link
CN (1) CN210572750U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110161529A (en) * 2019-06-10 2019-08-23 炬佑智能科技(苏州)有限公司 TOF sensor, forming method thereof, and machine vision system


Similar Documents

Publication Publication Date Title
US11422256B2 (en) Distance measurement system and solid-state imaging sensor used therefor
CN109212538B (en) Time-of-flight depth mapping with disparity compensation
US11598857B2 (en) Integrated lidar image-sensor devices and systems and related methods of operation
JP6644892B2 (en) Light detection distance measuring sensor
CN111830530B (en) Distance measuring method, system and computer readable storage medium
US20180203122A1 (en) Gated structured imaging
US7022966B2 (en) System and method of light spot position and color detection
CN211014630U (en) Laser radar device and motor vehicle system
US20080212066A1 (en) Method for the detection of an object and optoelectronic apparatus
CN111796295B (en) Collector, manufacturing method of collector and distance measuring system
US20210333371A1 (en) Lidar system with fog detection and adaptive response
US20200103526A1 (en) Time of flight sensor
US9395296B1 (en) Two-dimensional optical spot location using a one-dimensional detector array
EP3908853A1 (en) Extended dynamic range and reduced power imaging for lidar detector arrays
CN111965658A (en) Distance measuring system, method and computer readable storage medium
CN112912765A (en) Lidar sensor for optically detecting a field of view, operating device or vehicle having a lidar sensor, and method for optically detecting a field of view
CN210572750U (en) TOF sensor and machine vision system
CN210572752U (en) TOF sensor and machine vision system
CN106405566A (en) High-measurement-precision laser radar distance measurement method
JP2003247809A (en) Distance information input device
CN213091889U (en) Distance measuring system
KR20120103860A (en) Sensor module for the optical measurement of distance
CN111965659A (en) Distance measuring system, method and computer readable storage medium
WO2020190920A1 (en) Dynamic range improvements in lidar applications
CN111796296A (en) Distance measuring method, system and computer readable storage medium

Legal Events

Date Code Title Description
GR01 Patent grant