CN117935000A - Dtof fusion ranging method and device and laser radar - Google Patents


Info

Publication number
CN117935000A
Authority
CN
China
Prior art keywords
pixel
gray
fusion
ranging
pixel array
Prior art date
Legal status
Pending
Application number
CN202311751124.9A
Other languages
Chinese (zh)
Inventor
谢俊忠
李红刚
张超
Current Assignee
Shenzhen Adaps Photonics Technology Co ltd
Original Assignee
Shenzhen Adaps Photonics Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Adaps Photonics Technology Co., Ltd.
Priority to CN202311751124.9A
Publication of CN117935000A

Abstract

The invention provides a dToF fusion ranging method and device, and a laser radar. The method comprises the following steps: acquiring the gray data collected by each pixel for a target ranging scene to obtain a gray map corresponding to the pixel array, and identifying the key regions in the gray map; determining and storing the pixel position information corresponding to the key region in the pixel array; fusing the histogram data of pixels belonging to the same key region according to the region's pixel position information, to obtain fused histogram data; and calculating the distance information of the key region from the fused histogram data. By collecting a gray map of the target ranging scene and fusing the histogram data within each key region identified in the gray map for depth calculation, the influence of environmental noise on ranging accuracy is reduced and ranging accuracy is improved without affecting the ranging frame rate.

Description

dToF fusion ranging method and device, and laser radar
Technical Field
The invention relates to the technical field of distance detection, and in particular to a dToF fusion ranging method and device, and a laser radar.
Background
In existing lidar ranging, histogram data is acquired per SPAD (single-photon avalanche diode), or per pixel after the binning technique is applied, and the distance is then calculated from it. This scheme may distort the ranging function in some usage scenes due to the limits of SPAD performance or the binning specification. For example, under strong ambient light the histogram's signal-to-noise ratio is poor, and even with additional algorithms the actual distance may not be calculated correctly.
Meanwhile, the amount of histogram data accumulated per single SPAD or pixel is enormous. Depending on the TOF sensor design, one histogram holds hundreds or even thousands of values, and each pixel produces one histogram; a TOF sensor with n pixels and m values per histogram therefore yields an n × m two-dimensional array, each row being one pixel's histogram. The array grows even larger when higher detection range, accuracy, and resolution are pursued, and transmitting it occupies a huge data bandwidth, limiting the ranging frame rate.
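The scale of the n × m array described above can be made concrete with a short sketch (illustrative Python; the array size, bin count, and bytes-per-count are assumptions, not values from the patent):

```python
# Estimate the per-frame payload of the n x m statistical histogram array,
# one row (one histogram) per pixel. All concrete sizes below are assumptions.
def histogram_payload_bytes(n_pixels: int, bins_per_pixel: int,
                            bytes_per_count: int = 2) -> int:
    """Size in bytes of the n x m two-dimensional histogram array."""
    return n_pixels * bins_per_pixel * bytes_per_count

# e.g. a 100 x 100 pixel array with 1024 bins of 16-bit counts per histogram:
print(histogram_payload_bytes(100 * 100, 1024))  # 20480000 bytes per frame
```

At tens of megabytes per frame, the bandwidth cost of transmitting every per-pixel histogram is what limits the ranging frame rate, which motivates the region-level fusion below.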
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention aims to provide a dToF fusion ranging method and device, and a laser radar, to improve ranging accuracy without affecting the ranging frame rate.
In order to achieve the above purpose, the invention adopts the following technical solution:
The first aspect of the present invention provides a dToF fusion ranging method, comprising:
acquiring the gray data collected by each pixel for a target ranging scene to obtain a gray map corresponding to the pixel array, and identifying the key regions in the gray map;
determining and storing the pixel position information corresponding to the key region in the pixel array;
fusing the histogram data of pixels belonging to the same key region according to the region's pixel position information, to obtain fused histogram data;
and calculating the distance information of the key region from the fused histogram data.
In one embodiment, identifying the key regions in the gray map specifically comprises:
performing object recognition or image partitioning on the gray map, and taking each object or each partition as a key region.
In one embodiment, performing object recognition on the gray map specifically means:
identifying adjacent pixels with the same or similar gray values as the same object, based on the gray value of each pixel in the gray map.
In one embodiment, image partitioning of the gray map specifically means:
dividing pixels with the same or similar gray values into the same sub-region, based on the gray value of each pixel in the gray map.
In one embodiment, acquiring the gray data collected by each pixel for the target ranging scene to obtain the gray map corresponding to the pixel array specifically comprises:
acquiring the histogram data collected by each pixel for the target ranging scene;
deriving the gray data of each pixel from its histogram data;
and obtaining the gray map corresponding to the pixel array.
In one embodiment, acquiring the gray data collected by each pixel for the target ranging scene to obtain the gray map corresponding to the pixel array specifically comprises:
receiving the laser reflected from the target object a first time, and acquiring the gray data collected by each pixel for the target ranging scene;
obtaining the gray map corresponding to the pixel array;
and receiving the laser reflected from the target object a second time, and acquiring the histogram data collected by each pixel for the target ranging scene.
In one embodiment, gray data collected for the target ranging scene by each pixel of a first pixel array is acquired, histogram data collected for the target ranging scene by each pixel of a second pixel array is acquired, the first and second pixel arrays being two identical pixel arrays;
a gray map corresponding to the first pixel array is obtained.
Fusing the histogram data of pixels belonging to the same key region according to the region's pixel position information then specifically comprises:
associating the corresponding positions of the first pixel array and the second pixel array;
and fusing the histogram data of pixels of the second pixel array belonging to the same key region, according to the pixel position information of that key region in the second pixel array.
In one embodiment, the fusion processing specifically means:
superposing the count values of the same time bin across the multiple sets of histogram data in one-to-one correspondence.
In one embodiment, the target ranging scene includes: a strong-ambient-light scene and a weak-laser-power scene.
The second aspect of the present invention provides a dToF fusion ranging method, comprising the steps of:
acquiring an RGB image collected for a target ranging scene, identifying the key regions in the RGB image, and acquiring the histogram data collected for the target ranging scene by a pixel array;
determining and storing the pixel position information corresponding to the key region in the pixel array;
fusing the histogram data of pixels belonging to the same key region according to the region's pixel position information, to obtain fused histogram data;
and calculating the distance information of the key region from the fused histogram data.
A third aspect of the present invention provides a dToF fusion ranging apparatus, comprising:
a data acquisition module, for acquiring the gray data collected by each pixel for a target ranging scene to obtain a gray map corresponding to the pixel array, and for identifying the key regions in the gray map;
a calibration storage module, for determining and storing the pixel position information corresponding to the key region in the pixel array;
and a fusion calculation module, for fusing the histogram data of pixels belonging to the same key region according to the region's pixel position information to obtain fused histogram data, and for calculating the distance information of the key region from the fused histogram data.
A fourth aspect of the invention provides a lidar comprising the dToF fusion ranging apparatus described above.
The beneficial effects of the invention are as follows: a dToF fusion ranging method and apparatus, and a laser radar, are provided. By collecting a gray map of the target ranging scene and fusing the histogram data within each key region identified in the gray map for depth calculation, the influence of environmental noise on ranging accuracy is reduced and ranging accuracy is improved without affecting the ranging frame rate.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flowchart of a dToF fusion ranging method according to an embodiment of the present invention;
FIG. 2 is a gray map of a target ranging scene according to an embodiment of the present invention;
FIG. 3 is a ranging histogram of a target ranging scene according to an embodiment of the present invention;
FIG. 4 is a ranging histogram of another target ranging scene according to an embodiment of the present invention;
FIG. 5 is a diagram of the histogram data fusion process in an embodiment of the present invention;
FIG. 6 is another flowchart of a dToF fusion ranging method in an embodiment of the present invention;
fig. 7 is a block diagram of a dToF fusion ranging apparatus in an embodiment of the present invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions, and the beneficial effects of the embodiments of the present invention clearer, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It will be understood that when an element is referred to as being "mounted" or "disposed" on another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. In addition, the connection may be for a fixing function or for a circuit communication function.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are merely for convenience in describing embodiments of the invention and to simplify the description by referring to the figures, rather than to indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus are not to be construed as limiting the invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the embodiments of the present invention, the meaning of "plurality" is two or more, unless explicitly defined otherwise.
The dToF fusion ranging method provided by the embodiments of the invention is applied to a lidar based on time-of-flight (TOF) technology. The lidar includes a dToF fusion ranging device comprising at least: a data acquisition module, for acquiring the gray data collected by each pixel for a target ranging scene to obtain a gray map corresponding to the pixel array, and for identifying the key regions in the gray map; a calibration storage module, for determining and storing the pixel position information corresponding to each key region in the pixel array; and a fusion calculation module, for fusing the histogram data of pixels belonging to the same key region according to that region's pixel position information to obtain fused histogram data, and for calculating the distance information of the key region from the fused histogram data.
In existing lidar ranging, histogram data is acquired per SPAD (single-photon avalanche diode), or per pixel after binning, and the distance is calculated from it. This scheme may distort the ranging function in some usage scenes due to the limits of SPAD performance or the binning specification; for example, under strong ambient light the histogram's signal-to-noise ratio is poor, and even with additional algorithms the actual distance may not be calculated correctly. Meanwhile, the amount of histogram data accumulated per single SPAD or pixel is huge, and grows further when higher detection range, accuracy, and resolution are pursued; transmitting it occupies a huge data bandwidth and limits the ranging frame rate. How the dToF fusion ranging method, applied to the dToF fusion ranging device described above, solves this problem and improves ranging accuracy without affecting the ranging frame rate is described below.
As shown in fig. 1, which is a flowchart of a dToF fusion ranging method according to one embodiment of the present invention, the method specifically includes the following steps:
S101, acquiring the gray data collected by each pixel for a target ranging scene to obtain a gray map corresponding to the pixel array, and identifying the key regions in the gray map.
In this embodiment, after ranging starts, the transmitter is controlled to emit a laser pulse signal toward the target ranging scene, gray data of the scene is collected through the pixel array, and a gray map as shown in fig. 2 is built from that data. Image recognition is then performed on the collected gray map to identify the key regions, of which there may be one or more, for example the region where a detected target is located, or the foreground object regions other than the background.
Specifically, the target ranging scene is a scene whose ranging signal-to-noise ratio is below a preset lower limit, including a strong-ambient-light scene, a weak-laser-power scene, and the like, in which the ranging function is distorted due to the limits of SPAD performance or the binning specification.
In the target ranging scene, histogram data is also acquired for each pixel in the pixel array, converted and output from the reflected light received during the exposure time. In a strong-ambient-light scene (ambient light intensity above a preset upper limit) or a weak-laser-power scene (laser power below a preset lower limit), the per-pixel histograms look like figs. 3 and 4: because the ambient light is too strong or the laser power is too weak, the signal-to-noise ratio of the constructed histogram is poor, the correct histogram peak cannot be found, and the probability of an erroneous calculated distance increases. Further processing that combines the gray map with the histogram data is therefore needed to improve ranging accuracy.
S102, determining and storing the pixel position information corresponding to the key region in the pixel array.
Based on the position of the key region in the gray map and the positional relationship between the gray map and the pixel array of the ranging device, the pixel position information corresponding to the key region in the pixel array is determined and stored. This information may be the pixel coordinates of every covered pixel; or, if the key region is rectangular, it may be just the diagonal coordinates of the region, e.g. (pixel coordinate of the top-left pixel, pixel coordinate of the bottom-right pixel) or (pixel coordinate of the bottom-left pixel, pixel coordinate of the top-right pixel), which saves data storage space.
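The diagonal-coordinate storage choice above can be sketched as follows (a minimal Python illustration; the function names and row-major (y, x) coordinate convention are assumptions, not the patent's implementation):

```python
# Compress a rectangular key region to its diagonal corner pair, and recover
# the covered pixels on demand. Coordinates are (row, column) tuples.
def store_rect_region(pixels):
    """Compress a rectangular pixel set to (top-left, bottom-right) corners."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    return (min(ys), min(xs)), (max(ys), max(xs))

def expand_rect_region(corners):
    """Recover all pixel coordinates covered by the stored rectangle."""
    (y0, x0), (y1, x1) = corners
    return [(y, x) for y in range(y0, y1 + 1) for x in range(x0, x1 + 1)]

region = [(2, 3), (2, 4), (3, 3), (3, 4)]
corners = store_rect_region(region)
print(corners)                                 # ((2, 3), (3, 4))
print(expand_rect_region(corners) == region)   # True
```

Two corner coordinates replace the full pixel list, which is the storage saving the paragraph above describes.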
S103, fusing the histogram data of pixels belonging to the same key region according to the region's pixel position information, to obtain fused histogram data.
Based on the pixel position information obtained by recognizing the key regions in the gray map, the pixel array can be divided into regions. For example, when the pixel position information of a key region is the pixel coordinate of each covered pixel, the coordinates of the pixels belonging to the same key region can be stored in one set; after the histogram data of each pixel is obtained, the histograms of the pixels in the same set are fused to yield the fused histogram data.
As shown on the left of fig. 5, the histogram data of pixels Pixel_1, Pixel_2, ..., Pixel_n in the same key region has a low signal-to-noise ratio; even when the peak time bin can be found, the count values per bin are small, so the peak position may drift across adjacent time bins and ranging accuracy is insufficient. After the histograms output by Pixel_1, Pixel_2, ..., Pixel_n are fused into the histogram on the right, the background noise rises as a whole, but the position of the peak time bin is relatively fixed, with only slight drift; the effective data, i.e. the count value of the peak time bin, stands out, and the peak can be searched more accurately once the background noise is removed. After fusion, the influence of distance errors caused by time-bin drift is reduced, and the distance is calculated more accurately and precisely from the fused histogram's depth information.
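The bin-by-bin superposition just described can be sketched in a few lines (illustrative Python; the toy histograms are invented for demonstration and all histograms are assumed to share the same bin count):

```python
# Fuse the histograms of pixels in one key region: count values of the same
# time bin are superposed one-to-one, so the signal peak rises above the noise.
def fuse_histograms(histograms):
    """Sum count values bin-by-bin across per-pixel histograms."""
    return [sum(counts) for counts in zip(*histograms)]

# Three noisy per-pixel histograms with a weak peak at time bin 2:
h1 = [3, 2, 6, 3, 2]
h2 = [2, 3, 5, 2, 3]
h3 = [3, 2, 7, 3, 2]
print(fuse_histograms([h1, h2, h3]))  # [8, 7, 18, 8, 7]
```

In the fused result the peak bin is far more prominent relative to the raised noise floor, which is exactly the effect fig. 5 illustrates.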
S104, calculating the distance information of the key region from the fused histogram data.
After the fused histogram data, with its higher signal-to-noise ratio, is obtained by multi-pixel integration, accurate peak searching is performed on it, and the distance information of the key region is calculated from the peak time bin. In this embodiment, on the basis of the key regions recognized from the gray map, the histograms of pixels belonging to the same key region are fused into one histogram before transmission, greatly reducing the time spent on data transmission and the volume and complexity of data processing, so real-time distance information can be acquired at a higher frequency per unit time, improving the refresh rate and sensitivity. Moreover, with the key-region position information provided by the gray map, the fusion processing largely reduces the influence of environmental noise on ranging accuracy, improving accuracy without affecting the ranging frame rate.
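The peak-search-plus-conversion step can be sketched as follows (illustrative Python under stated assumptions: the time-bin width of 1 ns and the simple argmax peak search are placeholders; the patent does not specify either):

```python
# Convert the peak time bin of a fused histogram into a distance using the
# standard time-of-flight relation d = c * t / 2 (t is the round-trip time).
C = 299_792_458.0  # speed of light in m/s

def distance_from_fused_histogram(fused, bin_width_s):
    """Locate the peak time bin and convert its round-trip time to distance."""
    peak_bin = max(range(len(fused)), key=lambda i: fused[i])
    return C * (peak_bin * bin_width_s) / 2.0

fused = [8, 7, 18, 8, 7]                           # peak at time bin 2
print(distance_from_fused_histogram(fused, 1e-9))  # ~0.30 m with 1 ns bins
```

A production implementation would first subtract the background noise floor and interpolate around the peak, as the surrounding text notes; the sketch keeps only the core relation.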
In one embodiment, identifying the key regions in the gray map specifically comprises:
performing object recognition or image partitioning on the gray map, and taking each object or each partition as a key region.
In this embodiment, object recognition using image recognition technology may be employed to identify the key regions, with each object in the gray map taken as one key region; for example, in fig. 2 the identified teacup, thermos cup, table, etc. may each be a key region. Alternatively, the image may be partitioned according to the parameters of each pixel in the gray map, with pixels of similar parameters assigned to the same partition; for example, the background area of fig. 2 forms one key region and the foreground is divided into several key regions by gray value. Either way, the key information in the gray map is extracted, providing reliable reference information for improving ranging accuracy.
In one embodiment, performing object recognition on the gray map specifically means:
identifying adjacent pixels with the same or similar gray values as the same object, based on the gray value of each pixel in the gray map.
In this embodiment, object recognition is performed on the gray map from the gray value and position of each pixel: pixels that are adjacent and have the same or similar gray values (gray-value difference below a preset threshold) are recognized as the same object. By constraining position and gray value simultaneously, different objects in the gray map are accurately distinguished as key information, so accurate distance values for the different objects can be obtained through the subsequent fusion processing.
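One simple way to realize the adjacency-plus-similarity rule above is a flood fill over the gray map (illustrative Python; the patent does not prescribe a specific algorithm, and the 4-neighborhood and threshold comparison here are assumptions):

```python
# Group adjacent pixels whose gray values differ by less than a preset
# threshold into the same object, via breadth-first flood fill.
from collections import deque

def label_objects(gray, threshold):
    """Return a label map; connected pixels with similar gray values share a label."""
    h, w = len(gray), len(gray[0])
    labels = [[-1] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            labels[sy][sx] = current
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny][nx] == -1
                            and abs(gray[ny][nx] - gray[y][x]) < threshold):
                        labels[ny][nx] = current
                        queue.append((ny, nx))
            current += 1
    return labels

gray = [[10, 11, 200],
        [10, 12, 201],
        [10, 11, 199]]
print(label_objects(gray, threshold=5))
# the two left columns form object 0, the right column forms object 1
```

Each resulting label corresponds to one candidate key region (one object).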
In one embodiment, image partitioning of the gray map specifically means:
dividing pixels with the same or similar gray values into the same sub-region, based on the gray value of each pixel in the gray map.
In this embodiment, the gray map is partitioned from gray values alone: all pixels with the same or similar gray values (gray-value difference below a preset threshold) are treated as the same partition, i.e. as lying at the same distance from the target object, so after fusion the distance value at each depth in the target ranging scene can be detected accurately.
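Unlike the object-recognition variant, this partitioning ignores pixel position. A minimal sketch (illustrative Python; the bucket quantization is one assumed way to group "same or similar" gray values and can split values that straddle a bucket boundary):

```python
# Partition pixels by gray value alone: pixels whose quantized gray values
# fall in the same bucket are assigned to the same sub-region.
def partition_by_gray(gray, threshold):
    """Map pixel coordinates to partitions by quantized gray value."""
    partitions = {}
    for y, row in enumerate(gray):
        for x, value in enumerate(row):
            bucket = value // threshold  # values in one bucket differ by < threshold
            partitions.setdefault(bucket, []).append((y, x))
    return list(partitions.values())

gray = [[10, 12, 200],
        [11, 13, 201]]
print(partition_by_gray(gray, threshold=16))
# [[(0, 0), (0, 1), (1, 0), (1, 1)], [(0, 2), (1, 2)]]
```

Each partition then plays the role of one key region whose histograms are fused together.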
In one embodiment, acquiring the gray data collected by each pixel for the target ranging scene to obtain the gray map corresponding to the pixel array specifically comprises:
acquiring the histogram data collected by each pixel for the target ranging scene;
deriving the gray data of each pixel from its histogram data;
and obtaining the gray map corresponding to the pixel array.
In this embodiment, each pixel collects histogram data of the target ranging scene during ranging, and that data is converted into the gray data of each pixel; through this histogram-to-gray conversion, the ranging module obtains the gray map of the target ranging scene directly.
The specific conversion process is: the count values of all time bins in a pixel's histogram are added to give an accumulated photon-energy value, which serves as that pixel's gray value, and the corresponding gray map is then constructed. An object of higher reflectivity yields a larger accumulated photon-energy value at its pixels within a given exposure time; for example, under the same detection conditions, the accumulated count of pixels imaging a white area is necessarily larger than that of pixels imaging a black area, matching the behavior of gray values. Exploiting this property, histogram data can be converted directly into gray data without adding extra hardware, saving product cost.
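The conversion described above reduces to summing each pixel's time-bin counts (illustrative Python; the toy histograms are invented for demonstration):

```python
# Convert per-pixel histogram data into gray data: the gray value is the sum
# of the count values over all time bins, i.e. the accumulated photon energy.
def histogram_to_gray(histogram):
    """Accumulate all time-bin counts of one pixel into one gray value."""
    return sum(histogram)

def gray_map_from_histograms(histograms):
    """Build the gray map given one histogram per pixel (row-major layout)."""
    return [[histogram_to_gray(h) for h in row] for row in histograms]

# A bright (high-reflectivity) pixel accumulates more counts than a dark one:
white_pixel = [40, 42, 90, 41, 40]
black_pixel = [5, 6, 12, 5, 6]
print(histogram_to_gray(white_pixel), histogram_to_gray(black_pixel))  # 253 34
```

Because both the gray map and the histograms come from the same capture, no separate imaging sensor is needed, which is the cost saving the paragraph above notes.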
Because the histogram data and the gray map data are obtained simultaneously through one pixel array, the position information of a key region in the gray map is consistent with its position information in the pixel array, so the pixel position information can be determined and stored quickly, directly from the gray-map pixels the key region covers, improving ranging efficiency.
In one embodiment, acquiring the gray data collected by each pixel for the target ranging scene to obtain the gray map corresponding to the pixel array specifically comprises:
receiving the laser reflected from the target object a first time, and acquiring the gray data collected by each pixel for the target ranging scene;
obtaining the gray map corresponding to the pixel array;
and receiving the laser reflected from the target object a second time, and acquiring the histogram data collected by each pixel for the target ranging scene.
In this embodiment, the gray map and the depth map of the target ranging scene are acquired in two passes. First, the first detection light is emitted and the gray data collected by each pixel is acquired from the received reflected light signal, yielding the gray map corresponding to the pixel array; the conversion and generation of the gray map can follow the same manner as the previous embodiment. Then the second detection light is emitted to acquire the histogram data collected by each pixel, and the subsequent fusion ranging is carried out. Since the gray map data and the histogram data are obtained separately through one pixel array, when ranging is repeated in a fixed scene only the gray map converted from the first detection needs to be stored, and it can serve every subsequent fusion ranging in that scene; the gray-map generation need not be repeated each time, saving ranging time in the same scene and improving ranging efficiency.
In one embodiment, acquiring the gray data collected by each pixel for the target ranging scene to obtain the gray map corresponding to the pixel array specifically comprises:
acquiring the gray data collected for the target ranging scene by each pixel of a first pixel array, and acquiring the histogram data collected for the target ranging scene by each pixel of a second pixel array, the first and second pixel arrays being two identical pixel arrays;
obtaining the gray map corresponding to the first pixel array.
Fusing the histogram data of pixels belonging to the same key region according to the region's pixel position information then specifically comprises:
associating the corresponding positions of the first pixel array and the second pixel array;
and fusing the histogram data of pixels of the second pixel array belonging to the same key region, according to the pixel position information of that key region in the second pixel array.
In this embodiment, data is collected from the target ranging scene through two identical pixel arrays simultaneously: the first pixel array collects the gray data and the second collects the histogram data, and the corresponding gray map is obtained from the gray data of the first pixel array, again in the same manner as the previous embodiments. By providing two identical pixel arrays to acquire the depth map and the gray map respectively, the two can be collected in parallel, improving the frame rate.
Since the histogram data and the gray map data are obtained through two separate pixel arrays, the pixel positions of the first and second pixel arrays must be associated, so that the position information of a key region in the gray map can be mapped into the second pixel array; the pixel position information corresponding to the key region in the second pixel array is then determined, allowing the corresponding histogram data to be fused accurately.
In one embodiment, step S103 includes:
invoking the pixel coordinate set covered by each key region;
acquiring the histogram data of the several pixels belonging to the same key region according to the pixel coordinate set;
and fusing the histogram data of those pixels to obtain one set of fused histogram data per key region.
In this embodiment, the pixel position information of each key region is stored as a pixel coordinate set. The set covered by each key region is invoked directly, the histogram data of the pixels belonging to the same key region is fetched by those pixel coordinates, and the multiple sets of histogram data are fused by superposition into one set of fused histogram data per key region. Fusing the several histograms of one key region's pixels into a single output histogram reduces the time spent on data transmission and the volume and complexity of data processing, while also reducing the distance error caused by time-bin drift, so the distance value obtained from the fused histogram data is more accurate.
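The steps above, from coordinate-set lookup to per-region superposition, can be sketched end to end (illustrative Python; the dictionary-based data layout and region names are assumptions for demonstration):

```python
# For each key region, gather the histograms of its pixels by coordinate
# and superpose them into one fused histogram per region.
def fuse_by_region(region_pixel_sets, histograms_by_pixel):
    """Return one fused histogram per key region."""
    fused = {}
    for region_id, pixel_coords in region_pixel_sets.items():
        group = [histograms_by_pixel[p] for p in pixel_coords]
        fused[region_id] = [sum(counts) for counts in zip(*group)]
    return fused

histograms = {(0, 0): [1, 5, 1], (0, 1): [2, 6, 1], (1, 0): [1, 1, 4]}
regions = {"cup": [(0, 0), (0, 1)], "table": [(1, 0)]}
print(fuse_by_region(regions, histograms))
# {'cup': [3, 11, 2], 'table': [1, 1, 4]}
```

Only one histogram per region, rather than one per pixel, then needs to be transmitted and peak-searched.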
In one embodiment, the fusion processing specifically means:
superposing the count values of the same time bin across the multiple sets of histogram data in one-to-one correspondence.
In this embodiment, when the histogram data of several pixels belonging to the same key region is fused, the count values belonging to the same time bin across the multiple sets of histogram data are superposed in one-to-one correspondence and used as the statistics of the corresponding time bins of the fused histogram data. The output is one set of fused histogram data with a more prominent count peak and a high signal-to-noise ratio, improving ranging accuracy without affecting the ranging frame rate.
The invention also correspondingly provides a dtof fusion ranging method, as shown in fig. 6, which comprises the following steps:
S601, acquiring an RGB image acquired for a target ranging scene, identifying a key area in the RGB image, and acquiring histogram data acquired for the target ranging scene by a pixel array;
S602, determining and storing pixel position information corresponding to the key region in a pixel array;
S603, carrying out fusion processing on the histogram data of pixels belonging to the same key region according to the pixel position information of the key region to obtain fusion histogram data;
S604, calculating the distance information of the key region according to the fusion histogram data.
In this embodiment, fusion ranging does not use a gray-scale map collected by the pixel array. Instead, an RGB camera photographs the target ranging scene directly to collect an RGB map, while the pixel array collects the histogram data of the scene. A gray-scale map is a single-channel image, i.e. each pixel has one component, whereas each RGB pixel has three components, so compared with the gray-scale map the RGB map carries additional color information. Using the RGB map with its color information directly for key-region identification enables more accurate region identification and improves the positioning accuracy of the key regions.
The positions of the key regions in the RGB image are then mapped into the pixel array based on a position mapping relation between the RGB image and the pixel array, and the pixel position information corresponding to each key region in the pixel array is obtained and stored. For example, the position mapping relation between the RGB image and the pixel array can be determined and stored in advance from the intrinsic and extrinsic parameters of the RGB camera and the TOF sensor. Alternatively, the pixel position relation can be calibrated and fitted from multiple groups of calibration data of the RGB camera and the TOF sensor, yielding the position mapping relation between the RGB image and the pixel array. With this pre-obtained mapping relation, the position information of the key regions in the RGB image is mapped into the pixel array, and the corresponding pixel position information is obtained and stored, so that the pixel data to be fused for each key region is determined during fusion processing and accurate ranging of the key regions is achieved.
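As one illustration of the calibrate-and-fit approach described above, the following sketch fits a mapping from RGB-image coordinates to TOF pixel-array coordinates from matched calibration point pairs. The patent does not specify a mapping model; an affine map, the function names, and the calibration values here are assumptions for illustration.

```python
import numpy as np

def fit_affine_mapping(rgb_pts, tof_pts):
    """Fit an affine map (u, v) -> (x, y) from calibration point pairs.

    rgb_pts, tof_pts: (N, 2) sequences of matched coordinates collected
    during calibration (N >= 3). Returns a 2x3 affine matrix A such
    that [x, y]^T is approximately A @ [u, v, 1]^T.
    """
    rgb = np.asarray(rgb_pts, dtype=float)
    tof = np.asarray(tof_pts, dtype=float)
    ones = np.ones((rgb.shape[0], 1))
    M = np.hstack([rgb, ones])                    # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(M, tof, rcond=None)   # least-squares fit
    return A.T                                    # (2, 3)

def map_to_pixel_array(A, uv):
    """Map one RGB-image coordinate into the TOF pixel array."""
    u, v = uv
    return A @ np.array([u, v, 1.0])

# Hypothetical calibration pairs: the TOF array is half the RGB
# resolution with a small offset (true map: x = u/2 + 1, y = v/2 + 2).
rgb_pts = [(0, 0), (100, 0), (0, 100), (100, 100)]
tof_pts = [(1, 2), (51, 2), (1, 52), (51, 52)]
A = fit_affine_mapping(rgb_pts, tof_pts)
x, y = map_to_pixel_array(A, (40, 60))
```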
Similar to the previous embodiment, after the pixel position information of the key regions in the pixel array is determined, the histogram data of the pixels belonging to the same key region can be fused to obtain fused histogram data, and the distance information of the key region can be calculated. For details, refer to the fusion process and the distance calculation process of the previous embodiment, which are not repeated here.
It should be noted that the steps above do not necessarily follow a fixed sequence; those skilled in the art will understand that in different embodiments the steps may be performed in different orders, in parallel, or interchangeably.
The invention further provides a dtof fusion ranging device, as shown in fig. 7, which is a structural diagram of a dtof fusion ranging device in an embodiment of the invention. The device comprises a data acquisition module 701, a calibration storage module 702 and a fusion calculation module 703, connected in sequence. The data acquisition module 701 is configured to acquire the gray data collected by each pixel for a target ranging scene, obtain the gray-scale map corresponding to the pixel array, and identify the key regions in the gray-scale map. The calibration storage module 702 is configured to determine and store the pixel position information corresponding to the key regions in the pixel array. The fusion calculation module 703 is configured to fuse the histogram data of the pixels belonging to the same key region according to the pixel position information of that key region to obtain fused histogram data, and to calculate the distance information of the key region from the fused histogram data. Since the dtof fusion ranging process has been described in detail in the method embodiments above, reference may be made to the corresponding method embodiment; details are not repeated here.
The invention also provides a laser radar comprising the dtof fusion ranging device described above. Since the dtof fusion ranging process has been described in detail in the method embodiments above, reference may be made to the corresponding method embodiment; details are not repeated here.
In summary, the invention provides a dtof fusion ranging method and device and a laser radar. The method comprises: acquiring the gray data collected by each pixel for a target ranging scene to obtain a gray-scale map corresponding to the pixel array, and identifying the key regions in the gray-scale map; determining and storing the pixel position information corresponding to the key regions in the pixel array; fusing the histogram data of the pixels belonging to the same key region according to the pixel position information of that key region to obtain fused histogram data; and calculating the distance information of the key region from the fused histogram data. By collecting a gray-scale map of the target ranging scene, identifying the key regions in it, and performing depth calculation on histogram data fused within those regions, the influence of ambient noise on the ranging accuracy is reduced and the ranging accuracy is improved without affecting the ranging frame rate.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several equivalent substitutions and obvious modifications can be made without departing from the spirit of the invention, and the same should be considered to be within the scope of the invention.

Claims (10)

1. A dtof fusion ranging method, characterized by comprising the following steps:
Acquiring gray data acquired by each pixel for a target ranging scene, so as to obtain a gray map corresponding to a pixel array, and identifying a key region in the gray map;
determining and storing pixel position information corresponding to the key region in a pixel array;
According to the pixel position information of the key region, carrying out fusion processing on the histogram data of pixels belonging to the same key region to obtain fusion histogram data;
and calculating the distance information of the key region according to the fusion histogram data.
2. The dtof fusion ranging method according to claim 1, wherein the identifying the key region in the gray scale map specifically includes:
performing object recognition or image partitioning on the gray-scale map, and taking each object or each partition as the key region.
3. The dtof fusion ranging method according to claim 2, wherein the object recognition is performed on the gray-scale map by:
identifying pixels that are adjacent in position and have the same or similar gray values as the same object, according to the gray value of each pixel in the gray-scale map;
and the image partitioning is performed on the gray-scale map by:
dividing pixels with the same or similar gray values into the same partition, according to the gray value of each pixel in the gray-scale map.
4. The dtof fusion ranging method according to claim 1, wherein the acquiring gray-scale data acquired by each pixel for the target ranging scene, so as to obtain a gray-scale map corresponding to the pixel array, specifically includes:
acquiring histogram data collected by each pixel for a target ranging scene;
obtaining the gray data of each pixel from the histogram data;
and obtaining a gray-scale map corresponding to the pixel array.
5. The dtof fusion ranging method according to claim 1, wherein the acquiring gray-scale data acquired by each pixel for the target ranging scene, so as to obtain a gray-scale map corresponding to the pixel array, specifically includes:
receiving the laser reflected from a target object for the first time, and acquiring the gray data collected by each pixel for the target ranging scene;
obtaining a gray-scale map corresponding to the pixel array;
and receiving the laser reflected from the target object for the second time, and acquiring the histogram data collected by each pixel for the target ranging scene.
6. The dtof fusion ranging method according to claim 1, wherein the acquiring gray-scale data acquired by each pixel for the target ranging scene, so as to obtain a gray-scale map corresponding to the pixel array, specifically includes:
acquiring gray data acquired by each pixel of a first pixel array on a target ranging scene, and acquiring histogram data acquired by each pixel of a second pixel array on the target ranging scene, wherein the first pixel array and the second pixel array are two identical pixel arrays;
obtaining a gray scale image corresponding to the first pixel array;
The fusing processing is performed on the histogram data of the pixels belonging to the same key region according to the pixel position information of the key region, and specifically includes:
Associating the corresponding positions of the first pixel array and the second pixel array;
and according to the pixel position information of the key area of the second pixel array, carrying out fusion processing on the histogram data of the pixels belonging to the same key area of the second pixel array.
7. The dtof fusion ranging method as defined in claim 1, wherein the fusion process specifically includes:
superimposing, in one-to-one correspondence, the count values of the same time bin across the plurality of groups of histogram data.
8. A dtof fusion ranging method, characterized by comprising the following steps:
acquiring an RGB image acquired for a target ranging scene, identifying a key area in the RGB image, and acquiring histogram data acquired for the target ranging scene by a pixel array;
determining and storing pixel position information corresponding to the key region in a pixel array;
According to the pixel position information of the key region, carrying out fusion processing on the histogram data of pixels belonging to the same key region to obtain fusion histogram data;
and calculating the distance information of the key region according to the fusion histogram data.
9. A dtof fusion ranging device, characterized by comprising:
the data acquisition module is used for acquiring gray data acquired by each pixel for a target ranging scene, so as to obtain a gray image corresponding to the pixel array, and identifying a key area in the gray image;
The calibration storage module is used for determining and storing pixel position information corresponding to the key area in the pixel array;
The fusion calculation module is used for carrying out fusion processing on the histogram data of the pixels belonging to the same key region according to the pixel position information of the key region to obtain fusion histogram data; and calculating the distance information of the key region according to the fusion histogram data.
10. A lidar comprising the dtof fusion ranging device of claim 9.
CN202311751124.9A 2023-12-19 2023-12-19 Dtof fusion ranging method and device and laser radar Pending CN117935000A (en)

Publications (1)

Publication Number Publication Date
CN117935000A 2024-04-26

Family ID: 90767644



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination