WO2021114080A1 - Laser radar ranging method and apparatus, computer device and storage medium - Google Patents


Info

Publication number
WO2021114080A1
WO2021114080A1 (application PCT/CN2019/124280, CN2019124280W)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
crosstalk
pixels
echo image
edge
Prior art date
Application number
PCT/CN2019/124280
Other languages
English (en)
Chinese (zh)
Inventor
何一雄
Original Assignee
深圳市速腾聚创科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市速腾聚创科技有限公司 filed Critical 深圳市速腾聚创科技有限公司
Priority to CN201980050264.9A priority Critical patent/CN112639516B/zh
Priority to PCT/CN2019/124280 priority patent/WO2021114080A1/fr
Publication of WO2021114080A1 publication Critical patent/WO2021114080A1/fr

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to group G01S17/00
    • G01S7/483 - Details of pulse systems
    • G01S7/486 - Receivers
    • G01S7/487 - Extracting wanted echo signals, e.g. pulse detection
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • This application relates to a laser radar ranging method, device, computer equipment and storage medium.
  • Lidar can provide real-time and accurate three-dimensional scene information, has the inherent advantages of environmental perception, large ranging range and high accuracy, and is widely used in security monitoring, surveying and mapping exploration, traffic management, autonomous driving and other fields.
  • the lidar transmits a detection signal to the object to be measured, receives the echo signal reflected by the object to be measured, and then calculates the distance of the object to be measured based on the phase difference between the echo signal and the detection signal.
  • Flash lidar is an all-solid-state scanning lidar with no moving parts, which gives the system good stability and reliability.
  • the receiving end adopts a pixel array to receive the echo signal reflected by the object, and the distance of the object to be measured is calculated from the echo signal and the detection signal. As the detection distance increases, a single pixel at the receiving end may receive echo signals reflected by different objects, causing crosstalk between pixels and degrading the accuracy of lidar ranging.
  • a lidar ranging method, device, computer device, and storage medium that can improve the ranging accuracy of an object to be measured are provided.
  • a lidar ranging method includes: acquiring the echo image received by the lidar; judging whether there are crosstalk pixels in the echo image; when there are crosstalk pixels in the echo image, correcting the crosstalk pixels to obtain a corrected echo image; and
  • the posture information of the object is calculated according to the corrected echo image.
  • a laser radar ranging device includes:
  • the acquisition module is used to acquire the echo image received by the lidar
  • a judging module for judging whether there are crosstalk pixels in the echo image
  • the correction module is used to correct the crosstalk pixels when the echo image has crosstalk pixels to obtain a corrected echo image
  • the calculation module is used to calculate the posture information of the object according to the corrected echo image.
  • a computer device includes a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to execute the following steps: acquiring the echo image received by the lidar; judging whether there are crosstalk pixels in the echo image; when there are crosstalk pixels, correcting the crosstalk pixels to obtain a corrected echo image; and
  • the posture information of the object is calculated according to the corrected echo image.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions are provided.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps: acquiring the echo image received by the lidar; judging whether there are crosstalk pixels in the echo image; when there are crosstalk pixels, correcting the crosstalk pixels to obtain a corrected echo image; and
  • the posture information of the object is calculated according to the corrected echo image.
  • Fig. 1 is an application environment diagram of a lidar ranging method according to one or more embodiments.
  • FIG. 2 is a schematic diagram of the crosstalk phenomenon between pixels according to one or more embodiments.
  • Fig. 3 is a schematic flowchart of a lidar ranging method according to one or more embodiments.
  • Fig. 4 is a schematic flow chart of the step of judging whether there are crosstalk pixels in an echo image according to one or more embodiments.
  • FIG. 5 is a schematic flow chart of the step of acquiring the first edge pixel point of the depth image according to one or more embodiments.
  • FIG. 6 is a schematic flowchart of the step of judging whether the first edge pixel meets the preset crosstalk condition according to one or more embodiments.
  • Fig. 7 is a block diagram of a depth image of a 3×3 pixel array formed by a first edge pixel point and all adjacent pixels in one or more embodiments.
  • FIG. 8 is a block diagram of a depth image of a 3×3 pixel array formed by a first edge pixel point and all adjacent pixels in another embodiment.
  • FIG. 9 is a schematic flowchart of the step of judging whether there are crosstalk pixels in the echo image in another embodiment.
  • Fig. 10 is a schematic diagram of a detection signal emitted by a lidar according to one or more embodiments.
  • Fig. 11 is a schematic diagram of the working principle of lidar under multi-modulation frequency switching according to one or more embodiments.
  • Fig. 12 is a block diagram of a corrected echo image obtained after dividing a crosstalk pixel into a plurality of sub-pixels according to one or more embodiments.
  • FIG. 13 is a block diagram of a corrected echo image obtained by dividing the crosstalk pixel points into multiple sub-pixel points in another embodiment.
  • FIG. 14 is a schematic diagram of dividing the echo image from high to low according to the detection accuracy requirement according to one or more embodiments to obtain the divided image area.
  • Fig. 15 is a block diagram of a laser radar ranging device according to one or more embodiments.
  • Figure 16 is a block diagram of a computer device according to one or more embodiments.
  • the lidar ranging method provided in this application can be applied to the application environment as shown in FIG. 1.
  • the lidar 102 transmits a detection signal to the object to be measured, and receives the echo signal reflected by the object to be measured through the pixel array at the receiving end. After the lidar 102 processes the echo signal, an echo image is obtained.
  • the lidar 102 sends the echo image to the computer device 104.
  • the computer device 104 determines whether there are crosstalk pixels in the echo image. When there are crosstalk pixels in the echo image, the computer device 104 corrects the crosstalk pixels to obtain a corrected echo image.
  • the computer device 104 calculates the posture information of the object according to the corrected echo image.
  • the angular resolution depends on the field of view of the receiving lens and the number of pixels in the pixel array at the receiving end.
  • the field of view of the receiving lens of the flash lidar is 60°×45°
  • the number of pixels in the pixel array is 320×240
  • the angular resolution in the horizontal and vertical directions is both 0.1875°.
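As a quick check of the numbers above, the angular resolution follows from dividing the field of view by the pixel count along each axis (a minimal sketch):

```python
# Angular resolution of the flash lidar receiver described above:
# field of view divided by the number of pixels along each axis.
fov_h, fov_v = 60.0, 45.0   # receiving-lens field of view, degrees
pix_h, pix_v = 320, 240     # pixel array dimensions

res_h = fov_h / pix_h       # horizontal angular resolution, degrees
res_v = fov_v / pix_v       # vertical angular resolution, degrees
print(res_h, res_v)         # 0.1875 0.1875
```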
  • Inter-pixel crosstalk affects the accuracy of flash lidar in measuring the distance of object boundaries, increases the error rate of depth images, degrades the quality of point clouds, and greatly limits the application of flash lidar in areas such as autonomous driving, object recognition, and environmental modeling.
  • the lidar ranging method provided in this application is used to solve the above-mentioned inter-pixel crosstalk problem.
  • the depth camera also has the above-mentioned inter-pixel crosstalk problem.
  • the distance measurement method provided in this application can also be used to solve the problem of crosstalk between pixels of a depth camera.
  • a lidar ranging method is provided. Taking the method applied to the computer device in FIG. 1 as an example for description, the method includes the following steps:
  • Step 302 Obtain the echo image received by the lidar.
  • the lidar can be a flash lidar.
  • the transmitting array at the transmitting end of the lidar emits a detection signal to illuminate the field of view.
  • the detection signal is reflected by objects in the field of view and then returns to the echo signal.
  • the pixel array at the receiving end receives the echo signal, and the lidar processes the echo signal to obtain an echo image, which is sent to the computer device.
  • Each pixel in the receiving end of the lidar corresponds to a field of view of an angular range, and the echo signal reflected by the same object in the field of view can be received by multiple corresponding pixels.
  • the echo image can be an image with grayscale information and spatial information of the entire field of view acquired by the lidar in the detection process, including grayscale images and depth images.
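As described above, an echo image bundles per-pixel grayscale and depth information for the whole field of view. A minimal illustrative container (the field names are assumptions, not the patent's) might look like:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EchoImage:
    """Echo image: per-pixel grayscale (intensity) and depth (ranging)
    information over the full field of view. Illustrative only."""
    gray: np.ndarray   # (H, W) intensity values
    depth: np.ndarray  # (H, W) distances, metres

# A tiny 2x2 example frame.
echo = EchoImage(gray=np.full((2, 2), 128.0), depth=np.full((2, 2), 10.0))
```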
  • Step 304 Determine whether there are crosstalk pixels in the echo image.
  • One option is to obtain the first edge pixels of the depth image, determine whether each first edge pixel meets the preset crosstalk condition, and, if the condition is met, determine that the first edge pixel is a crosstalk pixel.
  • Another option is to vary the operating frequency of the lidar and compare the ranging values of all pixels under the different operating frequencies; pixels whose ranging values differ between frequencies are determined to be crosstalk pixels.
  • the computer equipment can use one way of judging crosstalk pixels alone, or it can use multiple ways of judging crosstalk pixels at the same time.
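The multi-frequency check can be sketched as follows, assuming per-frequency ranging images are available as 2-D arrays (the function and parameter names are illustrative, not the patent's):

```python
import numpy as np

def crosstalk_by_frequency(range_maps, tol=0.05):
    """Flag pixels whose ranging values disagree across operating
    frequencies by more than tol metres. range_maps is a list of 2-D
    arrays, one ranging image per modulation frequency."""
    stack = np.stack(range_maps)                  # (n_freq, H, W)
    spread = stack.max(axis=0) - stack.min(axis=0)
    return spread > tol                           # True marks a crosstalk pixel

# Two frequencies disagree only at pixel (1, 1).
r_f1 = np.array([[10.0, 10.0], [10.0, 25.0]])
r_f2 = np.array([[10.0, 10.0], [10.0, 17.3]])
mask = crosstalk_by_frequency([r_f1, r_f2])
```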
  • Step 306 Correct the crosstalk pixel points to obtain a corrected echo image.
  • the echo signal received by the crosstalk pixel includes echo signals reflected by at least two objects.
  • the distance measurement value calculated according to the echo signal received by the crosstalk pixel is wrong and cannot truly reflect the distance information of the detected object. Therefore, it is necessary to correct the pixel crosstalk points that are judged to have crosstalk problems to avoid obtaining false detection information.
  • One option is to delete the crosstalk pixel information.
  • Another option is to copy the information of a spare pixel adjacent to the crosstalk pixel into the crosstalk pixel.
  • A further option is to divide the crosstalk pixel into a plurality of sub-pixels and copy the information of the pixels adjacent to each sub-pixel into that sub-pixel.
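One of the options above, copying an adjacent spare pixel into the crosstalk pixel, can be sketched as follows; this is a minimal illustration under assumed names, not the patent's exact procedure:

```python
import numpy as np

def correct_crosstalk(depth, crosstalk_mask):
    """Correct flagged pixels by copying the value of the nearest
    unflagged horizontal neighbour. Real implementations may pick any
    adjacent spare pixel, delete the pixel, or subdivide it instead."""
    corrected = depth.copy()
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            if crosstalk_mask[i, j]:
                for dj in (-1, 1):          # look left, then right
                    nj = j + dj
                    if 0 <= nj < w and not crosstalk_mask[i, nj]:
                        corrected[i, j] = depth[i, nj]
                        break
    return corrected

depth = np.array([[5.0, 9.0, 12.0], [5.0, 5.0, 12.0]])
mask = np.array([[False, True, False], [False, False, False]])
fixed = correct_crosstalk(depth, mask)
```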
  • Step 308 Calculate the posture information of the object according to the corrected echo image.
  • the corrected echo image has corrected or partially corrected the error information of the crosstalk pixels, which improves the accuracy of the echo image. Taking the highly accurate echo image as the calculation data source, the posture information of the object obtained after the calculation is also more accurate.
  • The ITOF type uses pulse-integration ranging: it periodically and continuously emits wide-pulse-width detection signals, collects echo signals in different integration time windows, and calculates the flight time through proportional relations to obtain distance information.
  • The DTOF type periodically and continuously transmits a narrow-pulse-width, high-peak-power detection signal, detects the echo signal to obtain the flight time, and then calculates the distance information.
  • Another option is coherent detection: the echo signal and a local reference signal beat, or are coherently superimposed, at the receiving end under the condition of wavefront matching, and the distance information is calculated from the received coherent signal.
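For the DTOF case, the distance follows directly from the round-trip flight time of the pulse, d = c·t/2 (a minimal sketch):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def dtof_distance(flight_time_s):
    """Distance from the measured round-trip flight time of a DTOF pulse."""
    return C * flight_time_s / 2.0

d = dtof_distance(66.7e-9)  # a ~66.7 ns round trip corresponds to roughly 10 m
```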
  • Step 310 Keep the echo image unchanged.
  • If it is determined in step 304 that there is no crosstalk pixel, the echo signals received by the pixels at the receiving end are all accurate, and no further correction is required.
  • Step 312 Calculate the pose information of the object according to the echo image.
  • The posture information of the object is obtained after calculation.
  • For the method of calculating the pose information of an object, refer to step 308.
  • the computer device obtains the echo image received by the lidar and then determines whether there are crosstalk pixels in the echo image, identifying the pixels that suffer from inter-pixel crosstalk. A crosstalk pixel receives echo signals reflected by different objects,
  • so the echo signal received by the crosstalk pixel cannot accurately correspond to the distance information of the measured object, which has a large impact on the accuracy of lidar ranging.
  • the crosstalk pixels are then corrected according to the judgment result; correcting, or partially correcting, the error information of the crosstalk pixels effectively reduces the effect of pixel crosstalk on the accuracy of the echo image, yielding an accurate echo image.
  • the object pose information is calculated according to the corrected echo image. Since there are no longer crosstalk pixels in the corrected echo image, the information used for calculation is accurate, the accuracy of the calculated object pose information is improved, and the accuracy and reliability of lidar detection are improved.
  • the echo image includes a depth image, as shown in FIG. 4, step 304: the step of determining whether there are crosstalk pixels in the echo image specifically includes:
  • Step 402 Obtain the first edge pixel of the depth image.
  • There are many ways to obtain the edge pixels of the depth image.
  • One option is to extract the edge features of the object from the grayscale image to obtain the second edge pixels of the object, and then, according to the correspondence between the grayscale image and the depth image, map the second edge pixels of the grayscale image to the depth image; the corresponding pixels in the depth image are the first edge pixels.
  • Another option is to apply an edge detection operator directly to the depth image to obtain the first edge pixels of the depth image.
  • the depth image is not affected by the illumination direction of the detection signal at the transmitter and the surface reflection characteristics of the object to be measured, and there is no shadow, which can more accurately express the three-dimensional information of the object.
  • the detection of edge pixels is also more accurate.
  • Pixel crosstalk often occurs at the edges of at least two objects at different distances that are staggered.
  • the probability of detecting pixel crosstalk in the first edge pixels obtained is high, and obtaining the first edge pixels directly from the depth image is straightforward and simple, reducing the difficulty of calculation and improving its speed.
  • the echo image includes pixels that normally receive echo signals and pixels that have crosstalk. Because pixel crosstalk often occurs on the edges of multiple objects, judging edge pixels can simplify processing steps, reduce the amount of calculation, and improve calculating speed.
  • Step 404 Determine whether the first edge pixel meets a preset crosstalk condition.
  • the pixel crosstalk is judged according to the first edge pixel point obtained in the previous step. Because not all the first edge pixels in the depth image have the problem of pixel crosstalk, when the first edge pixel only detects the edge of an object, there is no pixel crosstalk problem. As shown in Figure 2, pixel point 1 detects the edge of object A, but there is no pixel crosstalk at this point. The first edge pixel points obtained need to be judged one by one to confirm whether they are pixel crosstalk points.
  • the step of judging whether the first edge pixel meets the preset crosstalk condition further includes: obtaining the edge ranging value and the gradient direction of the first edge pixel; obtaining the first average ranging value of all surrounding pixels on the same side of the first edge pixel as the gradient direction; obtaining the second average ranging value of all surrounding pixels on the side of the first edge pixel opposite to the gradient direction; and,
  • when the absolute value of the difference between the edge ranging value and the first average ranging value is greater than the preset distance threshold, and the absolute value of the difference between the edge ranging value and the second average ranging value is greater than the preset distance threshold, determining that the first edge pixel meets the preset crosstalk condition.
  • the surrounding pixels include pixels adjacent to the first edge pixel.
  • Compare the first average ranging value of the surrounding pixels on the same side as the gradient direction with the edge ranging value of the first edge pixel.
  • If the two values are similar, the edge ranging value is consistent with the first average ranging value, indicating that
  • the first edge pixel and the surrounding pixels on the same side of the gradient direction detect the same object. Therefore, no pixel crosstalk occurs at the first edge pixel, and its edge ranging value is correct and reliable.
  • Likewise, compare the second average ranging value of the surrounding pixels on the side opposite to the gradient direction with the edge ranging value of the first edge pixel. If the two values are similar, the first edge pixel and the surrounding pixels on the opposite side of the gradient direction detect the same object, and no pixel crosstalk occurs at the first edge pixel.
  • Otherwise, when the edge ranging value differs from both average ranging values, the first edge pixel point is a crosstalk pixel point.
  • Step 406 When the first edge pixel meets the preset crosstalk condition, the first edge pixel is determined as the crosstalk pixel of the echo image.
  • the first edge pixels obtained are judged one by one; when the edge ranging value of a first edge pixel differs greatly from both the first and second average ranging values, that is, by more than the preset distance threshold,
  • the first edge pixel is a crosstalk pixel.
  • Step 408 When the first edge pixel does not meet the preset crosstalk condition, the first edge pixel is not a crosstalk pixel of the echo image.
  • the computer device obtains the first edge pixel of the depth image, and determines whether the first edge pixel meets the preset crosstalk condition one by one. If the preset crosstalk condition is satisfied, the first edge pixel is confirmed to be a crosstalk pixel point.
  • Since the depth image is not affected by the radiation direction of the detection signal at the transmitter or by the surface reflection characteristics of the object to be measured, and contains no shadows, it expresses the three-dimensional information of the object more accurately, and the detection of edge pixels is also more accurate. Pixel crosstalk often occurs at the staggered edges of at least two objects at different distances, so the probability of detecting pixel crosstalk in the acquired first edge pixels is high, which improves the processing efficiency of the computer device. Moreover, obtaining the first edge pixels directly from the depth image is direct and simple, reducing the difficulty of calculation and increasing its speed.
  • In step 402, the first edge pixels of the depth image can be obtained according to the correspondence between the grayscale image and the depth image: first obtain the second edge pixels from the grayscale image, then map them to the depth image to find the corresponding pixels.
  • This step specifically includes:
  • Step 502 Obtain second edge pixels of the grayscale image.
  • The computer device identifies pixels with obvious brightness changes in the grayscale image. Significant changes in image attributes usually reflect important events and changes of properties, and can include (a) discontinuities in depth, (b) discontinuities in surface orientation, (c) changes in material properties and (d) changes in scene lighting.
  • Operators for edge detection can include Sobel operators, Laplacian operators, Canny operators, and so on. Since the edge characteristics of the depth image and the gray image are consistent, the computer device can find the corresponding pixel in the depth image of the echo image according to the second edge pixel, and then use the corresponding pixel as the first edge pixel of the depth image .
  • Edge detection using the Sobel operator is taken as an example for description.
  • the Sobel operator contains two 3×3 matrices, T1 and T2, corresponding to the horizontal and vertical directions of the grayscale image, respectively.
  • the Sobel operator and the gray-scale image are convolved in a plane to obtain the approximate value of the horizontal and vertical brightness difference in the gray-scale image.
  • the two matrices of the Sobel operator are (in their standard form): T1 = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] and T2 = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]].
  • GX and GY represent the horizontal and vertical gradient approximations of the grayscale image, respectively; with A denoting the grayscale image, GX = T1 ∗ A and GY = T2 ∗ A, where ∗ denotes two-dimensional convolution.
  • the gradient amplitude of a single pixel in the grayscale image can be calculated as G = √(GX² + GY²).
  • the computer device compares the gradient magnitude of each pixel in the grayscale image with a preset gradient threshold; when the gradient magnitude of a certain pixel is greater than the preset gradient threshold, the pixel is the second edge pixel.
  • the second edge pixel is denoted P0(X0, Y0), and the gradient direction of the second edge pixel is θ = arctan(GY / GX).
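The Sobel steps above (convolution with T1 and T2, magnitude thresholding, gradient direction) can be sketched as follows; the standard Sobel kernels are assumed, since the patent's matrices appear only as figures:

```python
import numpy as np

# Standard Sobel kernels (assumed; T1 horizontal, T2 vertical).
T1 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
T2 = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def sobel_edges(gray, grad_threshold):
    """Return gradient magnitude, gradient direction, and an edge mask
    for a 2-D grayscale image, using valid-region convolution."""
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(T1 * patch)   # horizontal difference
            gy[i, j] = np.sum(T2 * patch)   # vertical difference
    magnitude = np.sqrt(gx ** 2 + gy ** 2)  # G = sqrt(GX^2 + GY^2)
    direction = np.arctan2(gy, gx)          # gradient direction per pixel
    edges = magnitude > grad_threshold      # second edge pixels
    return magnitude, direction, edges

# A vertical step edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
mag, ang, edges = sobel_edges(img, grad_threshold=50.0)
```

The edge mask is true exactly along the brightness step, and the gradient direction there is 0 (pointing along +x, from dark to bright).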
  • Grayscale images include depth information and surface information, contain more discontinuity information, and therefore miss fewer edge pixels. At the same time, the operators for edge detection of grayscale images are well verified, and the calculation results are reliable.
  • Step 504 Find a pixel point corresponding to the second edge pixel point in the depth image.
  • the echo image obtained after lidar detection contains grayscale image and depth image, so the pixels of the grayscale image and the pixels of the depth image have a corresponding relationship. There may be multiple methods for obtaining the corresponding relationship between the grayscale image and the depth image.
  • One option is to extract the feature points of the object in the grayscale image and the depth image, such as the straight line or corner of the roadside, the end point of the lamppost, etc., and align the feature points of the corresponding object in the grayscale image and the depth image , Get the mapping of gray image and depth image. This method of determining the corresponding relationship only needs to match the feature points, and the calculation amount is small, and the calculation speed is high.
  • Another option is to extract features of the object in the grayscale image and the depth image, such as the border of the object, and align all the pixels within the border of the corresponding object in both images to obtain the mapping between the grayscale image and the depth image. This method matches all the pixels within the object border, and its calculation accuracy is high.
  • A further option: each pixel of the grayscale image and the depth image contains time information; by aligning the pixels with the same time information, the mapping between the grayscale image and the depth image can be obtained.
  • This method of determining the correspondence relationship makes full use of the time correspondence relationship between the images obtained by the lidar, utilizes the existing data, simplifies the calculation process, and has a high calculation accuracy.
  • the second edge pixel in the gray image is mapped to the depth image to obtain the corresponding pixel.
  • Step 506 Determine the corresponding pixel as the first edge pixel of the depth image.
  • the corresponding pixel point obtained by the second edge pixel point mapping can be determined as the first edge pixel point of the depth image.
  • the computer device extracts the edge features of the object from the grayscale image to obtain the second edge pixels, and then maps the second edge pixels of the grayscale image to the depth image according to the correspondence between the two images; the corresponding pixels in the depth image are the first edge pixels.
  • Since grayscale images include depth and surface information, such as image attributes, object depth, object surface and scene lighting, the discontinuity information they contain is more comprehensive, which reduces the missed-detection rate of edge pixels.
  • the operator for edge detection of grayscale images has been verified, and the reliability of the calculation results is good, which improves the accuracy of edge pixel detection.
  • step 404 the step of judging whether the first edge pixel meets the preset crosstalk condition specifically includes:
  • Step 602 Obtain the edge ranging value and the gradient direction of the first edge pixel.
  • the gradient direction of the second edge pixel can be calculated by formula (5); since the second edge pixel corresponds to the first edge pixel, this gradient direction is also the gradient direction of the first edge pixel.
  • For the specific method, refer to the method of obtaining the gradient direction of the second edge pixel from the grayscale image.
  • the ranging value is part of the object pose information, so the method of obtaining the edge ranging value of the first edge pixel can follow any of the methods of calculating object pose information in the foregoing embodiment, which are not repeated here.
  • Step 604 Obtain a first average ranging value of all surrounding pixels on the same side of the first edge pixel and the gradient direction.
  • Step 606 Obtain a second average ranging value of all surrounding pixels on the opposite side of the first edge pixel to the gradient direction.
  • the average ranging value of all surrounding pixels on the same side as the gradient direction is the first average ranging value.
  • the distance measurement value of each surrounding pixel is obtained, and the first average distance measurement value is obtained by averaging the distance measurement values of all surrounding pixels.
  • the surrounding pixels include pixels adjacent to the first edge pixel.
  • the average ranging value of all surrounding pixels on the side opposite to the gradient direction is the second average ranging value, and the calculation method can be the same as the first average ranging value.
  • Step 608 When the absolute value of the difference between the edge ranging value and the first average ranging value is greater than the preset distance threshold, and when the absolute value of the difference between the edge ranging value and the second average ranging value is greater than the preset distance threshold When, the first edge pixel meets the preset crosstalk condition.
  • Compare the first average ranging value of the surrounding pixels on the same side as the gradient direction with the edge ranging value of the first edge pixel.
  • If the two values are similar, the edge ranging value is consistent with the first average ranging value, indicating that
  • the first edge pixel and the surrounding pixels on the same side of the gradient direction detect the same object; therefore, no pixel crosstalk occurs at the first edge pixel, and its edge ranging value is correct and reliable.
  • Likewise, compare the second average ranging value of the surrounding pixels on the side opposite to the gradient direction with the edge ranging value of the first edge pixel. If the two values are similar, the first edge pixel and the surrounding pixels on the opposite side detect the same object, and no pixel crosstalk occurs at the first edge pixel.
  • the determination of the preset distance threshold is related to the ranging accuracy of the lidar: the higher the ranging accuracy, the smaller the preset distance threshold. It is also related to the ranging distance of the lidar: the longer the ranging distance, the larger the preset distance threshold.
  • the surrounding pixels are pixels adjacent to the first edge pixel. If the first edge pixel has no pixel crosstalk, the surrounding pixels on the same side and/or the opposite side of the gradient direction detect the same object as the first edge pixel, that is, the ranging values obtained should be equal. Taking error and loss into account, the ranging values of the surrounding pixels on the same side and/or the opposite side of the gradient direction are averaged.
  • When the edge ranging value differs significantly from both the first and second average ranging values, for example by more than the preset distance threshold, it indicates that the objects detected by the surrounding pixels on the same side and on the opposite side of the gradient direction
  • all differ from that detected at the first edge pixel, and the first edge pixel is a crosstalk pixel.
  • the information continuity of the depth image changes across the dividing line, so the pixels on the dividing line are all edge pixels, and they can be further judged as to whether they meet the preset crosstalk condition.
  • the specific judgment method can refer to the aforementioned method. If the first edge pixel meets the preset crosstalk condition, it is more likely that the pixel points on the dividing line also meet the preset crosstalk condition.
  • a depth image of a 3×3 pixel array formed by the first edge pixel point and all adjacent pixels is taken as an example for description.
  • point P is the first edge pixel
  • the arrow direction on the P point is the gradient direction of the first edge pixel.
  • All surrounding pixels of point P are sequentially identified as 1, 2, 3, 4, 6, 7, 8, 9.
  • the dividing line passing through the P point is set perpendicular to the gradient direction of the P point, and the dividing line passes through 3 points, P points and 7 points, and the information continuity of the depth image on both sides of the dividing line has changed.
  • the surrounding pixels on the same side as the gradient direction of point P are 1, 2, and 4, and the surrounding pixels on the opposite side of the gradient direction of point P are 6, 8, and 9.
  • the computer equipment calculates the distance measurement values of pixels 1, 2, and 4 as R1, R2, and R4 respectively, and averages them to obtain the first average distance measurement value (R1 + R2 + R4)/3.
  • the computer equipment calculates the distance measurement values of pixels 6, 8, and 9 as R6, R8, and R9 respectively, and averages them to obtain the second average distance measurement value (R6 + R8 + R9)/3. At the same time, the computer equipment calculates the distance measurement value of point P as RP.
  • Let D1 = |RP - (R1 + R2 + R4)/3| and D2 = |RP - (R6 + R8 + R9)/3|, where e is the preset distance threshold.
  • If D1 ≤ e, point P is considered to be on the same object surface as pixels 1, 2, and 4, and there is no pixel crosstalk.
  • If D2 ≤ e, point P is considered to be on the same object surface as pixels 6, 8, and 9, and there is no pixel crosstalk.
  • If D1 > e and D2 > e, point P meets the preset crosstalk condition: it is on the same object surface neither as pixels 1, 2, and 4 nor as pixels 6, 8, and 9, and pixel crosstalk exists at point P.
  • the dividing line also passes through the pixel points 3 and 7.
  • the pixel points 3 and 7 may also have the problem of pixel crosstalk. Therefore, the above method is adopted to determine whether the pixel points 3 and 7 meet the preset crosstalk conditions.
  • a depth image of a 3 ⁇ 3 pixel array formed by the first edge pixel point and all adjacent pixels is taken as an example for description.
  • point P is the first edge pixel
  • the arrow direction on the P point is the gradient direction of the first edge pixel.
  • All surrounding pixels of point P are sequentially identified as 1, 2, 3, 4, 6, 7, 8, 9.
  • the dividing line passing through the P point is set perpendicular to the gradient direction of the P point.
  • the dividing line passes through the 2 points, P points and 8 points, and the information continuity of the depth image on both sides of the dividing line has changed.
  • the surrounding pixels on the same side as the gradient direction of point P are 1, 4, and 7, and the surrounding pixels on the opposite side of the gradient direction of point P are 3, 6, and 9.
  • the computer equipment calculates the distance measurement values of pixels 1, 4, and 7 as R1, R4, and R7 respectively, and averages them to obtain the first average distance measurement value (R1 + R4 + R7)/3.
  • the computer equipment calculates the distance measurement values of pixels 3, 6, and 9 as R3, R6, and R9 respectively, and averages them to obtain the second average distance measurement value (R3 + R6 + R9)/3. At the same time, the computer equipment calculates the distance measurement value of point P as RP.
  • Let D1 = |RP - (R1 + R4 + R7)/3| and D2 = |RP - (R3 + R6 + R9)/3|, where e is the preset distance threshold.
  • If D1 ≤ e, point P is considered to be on the same object surface as pixels 1, 4, and 7, and there is no pixel crosstalk.
  • If D2 ≤ e, point P is considered to be on the same object surface as pixels 3, 6, and 9, and there is no pixel crosstalk.
  • If D1 > e and D2 > e, point P meets the preset crosstalk condition: it is on the same object surface neither as pixels 1, 4, and 7 nor as pixels 3, 6, and 9, and pixel crosstalk exists at point P.
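The preset crosstalk condition walked through in the two examples above can be sketched in a few lines. This is an illustrative sketch only; the function name, argument layout, and sample ranging values are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the preset crosstalk condition: the edge pixel's
# ranging value must differ from the mean ranging value of the surrounding
# pixels on BOTH sides of its gradient direction by more than the preset
# distance threshold e.

def is_crosstalk_pixel(r_p, same_side, opposite_side, e):
    """r_p: edge ranging value of point P; same_side / opposite_side:
    ranging values of the surrounding pixels on each side of the gradient
    direction; e: preset distance threshold."""
    first_avg = sum(same_side) / len(same_side)           # first average ranging value
    second_avg = sum(opposite_side) / len(opposite_side)  # second average ranging value
    d1 = abs(r_p - first_avg)
    d2 = abs(r_p - second_avg)
    return d1 > e and d2 > e

# P straddles two surfaces at ~2 m and ~5 m and reads 3.5 m: crosstalk.
print(is_crosstalk_pixel(3.5, [2.0, 2.1, 1.9], [5.0, 5.1, 4.9], 0.5))   # True
# P sits on a single ~2 m surface: no crosstalk.
print(is_crosstalk_pixel(2.05, [2.0, 2.1, 1.9], [2.0, 2.0, 2.1], 0.5))  # False
```

Raising or lowering `e` trades false positives against missed crosstalk pixels, which matches the text's remark that the preset distance threshold tunes the judgment accuracy.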
  • In this embodiment, the computer device obtains the edge ranging value and the gradient direction of the first edge pixel, and sets the dividing line through point P perpendicular to the gradient direction of point P; the information continuity of the depth image changes across this dividing line.
  • the pixels that the dividing line crosses are more likely to be located at the junction of two objects, and the possibility of pixel crosstalk is high.
  • The computer device obtains the first average ranging value of all surrounding pixels on the same side of the gradient direction of the first edge pixel and the second average ranging value of all surrounding pixels on the opposite side, and then computes the difference between the edge ranging value and each of the two averages.
  • The surrounding pixels are the pixels adjacent to the first edge pixel. If the first edge pixel has no pixel crosstalk, the surrounding pixels on the same side and/or opposite side of its gradient direction detect the same object as the first edge pixel, so the obtained distance measurement values should be similar.
  • When the edge ranging value differs from both the first and the second average ranging value by more than the preset distance threshold, the objects detected by the surrounding pixels on both sides of the gradient direction differ from the object detected by the first edge pixel, and the first edge pixel is a crosstalk pixel.
  • The first edge pixels that may have pixel crosstalk are judged one by one, which improves the recognition accuracy of crosstalk pixels; the judgment accuracy can also be tuned by adjusting the preset distance threshold to meet the accuracy requirements of different lidar ranging systems, so the method has good versatility.
  • In one embodiment, step 204, determining whether there are crosstalk pixels in the echo image, specifically includes the following steps:
  • Step 902 When the operating frequency of the lidar is the first modulation frequency, obtain the first ranging value of each pixel in the echo image.
  • Step 904 When the operating frequency of the lidar is the second modulation frequency, obtain the second ranging value of each pixel in the echo image.
  • step 906 the first distance measurement value and the second distance measurement value of each pixel are calculated to obtain a distance measurement difference value.
  • step 908 when the distance measurement difference is not zero, the pixel points whose distance measurement difference is not zero are the crosstalk pixels of the echo image.
  • Step 910 When the distance measurement difference is zero, the pixel with the distance measurement difference of zero is not a crosstalk pixel of the echo image.
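Steps 902 to 910 above can be sketched as follows. The function name, the small tolerance standing in for an exact zero comparison (real measurements are noisy), and the sample values are assumptions for illustration.

```python
# Hypothetical sketch of steps 902-910: a pixel whose ranging value changes
# between the two modulation frequencies is a crosstalk pixel.

def find_crosstalk_pixels(d_f1, d_f2, tol=1e-6):
    """d_f1 / d_f2: per-pixel ranging values measured while the lidar works
    at the first / second modulation frequency. Returns the indices whose
    ranging difference is non-zero (beyond a small tolerance)."""
    return [i for i, (a, b) in enumerate(zip(d_f1, d_f2)) if abs(a - b) > tol]

d_f1 = [4.00, 4.00, 7.50, 4.00]   # ranging values at f1
d_f2 = [4.00, 4.00, 6.80, 4.00]   # pixel 2 shifts with frequency: crosstalk
print(find_crosstalk_pixels(d_f1, d_f2))  # [2]
```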
  • the detection signal is modulated by the carrier wave and then emitted, and the distance information is obtained by calculating the phase difference between the echo signal and the detection signal.
  • the calculation method can be as follows:
  • the detection signal emitted by the lidar is expressed as:
  • the echo signal received by the lidar is expressed as:
  • a1 represents the offset of the detection signal
  • a2 represents the modulation amplitude of the detection signal
  • f represents the modulation frequency of the detection signal
  • A represents the amplitude of the echo signal
  • B represents the offset of the echo signal caused by background illumination
  • the background illumination can be illumination from sources other than the transmitter itself
  • τ represents the flight time, that is, the time difference between the detection signal and the echo signal
  • the correlation function between the power of the detection signal and the echo signal is expressed as:
  • the calculation formula for the distance measurement value of the detected object is as follows:
  • d represents the distance measurement value of the object
  • c represents the speed of light
  • In the single-echo case, the echo signal can be expressed by formula (9); according to formulas (10)-(15), the calculation formula of the ranging value is obtained as shown below:
  • c represents the speed of light
  • τ represents the flight time
  • the ranging value d is only related to the flight time ⁇ , and the ranging value has nothing to do with the modulation frequency of the echo signal.
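The formulas (9) to (18) referenced in this section are not reproduced in this text. For orientation only, a standard four-sample continuous-wave demodulation, consistent with the correlation samples C0 to C3 mentioned later, would read as below; this reconstruction is an assumption, not the patent's own formula:

```latex
% Standard four-phase AMCW demodulation (assumed form).
% C_0..C_3 are correlation samples at phase offsets 0, 90, 180, 270 degrees,
% f is the modulation frequency, c the speed of light, \tau the flight time.
\varphi = \arctan\frac{C_3 - C_1}{C_0 - C_2}, \qquad
\tau = \frac{\varphi}{2\pi f}, \qquad
d = \frac{c\,\tau}{2} = \frac{c\,\varphi}{4\pi f}
```

Under this form, a single echo gives a phase that scales linearly with f, so d depends only on τ, matching the statement above.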
  • When pixel crosstalk occurs, the pixel receives echo signals reflected by at least two objects; this is a double-echo or multi-echo situation. Take a pixel receiving two echo signals as an example.
  • formula (9) is then amended to:
  • A1 represents the amplitude of the first echo signal
  • τ1 represents the time difference between the first echo signal and the detection signal
  • B1 represents the offset of the first echo signal
  • A2 represents the amplitude of the second echo signal
  • τ2 represents the time difference between the second echo signal and the detection signal
  • B2 represents the offset of the second echo signal
  • the ranging value is related to the modulation frequency.
  • Two modulation frequencies are arbitrarily selected within the modulation frequency band of the lidar, namely the first modulation frequency f1 and the second modulation frequency f2.
  • the operating frequency of the lidar is the first modulation frequency
  • the first ranging value of each pixel in the echo image is obtained.
  • the operating frequency of the lidar is the second modulation frequency
  • the second ranging value of each pixel in the echo image is obtained.
  • When the lidar is working, it alternately switches between the two modulation frequencies: after the first modulation frequency f1 works for several cycles, it switches to the second modulation frequency f2 and works for several cycles.
  • 3 cycles are modulated at the first modulation frequency f1, and 3 cycles are modulated at the second modulation frequency f2 after switching.
  • While the first modulation frequency f1 modulates the detection signal, the first ranging value d1 is calculated according to formula (18) from the correlation samples C0-C3.
  • the first ranging value of each pixel can be calculated by the above method.
  • While the second modulation frequency f2 modulates the detection signal, the second ranging value d2 is calculated according to formula (18) from the correlation samples C0'-C3'. In the same way, during the working period of the second modulation frequency f2, the second ranging value of each pixel can be calculated.
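Assuming formula (18) resembles the standard four-phase demodulation (an assumption; the patent's formula is not reproduced in this text), the ranging value under one modulation frequency could be computed from the correlation samples like this; the function and variable names are illustrative.

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def range_from_samples(c0, c1, c2, c3, f_mod):
    """Hypothetical stand-in for formula (18): c0..c3 are the correlation
    samples (C0-C3 at f1, C0'-C3' at f2), f_mod the modulation frequency."""
    phi = math.atan2(c3 - c1, c0 - c2) % (2 * math.pi)  # phase difference
    tau = phi / (2 * math.pi * f_mod)                   # flight time
    return C_LIGHT * tau / 2                            # ranging value d

# A quarter-cycle phase shift at 10 MHz corresponds to roughly 3.75 m
# (the unambiguous range c / (2 f) is about 15 m).
d1 = range_from_samples(0.0, -1.0, 0.0, 1.0, 10e6)
```

Running the same samples through both f1 and f2 and comparing the two results is exactly the crosstalk test of steps 902-910.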
  • In the case of a single echo, the ranging value d is only related to the flight time τ and has nothing to do with the modulation frequency of the echo signal; in the case of double echo (or multiple echo), the ranging value d is related to the modulation frequency. Therefore, by switching between different modulation frequencies during the operation of the lidar, pixels with pixel crosstalk detect the same object with different ranging values, while pixels without pixel crosstalk detect it with the same ranging value.
  • the difference between the first ranging value and the second ranging value of each pixel point is obtained to obtain the ranging difference.
  • When the distance measurement difference is zero, the first and second ranging values are equal, the ranging value is not affected by the modulation frequency, and the pixel is not a crosstalk pixel of the echo image. When the distance measurement difference is not zero, the first and second ranging values are not equal, the ranging value changes with the modulation frequency, and the pixel is a crosstalk pixel of the echo image.
  • the distance measurement difference of all pixels is judged one by one, and all the crosstalk pixels in the echo image are obtained.
  • In this embodiment, the lidar works with at least two different modulation frequencies. The computer device obtains the first ranging value of each pixel while the first modulation frequency is in use and the second ranging value of each pixel while the second modulation frequency is in use, and then computes the difference between the two ranging values of each pixel.
  • When pixel crosstalk occurs at a pixel, it receives echo signals reflected by at least two objects, which is equivalent to the pixel receiving multiple echoes. In that case the ranging value is related to the modulation frequency; this property is used to determine, pixel by pixel, whether a pixel is a crosstalk pixel.
  • If the ranging difference is zero, the first and second ranging values are equal, the ranging value is not affected by the modulation frequency, and the pixel is not a crosstalk pixel in the echo image; if the ranging difference is not zero, the first and second ranging values are not equal, the ranging value changes with the modulation frequency, and the pixel is a crosstalk pixel in the echo image.
  • This method determines whether there are pixel crosstalk points in the echo image: by adjusting the modulation circuit at the lidar transmitting end, the working modulation frequency can be switched as needed, and the ranging values used for the judgment can be obtained during the normal operation of the lidar.
  • In one embodiment, step 206, correcting the crosstalk pixels to obtain the corrected echo image, specifically includes: selecting a pixel adjacent to the crosstalk pixel as a spare pixel; and copying the information of the spare pixel into the crosstalk pixel to obtain the corrected echo image.
  • Among the pixels adjacent to the crosstalk pixel, the computer equipment selects a pixel without pixel crosstalk as the spare pixel, then copies the information of the spare pixel into the crosstalk pixel to replace the information of the crosstalk pixel and obtain the corrected echo image.
  • point P is a crosstalk pixel
  • pixel points 3 and 7 are located on the dividing line perpendicular to the gradient direction of point P and, as mentioned above, are more likely to be crosstalk pixels themselves; therefore, one of the pixels 1, 2, 4 or 6, 8, 9 adjacent to point P is selected as the spare pixel. If pixel 1 is selected as the spare pixel, the information detected by pixel 1 is copied into point P, replacing the original information of point P.
  • In this embodiment, the computer device selects a pixel adjacent to the crosstalk pixel as a spare pixel and copies the information of this crosstalk-free spare pixel into the crosstalk pixel to correct it; correcting all crosstalk pixels in this way yields the corrected echo image.
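The spare-pixel correction can be sketched as follows. The 2-D list layout, function name, and fixed neighbour-search order are assumptions for illustration; the patent only requires that the chosen spare pixel be adjacent and crosstalk-free.

```python
# Hypothetical sketch: copy the information of an adjacent crosstalk-free
# pixel (the spare pixel) into the crosstalk pixel.

def correct_by_spare_pixel(depth, crosstalk, is_crosstalk):
    """depth: 2-D list of ranging values; crosstalk: (row, col) of the
    crosstalk pixel; is_crosstalk(r, c) -> bool. Copies the value of the
    first adjacent crosstalk-free pixel in place and returns its position."""
    r, c = crosstalk
    rows, cols = len(depth), len(depth[0])
    for dr, dc in ((-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and not is_crosstalk(nr, nc):
            depth[r][c] = depth[nr][nc]  # replace the erroneous information
            return (nr, nc)
    return None  # no usable spare pixel found

depth = [[2.0, 2.0, 5.0],
         [2.0, 3.5, 5.0],   # centre pixel (1, 1) is the crosstalk pixel
         [2.0, 2.0, 5.0]]
correct_by_spare_pixel(depth, (1, 1), lambda r, c: (r, c) == (1, 1))
print(depth[1][1])  # 2.0, copied from the first crosstalk-free neighbour
```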
  • Crosstalk pixels are often located at the junction of two or more objects in the echo image.
  • the erroneous information in the crosstalk pixel is replaced with the information of an adjacent spare pixel. Because the spare pixel has no pixel crosstalk and is adjacent to the crosstalk pixel, the object it detects is one of the multiple objects detected by the crosstalk pixel. Selecting an adjacent crosstalk-free pixel as the spare pixel is therefore equivalent to extending the boundary of the object detected by the spare pixel by one pixel while pulling the boundaries of the remaining objects back by one pixel.
  • The operation is simple, and the corrected echo image neither affects the normal detection of objects nor reduces the accuracy of object detection.
  • this correction method can reduce the error rate of echo images without reducing the number of pixels, thereby improving the calculation accuracy of object pose information and improving the accuracy of lidar range finding.
  • In one embodiment, step 206, correcting the crosstalk pixels to obtain the corrected echo image, specifically includes: deleting the information of the crosstalk pixels. After identifying the crosstalk pixels, the computer device directly deletes their information to correct them; applying this to all crosstalk pixels yields the corrected echo image. This effectively avoids the impact of crosstalk pixels on the accuracy of the echo image and improves the accuracy of the object pose calculation. At the same time, deleting crosstalk pixel information is simple and direct and requires no further computation, reducing the burden on the computer equipment and speeding up the calculation.
  • In one embodiment, step 206, correcting the crosstalk pixels to obtain the corrected echo image, specifically includes: dividing each crosstalk pixel into a plurality of sub-pixels; and copying the information of the pixel adjacent to each sub-pixel into that sub-pixel to obtain the corrected echo image.
  • the computer device can divide the crosstalk pixel into a plurality of sub-pixels according to a preset division method; the preset division method can be division into N × N sub-pixels, or splitting along the gradient direction of the crosstalk pixel and the direction of the dividing line.
  • the pixel adjacent to each sub-pixel is not a crosstalk pixel, and its information is copied into the sub-pixel; the information of the crosstalk pixel is thus split and replaced to obtain the corrected echo image.
  • the P point is a crosstalk pixel
  • the arrow direction is the gradient direction of the P point
  • a dividing line is set perpendicular to the gradient direction through the P point.
  • the crosstalk pixel is divided into 4 sub-pixels according to the gradient direction and the direction of the dividing line, namely P1, P2, P3, and P4, so that there are two sub-pixels on each side of the dividing line.
  • the adjacent pixel point of the sub-pixel point P1 is the pixel point 4, and the information of the pixel point 4 is copied into the sub-pixel point P1.
  • the information of the pixel point 2 is copied into the sub-pixel point P2, the information of the pixel point 6 is copied into the sub-pixel point P3, and the information of the pixel point 8 is copied into the sub-pixel point P4.
  • the crosstalk pixel is divided into 4 sub-pixels of 2 ⁇ 2, which are respectively P1, P2, P3, and P4.
  • the information of pixel point 2 is copied into sub-pixel points P1 and P2, and the information of pixel point 8 is copied into sub-pixel points P3 and P4.
  • the information of the pixel point 4 is copied into the sub-pixel points P1 and P3, and the information of the pixel point 6 is copied into the sub-pixel points P2 and P4.
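The 2x2 sub-pixel replacement in the two examples above can be sketched as follows. The dictionary keying (neighbours numbered 1-9 as in the figures, with 5 being P) and the function name are assumptions.

```python
# Hypothetical sketch of the 2x2 sub-pixel correction: P is split into
# P1..P4 and each sub-pixel copies the adjacent crosstalk-free pixel on
# its own side of the dividing line.

def split_and_fill_2x2(neigh, gradient):
    """neigh: ranging values of the eight neighbours keyed 1..4 and 6..9.
    'horizontal' gradient: pixel 4 fills P1/P3, pixel 6 fills P2/P4;
    'vertical' gradient: pixel 2 fills P1/P2, pixel 8 fills P3/P4.
    Returns [[P1, P2], [P3, P4]]."""
    if gradient == "horizontal":
        return [[neigh[4], neigh[6]], [neigh[4], neigh[6]]]
    return [[neigh[2], neigh[2]], [neigh[8], neigh[8]]]

neigh = {1: 2.0, 2: 2.0, 3: 5.0, 4: 2.0, 6: 5.0, 7: 2.0, 8: 5.0, 9: 5.0}
print(split_and_fill_2x2(neigh, "horizontal"))  # [[2.0, 5.0], [2.0, 5.0]]
```

The boundary between the sub-pixel columns (or rows) then stands in for the object boundary inside the original crosstalk pixel.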
  • the computer device further divides the crosstalk pixels into sub-pixels, and copies the information of adjacent pixels without pixel crosstalk into the sub-pixels to correct the crosstalk pixels; this method is used to correct the crosstalk pixels. All crosstalk pixels are corrected to obtain a corrected echo image.
  • the crosstalk pixel is usually located at the junction of multiple objects in the echo image, the adjacent pixel of the crosstalk pixel has no pixel crosstalk, and the detected object is one of the multiple objects detected by the crosstalk pixel.
  • the crosstalk pixel is divided into multiple sub-pixels, the information in each sub-pixel is replaced with the information of adjacent pixels without pixel crosstalk, which is equivalent to dividing multiple sub-pixels into pixels corresponding to multiple objects according to the positional relationship In the point set, the boundary between the sub-pixel points divided into different objects is the boundary of different objects.
  • After the crosstalk pixel is divided into multiple sub-pixels and the sub-pixels are assigned, the boundaries of the different objects within the crosstalk pixel become more refined and accurate; since each sub-pixel contains less information, even if the information copied from adjacent pixels deviates somewhat, the effect on the ranging result of the whole echo image is small.
  • In one embodiment, the echo image may be divided into regions. Compared with the peripheral area, the central area of the echo image requires higher detection accuracy, while the upper part of the echo image mainly corresponds to detections above the installation position of the lidar and has lower resolution requirements; different correction methods can therefore be applied per region while still meeting the detection accuracy requirements.
  • By detection accuracy requirement from high to low, the echo image is divided into three areas, as shown in Figure 14: area A, area B, and area C. Area A is the central area of the echo image, area C is the area above the central area, and the rest is area B.
  • Crosstalk pixels in area A can be corrected by dividing them into sub-pixels and copying the information of adjacent pixels; crosstalk pixels in area B can be corrected by copying the information of spare pixels; crosstalk pixels in area C can be corrected by deleting their information.
  • Using multiple correction methods at the same time and combining them with each other can not only ensure the accuracy of the echo image obtained after correction, but also reduce the amount of calculation of the entire computer equipment, reduce the system pressure, and increase the calculation rate.
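The per-region choice of correction method could be dispatched as in the sketch below; the exact region boundaries and the method labels are assumptions, since the text does not give the sizes of areas A, B, and C.

```python
# Hypothetical region dispatch: area A (centre) -> sub-pixel splitting,
# area C (above the centre) -> deletion, area B (the rest) -> spare pixel.

def correction_method(row, col, rows, cols):
    in_centre_cols = cols // 3 <= col < 2 * cols // 3
    if rows // 3 <= row < 2 * rows // 3 and in_centre_cols:
        return "split_into_subpixels"   # area A: highest accuracy needed
    if row < rows // 3 and in_centre_cols:
        return "delete"                 # area C: above the central area
    return "copy_spare_pixel"           # area B: everything else

print(correction_method(4, 4, 9, 9))  # split_into_subpixels
print(correction_method(0, 4, 9, 9))  # delete
print(correction_method(4, 0, 9, 9))  # copy_spare_pixel
```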
  • a lidar ranging device including: an acquisition module 1502, a judgment module 1504, a correction module 1506, and a calculation module 1508, wherein:
  • the acquisition module 1502 is used to acquire the echo image received by the lidar.
  • the judging module 1504 is used for judging whether there are crosstalk pixels in the echo image.
  • the correction module 1506 is used to correct the crosstalk pixels when the echo image has crosstalk pixels to obtain a corrected echo image.
  • the calculation module 1508 is used to calculate the posture information of the object according to the corrected echo image.
  • the determining module 1504 is also used to obtain the first edge pixel of the depth image, determine whether the first edge pixel meets the preset crosstalk condition, and, when the first edge pixel meets the preset crosstalk condition, determine the first edge pixel as a crosstalk pixel of the echo image.
  • the judgment module 1504 is also used to obtain the second edge pixel of the grayscale image; find the corresponding pixel of the second edge pixel in the depth image; determine the corresponding pixel as the first pixel of the depth image Edge pixels.
  • the judging module 1504 is also used to obtain the edge ranging value and the gradient direction of the first edge pixel; obtain the first average ranging value of all surrounding pixels on the same side of the gradient direction of the first edge pixel; obtain the second average ranging value of all surrounding pixels on the opposite side of the gradient direction; and determine that the first edge pixel meets the preset crosstalk condition when the absolute value of the difference between the edge ranging value and the first average ranging value is greater than the preset distance threshold and the absolute value of the difference between the edge ranging value and the second average ranging value is greater than the preset distance threshold.
  • the surrounding pixels include pixels adjacent to the first edge pixel.
  • the judging module 1504 is also used to obtain the first ranging value of each pixel in the echo image when the working frequency of the lidar is the first modulation frequency; obtain the second ranging value of each pixel in the echo image when the working frequency of the lidar is the second modulation frequency; compute the difference between the first and second ranging values of each pixel to obtain the ranging difference; and, when the ranging difference is not zero, mark the pixels whose ranging difference is not zero as crosstalk pixels of the echo image.
  • the correction module 1506 is also used to select a pixel adjacent to the crosstalk pixel as a spare pixel; copy the information of the spare pixel into the crosstalk pixel to obtain a corrected echo image.
  • the correction module 1506 is also used to delete crosstalk pixel information to obtain a corrected echo image.
  • the correction module 1506 is further configured to divide the crosstalk pixel into a plurality of sub-pixels, and copy the information of the adjacent pixels of the sub-pixels into the sub-pixels to obtain a corrected echo image.
  • Each module in the above-mentioned lidar ranging device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • In one embodiment, a computer device is provided, and its internal structure diagram may be as shown in FIG. 16.
  • the computer equipment includes a processor, a memory, a communication interface and a database connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the database of the computer equipment is used to store echo images and object pose information.
  • the communication interface of the computer equipment is used to connect and communicate with the lidar.
  • the computer-readable instructions are executed by the processor to realize a lidar ranging method.
  • FIG. 16 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps in each of the foregoing method embodiments.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

Lidar ranging method, comprising: obtaining an echo image received by a lidar (302); determining whether the echo image contains a crosstalk pixel (304); if the echo image contains a crosstalk pixel, correcting the crosstalk pixel to obtain a corrected echo image (306); and performing a calculation according to the corrected echo image to obtain object pose information (308).


Publications (1)

Publication Number Publication Date
WO2021114080A1 2021-06-17




Also Published As

Publication number Publication date
CN112639516B (zh) 2023-08-04
CN112639516A (zh) 2021-04-09

