WO2022114311A1 - Method and apparatus for lidar sensor information correction and up-sampling using multiple images - Google Patents

Method and apparatus for lidar sensor information correction and up-sampling using multiple images Download PDF

Info

Publication number
WO2022114311A1
WO2022114311A1 (PCT/KR2020/017217)
Authority
WO
WIPO (PCT)
Prior art keywords
lidar sensor
lidar
sensor measurement
image
information
Prior art date
Application number
PCT/KR2020/017217
Other languages
French (fr)
Korean (ko)
Inventor
박민규
윤주홍
김제우
Original Assignee
한국전자기술연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자기술연구원 (Korea Electronics Technology Institute)
Publication of WO2022114311A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/48 Details of systems according to group G01S 17/00
    • G01S 7/497 Means for monitoring or calibrating
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for mapping or imaging
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 5/00 Image enhancement or restoration
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

Provided are a method and an apparatus for LiDAR sensor information correction and up-sampling using multiple images. The LiDAR information processing method according to an embodiment of the present invention corrects and up-samples LiDAR information using two or more cameras, so that both the accuracy of the LiDAR sensor and its density can be increased through a greater amount of information, obviating the need for a high-priced LiDAR sensor or multiple LiDAR sensors.

Description

Method and apparatus for LiDAR sensor information correction and up-sampling using multiple images
The present invention relates to lidar information processing technology and, more particularly, to a method and apparatus for improving the density and accuracy of a lidar or other depth sensor by exploiting image information when two or more images are given.
A lidar sensor is a device that acquires 360-degree distance information in real time, and it is one of the first sensors considered when building an autonomous driving or autonomous flying device.
However, the number of observations a lidar can acquire at each instant is far smaller than that of image information, so the acquired data is sparse; detailed shape information about the surrounding environment and objects is hard to obtain, and acquiring high-quality data leaves no choice but to use several very expensive devices.
In addition, because the sensor carries measurement error, correction of the depth information is also required if the lidar is to be used for tasks such as precise map reconstruction.
The present invention has been devised to solve the above problems. An object of the present invention is to provide a method and apparatus for correcting and up-sampling lidar information using two or more cameras, improving the accuracy of the lidar sensor and its density through an increased amount of information.
According to an embodiment of the present invention for achieving the above object, a lidar information processing method includes: acquiring lidar sensor measurement values; acquiring multiple images; sampling some of the measured lidar sensor measurement values; projecting each of the sampled measurement values onto a first image among the multiple images; projecting each of the sampled measurement values onto a second image among the multiple images; calculating, for each measurement value, a similarity between its projection position on the first image and its projection position on the second image; and selecting the measurement value with the largest calculated similarity.
The lidar information processing method according to an embodiment of the present invention may further include: randomly adding a normal vector to the selected lidar sensor measurement value to generate a plurality of measurement values; sampling some of the generated measurement values; projecting each of the sampled measurement values onto the first image; projecting each of the sampled measurement values onto the second image; calculating, for each measurement value, the similarity between its projection positions on the first and second images; and selecting the measurement value with the largest calculated similarity.
If the largest of the similarities computed in the calculation step does not exceed a threshold, the process may be repeated from the generation step.
In the calculation step, the similarity may be computed using both the projection position and the normal vector.
The lidar information processing method according to an embodiment of the present invention may further include: projecting a lidar sensor measurement value to which a normal vector has been added onto an image; propagating the projected measurement value to a plurality of adjacent pixels within the image; calculating similarities between the propagated measurement values; and adding the measurement value with the largest similarity.
In the propagation step, white noise may be added to the projection position and the normal vector of the projected lidar sensor measurement value to generate lidar sensor measurement values for a plurality of adjacent pixels.
The method may further include: projecting the added lidar sensor measurement value onto an image; propagating the projected measurement value to a plurality of adjacent pixels within the image; calculating similarities between the propagated measurement values; and adding the measurement value with the largest similarity. If the largest of the similarities computed in the calculation step does not exceed a threshold, the process may be repeated from the projection step.
Meanwhile, according to another embodiment of the present invention, a lidar information processing apparatus includes: a first acquisition unit that acquires lidar sensor measurement values; a second acquisition unit that acquires multiple images; and a correction unit that samples some of the measured measurement values, projects each sampled measurement value onto a first image and onto a second image among the multiple images, calculates, for each measurement value, the similarity between its projection positions on the first and second images, and selects the measurement value with the largest calculated similarity.
As described above, according to embodiments of the present invention, correcting and up-sampling lidar information with two or more cameras improves both the accuracy of the lidar sensor and its density through an increased amount of information, removing the need for an expensive lidar sensor or for multiple lidar sensors.
FIG. 1 is a conceptual diagram showing the projection of lidar sensor information onto multiple images;
FIG. 2 illustrates the change in projected position caused by noise when lidar sensor information is projected onto images;
FIG. 3 is a diagram provided to explain a lidar sensor information correction method according to an embodiment of the present invention;
FIG. 4 is a diagram provided to explain a lidar sensor information up-sampling method according to another embodiment of the present invention;
FIG. 5 is a block diagram of a lidar sensor information processing apparatus according to yet another embodiment of the present invention.
Hereinafter, the present invention is described in more detail with reference to the drawings.
An embodiment of the present invention presents a method of correcting and up-sampling lidar sensor information using multiple images.
Given two or more images, the method uses the image information to correct the errors of a lidar or other depth sensor, improving accuracy, and to up-sample the sensor's information, improving density.
To improve the lidar sensor information, the following approach is taken. A low-cost lidar sensor has a distance error on the order of several centimeters, and the measured position information can be projected into each camera as shown in FIG. 1.
Here, for a 3D point X measured by the lidar sensor, its position in an image is obtained by projecting X, which can be written as u = KRX + T, where K is the camera's intrinsic parameter matrix and R and T are the camera's orientation and position, respectively.
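For illustration, the projection step can be sketched in a few lines of Python with NumPy. This is a minimal sketch, assuming the conventional pinhole form u ~ K(RX + T) with a final perspective division (the formula above omits the division); all matrices and values are illustrative, not taken from the patent.

```python
import numpy as np

def project_point(X, K, R, T):
    """Project a 3D lidar point X into pixel coordinates.

    Uses the pinhole model u ~ K(R @ X + T): K holds the camera
    intrinsics, R and T the camera orientation and position.
    """
    x_cam = R @ X + T            # world/lidar frame -> camera frame
    u_hom = K @ x_cam            # homogeneous image coordinates
    return u_hom[:2] / u_hom[2]  # perspective division -> (u, v) pixels

# Illustrative example: one lidar point seen by two cameras.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R1, T1 = np.eye(3), np.zeros(3)                 # first camera at the origin
R2, T2 = np.eye(3), np.array([-0.5, 0.0, 0.0])  # second camera, 0.5 m baseline
X = np.array([1.0, 0.2, 5.0])                   # a lidar point, in metres
u1, u2 = project_point(X, K, R1, T1), project_point(X, K, R2, T2)
```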
Performing the same process for both cameras yields the coordinate values in both images. Conversely, when the straight lines passing through the projected pixels are back-projected from the cameras, they pass through the 3D point X as their intersection.
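The back-projection and intersection can be sketched the same way: each pixel defines a ray through its camera centre, and the 3D point is recovered as the closest point between the two rays. This is a hedged sketch reusing the conventions of project_point above; the least-squares midpoint method is one common choice, not a procedure stated in the patent.

```python
def back_project_ray(uv, K, R, T):
    """Return the (origin, direction) of the ray through pixel uv."""
    C = -R.T @ T                 # camera centre in the world frame
    d = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    return C, d / np.linalg.norm(d)

def triangulate(uv1, uv2, cam1, cam2):
    """Midpoint of the shortest segment between the two back-projected rays."""
    (C1, d1), (C2, d2) = back_project_ray(uv1, *cam1), back_project_ray(uv2, *cam2)
    # Solve [d1 | -d2] [s, t]^T ~= C2 - C1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    s, t = np.linalg.lstsq(A, C2 - C1, rcond=None)[0]
    return 0.5 * ((C1 + s * d1) + (C2 + t * d2))

# With the illustrative cameras above, triangulate(u1, u2, (K, R1, T1),
# (K, R2, T2)) recovers X up to numerical error.
```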
In an embodiment of the present invention, the lidar sensor measurement values are corrected under the following assumption.
If the lidar's distance value is accurate, the two projected pixels have the highest similarity within their neighborhoods. Therefore, if a pixel with higher similarity exists nearby, the lidar's 3D coordinate value can be corrected using that pixel's position information.
That is, as shown in FIG. 2, when no noise is present, the lidar value projects to the same position in the images, whereas when an error is present, it projects to different positions.
To correct this, a process is performed that increases the similarity between the projected pixels while moving the lidar's 3D value to adjacent positions.
FIG. 3 is a diagram provided to explain a lidar sensor information correction method according to an embodiment of the present invention.
Correcting the errors of values measured with a lidar sensor presupposes that multiple images have been acquired with a plurality of cameras.
First, some of the measured lidar sensor measurement values (coordinate values) are randomly sampled (S110). For step S110, the 3D space around the lidar point is randomly sampled based on the lidar's given initial position information, yielding a candidate group of coordinate values to be updated.
The lidar sensor measurement values sampled in step S110 are then each projected onto the multiple images (S120). Since there are two images, in step S120 the measurement values are projected onto both a first image and a second image.
Thereafter, for each lidar sensor measurement value, the similarity between its projection position on the first image and its projection position on the second image is calculated (S130). The measurement value with the largest similarity calculated in step S130 is then selected (S140); step S140 updates the sample.
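The patent does not specify how the similarity in step S130 is computed; zero-mean normalized cross-correlation (ZNCC) over small windows is a common choice for this kind of photometric comparison. A sketch under that assumption, reusing the imports above:

```python
def zncc(img1, uv1, img2, uv2, half=3):
    """Zero-mean normalized cross-correlation of two square patches.

    uv1, uv2 are (u, v) pixel positions (column, row); a patch that falls
    outside its image scores -1.0, the worst possible similarity.
    """
    patches = []
    for img, (u, v) in ((img1, uv1), (img2, uv2)):
        u, v = int(round(u)), int(round(v))
        if not (half <= v < img.shape[0] - half and half <= u < img.shape[1] - half):
            return -1.0
        patches.append(img[v - half:v + half + 1, u - half:u + half + 1].astype(float))
    p1, p2 = patches[0] - patches[0].mean(), patches[1] - patches[1].mean()
    denom = np.sqrt((p1 ** 2).sum() * (p2 ** 2).sum())
    return float((p1 * p2).sum() / denom) if denom > 1e-12 else -1.0
```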
Next, a normal vector is randomly added to the lidar sensor measurement value selected in step S140 to generate a plurality of measurement values, and some of the generated measurement values are randomly sampled (S150).
The measurement values sampled in step S150 are again each projected onto the multiple images (the first and second images) (S120), and for each measurement value the similarity between its projection positions on the first and second images is calculated (S130).
Since step S150 attaches a normal vector to each lidar sensor measurement value, the repeated similarity calculations use both the projection position and the normal vector. This enables a comparison that follows the shape of the normal vector rather than a square window, allowing the lidar measurement values to be corrected more precisely, as sketched below.
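One standard way to realize a comparison that follows the shape of the normal vector rather than a square window is a slanted support window: the point-plus-normal hypothesis defines a local plane, and that plane induces a homography between the two views along which the comparison patch can be warped. The sketch below gives the textbook plane-induced homography, assuming the first camera sits at the origin; this is an interpretation, not a formula stated in the patent.

```python
def plane_induced_homography(K1, K2, R, T, n, X):
    """Homography H mapping image-1 pixels to image-2 pixels for points on
    the local plane through X with unit normal n (first camera at origin).

    For the plane n . x = d with d = n . X, the induced homography is
    H = K2 (R + T n^T / d) K1^{-1}; warping the comparison window by H
    realizes a slanted (normal-shaped) rather than square comparison.
    """
    d = float(n @ X)
    return K2 @ (R + np.outer(T, n) / d) @ np.linalg.inv(K1)
```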
Next, the measurement value with the largest similarity calculated in step S130 is selected and the sample updated (S140), and the procedure returns to step S150 and the subsequent steps repeat.
The update-and-repeat procedure may run until the largest of the similarities calculated in step S130 exceeds a threshold. That is, once the maximum similarity in step S130 exceeds the threshold, the measurement value selected in step S140 becomes the corrected lidar sensor measurement value.
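Putting steps S110 to S150 together, one plausible reading of the correction loop is a random search around the current 3D estimate, sketched below with the project_point and zncc helpers from above. The parameters n_samples, sigma, tau, and max_rounds are illustrative assumptions, not values from the patent.

```python
def correct_lidar_point(X0, img1, img2, cam1, cam2, rng,
                        n_samples=64, sigma=0.05, tau=0.9, max_rounds=10):
    """Random-search correction of one lidar point (sketch of S110-S150).

    cam1/cam2 are (K, R, T) tuples.  Candidates are drawn around the
    current estimate, projected into both images (S120), scored with
    ZNCC (S130), and the best candidate is kept (S140); rounds repeat
    (S150) until the best score exceeds tau.
    """
    X_best, s_best = X0, -1.0
    for _ in range(max_rounds):
        candidates = X_best + rng.normal(scale=sigma, size=(n_samples, 3))
        for X in candidates:
            s = zncc(img1, project_point(X, *cam1),
                     img2, project_point(X, *cam2))
            if s > s_best:
                X_best, s_best = X, s
        if s_best > tau:   # corrected measurement found
            break
    return X_best, s_best
```

With img1 and img2 as grayscale arrays, correct_lidar_point(X, img1, img2, (K, R1, T1), (K, R2, T2), np.random.default_rng(0)) would return the corrected point and its score.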
So far, the lidar sensor information correction method has been described in detail through a preferred embodiment.
The above embodiment assumed that the multiple images come from two cameras, which is the minimum required information. If images are captured with a larger number of cameras, the correction accuracy can be improved further.
FIG. 4 is a diagram provided to explain a lidar sensor information up-sampling method according to another embodiment of the present invention.
First, a lidar sensor measurement value with an attached normal vector is obtained (S210). This measurement value may be one generated through the lidar sensor information correction described above, or it may be generated separately.
Next, the lidar sensor measurement value obtained in step S210 is projected onto an image (S220), and the measurement value projected in step S220 is propagated to a plurality of adjacent pixels within the image (S230).
The propagation in step S230 is the process of generating lidar sensor measurement values for a plurality of adjacent pixels by adding white noise to the projection position and the normal vector of the measurement value projected in step S220.
Thereafter, the similarities between the lidar sensor measurement values propagated in step S230 are calculated (S240), and up-sampling is performed by adding the measurement value with the largest similarity (S250).
On the assumption that adjacent pixels carry similar 3D information, N samples are generated and propagated, the 3D information with the highest similarity is assigned, and the procedure is then re-performed from step S220.
That is, the measurement value added in step S250 is projected onto the image (S220), the measurement value projected in step S220 is propagated to a plurality of adjacent pixels within the image (S230), the similarities between the measurement values propagated in step S230 are calculated (S240), and the measurement value with the largest similarity is added (S250), repeatedly.
This repetition may run until the largest of the similarities calculated in step S240 exceeds a threshold; that is, the number of repetitions may be limited to the point where the maximum similarity in step S240 exceeds the threshold. A sketch of one propagation round follows.
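The up-sampling pass (S220 to S250) can be read as a PatchMatch-style propagation: each seed's position and normal are perturbed with white noise to hypothesize measurements for neighbouring pixels, and the hypothesis with the highest similarity is kept. The data layout and noise scales below are illustrative assumptions, reusing the helpers defined above.

```python
def upsample_round(seeds, img1, img2, cam1, cam2, rng,
                   n_props=8, noise_xyz=0.02, noise_normal=0.05):
    """One propagation round of the up-sampling pass (sketch of S220-S250).

    seeds is a list of (X, n) pairs: a 3D point and its unit normal.
    Each seed is perturbed with white noise to propose measurements for
    neighbouring pixels (S230); proposals are scored with ZNCC (S240)
    and the best one is added to the set (S250).
    """
    added = []
    for X, n in seeds:
        best, s_best = None, -1.0
        for _ in range(n_props):
            Xp = X + rng.normal(scale=noise_xyz, size=3)     # noise on position
            nn = n + rng.normal(scale=noise_normal, size=3)  # noise on normal
            nn /= np.linalg.norm(nn)
            s = zncc(img1, project_point(Xp, *cam1),
                     img2, project_point(Xp, *cam2))
            if s > s_best:
                best, s_best = (Xp, nn), s
        if best is not None:
            added.append(best)
    return seeds + added
```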
FIG. 5 is a block diagram of a lidar sensor information processing apparatus according to yet another embodiment of the present invention.
As shown, the lidar sensor information processing apparatus according to an embodiment of the present invention comprises a multi-image acquisition unit 310, a lidar information acquisition unit 320, a lidar information correction unit 330, and a lidar information up-sampling unit 340.
The multi-image acquisition unit 310 acquires, from cameras (not shown), the image information needed for the correction and up-sampling of lidar sensor information.
The lidar information acquisition unit 320 acquires, from a lidar sensor (not shown), the lidar information to be corrected and up-sampled.
The lidar information correction unit 330 corrects the lidar sensor information acquired by the lidar information acquisition unit 320 using the image information acquired through the multi-image acquisition unit 310.
The lidar information up-sampling unit 340 up-samples the lidar sensor information acquired by the lidar information acquisition unit 320 using the image information acquired through the multi-image acquisition unit 310.
Meanwhile, the lidar information correction unit 330 and the lidar information up-sampling unit 340 may be implemented selectively. That is, the technical idea of the present invention also applies when only one of the two is included, i.e., when the lidar information is only corrected or only up-sampled.
So far, the method and apparatus for correcting and up-sampling lidar sensor information using multiple images have been described in detail through preferred embodiments.
In embodiments of the present invention, correcting and up-sampling lidar information with two or more cameras improves both the accuracy of the lidar sensor and its density through an increased amount of information, so there is no need to use an expensive lidar sensor or multiple lidar sensors.
Meanwhile, the technical idea of the present invention can of course also be applied to a computer-readable recording medium containing a computer program that performs the functions of the apparatus and method according to the present embodiments. The technical ideas according to the various embodiments of the present invention may be implemented as computer-readable code recorded on a computer-readable recording medium, which may be any data storage device that can be read by a computer and can store data, for example a ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical disk, or hard disk drive. The computer-readable code or program stored on the recording medium may also be transmitted over a network connecting computers.
In addition, although preferred embodiments of the present invention have been shown and described above, the present invention is not limited to the specific embodiments described; various modifications can be made by those of ordinary skill in the art without departing from the gist of the invention as claimed, and such modifications should not be understood separately from the technical spirit or outlook of the present invention.

Claims (8)

  1. A lidar information processing method comprising:
    acquiring lidar sensor measurement values;
    acquiring multiple images;
    sampling some of the measured lidar sensor measurement values;
    projecting each of the sampled lidar sensor measurement values onto a first image among the multiple images;
    projecting each of the sampled lidar sensor measurement values onto a second image among the multiple images;
    calculating, for each lidar sensor measurement value, a similarity between its projection position on the first image and its projection position on the second image; and
    selecting the lidar sensor measurement value with the largest calculated similarity.
  2. The method according to claim 1, further comprising:
    randomly adding a normal vector to the selected lidar sensor measurement value to generate a plurality of lidar sensor measurement values;
    sampling some of the generated lidar sensor measurement values;
    projecting each of the sampled lidar sensor measurement values onto the first image among the multiple images;
    projecting each of the sampled lidar sensor measurement values onto the second image among the multiple images;
    calculating, for each lidar sensor measurement value, a similarity between its projection position on the first image and its projection position on the second image; and
    selecting the lidar sensor measurement value with the largest calculated similarity.
  3. The method according to claim 2, wherein, if the largest of the similarities calculated in the calculating step does not exceed a threshold, the method is re-performed from the generating step.
  4. The method according to claim 1, wherein the calculating step calculates the similarity using the projection position and a normal vector.
  5. The method according to claim 1, further comprising:
    projecting a lidar sensor measurement value to which a normal vector has been added onto an image;
    propagating the projected lidar sensor measurement value to a plurality of adjacent pixels within the image;
    calculating similarities between the propagated lidar sensor measurement values; and
    adding the lidar sensor measurement value with the largest similarity.
  6. The method according to claim 5, wherein the propagating step adds white noise to the projection position and the normal vector of the projected lidar sensor measurement value to generate lidar sensor measurement values for a plurality of adjacent pixels.
  7. The method according to claim 5, further comprising:
    projecting the added lidar sensor measurement value onto an image;
    propagating the projected lidar sensor measurement value to a plurality of adjacent pixels within the image;
    calculating similarities between the propagated lidar sensor measurement values; and
    adding the lidar sensor measurement value with the largest similarity,
    wherein, if the largest of the similarities calculated in the calculating step does not exceed a threshold, the method is re-performed from the projecting step.
  8. A lidar information processing apparatus comprising:
    a first acquisition unit configured to acquire lidar sensor measurement values;
    a second acquisition unit configured to acquire multiple images; and
    a correction unit configured to sample some of the measured lidar sensor measurement values, project each of the sampled measurement values onto a first image among the multiple images, project each of the sampled measurement values onto a second image among the multiple images, calculate, for each measurement value, a similarity between its projection position on the first image and its projection position on the second image, and select the lidar sensor measurement value with the largest calculated similarity.
PCT/KR2020/017217 2020-11-30 2020-11-30 Method and apparatus for lidar sensor information correction and up-sampling using multiple images WO2022114311A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0163568 2020-11-30
KR1020200163568A KR20220075476A (en) 2020-11-30 2020-11-30 LiDAR sensor information correction and up-sampling method and apparatus using multiple images

Publications (1)

Publication Number Publication Date
WO2022114311A1 2022-06-02

Family

ID=81755077

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/017217 WO2022114311A1 (en) 2020-11-30 2020-11-30 Method and apparatus for lidar sensor information correction and up-sampling using multiple images

Country Status (2)

Country Link
KR (1) KR20220075476A (en)
WO (1) WO2022114311A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180044279A (en) * 2015-08-24 2018-05-02 퀄컴 인코포레이티드 System and method for depth map sampling
KR20190127624A (en) * 2019-10-31 2019-11-13 충북대학교 산학협력단 Apparatus and method for detecting object based on density using lidar sensor
US20200134896A1 (en) * 2018-10-24 2020-04-30 Samsung Electronics Co., Ltd. Method and apparatus for localization based on images and map data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180044279A (en) * 2015-08-24 2018-05-02 퀄컴 인코포레이티드 System and method for depth map sampling
US20200134896A1 (en) * 2018-10-24 2020-04-30 Samsung Electronics Co., Ltd. Method and apparatus for localization based on images and map data
KR20190127624A (en) * 2019-10-31 2019-11-13 충북대학교 산학협력단 Apparatus and method for detecting object based on density using lidar sensor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
J. LEE, M. KIM, H. KIM: "Camera and LiDAR Sensor Fusion for Improving Object Detection", Journal of Broadcast Engineering, vol. 24, no. 4, 30 July 2019 (2019-07-30), Korea, pages 580-591, XP009536865, ISSN: 1226-7953 *
JINSOO KIM, JEONGHO CHO: "YOLO-based Real-Time Object Detection Scheme Combining RGB Image with LiDAR Point Cloud", The Journal of Korean Institute of Information Technology, vol. 17, no. 8, 31 August 2019 (2019-08-31), pages 93-105, XP009537024, ISSN: 1598-8619, DOI: 10.14801/jkiit.2019.17.8.93 *

Also Published As

Publication number Publication date
KR20220075476A (en) 2022-06-08

Similar Documents

Publication Publication Date Title
EP0197341B1 (en) Method of and apparatus for measuring coordinates
KR100911814B1 (en) Stereo-image matching error removal apparatus and removal methord using the same
WO2010027137A2 (en) Apparatus and method for frame interpolation based on accurate motion estimation
CN109658497B (en) Three-dimensional model reconstruction method and device
WO2017099510A1 (en) Method for segmenting static scene on basis of image statistical information and method therefor
EP3590090A1 (en) Method and apparatus for processing omni-directional image
WO2022114311A1 (en) Method and apparatus for lidar sensor information correction and up-sampling using multiple images
WO2014010820A1 (en) Method and apparatus for estimating image motion using disparity information of a multi-view image
CN110428461B (en) Monocular SLAM method and device combined with deep learning
WO2020071849A1 (en) Method for producing detailed 360 image by using actual measurement depth information
WO2022154523A1 (en) Method and device for matching three-dimensional oral scan data via deep-learning based 3d feature detection
WO2019139441A1 (en) Image processing device and method
WO2020045767A1 (en) Method for generating image using lidar and apparatus therefor
WO2018021657A1 (en) Method and apparatus for measuring confidence of deep value through stereo matching
CN112308776A (en) Method for solving occlusion and error mapping image sequence and point cloud data fusion
WO2013162172A1 (en) Method for acquiring pet image with ultra high resolution using movement of pet device
WO2019212065A1 (en) Multi-lidar signal calibration method and system
CN116012449A (en) Image rendering method and device based on depth information
WO2018230971A1 (en) Method and apparatus for processing omni-directional image
CN112233164B (en) Method for identifying and correcting error points of disparity map
CN112967228B (en) Determination method and device of target optical flow information, electronic equipment and storage medium
CN112270693B (en) Method and device for detecting motion artifact of time-of-flight depth camera
CN111833395B (en) Direction-finding system single target positioning method and device based on neural network model
WO2024123159A1 (en) Method and apparatus for generating high-quality depth image by using set of continuous rgb-d frames
JP2606815B2 (en) Image processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20963706

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20963706

Country of ref document: EP

Kind code of ref document: A1