WO2021112362A1 - Reinforced location recognition method and system through merging of results of multiple location recognition based on covariance matrix - Google Patents

Reinforced location recognition method and system through merging of results of multiple location recognition based on covariance matrix

Info

Publication number
WO2021112362A1
Authority
WO
WIPO (PCT)
Prior art keywords
observation
gaussian distribution
values
unit
covariance matrix
Prior art date
Application number
PCT/KR2020/009056
Other languages
French (fr)
Korean (ko)
Inventor
배기덕
이정우
박지현
엄태영
최영호
Original Assignee
한국로봇융합연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국로봇융합연구원
Publication of WO2021112362A1


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 - Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B25J9/1664 - Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems

Definitions

  • the present invention relates to location recognition technology, and more particularly, to a method and system for enhanced location recognition through fusion of multiple location recognition results based on a covariance matrix.
  • A typical position recognition technology using a vision sensor is ORB. It extracts feature points (corners, boundary points, etc.) from the 2D image that is the output data of the vision sensor and stores them in the form of three-dimensional points.
  • The problem with this algorithm is that the distance to a point is calculated by comparing the image in the current frame with the image in the previous frame, so the distance value is inaccurate and the scale drifts accordingly.
  • Location recognition technologies using LIDAR (light detection and ranging) include EKF (extended Kalman filter) Localization and AMCL.
  • An object of the present invention, in view of the above, is to provide a method and system for enhanced location recognition through covariance-matrix-based fusion of multiple location recognition results that can be used flexibly and universally even when sensors of various kinds and output formats are combined, thereby removing restrictions on sensor configuration.
  • To achieve the above object, a system for enhanced position recognition through fusion of multiple position recognition results according to a preferred embodiment of the present invention includes: a sensor unit including a plurality of sensors that sense a robot and derive sensing values of different formats; an observation unit including a plurality of observation modules that calculate the position of the robot from the sensing values derived by the plurality of sensors through a plurality of different position recognition algorithms and output a plurality of observation values in different formats; a transform unit including a plurality of transform modules that transform the plurality of observation values of different formats into covariance matrices representing Gaussian distributions; a merging unit that merges the Gaussian distributions of the plurality of observation values into one Gaussian distribution; and a prediction unit that derives a predicted value by predicting the position of the robot from the merged Gaussian distribution.
  • The system further includes a feedback unit that feeds the predicted value back to each of the plurality of transform modules of the transform unit; when the predicted value of the prediction unit is fed back from the feedback unit, each of the plurality of transform modules updates the covariance matrix of its observation values by merging the Gaussian distribution of the fed-back predicted value into the Gaussian distribution of the observation values converted into the covariance matrix.
  • the plurality of sensors include a light detection and ranging (LIDAR) sensor and a vision sensor.
  • The location recognition algorithm includes at least one of EKF (extended Kalman filter) Localization, EIF (extended information filter) Localization, PF (particle filter) Localization, AMCL (adaptive Monte Carlo localization), Visual SLAM (simultaneous localization and mapping), ORB (Oriented FAST and Rotated BRIEF)-SLAM, Direct SLAM, LSD-SLAM, and graph-based SLAM.
  • To achieve the above object, a method for enhanced position recognition through fusion of multiple position recognition results according to a preferred embodiment of the present invention includes: sensing, by each of a plurality of sensors of a sensor unit, a robot to derive sensing values of different formats; outputting, by a plurality of observation modules of an observation unit, a plurality of observation values in different formats by calculating the position of the robot from the sensing values derived by the plurality of sensors through a plurality of different position recognition algorithms; transforming, by a plurality of transform modules of a transform unit, the plurality of observation values of different formats into covariance matrices representing Gaussian distributions; merging, by a merging unit, the Gaussian distributions of the plurality of observation values into one Gaussian distribution; and deriving, by a prediction unit, a predicted value by predicting the position of the robot from the merged Gaussian distribution.
  • The method further includes, after the transforming step and before the merging step, when the predicted value is fed back from a feedback unit, updating, by each of the plurality of transform modules, the covariance matrix of its observation values, which has the form of a Gaussian distribution, by merging the Gaussian distribution of the fed-back predicted value into the Gaussian distribution of the observation values converted into the covariance matrix.
  • the plurality of sensors include a light detection and ranging (LIDAR) sensor and a vision sensor.
  • The location recognition algorithm includes at least one of EKF (extended Kalman filter) Localization, EIF (extended information filter) Localization, PF (particle filter) Localization, AMCL (adaptive Monte Carlo localization), Visual SLAM (simultaneous localization and mapping), ORB (Oriented FAST and Rotated BRIEF)-SLAM, Direct SLAM, LSD-SLAM, and graph-based SLAM.
  • FIG. 1 is a diagram for explaining the configuration of an enhanced localization system through fusion of multiple location recognition results based on a covariance matrix according to an embodiment of the present invention.
  • FIG. 2 is a flowchart for explaining a method for enhanced localization through fusion of multiple localization results based on a covariance matrix according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating a computing device according to an embodiment of the present invention.
  • FIG. 1 is a diagram for explaining the configuration of an enhanced localization system through fusion of multiple location recognition results based on a covariance matrix according to an embodiment of the present invention.
  • Referring to FIG. 1, the enhanced location recognition system 10 (hereinafter abbreviated as the 'enhanced location recognition system') through fusion of multiple location recognition results according to an embodiment of the present invention includes a sensor unit 100, an observation unit 200, a transform unit 300, a merging unit 400, a prediction unit 500, and an update unit 600.
  • the sensor unit 100 includes a plurality of sensors 110 and 120 .
  • The plurality of sensors 110 and 120 are sensors of different kinds and have output values of different formats. Accordingly, the plurality of sensors 110 and 120 sense the robot to be observed and output sensing values in different formats, that is, according to the output format of each sensor.
  • the first sensor 110 may be a light detection and ranging (LIDAR) sensor
  • the second sensor 120 may be a vision sensor.
  • the sensor unit 100 is described as including a LIDAR sensor and a vision sensor, but the present invention is not limited thereto.
  • the sensor unit 100 of the present invention may use two or more sensors, and there is no limitation on the type thereof.
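  • For illustration only (the patent does not prescribe any particular data layout), the heterogeneous outputs of two such sensors can be pictured as in the sketch below; the field names and shapes are assumptions, not part of the disclosure:

        import numpy as np
        from dataclasses import dataclass

        @dataclass
        class LidarScan:               # hypothetical output format of the first sensor 110
            ranges: np.ndarray         # one range reading per laser beam, e.g. shape (360,)
            angle_increment: float     # angular spacing between beams, in radians

        @dataclass
        class CameraFrame:             # hypothetical output format of the second sensor 120
            image: np.ndarray          # H x W x 3 pixel array from the vision sensor
            timestamp: float           # capture time in seconds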
  • the observation unit 200 includes a plurality of observation modules 210 , 220 , 230 , and 240 .
  • The plurality of observation modules 210, 220, 230, and 240 each run a position recognition algorithm, calculate the position of the robot through that algorithm, and output an observation value.
  • Each of the plurality of observation modules 210 , 220 , 230 , and 240 corresponds to any one of the plurality of sensors of the sensor unit 100 .
  • Accordingly, each of the observation modules 210, 220, 230, and 240 calculates the position of the robot from the sensing value output by its corresponding sensor of the sensor unit 100 through its own position recognition algorithm, and the observation unit outputs a plurality of observation values in different formats.
  • In an embodiment of the present invention, the first observation module 210 applies an EKF (extended Kalman filter) Localization algorithm, the second observation module 220 applies a PF (particle filter) Localization algorithm, the third observation module 230 applies an EKF Localization algorithm, and the fourth observation module 240 applies an ORB (Oriented FAST and Rotated BRIEF)-SLAM (simultaneous localization and mapping) algorithm.
  • the first observation module 210 and the second observation module 220 correspond to the first sensor 110 . Accordingly, when LIDAR data that is a sensor value of the first sensor 110 is input, the first observation module 210 calculates the position of the robot using the EKF algorithm from the LIDAR data and outputs an observation value. In addition, when LIDAR data that is a sensor value of the first sensor 110 is input, the second observation module 220 calculates the position of the robot from the LIDAR data through a PF algorithm and outputs an observation value.
  • the third observation module 230 and the fourth observation module 240 correspond to the second sensor 120 .
  • the third observation module 230 calculates the position of the robot from the vision sensor data through the EKF algorithm and outputs an observation value.
  • the fourth observation module 240 calculates the position of the robot from the vision sensor data through an ORB-SLAM algorithm and outputs an observation value.
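  • As a minimal sketch (the class and function names below are illustrative assumptions, not part of the patent), each observation module can be viewed as one localization algorithm bound to one sensor, returning whatever native format that algorithm produces:

        import numpy as np

        class ObservationModule:
            """One position recognition algorithm bound to one sensor (illustrative only)."""

            def __init__(self, name, localize_fn):
                self.name = name                # e.g. "EKF-LIDAR", "PF-LIDAR", "ORB-SLAM"
                self.localize_fn = localize_fn  # callable: sensing value -> observation

            def observe(self, sensing_value):
                # EKF/PF-style modules return a (mean, covariance) pair,
                # while an ORB-SLAM-style module returns only a bare 2-D coordinate.
                return self.localize_fn(sensing_value)

        # Hypothetical wiring mirroring the embodiment (210/220 on LIDAR, 230/240 on vision):
        # modules = [ObservationModule("EKF-LIDAR", ekf_lidar),
        #            ObservationModule("PF-LIDAR", pf_lidar),
        #            ObservationModule("EKF-Vision", ekf_vision),
        #            ObservationModule("ORB-SLAM", orb_slam)]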
  • Meanwhile, although four algorithms are described as the position recognition algorithms, the present invention is not limited thereto; any position recognition algorithm capable of calculating the position of the robot from a sensor value may be used. Examples of such algorithms include EKF (extended Kalman filter) Localization, EIF (extended information filter) Localization, PF (particle filter) Localization, AMCL (adaptive Monte Carlo localization), Visual SLAM (simultaneous localization and mapping), ORB (Oriented FAST and Rotated BRIEF)-SLAM, Direct SLAM, LSD-SLAM, and graph-based SLAM.
  • the conversion unit 300 includes a plurality of conversion modules 310 .
  • Each of the transformation modules 310 corresponds one-to-one to the observation modules 210, 220, 230, and 240.
  • When an observation value output by its corresponding observation module 210, 220, 230, or 240 is input, the transform module 310 transforms the input observation value into a covariance matrix in the form of a Gaussian distribution. At this time, the transform module 310 determines whether the observation value output from the corresponding observation module is already a covariance matrix in the form of a Gaussian distribution. If it is not, the transform module 310 converts the observation value into a covariance matrix; if it already is, the transform module 310 bypasses the observation value.
  • the first observation module 210 and the third observation module 230 use the EKF algorithm
  • the second observation module 220 uses the PF algorithm.
  • the observation values output from the first to third observation modules 210 , 220 , and 230 are all output as a covariance matrix in the form of a Gaussian distribution.
  • the transformation module 310 corresponding to the first to third observation modules 210 , 220 , 230 bypasses the observation values output from the first to third observation modules 210 , 220 , and 230 .
  • the fourth observation module 240 uses the ORB-SLAM algorithm. Therefore, the observation value that is the output of the fourth observation module 240 is output as a two-dimensional coordinate value.
  • the transformation module 310 corresponding to the fourth observation module 240 separately transforms the observation value output from the fourth observation module 240 into a covariance matrix in the form of a Gaussian distribution.
  • In this case, the transform module 310 can convert the two-dimensional coordinate value output by the fourth observation module 240 into a covariance matrix in the form of a Gaussian distribution through a transformation equation that uses the specification error of the second sensor 120 and the error of the algorithm itself as variables.
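  • A conversion of the kind described above could look as follows; the patent does not disclose the actual transformation equation, so the use of fixed standard deviations taken from the sensor specification and the algorithm's empirical error is only an assumed example:

        import numpy as np

        def coordinate_to_gaussian(xy, sensor_spec_error, algorithm_error):
            """Turn a bare 2-D coordinate observation into a (mean, covariance) pair.

            sensor_spec_error and algorithm_error are standard deviations in metres;
            both numbers are assumptions for illustration, not values from the patent.
            """
            mean = np.asarray(xy, dtype=float)
            variance = sensor_spec_error ** 2 + algorithm_error ** 2
            covariance = variance * np.eye(2)   # isotropic Gaussian centred on the coordinate
            return mean, covariance

        # Example: an ORB-SLAM coordinate with 5 cm sensor error and 10 cm algorithm error.
        mu, sigma = coordinate_to_gaussian([1.2, -0.4], 0.05, 0.10)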
  • As described above, after converting the observation value of its corresponding observation module 210, 220, 230, or 240 into a covariance matrix in the form of a Gaussian distribution, the transform module 310 may, when a predicted value of the robot position is fed back from the update unit 600, update the covariance matrix of the observation value by merging the Gaussian distribution of the predicted value into the Gaussian distribution of the observation value.
  • On the other hand, if no predicted value of the robot position has been fed back from the update unit 600, the transform module 310 outputs the covariance matrix of the observation value as it is.
  • When the observation values converted into covariance matrices having Gaussian distributions are input from the plurality of transform modules 310, the merging unit 400 merges the Gaussian distributions of the plurality of observation values into one Gaussian distribution.
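  • The text does not state the merging rule explicitly; one common choice consistent with it (several Gaussian estimates of the same pose combined into a single Gaussian) is inverse-covariance weighting of the K observation Gaussians, sketched below purely as an assumption, where \mu_k and \Sigma_k denote the mean and covariance of the k-th observation:

        \Sigma_{\mathrm{merged}} = \Big( \sum_{k=1}^{K} \Sigma_k^{-1} \Big)^{-1},
        \qquad
        \mu_{\mathrm{merged}} = \Sigma_{\mathrm{merged}} \sum_{k=1}^{K} \Sigma_k^{-1} \mu_k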
  • The prediction unit 500 derives a predicted value by predicting the position of the robot from the one Gaussian distribution merged by the merging unit 400.
  • The update unit 600 feeds the output of the prediction unit 500 back to each of the plurality of transform modules 310 of the transform unit 300. Then, each of the plurality of transform modules 310 converts the observation value of its corresponding observation module 210, 220, 230, or 240 into a covariance matrix in the form of a Gaussian distribution and then updates that covariance matrix by merging the Gaussian distribution of the predicted value into the Gaussian distribution of the observation value.
  • Here, the Gaussian distribution of the fed-back predicted value may represent the probability of the robot's position at time t, and the observation value may represent the probability of the robot's position at time t+1.
  • Therefore, by accumulating and merging Gaussian distributions as this feedback is repeated, the position of the robot can be predicted more precisely. If sensor noise occurs in the sensor unit 100, the Gaussian distributions of the covariance matrices in the transform unit 300 widen again, but as the feedback process is repeated, all of the covariance matrices gradually converge to one distribution. In other words, the feedback naturally acts as a noise filter.
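  • The convergence behaviour described above can be reproduced with a self-contained sketch like the one below; the fusion rule, the synthetic noise levels, and the absence of a motion/process-noise model are simplifying assumptions for illustration, not details taken from the patent:

        import numpy as np

        def merge_gaussians(estimates):
            """Fuse a list of (mean, covariance) pairs into one Gaussian
            by inverse-covariance weighting (assumed merging rule)."""
            info = sum(np.linalg.inv(cov) for _, cov in estimates)
            merged_cov = np.linalg.inv(info)
            merged_mean = merged_cov @ sum(np.linalg.inv(cov) @ mean for mean, cov in estimates)
            return merged_mean, merged_cov

        rng = np.random.default_rng(0)
        true_position = np.array([2.0, 3.0])
        prediction = None                          # no fed-back prediction on the first cycle

        for step in range(10):
            # Two noisy stand-ins for observation modules (e.g. EKF-LIDAR and ORB-SLAM).
            observations = [
                (true_position + rng.normal(0.0, 0.3, 2), 0.09 * np.eye(2)),
                (true_position + rng.normal(0.0, 0.5, 2), 0.25 * np.eye(2)),
            ]
            if prediction is not None:
                # Update step: merge the fed-back prediction into each observation Gaussian.
                observations = [merge_gaussians([obs, prediction]) for obs in observations]
            # Merging unit and prediction unit: fuse all observations into one Gaussian.
            prediction = merge_gaussians(observations)
            print(step, prediction[0].round(3), np.trace(prediction[1]).round(4))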
  • FIG. 2 is a flowchart for explaining a method for enhanced localization through fusion of multiple localization results based on a covariance matrix according to an embodiment of the present invention.
  • the plurality of sensors 110 and 120 of the sensor unit 100 sense the robot in step S110 to derive sensing values of different types.
  • the plurality of sensors 110 and 120 are different types of sensors, and have different types of output values. That is, the plurality of sensors 110 and 120 sense the robot to be observed and output the sensed values according to the respective output formats of the sensors 110 and 120 .
  • the first sensor 110 may be a light detection and ranging (LIDAR) sensor
  • the second sensor 120 may be a vision sensor.
  • Each of the plurality of observation modules 210, 220, 230, and 240 corresponds to one of the plurality of sensors of the sensor unit 100. Accordingly, in step S120, each of the observation modules 210, 220, 230, and 240 calculates the position of the robot from the sensing value output by its corresponding sensor 110 or 120 through its own position recognition algorithm, and the observation unit outputs a plurality of observation values in different formats.
  • In an embodiment of the present invention, the EKF algorithm may be applied to the first observation module 210, the PF algorithm to the second observation module 220, the EKF algorithm to the third observation module 230, and the ORB-SLAM algorithm to the fourth observation module 240.
  • In addition, the first observation module 210 and the second observation module 220 correspond to the first sensor 110, and the third observation module 230 and the fourth observation module 240 correspond to the second sensor 120.
  • Accordingly, the first observation module 210 calculates the position of the robot from the LIDAR data, which is the sensor value of the first sensor 110, using the EKF algorithm and outputs an observation value, and the second observation module 220 calculates the position of the robot from the same LIDAR data through the PF algorithm and outputs an observation value.
  • In addition, the third observation module 230 calculates the position of the robot from the vision sensor data, which is the sensor value of the second sensor 120, through the EKF algorithm and outputs an observation value, and the fourth observation module 240 calculates the position of the robot from the same vision sensor data through the ORB-SLAM algorithm and outputs an observation value.
  • Each of the plurality of transform modules 310 of the transform unit 300 corresponds one-to-one to one of the observation modules 210, 220, 230, and 240. Accordingly, in step S130, when an observation value output by its corresponding observation module 210, 220, 230, or 240 is input, each of the plurality of transform modules 310 of the transform unit 300 converts the input observation value into a covariance matrix in the form of a Gaussian distribution. At this time, the transform module 310 converts the observation value into a covariance matrix when the observation value output from the corresponding observation module is not a covariance matrix in the form of a Gaussian distribution; on the other hand, when the observation value is already such a covariance matrix, the transform module 310 may bypass it.
  • the first observation module 210 and the third observation module 230 use the EKF algorithm, and the second observation module 220 uses the PF algorithm. Accordingly, the observation values output from the first to third observation modules 210 , 220 , and 230 are all output as a covariance matrix in the form of a Gaussian distribution. Accordingly, the transformation module 310 corresponding to the first to third observation modules 210 , 220 , 230 bypasses the observation values output from the first to third observation modules 210 , 220 , and 230 .
  • the fourth observation module 240 uses the ORB-SLAM algorithm.
  • the observation value that is the output of the fourth observation module 240 is output as a two-dimensional coordinate value. Accordingly, the transformation module 310 corresponding to the fourth observation module 240 separately transforms the observation value output from the fourth observation module 240 into a covariance matrix in the form of a Gaussian distribution.
  • Then, in step S150, when the predicted value is fed back from the update unit 600, each of the plurality of transform modules 310 of the transform unit 300 updates the covariance matrix of the observation value, which has the form of a Gaussian distribution, by merging the Gaussian distribution of the fed-back predicted value into the Gaussian distribution of the observation value converted into the covariance matrix. If the observation value represents the probability of the robot's position at time t, the Gaussian distribution of the fed-back predicted value may represent the probability of the robot's position at time t-1.
  • Next, the merging unit 400 merges the Gaussian distributions of the plurality of observation values into one Gaussian distribution in step S160.
  • Next, in step S170, the prediction unit 500 derives a predicted value by predicting the position of the robot from the one Gaussian distribution merged by the merging unit 400.
  • the update unit 600 feeds back the output of the prediction unit 500 to each of the plurality of transformation modules 310 of the transformation unit 300 in step S180 .
  • Since steps S110 to S180 described above are for continuously tracking the position of the robot, they are repeated until the process of tracking the position of the robot ends.
  • FIG. 3 is a diagram illustrating a computing device according to an embodiment of the present invention.
  • the computing device TN100 of FIG. 3 may be an apparatus for the enhanced location recognition system and method described herein.
  • the computing device TN100 may include at least one processor TN110 , a transceiver device TN120 , and a memory TN130 . Also, the computing device TN100 may further include a storage device TN140 , an input interface device TN150 , an output interface device TN160 , and the like. Components included in the computing device TN100 may be connected by a bus TN170 to communicate with each other.
  • the processor TN110 may execute a program command stored in at least one of the memory TN130 and the storage device TN140.
  • the processor TN110 may mean a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which methods according to an embodiment of the present invention are performed.
  • the processor TN110 may be configured to implement procedures, functions, methods, and the like described in connection with an embodiment of the present invention.
  • the processor TN110 may control each component of the computing device TN100 .
  • Each of the memory TN130 and the storage device TN140 may store various information related to the operation of the processor TN110 .
  • Each of the memory TN130 and the storage device TN140 may be configured as at least one of a volatile storage medium and a non-volatile storage medium.
  • the memory TN130 may include at least one of a read only memory (ROM) and a random access memory (RAM).
  • the transceiver TN120 may transmit or receive a wired signal or a wireless signal.
  • the transceiver TN120 may be connected to a network to perform communication.
  • the method according to the embodiment of the present invention described above may be implemented in the form of a program readable by various computer means and recorded in a computer readable recording medium.
  • the recording medium may include a program command, a data file, a data structure, etc. alone or in combination.
  • the program instructions recorded on the recording medium may be specially designed and configured for the present invention, or may be known and available to those skilled in the art of computer software.
  • The recording medium includes magnetic media such as hard disks, floppy disks, and magnetic tapes, optical recording media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language code such as that produced by a compiler but also high-level language code that can be executed by a computer using an interpreter or the like.
  • Such hardware devices may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.
  • Conventional location recognition methods based on sensor fusion have the advantage that various kinds of sensor data can be used easily, but if the sensor configuration changes, the location recognition method must also change. That is, they have the disadvantage that the sensor configuration is not flexible.
  • the present invention transforms the position recognition result using each sensor into a covariance matrix form, and based on this, it is possible to provide a fusion position recognition method and system that is robust to various environments and can be used in any sensor configuration.
  • The present invention supplements the flexibility and versatility of the position recognition method essential for robots through a covariance-matrix-based integration algorithm, which makes it easy to apply to various environments and removes restrictions on sensor configuration. Accordingly, the present invention has industrial applicability, since it can clearly be implemented in practice and has sufficient potential for commercialization.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A reinforced location recognition system through merging of results of multiple location recognition, according to an embodiment of the present invention, comprises: a sensor unit including a plurality of sensors that sense a robot to derive different types of sensing values; an observation unit including a plurality of observation modules that calculate, from the sensing values derived by the plurality of sensors, the location of the robot via a plurality of different location recognition algorithms and output a plurality of different types of observation values; a conversion unit including a plurality of conversion modules that convert the plurality of different types of observation values into a covariance matrix that is a Gaussian distribution; a merge unit for merging the Gaussian distribution of the plurality of observation values into one Gaussian distribution; and a prediction unit that derives a prediction value by predicting the location of the robot from the merged one Gaussian distribution.

Description

Reinforced location recognition method and system through covariance-matrix-based fusion of multiple location recognition results
The present invention relates to location recognition technology and, more particularly, to a method and system for enhanced location recognition through fusion of multiple location recognition results based on a covariance matrix.
In most robot systems, and especially mobile robot systems, location recognition, by which the robot determines where it is, is an essential technology, and various sensors and methods are being studied for it. Representative sensors used for location recognition include vision sensors that play the role of human eyes, GPS receivers that can obtain absolute coordinates from satellites, and LIDAR, which emits laser beams in multiple directions and measures distance from the reflected returns. Each of these sensors has unique characteristics, and location recognition algorithms have been developed to exploit them. However, each sensor also has weaknesses, so performance can degrade depending on the environment in which it is used.
A typical location recognition technology using a vision sensor is ORB. It extracts feature points (corners, boundary points, etc.) from the 2D image that is the output data of the vision sensor and stores them in the form of three-dimensional points. The problem with this algorithm is that the distance to a point is calculated by comparing the image in the current frame with the image in the previous frame, so the distance value is inaccurate and the scale drifts accordingly.
Location recognition technology using GPS is widely used in car navigation. As is well known, its error is on the order of meters, so it is difficult to use where precise location recognition is required.
Location recognition technologies using LIDAR (light detection and ranging) include EKF (extended Kalman filter) Localization and AMCL. Both algorithms use the wheel rotation values as a prediction and then correct that prediction with the LIDAR measurements. Because LIDAR measurements are more accurate than those of other sensors within 100 m, the resulting localization accuracy is also relatively high. However, problems such as lower resolution in the vertical direction than in the horizontal direction, limitations in environments where differences cannot be distinguished from laser range values alone, and the impossibility of complete measurement when there are obstacles around the LIDAR make it clearly difficult to apply in every environment or situation.
An object of the present invention, in view of the above, is to provide a method and system for enhanced location recognition through covariance-matrix-based fusion of multiple location recognition results that can be used flexibly and universally even when sensors of various kinds and formats are combined, thereby removing restrictions on sensor configuration.
To achieve the above object, a system for enhanced location recognition through fusion of multiple location recognition results according to a preferred embodiment of the present invention includes: a sensor unit including a plurality of sensors that sense a robot and derive sensing values of different formats; an observation unit including a plurality of observation modules that calculate the position of the robot from the sensing values derived by the plurality of sensors through a plurality of different location recognition algorithms and output a plurality of observation values in different formats; a transform unit including a plurality of transform modules that transform the plurality of observation values of different formats into covariance matrices representing Gaussian distributions; a merging unit that merges the Gaussian distributions of the plurality of observation values into one Gaussian distribution; and a prediction unit that derives a predicted value by predicting the position of the robot from the merged Gaussian distribution.
The system further includes a feedback unit that feeds the predicted value back to each of the plurality of transform modules of the transform unit; when the predicted value of the prediction unit is fed back from the feedback unit, each of the plurality of transform modules updates the covariance matrix of its observation values by merging the Gaussian distribution of the fed-back predicted value into the Gaussian distribution of the observation values converted into the covariance matrix.
The plurality of sensors include a LIDAR (light detection and ranging) sensor and a vision sensor.
The location recognition algorithm includes at least one of EKF (extended Kalman filter) Localization, EIF (extended information filter) Localization, PF (particle filter) Localization, AMCL (adaptive Monte Carlo localization), Visual SLAM (simultaneous localization and mapping), ORB (Oriented FAST and Rotated BRIEF)-SLAM, Direct SLAM, LSD-SLAM, and graph-based SLAM.
To achieve the above object, a method for enhanced location recognition through fusion of multiple location recognition results according to a preferred embodiment of the present invention includes: sensing, by each of a plurality of sensors of a sensor unit, a robot to derive sensing values of different formats; outputting, by a plurality of observation modules of an observation unit, a plurality of observation values in different formats by calculating the position of the robot from the sensing values derived by the plurality of sensors through a plurality of different location recognition algorithms; transforming, by a plurality of transform modules of a transform unit, the plurality of observation values of different formats into covariance matrices representing Gaussian distributions; merging, by a merging unit, the Gaussian distributions of the plurality of observation values into one Gaussian distribution; and deriving, by a prediction unit, a predicted value by predicting the position of the robot from the merged Gaussian distribution.
The method further includes, after the step of transforming into the covariance matrix and before the step of merging into one Gaussian distribution, when the predicted value of the prediction unit is fed back from a feedback unit, updating, by each of the plurality of transform modules, the covariance matrix of the observation values, which has the form of a Gaussian distribution, by merging the Gaussian distribution of the fed-back predicted value into the Gaussian distribution of the observation values converted into the covariance matrix.
The plurality of sensors include a LIDAR (light detection and ranging) sensor and a vision sensor.
The location recognition algorithm includes at least one of EKF (extended Kalman filter) Localization, EIF (extended information filter) Localization, PF (particle filter) Localization, AMCL (adaptive Monte Carlo localization), Visual SLAM (simultaneous localization and mapping), ORB (Oriented FAST and Rotated BRIEF)-SLAM, Direct SLAM, LSD-SLAM, and graph-based SLAM.
According to the present invention, the flexibility and versatility of existing sensor-fusion-based location recognition methods, which are essential for robots, are supplemented through a covariance-matrix-based integration algorithm, making the invention easy to apply to various environments and removing restrictions on sensor configuration.
FIG. 1 is a diagram for explaining the configuration of an enhanced location recognition system through fusion of multiple location recognition results based on a covariance matrix according to an embodiment of the present invention.
FIG. 2 is a flowchart for explaining a method for enhanced location recognition through fusion of multiple location recognition results based on a covariance matrix according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating a computing device according to an embodiment of the present invention.
Prior to the detailed description of the present invention, the terms and words used in this specification and the claims below should not be construed as being limited to their ordinary or dictionary meanings; based on the principle that an inventor may appropriately define terms to describe his or her own invention in the best way, they should be interpreted with meanings and concepts consistent with the technical idea of the present invention. Therefore, the embodiments described in this specification and the configurations shown in the drawings are only the most preferred embodiments of the present invention and do not represent all of its technical ideas, and it should be understood that various equivalents and modifications could have replaced them at the time of filing.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the accompanying drawings, identical components are denoted by the same reference numerals wherever possible. Detailed descriptions of well-known functions and configurations that could obscure the gist of the present invention are omitted. For the same reason, some components in the accompanying drawings are exaggerated, omitted, or schematically illustrated, and the size of each component does not fully reflect its actual size.
First, an enhanced location recognition system through fusion of multiple location recognition results based on a covariance matrix according to an embodiment of the present invention will be described with reference to FIG. 1.
Referring to FIG. 1, the enhanced location recognition system 10 (hereinafter abbreviated as the 'enhanced location recognition system') through fusion of multiple location recognition results according to an embodiment of the present invention includes a sensor unit 100, an observation unit 200, a transform unit 300, a merging unit 400, a prediction unit 500, and an update unit 600.
The sensor unit 100 includes a plurality of sensors 110 and 120. The plurality of sensors 110 and 120 are sensors of different kinds and have output values of different formats. Accordingly, the plurality of sensors 110 and 120 sense the robot to be observed and output sensing values in different formats, that is, according to the output format of each sensor. For example, the first sensor 110 may be a LIDAR (light detection and ranging) sensor, and the second sensor 120 may be a vision sensor. Although the sensor unit 100 is described in this embodiment as consisting of a LIDAR sensor and a vision sensor, the present invention is not limited thereto. The sensor unit 100 of the present invention may use two or more sensors, and there is no limitation on their types.
The observation unit 200 includes a plurality of observation modules 210, 220, 230, and 240. The plurality of observation modules 210, 220, 230, and 240 each run a location recognition algorithm, calculate a position through that algorithm, and output an observation value. Each of the plurality of observation modules 210, 220, 230, and 240 corresponds to one of the plurality of sensors of the sensor unit 100. Accordingly, each of the observation modules 210, 220, 230, and 240 calculates the position of the robot from the sensing value output by its corresponding sensor of the sensor unit 100 through its own location recognition algorithm, and the observation unit outputs a plurality of observation values in different formats.
In an embodiment of the present invention, the first observation module 210 applies an EKF (extended Kalman filter) Localization algorithm, the second observation module 220 applies a PF (particle filter) Localization algorithm, the third observation module 230 applies an EKF Localization algorithm, and the fourth observation module 240 applies an ORB (Oriented FAST and Rotated BRIEF)-SLAM (simultaneous localization and mapping) algorithm.
The first observation module 210 and the second observation module 220 correspond to the first sensor 110. Accordingly, when LIDAR data, which is the sensor value of the first sensor 110, is input, the first observation module 210 calculates the position of the robot from the LIDAR data using the EKF algorithm and outputs an observation value. In addition, when the LIDAR data of the first sensor 110 is input, the second observation module 220 calculates the position of the robot from the LIDAR data through the PF algorithm and outputs an observation value. The third observation module 230 and the fourth observation module 240 correspond to the second sensor 120. Accordingly, when vision sensor data, which is the sensor value of the second sensor 120, is input, the third observation module 230 calculates the position of the robot from the vision sensor data through the EKF algorithm and outputs an observation value. In addition, when the vision sensor data of the second sensor 120 is input, the fourth observation module 240 calculates the position of the robot from the vision sensor data through the ORB-SLAM algorithm and outputs an observation value.
Meanwhile, although four algorithms have been described as the location recognition algorithms, the present invention is not limited thereto; any location recognition algorithm capable of calculating the position of the robot from a sensor value may be used. Examples of such algorithms include EKF (extended Kalman filter) Localization, EIF (extended information filter) Localization, PF (particle filter) Localization, AMCL (adaptive Monte Carlo localization), Visual SLAM (simultaneous localization and mapping), ORB (Oriented FAST and Rotated BRIEF)-SLAM, Direct SLAM, LSD-SLAM, and graph-based SLAM.
The transform unit 300 includes a plurality of transform modules 310. Each transform module 310 corresponds one-to-one to one of the observation modules 210, 220, 230, and 240.
When an observation value output by its corresponding observation module 210, 220, 230, or 240 is input, the transform module 310 transforms the input observation value into a covariance matrix in the form of a Gaussian distribution. At this time, the transform module 310 determines whether the observation value output from the corresponding observation module is already a covariance matrix in the form of a Gaussian distribution. If it is not, the transform module 310 converts the observation value into a covariance matrix; if it already is, the transform module 310 bypasses the observation value. As described above, the first observation module 210 and the third observation module 230 use the EKF algorithm, and the second observation module 220 uses the PF algorithm, so the observation values output from the first to third observation modules 210, 220, and 230 are all output as covariance matrices in the form of Gaussian distributions, and the transform modules 310 corresponding to them bypass those observation values. On the other hand, the fourth observation module 240 uses the ORB-SLAM algorithm, so its observation value is output as a two-dimensional coordinate value, and the transform module 310 corresponding to the fourth observation module 240 separately transforms that observation value into a covariance matrix in the form of a Gaussian distribution. In this case, the transform module 310 can convert the two-dimensional coordinate value output by the fourth observation module 240 into a covariance matrix in the form of a Gaussian distribution through a transformation equation that uses the specification error of the second sensor 120 and the error of the algorithm itself as variables.
As described above, after converting the observation value of its corresponding observation module 210, 220, 230, or 240 into a covariance matrix in the form of a Gaussian distribution, the transform module 310 may, when a predicted value of the robot position is fed back from the update unit 600, update the covariance matrix of the observation value by merging the Gaussian distribution of the predicted value into the Gaussian distribution of the observation value. On the other hand, if no predicted value of the robot position has been fed back from the update unit 600, the transform module 310 outputs the covariance matrix of the observation value as it is.
When the observation values converted into covariance matrices having Gaussian distributions are input from the plurality of transform modules 310, the merging unit 400 merges the Gaussian distributions of the plurality of observation values into one Gaussian distribution.
The prediction unit 500 derives a predicted value by predicting the position of the robot from the one Gaussian distribution merged by the merging unit 400.
The update unit 600 feeds the output of the prediction unit 500 back to each of the plurality of transform modules 310 of the transform unit 300. Then, each of the plurality of transform modules 310 converts the observation value of its corresponding observation module 210, 220, 230, or 240 into a covariance matrix in the form of a Gaussian distribution and updates that covariance matrix by merging the Gaussian distribution of the predicted value into the Gaussian distribution of the observation value. Here, the Gaussian distribution of the fed-back predicted value may represent the probability of the robot's position at time t, and the observation value may represent the probability of the robot's position at time t+1. Therefore, by accumulating and merging Gaussian distributions as this feedback is repeated, the position of the robot can be predicted more precisely. If sensor noise occurs in the sensor unit 100, the Gaussian distributions of the covariance matrices in the transform unit 300 widen again, but as the feedback process is repeated, all of the covariance matrices gradually converge to one distribution. In other words, the feedback naturally acts as a noise filter.
다음으로, 본 발명의 실시예에 따른 공분산 행렬 기반의 다중 위치인식 결과 융합을 통한 강화 위치인식 방법에 대해서 설명하기로 한다. 도 2는 본 발명의 실시예에 따른 공분산 행렬 기반의 다중 위치인식 결과 융합을 통한 강화 위치인식 방법을 설명하기 위한 흐름도이다. Next, a method for reinforcing location recognition through fusion of multiple location recognition results based on a covariance matrix according to an embodiment of the present invention will be described. 2 is a flowchart for explaining a method for enhanced localization through fusion of multiple localization results based on a covariance matrix according to an embodiment of the present invention.
도 2를 참조하면, 센서부(100)의 복수의 센서(110, 120)는 S110 단계에서 로봇을 센싱하여 서로 다른 형식의 센싱값을 도출한다. 여기서, 복수의 센서(110, 120)는 서로 다른 종류의 센서이며, 서로 다른 형식의 출력값을 가진다. 즉, 복수의 센서(110, 120)는 관측 대상인 로봇을 센싱하여 센서(110, 120) 각각의 출력 형식에 따라 센싱값을 출력한다. 예컨대, 제1 센서(110)는 LIDAR(light detection and ranging) 센서이고, 제2 센서(120)는 비전(Vision) 센서가 될 수 있다. Referring to FIG. 2 , the plurality of sensors 110 and 120 of the sensor unit 100 sense the robot in step S110 to derive sensing values of different types. Here, the plurality of sensors 110 and 120 are different types of sensors, and have different types of output values. That is, the plurality of sensors 110 and 120 sense the robot to be observed and output the sensed values according to the respective output formats of the sensors 110 and 120 . For example, the first sensor 110 may be a light detection and ranging (LIDAR) sensor, and the second sensor 120 may be a vision sensor.
Each of the plurality of observation modules 210, 220, 230, and 240 corresponds to one of the plurality of sensors of the sensor unit 100. Accordingly, in step S120, each of the plurality of observation modules 210, 220, 230, and 240 calculates the position of the robot from the sensing value output by the corresponding sensor 110, 120 of the sensor unit 100 through its own location recognition algorithm, and the observation modules output a plurality of observation values of different formats. In an embodiment of the present invention, the EKF algorithm may be applied to the first observation module 210, the PF algorithm to the second observation module 220, the EKF algorithm to the third observation module 230, and the ORB-SLAM algorithm to the fourth observation module 240. In addition, the first observation module 210 and the second observation module 220 correspond to the first sensor 110, and the third observation module 230 and the fourth observation module 240 correspond to the second sensor 120. Accordingly, the first observation module 210 computes the position of the robot from the LIDAR data, which is the sensing value of the first sensor 110, using the EKF algorithm and outputs an observation value, and the second observation module 220 computes the position of the robot from the LIDAR data of the first sensor 110 using the PF algorithm and outputs an observation value. Likewise, the third observation module 230 computes the position of the robot from the vision sensor data, which is the sensing value of the second sensor 120, using the EKF algorithm and outputs an observation value, and the fourth observation module 240 computes the position of the robot from the vision sensor data of the second sensor 120 using the ORB-SLAM algorithm and outputs an observation value. This wiring is summarized in the sketch that follows.
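The sensor-to-algorithm assignment of this embodiment can be written down as a small configuration table. The sketch below is purely illustrative; the key names are not defined by the patent.

```python
# Hypothetical wiring of the embodiment's observation modules to sensors and
# algorithms (names are illustrative, not taken from the patent).
OBSERVATION_MODULES = {
    "observation_module_210": {"sensor": "first_sensor_110_lidar",   "algorithm": "EKF"},
    "observation_module_220": {"sensor": "first_sensor_110_lidar",   "algorithm": "PF"},
    "observation_module_230": {"sensor": "second_sensor_120_vision", "algorithm": "EKF"},
    "observation_module_240": {"sensor": "second_sensor_120_vision", "algorithm": "ORB-SLAM"},
}
```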
Each of the plurality of transformation modules 310 of the transformation unit 300 corresponds one-to-one to an observation module 210, 220, 230, 240. Accordingly, in step S130, when the observation value output by the corresponding observation module 210, 220, 230, 240 is input, each of the plurality of transformation modules 310 of the transformation unit 300 transforms the input observation value into a covariance matrix in the form of a Gaussian distribution. Here, if the observation value output by the corresponding observation module 210, 220, 230, 240 is not already a covariance matrix in the form of a Gaussian distribution, the transformation module 310 converts that observation value into a covariance matrix; if the observation value is already a covariance matrix, the transformation module 310 may bypass it. As described above, the first observation module 210 and the third observation module 230 use the EKF algorithm, and the second observation module 220 uses the PF algorithm. Accordingly, the observation values output from the first to third observation modules 210, 220, and 230 are all output as covariance matrices in the form of Gaussian distributions, so the transformation modules 310 corresponding to the first to third observation modules 210, 220, and 230 bypass those observation values. By contrast, the fourth observation module 240 uses the ORB-SLAM algorithm, so its observation value is output as a two-dimensional coordinate value, and the transformation module 310 corresponding to the fourth observation module 240 separately transforms that observation value into a covariance matrix in the form of a Gaussian distribution.
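A minimal sketch of this bypass-or-convert behaviour, assuming a fixed default covariance for a bare two-dimensional coordinate such as the ORB-SLAM output; the patent does not specify how that covariance would be chosen, so the value used here is an assumption.

```python
import numpy as np

def to_gaussian(observation, default_cov=np.diag([0.1, 0.1])):
    """Transformation-module sketch.

    An observation that already carries a covariance matrix (EKF/PF output)
    is passed through unchanged; a bare 2-D coordinate (e.g. ORB-SLAM output)
    is wrapped in a Gaussian with an assumed default covariance.
    """
    if isinstance(observation, tuple) and len(observation) == 2 \
            and hasattr(observation[1], "shape"):
        return observation                       # (mean, covariance): bypass
    mean = np.asarray(observation, dtype=float)  # bare coordinate value
    return mean, default_cov.copy()
```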
Next, in step S140 it is determined whether feedback exists. If feedback exists, in step S150 each of the plurality of transformation modules 310 of the transformation unit 300 updates the covariance matrix of its observation value, in the form of a Gaussian distribution, by merging the Gaussian distribution of the fed-back prediction value into the Gaussian distribution of the observation value that has been transformed into the covariance matrix. If the observation value is the probability of the robot's position at time t, the Gaussian distribution of the fed-back prediction value may be the probability of the robot's position at time t-1.
Next, when the observation values, which are covariance matrices having Gaussian distributions, are input from the plurality of transformation modules 310, the merging unit 400 merges the Gaussian distributions of the plurality of observation values into one Gaussian distribution in step S160.
Then, in step S170, the prediction unit 500 derives a prediction value, which predicts the position of the robot, from the single Gaussian distribution merged by the merging unit 400.
In step S180, the update unit 600 feeds the output of the prediction unit 500 back to each of the plurality of transformation modules 310 of the transformation unit 300.
Since steps S110 to S180 described above are intended to track the position of the robot continuously, they are repeated until the process of tracking the position of the robot is terminated. A compact sketch of this loop follows.
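The S110 to S180 flow can be outlined as a single loop. The sketch below treats every unit as a hypothetical callable (the patent defines no such API) and is meant only to show the order of the steps and the feedback path.

```python
def localization_loop(sensors, observe, transform, update, merge, predict, running):
    """Sketch of the S110-S180 flow.

    All callables are hypothetical stand-ins for the units described in the
    text (sensor unit, observation unit, transformation unit, update/feedback,
    merging unit, prediction unit); none of these names come from the patent.
    """
    prediction = None
    while running():
        sensing_values = [s.read() for s in sensors]               # S110: sensing
        observations = observe(sensing_values)                     # S120: per-module pose estimates
        gaussians = [transform(o) for o in observations]           # S130: Gaussian covariance form
        if prediction is not None:                                 # S140: feedback available?
            gaussians = [update(g, prediction) for g in gaussians]  # S150: merge fed-back prediction
        merged = merge(gaussians)                                   # S160: single Gaussian
        prediction = predict(merged)                                # S170: predicted pose, fed back (S180)
    return prediction
```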
FIG. 3 is a diagram illustrating a computing device according to an embodiment of the present invention. The computing device TN100 of FIG. 3 may be an apparatus for the reinforced location recognition system and method described herein.
The computing device TN100 may include at least one processor TN110, a transceiver TN120, and a memory TN130. The computing device TN100 may further include a storage device TN140, an input interface device TN150, an output interface device TN160, and the like. The components included in the computing device TN100 may be connected by a bus TN170 to communicate with each other.
The processor TN110 may execute a program command stored in at least one of the memory TN130 and the storage device TN140. The processor TN110 may be a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which the methods according to embodiments of the present invention are performed. The processor TN110 may be configured to implement the procedures, functions, and methods described in connection with the embodiments of the present invention, and may control each component of the computing device TN100.
Each of the memory TN130 and the storage device TN140 may store various information related to the operation of the processor TN110, and each may be configured as at least one of a volatile storage medium and a non-volatile storage medium. For example, the memory TN130 may include at least one of a read-only memory (ROM) and a random access memory (RAM).
The transceiver TN120 may transmit or receive a wired signal or a wireless signal, and may be connected to a network to perform communication.
Meanwhile, the method according to the embodiments of the present invention described above may be implemented in the form of a program readable by various computer means and recorded on a computer-readable recording medium. Here, the recording medium may include program commands, data files, data structures, and the like, alone or in combination. The program commands recorded on the recording medium may be specially designed and configured for the present invention, or may be known and available to those skilled in the art of computer software. For example, the recording medium includes magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program commands, such as ROM, RAM, and flash memory. Examples of program commands include not only machine language code such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like. Such a hardware device may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.
On the other hand, the conventional location recognition method based on sensor fusion has the advantage that data from multiple sensors can easily be used, but when the sensor configuration changes, the location recognition method may also have to change; that is, the sensor configuration is not flexible. In contrast, the present invention transforms the location recognition result of each sensor into the form of a covariance matrix and, based on this, can provide a fused location recognition method and system that is robust in various environments and can be used with any sensor configuration.
While the present invention has been described above using several preferred embodiments, these embodiments are illustrative and not restrictive. Those of ordinary skill in the art to which the present invention pertains will understand that various changes and modifications can be made in accordance with the doctrine of equivalents without departing from the spirit of the present invention and the scope of the rights set forth in the appended claims.
The present invention supplements the flexibility and versatility of the location recognition method essential for robots through a covariance-matrix-based integration (merging) algorithm, thereby making it easy to apply in various environments and removing restrictions on sensor configuration. Accordingly, the present invention has industrial applicability, since it not only has sufficient potential for commercialization or business but can also be clearly implemented in practice.

Claims (8)

  1. A reinforced location recognition system using fusion of multiple location recognition results, the system comprising:
    a sensor unit including a plurality of sensors configured to sense a robot and derive sensing values of different formats;
    an observation unit including a plurality of observation modules configured to calculate a position of the robot from the sensing values derived by the plurality of sensors through a plurality of different location recognition algorithms and to output a plurality of observation values of different formats;
    a transformation unit including a plurality of transformation modules configured to transform the plurality of observation values of different formats into covariance matrices having Gaussian distributions;
    a merging unit configured to merge the Gaussian distributions of the plurality of observation values into one Gaussian distribution; and
    a prediction unit configured to derive a prediction value by predicting the position of the robot from the merged single Gaussian distribution.
  2. The reinforced location recognition system of claim 1, further comprising:
    a feedback unit configured to feed the prediction value back to each of the plurality of transformation modules of the transformation unit,
    wherein, when the prediction value of the prediction unit is fed back from the feedback unit, each of the plurality of transformation modules updates the covariance matrix of its observation value by merging the Gaussian distribution of the fed-back prediction value into the Gaussian distribution of the observation value transformed into the covariance matrix.
  3. The reinforced location recognition system of claim 1, wherein the plurality of sensors include a LIDAR (light detection and ranging) sensor and a vision sensor.
  4. The reinforced location recognition system of claim 1, wherein the location recognition algorithms include at least one of EKF (extended Kalman filter) localization, EIF (extended information filter) localization, PF (particle filter) localization, AMCL (adaptive Monte Carlo localization), visual SLAM (simultaneous localization and mapping), ORB (Oriented FAST and Rotated BRIEF)-SLAM, Direct SLAM, LSD-SLAM, and graph-based SLAM.
  5. A reinforced location recognition method using fusion of multiple location recognition results, the method comprising:
    deriving, by each of a plurality of sensors of a sensor unit, sensing values of different formats by sensing a robot;
    calculating, by each of a plurality of observation modules of an observation unit, a position of the robot from the sensing values derived by the plurality of sensors through a plurality of different location recognition algorithms, and outputting a plurality of observation values of different formats;
    transforming, by a plurality of transformation modules of a transformation unit, the plurality of observation values of different formats into covariance matrices having Gaussian distributions;
    merging, by a merging unit, the Gaussian distributions of the plurality of observation values into one Gaussian distribution; and
    deriving, by a prediction unit, a prediction value by predicting the position of the robot from the merged single Gaussian distribution.
  6. The reinforced location recognition method of claim 5, further comprising, after the transforming into the covariance matrices and before the merging into the one Gaussian distribution:
    when the prediction value of the prediction unit is fed back from a feedback unit, updating, by each of the plurality of transformation modules, the covariance matrix of the observation value in the form of a Gaussian distribution by merging the Gaussian distribution of the fed-back prediction value into the Gaussian distribution of the observation value transformed into the covariance matrix.
  7. The reinforced location recognition method of claim 5, wherein the plurality of sensors include a LIDAR (light detection and ranging) sensor and a vision sensor.
  8. The reinforced location recognition method of claim 5, wherein the location recognition algorithms include at least one of EKF (extended Kalman filter) localization, EIF (extended information filter) localization, PF (particle filter) localization, AMCL (adaptive Monte Carlo localization), visual SLAM (simultaneous localization and mapping), ORB (Oriented FAST and Rotated BRIEF)-SLAM, Direct SLAM, LSD-SLAM, and graph-based SLAM.
PCT/KR2020/009056 2019-12-06 2020-07-09 Reinforced location recognition method and system through merging of results of multiple location recognition based on covariance matrix WO2021112362A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190161526A KR102259247B1 (en) 2019-12-06 2019-12-06 Strong Localization Method and System through Convergence of Multiple Localization Based on Covariance Matrix
KR10-2019-0161526 2019-12-06

Publications (1)

Publication Number Publication Date
WO2021112362A1 true WO2021112362A1 (en) 2021-06-10

Family

ID=76221973

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/009056 WO2021112362A1 (en) 2019-12-06 2020-07-09 Reinforced location recognition method and system through merging of results of multiple location recognition based on covariance matrix

Country Status (2)

Country Link
KR (1) KR102259247B1 (en)
WO (1) WO2021112362A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130049610A (en) * 2011-11-04 2013-05-14 삼성전자주식회사 Mobile object and walking robot
KR20140003987A (en) * 2012-06-25 2014-01-10 서울대학교산학협력단 Slam system for mobile robot based on vision sensor data and motion sensor data fusion
US20150283700A1 (en) * 2014-04-02 2015-10-08 The Boeing Company Localization Within an Environment Using Sensor Fusion
KR20170109806A (en) * 2016-03-22 2017-10-10 현대자동차주식회사 Apparatus for avoiding side crash in vehicle and method thereof
KR20190131402A (en) * 2018-05-16 2019-11-26 주식회사 유진로봇 Moving Object and Hybrid Sensor with Camera and Lidar

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101973709B1 (en) 2016-11-11 2019-04-30 고려대학교 산학협력단 Method of collision detection of robot arm manipulator


Also Published As

Publication number Publication date
KR102259247B1 (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN109084746B (en) Monocular mode for autonomous platform guidance system with auxiliary sensor
US10549430B2 (en) Mapping method, localization method, robot system, and robot
CA3081608C (en) Method of avoiding collision of unmanned aerial vehicle
CN109116374B (en) Method, device and equipment for determining distance of obstacle and storage medium
US20200088858A1 (en) Multi-sensor calibration method, multi-sensor calibration device, computer device, medium and vehicle
US20150142248A1 (en) Apparatus and method for providing location and heading information of autonomous driving vehicle on road within housing complex
WO2021187793A1 (en) Electronic device for detecting 3d object on basis of fusion of camera and radar sensor, and operating method therefor
JP6830140B2 (en) Motion vector field determination method, motion vector field determination device, equipment, computer readable storage medium and vehicle
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
CN108345836A (en) Landmark identification for autonomous vehicle
US20180275663A1 (en) Autonomous movement apparatus and movement control system
CN114624460B (en) System and method for mapping a vehicle environment
US20200094848A1 (en) Vehicle perception system on-line dianostics and prognostics
CN109737971A (en) Vehicle-mounted assisting navigation positioning system, method, equipment and storage medium
KR20190062852A (en) System, module and method for detecting pedestrian, computer program
WO2020157138A1 (en) Object detection apparatus, system and method
KR101030317B1 (en) Apparatus for tracking obstacle using stereo vision and method thereof
CN113838125A (en) Target position determining method and device, electronic equipment and storage medium
US20220068017A1 (en) Method of adjusting grid spacing of height map for autonomous driving
WO2021112362A1 (en) Reinforced location recognition method and system through merging of results of multiple location recognition based on covariance matrix
CN115565058A (en) Robot, obstacle avoidance method, device and storage medium
CN110542422B (en) Robot positioning method, device, robot and storage medium
CN117496515A (en) Point cloud data labeling method, storage medium and electronic equipment
Youssefi et al. Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars
JP7466144B2 (en) PROGRAM, AUTONOMOUS MOBILE DEVICE MANAGEMENT DEVICE, MANAGEMENT METHOD AND MANAGEMENT SYSTEM

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20897187

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20897187

Country of ref document: EP

Kind code of ref document: A1