WO2020071573A1 - Location information system using deep learning and method for providing same - Google Patents

Location information system using deep learning and method for providing same

Info

Publication number
WO2020071573A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
deep learning
location information
unit
Prior art date
Application number
PCT/KR2018/012092
Other languages
French (fr)
Korean (ko)
Inventor
이준혁
Original Assignee
(주)한국플랫폼서비스기술
Priority date
Filing date
Publication date
Application filed by (주)한국플랫폼서비스기술
Publication of WO2020071573A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing

Definitions

  • the present invention relates to a location information system using deep learning and a method for providing the same, which provide more accurate location information and services by applying deep learning on the basis of a SLAM technique.
  • location information is provided to users in various ways.
  • a GPS system based on a satellite navigation system is a representative example.
  • SLAM (Simultaneous Localization And Map-Building)
  • the mobile device may select a first keyframe-based SLAM map of the local environment using one or more received images.
  • the individual localization of the mobile device within the local environment may be determined, and the individual localization may be based on the keyframe-based SLAM map.
  • the mobile device can transmit the first keyframe to the server and receive a first global localization response indicating a correction to the local map on the mobile device.
  • the first global localization response may include rotation, translation, and scale information.
  • the server receives keyframes from the mobile device and localizes them within the server map by matching keyframe features received from the mobile device to server map features.
  • Korean Patent Publication No. 10-2013-0134986 (SLAM system and method for a mobile robot receiving photo input about the environment from a user, published December 10, 2013) discloses a mobile robot that receives photo input about the environment from the user,
  • namely a SLAM system comprising: a user terminal that receives information or commands about the environment from the user and transmits them to the mobile robot;
  • and a mobile robot SLAM system characterized by comprising a mobile robot that performs map creation based on the information and commands transmitted from the user terminal and the surrounding-environment information and movement information obtained from a data acquisition device; the data acquisition device, mounted on the mobile robot, as a means of obtaining information about the surrounding environment or measuring the movement of the mobile robot; a second processing device that uses the data obtained from the data acquisition device and the user terminal to perform real-time map creation, manage and modify the created map, and determine the location and route of the mobile robot based on the created map; and a technology for controlling the driving of the mobile robot based on the information processed by the second processing device.
  • to compensate for the shortcomings of the SLAM technology described above, the object of the present invention is to provide a location information system and an information provision method that use deep learning to provide location information that is more precise and adaptable to changes in the external environment.
  • a main server (10) consisting of an image processing unit (11) that processes an externally input image and then generates a virtual map, a communication unit (12) that communicates with a local server (20) and a terminal (30), and a data storage unit (13) that stores data;
  • a local server (20) that is connected to the main server (10), transmits information input for each region to the main server (10), and stores regional information in a distributed manner;
  • and a terminal (30), to provide a location information system using deep learning.
  • the image processing unit (11) consists of a keyframe extraction unit (111) that extracts keyframes from a raw image input through the terminal (30) or an external imaging device, a SLAM processing unit (112) that applies SLAM processing to the image whose keyframes have been extracted by the keyframe extraction unit (111), and a data processing unit (113) that uses deep learning to arrange the SLAM-processed image from the SLAM processing unit (112) on a virtual mainframe,
  • a data correction unit (114) that corrects distorted information by matching the data processed by the data processing unit (113) with external information,
  • a landmark extraction unit (115) that extracts landmarks based on the information corrected by the data correction unit (114),
  • and a virtual map generation unit (116) that generates a virtual map by applying the landmarks extracted by the landmark extraction unit (115); the data processing unit (113) arranges the SLAM-processed images on a virtual mainframe, where continuously captured images are SLAM-processed for each time zone and then arranged on the virtual mainframe by time zone to identify the overlapping portions, and a centripetal point is found through deep learning that repeats this operation.
  • the local server (20) stores raw images input through terminals located in its region and includes an image storage unit (21) that uploads processed images for comparative analysis,
  • a communication unit (22) that communicates with the main server (10) and the terminal (30), a data storage unit (23) that stores data, and an information provision unit (24) that provides information to the terminal (30) and the main server (10),
  • and the image storage unit (21) consists of an image input unit (211) that receives and stores raw images input from a terminal or separate imaging equipment, and a processed image uploading unit (212) that compares the processed and stored backup data with the input raw image information and uploads the processed image for the corresponding region, thereby providing a location information system using deep learning.
  • a method for providing location information using deep learning, which is another means of solving the problems of the present invention, comprises: an image information acquisition step (S10) of acquiring an image from a terminal of a user or an administrator; a keyframe extraction step (S20) of extracting keyframes for each time zone from the acquired image information; a SLAM step (S30) of SLAM-processing the image information for each keyframe and storing it in chronological order; a data processing step (S40) of deep-learning the information stored after keyframe-by-keyframe SLAM processing and arranging it on a virtual mainframe; a processed image information correction step (S50) of correcting distorted information by matching the processed data obtained through deep learning with external information; a landmark extraction step (S60) of extracting landmarks based on the corrected information; a virtual map generation step (S70) of generating a virtual map by applying the extracted landmark information; and a location information provision step (S80) of providing location information using the virtual map.
  • S10 image information acquisition step
  • S20 keyframe extraction step
  • S30 SLAM processing step
  • the SLAM-processed information obtained for each time zone is arranged on the virtual mainframe to identify the overlapping portions, and a centripetal point is found through deep learning that repeats this operation.
  • in the processed image information correction step (S50), accurate location information is obtained by matching the centripetal point information, arranged on the virtual mainframe and obtained through deep learning in the previous step (S40), with external information; distorted information that may occur during image capture is corrected by matching the centripetal points of the acquired objects with the GPS information or location information of those objects obtained from external information such as maps, aerial photographs, and cadastral maps.
  • FIG. 1 is a configuration diagram of the location information system using deep learning of the present invention.
  • FIG. 2 is a configuration diagram of the main server (10) according to the present invention.
  • FIG. 3 is a configuration diagram of the local server (20) according to the present invention.
  • FIG. 4 is a flowchart of the method for providing location information using deep learning according to the present invention.
  • the location information system using deep learning of the present invention consists of a main server (10); a local server (20) that is connected to the main server (10), transmits information input for each region to the main server (10), and stores regional information in a distributed manner; and a terminal (30).
  • the main server (10) consists of an image processing unit (11) that processes an externally input image and then generates a virtual map, a communication unit (12) that communicates with the local server (20) and the terminal (30), and a data storage unit (13) that stores data.
  • the image processing unit (11) consists of a keyframe extraction unit (111) that extracts keyframes from a raw image input through the terminal (30) or an external imaging device, a SLAM processing unit (112) that applies SLAM processing to the image whose keyframes have been extracted by the keyframe extraction unit (111), a data processing unit (113) that uses deep learning to arrange the SLAM-processed image on a virtual mainframe,
  • a data correction unit (114) that corrects distorted information by matching the processed data with external information, a landmark extraction unit (115) that extracts landmarks based on the information corrected by the data correction unit (114), and a virtual map generation unit (116) that generates a virtual map by applying the landmarks extracted by the landmark extraction unit (115).
  • the keyframe extraction unit (111) extracts keyframes from the image information of the input raw image; it extracts keyframes for each time zone from continuously captured video so that changes in the position of the terminal can be compensated.
  • the SLAM processing unit (112) applies SLAM processing to the input image information based on the keyframes extracted by the keyframe extraction unit (111).
  • SLAM (Simultaneous Localization And Map-Building) is a technique that recognizes terrain or features by binarizing or coding the recognized image; a detailed description is omitted here since it is a known technique.
  • the data processing unit (113) arranges the SLAM-processed images on a virtual mainframe.
  • continuously captured images are SLAM-processed for each time zone and then arranged on the virtual mainframe by time zone; in this way the overlapping portions are identified, and a centripetal point is found through deep learning that repeats this operation.
  • the data correction unit (114) determines the accurate location information of the target terrain or feature by matching the centripetal point information, arranged on the virtual mainframe by the data processing unit (113) and obtained through deep learning, with external information.
  • the external information used is either the GPS information or the location information of the object obtained from sources such as maps, aerial photographs, and cadastral maps.
  • the landmark extraction unit (115) extracts landmarks based on the information corrected by the data correction unit (114); more specifically, it removes elements whose shape and position may change over time, and extracts objects such as buildings whose position and shape change relatively little, applying them as landmarks.
  • the virtual map generation unit (116) generates a virtual map by placing the landmarks extracted by the landmark extraction unit (115) on a map.
  • the communication unit (12) performs communication with the local server (20) provided for each region and with the terminal (30) used to access the service.
  • an external Internet network or a separate dedicated connection is used as the communication method.
  • the data storage unit (13) temporarily stores the raw image input from the terminal (30) or other sources, and temporarily stores the time-zone SLAM images produced from the input raw image as well as the images processed in later stages.
  • when a user requests a service through the terminal (30), if backup data for the corresponding region exists, it is retrieved, loaded into the data storage unit (13) of the main server (10), and stored, after which the service is carried out.
  • the local server (20) consists of an image storage unit (21) that stores raw images input through terminals located in its region and uploads processed images for comparative analysis, a communication unit (22) that communicates with the main server (10) and the terminal (30), a data storage unit (23) that stores data, and an information provision unit (24) that provides information to the terminal (30) and the main server (10).
  • the image storage unit (21) consists of an image input unit (211) that receives and stores raw images input from a terminal or separate imaging equipment, and a processed image uploading unit (212) that compares the processed and stored backup data with the input raw image information and uploads the processed image for the corresponding region.
  • as the terminal (30), any communication device with a capture function possessed by an administrator or a user can be used.
  • the terminal (30) can communicate with the main server (10) or the local server (20) through the Internet, which is an external network, and a separate APP or management program for the service can be downloaded, installed, and used, although this is not a limitation.
  • FIG. 4 is a flowchart of the method for providing location information using deep learning.
  • the method for providing location information using deep learning includes an image information acquisition step (S10) of acquiring an image from a terminal of a user or an administrator, and a keyframe extraction step of extracting keyframes for each time zone from the image information acquired in this way.
  • the image information acquisition step (S10) refers to an image captured by the terminal of a user or administrator, or an image captured while moving through the field with separate imaging equipment.
  • the acquired image is input to the local server (20) located in each region; it can be transmitted by accessing an APP installed on the terminal or an open management program.
  • keyframes are extracted for each time zone because, when video is captured continuously through the terminal, the keyframes can shift as the terminal's position changes.
  • when the GPS information of the terminal is used, whether the terminal is moving and its direction of movement can be obtained from the time-zone GPS location information recorded together with the video during capture, from which the movement speed of the terminal can be determined.
  • the keyframe acquisition interval is then set automatically according to the movement speed.
  • the installation position of a base station is determined by the frequency band of the terminal, and the movement direction and movement speed of the terminal can be determined from handovers between adjacent base stations.
  • utility poles have a fixed installation spacing, so the movement speed of the terminal can be measured from the video time taken to move from one pole to the next adjacent pole.
  • alternatively, the change in distance is measured from the video time using triangulation based on the change in the apparent height of a pole.
  • the image information for each keyframe is SLAM-processed and stored in chronological order.
  • the image information of each time zone is SLAM-processed and the processed information is stored in chronological order.
  • when the information is stored, an identification code identifying the place and time is assigned to each piece of information.
  • the data processing step (S40), in which the information stored after keyframe-by-keyframe SLAM processing is deep-learned and arranged on the virtual mainframe, SLAM-processes the continuous image information in order to find the centripetal points needed to determine the exact positions of the objects in the image.
  • the SLAM-processed information obtained for each time zone is arranged on the virtual mainframe to identify the overlapping portions, and the centripetal points are found through deep learning that repeats this operation.
  • the processed image information correction step (S50), which corrects distorted information by matching the processed data obtained through deep learning with external information, obtains accurate location information by matching the centripetal point information, arranged on the virtual mainframe and obtained through deep learning in the previous step (S40), with external information.
  • the distorted information that may occur during image capture is corrected by matching the centripetal points of the acquired objects with the GPS information or location information of those objects obtained from external information such as maps, aerial photographs, and cadastral maps.
  • the landmark extraction step (S60) of extracting landmarks based on the information corrected in step S50 is then carried out.
  • elements that may change in shape and position over time are removed, and objects such as buildings with relatively little change in position or shape are extracted and applied as landmarks.
  • SLAM information with a high exposure frequency, regardless of the position from which the video was captured, is extracted from the time-zone images, matched with location information from external sources such as maps, aerial photographs, and GPS, and then set as landmarks.
  • the distinction between ground fixtures such as mailboxes and trees and moving objects such as cars is also made through differences in exposure frequency, in the same way as landmark setting, and this is also applied to the virtual map.
  • a virtual map generation step (S70) of generating a virtual map by applying the extracted landmark information is then carried out.
  • for each landmark, the time-zone SLAM information of the corresponding object, taken from the images processed in step S30 in which the image information for each keyframe was SLAM-processed and stored in chronological order, is stored and provided as DATA.
  • in the location information provision step (S80) of providing location information using the virtual map generated in step S70, when a user is to be provided with the service, an image of the corresponding region is input through a terminal or similar device.
  • the input image is SLAM-processed in real time, landmarks are extracted, and accurate location information is provided by comparing the extracted landmarks with the database in which the time-zone SLAM information for each landmark is stored.
  • when the image of the corresponding region input through the user's terminal is SLAM-processed in real time to extract landmarks, a plurality of pieces of SLAM processing information are deep-learned so that accuracy can be improved.
  • as in step (S40), a plurality of pieces of SLAM processing information are repeatedly overlapped in one frame, the centripetal point is found, and the landmark is confirmed; location information and information such as the virtual map are then provided through these matching steps.
  • for example, a visually impaired person moving on foot can be guided along a route in a desired area or informed of the location of terrain or ground objects that interfere with walking, thereby preventing walking accidents.
  • 211: image input unit, 212: processed image uploading unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a location information system using deep learning and a method for providing the same, so as to provide location information and services by deep learning on the basis of a SLAM technique. To this end, the present invention provides a location information system using deep learning, comprising: a main server (10) consisting of an image processing unit (11) for generating a virtual map after processing an externally input image, a communication unit (12) for communicating with a local server (20) and a terminal (30), and a data storage unit (13) for storing data; the local server (20), connected to the main server (10), for transmitting information input for each region to the main server (10) and for distributing and storing regional information; and the terminal (30). In addition, the present invention provides a method comprising: an image information acquisition step (S10); a keyframe extraction step (S20); a SLAM processing step (S30); a data processing step (S40); a processed image information correction step (S50); a landmark extraction step (S60); a virtual map generation step (S70); and a location information provision step (S80).

Description

Location information system using deep learning and method for providing the same
The present invention relates to a location information system using deep learning and a method for providing the same, which provide more accurate location information and services by applying deep learning on the basis of a SLAM technique.
In general, location information is provided to users in various ways; among these, a GPS system based on a satellite navigation system is the most representative.
However, such a GPS system relies on communication with satellites orbiting the earth, so it cannot be used at all when satellite communication is unavailable, and its positional accuracy improves only when communication with a certain number of satellites or more is maintained.
Moreover, even GPS has an error of several meters, so it can only determine an approximate position and is insufficient for providing location information for pedestrian wayfinding or autonomous driving.
Accordingly, various methods have been proposed. One of them is SLAM (Simultaneous Localization And Map-Building), a technology in which a robot moving through an unknown environment builds a map of that environment without external assistance, using only the sensors attached to it; among such approaches, techniques that capture images of terrain and features and analyze them to determine location have been proposed.
A representative technology applying such SLAM is Korean Patent Publication No. 10-2016-0003731 (Wide-area localization from SLAM maps, published January 11, 2016), which discloses exemplary methods, apparatuses, and systems for performing wide-area localization from SLAM (simultaneous localization and mapping) maps. A mobile device may select a first keyframe-based SLAM map of the local environment using one or more received images. An individual localization of the mobile device within the local environment may be determined, based on the keyframe-based SLAM map. The mobile device may transmit the first keyframe to a server and receive a first global localization response indicating a correction to the local map on the mobile device. The first global localization response may include rotation, translation, and scale information. The server receives keyframes from the mobile device and localizes them within the server map by matching keyframe features received from the mobile device to server map features.
In addition, Korean Patent Publication No. 10-2013-0134986 (SLAM system and method for a mobile robot receiving photo input about the environment from a user, published December 10, 2013) discloses a SLAM system for a mobile robot that receives photo input about the environment from a user, comprising: a user terminal that receives information or commands about the environment from the user and transmits them to the mobile robot; and a mobile robot that performs map creation based on the information and commands transmitted from the user terminal and the surrounding-environment information and movement information obtained from a data acquisition device. The system further includes the data acquisition device, mounted on the mobile robot, as a means of obtaining information about the surrounding environment or measuring the movement of the mobile robot; a second processing device that uses the data obtained from the data acquisition device and the user terminal to perform real-time map creation, manage and modify the created map, and determine the location and route of the mobile robot based on the created map; and a technology for controlling the driving of the mobile robot based on the information processed by the second processing device.
However, since the prior arts above apply SLAM to image information alone, they are vulnerable to changes in the external environment; when the external environment changes they can no longer be applied, which limits the services they can provide.
To compensate for the shortcomings of the SLAM technology described above, an object of the present invention is to provide a location information system and an information provision method that use deep learning to provide location information that is more precise and can adapt to changes in the external environment.
To solve the problems of the present invention, a location information system using deep learning is provided, comprising: a main server (10) consisting of an image processing unit (11) that processes an externally input image and then generates a virtual map, a communication unit (12) that communicates with a local server (20) and a terminal (30), and a data storage unit (13) that stores data; the local server (20), which is connected to the main server (10), transmits information input for each region to the main server (10), and stores regional information in a distributed manner; and the terminal (30).
Here, the image processing unit (11) consists of a keyframe extraction unit (111) that extracts keyframes from a raw image input through the terminal (30) or an external imaging device; a SLAM processing unit (112) that applies SLAM processing to the image whose keyframes have been extracted by the keyframe extraction unit (111); a data processing unit (113) that uses deep learning to arrange the SLAM-processed image from the SLAM processing unit (112) on a virtual mainframe; a data correction unit (114) that corrects distorted information by matching the data processed by the data processing unit (113) with external information; a landmark extraction unit (115) that extracts landmarks based on the information corrected by the data correction unit (114); and a virtual map generation unit (116) that generates a virtual map by applying the landmarks extracted by the landmark extraction unit (115). The data processing unit (113) arranges the SLAM-processed images on a virtual mainframe: continuously captured images are SLAM-processed for each time zone and then arranged on the virtual mainframe by time zone to identify the overlapping portions, and a centripetal point is found through deep learning that repeats this operation.
In addition, the local server (20) consists of an image storage unit (21) that stores the raw images input through terminals located in its region and uploads processed images for comparative analysis, a communication unit (22) that communicates with the main server (10) and the terminal (30), a data storage unit (23) that stores data, and an information provision unit (24) that provides information to the terminal (30) and the main server (10). The image storage unit (21) consists of an image input unit (211) that receives and stores raw images input from a terminal or separate imaging equipment, and a processed image uploading unit (212) that compares the processed and stored backup data with the input raw image information and uploads the processed image for the corresponding region, thereby providing a location information system using deep learning.
As another means of solving the problems of the present invention, a method for providing location information using deep learning is provided, comprising: an image information acquisition step (S10) of acquiring an image from a terminal of a user or an administrator; a keyframe extraction step (S20) of extracting keyframes for each time zone from the acquired image information; a SLAM step (S30) of SLAM-processing the image information for each keyframe and storing it in chronological order; a data processing step (S40) of deep-learning the information stored after keyframe-by-keyframe SLAM processing and arranging it on a virtual mainframe; a processed image information correction step (S50) of correcting distorted information by matching the processed data obtained through deep learning with external information; a landmark extraction step (S60) of extracting landmarks based on the corrected information; a virtual map generation step (S70) of generating a virtual map by applying the extracted landmark information; and a location information provision step (S80) of providing location information using the virtual map.
In the data processing step (S40), the SLAM-processed information obtained for each time zone is arranged on the virtual mainframe to identify the overlapping portions, and a centripetal point is found through deep learning that repeats this operation. In the processed image information correction step (S50), accurate location information is obtained by matching the centripetal point information, which was arranged on the virtual mainframe and obtained through deep learning in the previous step (S40), with external information; specifically, distorted information that may occur during image capture is corrected by matching the centripetal points of the acquired objects with the GPS information or location information of those objects obtained from external information such as maps, aerial photographs, and cadastral maps. Providing this method for providing location information using deep learning allows the problems of the present invention to be solved more effectively.
By providing the location information system using deep learning of the present invention and the method for providing it, highly reliable location information can be provided that is less affected by changes in the external environment.
FIG. 1 is a configuration diagram of the location information system using deep learning of the present invention.
FIG. 2 is a configuration diagram of the main server (10) according to the present invention.
FIG. 3 is a configuration diagram of the local server (20) according to the present invention.
FIG. 4 is a flowchart of the method for providing location information using deep learning according to the present invention.
Hereinafter, the location information system using deep learning of the present invention and the method for providing it are described in detail with reference to the drawings, so that those skilled in the art can easily practice them.
FIG. 1 is a configuration diagram of the location information system using deep learning of the present invention, FIG. 2 is a configuration diagram of the main server (10) according to the present invention, and FIG. 3 is a configuration diagram of the local server (20) according to the present invention.
Described in detail with reference to FIGS. 1 to 3, the location information system using deep learning of the present invention consists of a main server (10); a local server (20) that is connected to the main server (10), transmits information input for each region to the main server (10), and stores regional information in a distributed manner; and a terminal (30).
The main server (10) consists of an image processing unit (11) that processes an externally input image and then generates a virtual map, a communication unit (12) that communicates with the local server (20) and the terminal (30), and a data storage unit (13) that stores data.
Here, the image processing unit (11) consists of a keyframe extraction unit (111) that extracts keyframes from a raw image input through the terminal (30) or an external imaging device; a SLAM processing unit (112) that applies SLAM processing to the image whose keyframes have been extracted by the keyframe extraction unit (111); a data processing unit (113) that uses deep learning to arrange the SLAM-processed image on a virtual mainframe; a data correction unit (114) that corrects distorted information by matching the processed data with external information; a landmark extraction unit (115) that extracts landmarks based on the information corrected by the data correction unit (114); and a virtual map generation unit (116) that generates a virtual map by applying the landmarks extracted by the landmark extraction unit (115).
The keyframe extraction unit (111) extracts keyframes from the image information of the input raw image; it extracts keyframes for each time zone from continuously captured video so that changes in the position of the terminal can be compensated.
Keyframes are extracted, and subsequent work is based on them, in order to reduce errors arising from the processing of continuous video during later stages such as SLAM processing, data processing through deep learning, data correction through matching with external information, and landmark extraction.
The SLAM processing unit (112) applies SLAM processing to the input image information based on the keyframes extracted by the keyframe extraction unit (111).
Here, SLAM (Simultaneous Localization And Map-Building) is a technique that recognizes terrain or features by binarizing or coding the recognized image; a detailed description is omitted since it is a known technique.
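As a rough illustration of the binarization mentioned above, the sketch below thresholds a grayscale frame into a binary mask that later stages could code and match. The patent does not specify the binarization scheme, so the fixed global threshold and the helper name binarize_frame are illustrative assumptions only.

    import numpy as np

    def binarize_frame(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
        """Binarize a grayscale frame (H x W, values 0-255) into 0/1.

        The text only states that the recognized image is binarized or coded;
        a fixed global threshold is an assumption made here for illustration.
        """
        return (gray >= threshold).astype(np.uint8)

    # Example: a small synthetic "frame"
    frame = np.array([[ 10,  40, 200, 250],
                      [ 20,  60, 210, 240],
                      [ 15,  50, 190, 230],
                      [ 12,  45, 205, 235]], dtype=np.uint8)
    mask = binarize_frame(frame)   # 0 where dark, 1 where bright
    print(mask)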
The data processing unit (113) arranges the SLAM-processed images on a virtual mainframe. More specifically, continuously captured video is SLAM-processed for each time zone and then arranged on the virtual mainframe by time zone so that the overlapping portions can be identified, and a centripetal point is found through deep learning that repeats this operation.
This compensates for the fact that, in continuously captured video, the shooting position differs from frame to frame, and that when an object has volume, distortion can occur depending on the position from which the image was captured.
The data correction unit (114) determines the accurate location information of the target terrain or feature by matching the centripetal point information, which was arranged on the virtual mainframe by the data processing unit (113) and obtained through deep learning, with external information.
Here, the external information used is either the GPS information or the location information of the object obtained from sources such as maps, aerial photographs, and cadastral maps.
The landmark extraction unit (115) extracts landmarks based on the information corrected by the data correction unit (114). More specifically, it removes elements whose shape and position may change over time, and extracts objects such as buildings whose position and shape change relatively little, applying them as landmarks.
The virtual map generation unit (116) generates a virtual map by placing the landmarks extracted by the landmark extraction unit (115) on a map.
The communication unit (12) performs communication with the local server (20) provided for each region and with the terminal (30) used to access the service.
As the communication method, an external Internet network or a separate dedicated connection is used.
The data storage unit (13) temporarily stores the raw image input from the terminal (30) or other sources, and temporarily stores the time-zone SLAM images produced by processing the input raw image as well as the images processed in later stages.
Data for which the work and the service have been completed is transferred to the local server (20) of the corresponding region and stored on each local server (20), which reduces the load that could otherwise occur on the main server (10).
When a user requests a service through the terminal (30), if backup data for the corresponding region exists, it is retrieved, loaded into the data storage unit (13) of the main server (10), and stored, after which the service is carried out.
The local server (20) consists of an image storage unit (21) that stores raw images input through terminals located in its region and uploads processed images for comparative analysis, a communication unit (22) that communicates with the main server (10) and the terminal (30), a data storage unit (23) that stores data, and an information provision unit (24) that provides information to the terminal (30) and the main server (10).
Here, the image storage unit (21) consists of an image input unit (211) that receives and stores raw images input from a terminal or separate imaging equipment, and a processed image uploading unit (212) that compares the processed and stored backup data with the input raw image information and uploads the processed image for the corresponding region.
This is so that, when a user inputs a raw image of a region through the terminal (30) and requests a service, the input raw image is compared with the processed image of that region, and if a processed image is stored as backup data, it is uploaded so that the service can be provided.
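A minimal sketch of the local-server flow described above, assuming a simple in-memory lookup: check whether processed backup data exists for the requested region and, if so, return it for upload; otherwise keep the raw image for later processing. The class and method names are illustrative assumptions, and forwarding to the main server (10) is only indicated in a comment.

    from typing import Dict, Optional

    class LocalServerSketch:
        """Illustrative flow of the image storage unit (21): compare an incoming
        raw-image request against stored backup data and decide whether a
        processed image can be uploaded for that region."""

        def __init__(self) -> None:
            # region id -> processed (backup) image data; contents are placeholders
            self.backup: Dict[str, bytes] = {}

        def handle_request(self, region_id: str, raw_image: bytes) -> Optional[bytes]:
            processed = self.backup.get(region_id)
            if processed is not None:
                # Backup exists: upload the processed image for comparison/service.
                return processed
            # No backup yet: the raw image would be stored and forwarded to the
            # main server (10) for processing (forwarding not shown here).
            self.backup[region_id] = raw_image  # placeholder for "store raw"
            return None

    server = LocalServerSketch()
    server.backup["seoul-gangnam"] = b"processed-map-tile"
    print(server.handle_request("seoul-gangnam", b"raw-frame") is not None)  # True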
As the terminal (30), any communication device with a capture function possessed by an administrator or a user can be used.
Such a terminal (30) can communicate with the main server (10) or the local server (20) through an external network such as the Internet, and a separate APP or management program for the service can be downloaded, installed, and used, although this is not a limitation.
With the configuration described above, the location information system using deep learning of the present invention can be completed.
Another aspect of the present invention, the method for providing location information using deep learning, is described in detail below with reference to FIG. 4, which is a flowchart of the method.
Referring to FIG. 4, the method for providing location information using deep learning according to the present invention consists of an image information acquisition step (S10) of acquiring an image from a terminal of a user or an administrator; a keyframe extraction step (S20) of extracting keyframes for each time zone from the acquired image information; a SLAM step (S30) of SLAM-processing the image information for each keyframe and storing it in chronological order; a data processing step (S40) of deep-learning the information stored after keyframe-by-keyframe SLAM processing and arranging it on a virtual mainframe; a processed image information correction step (S50) of correcting distorted information by matching the processed data obtained through deep learning with external information; a landmark extraction step (S60) of extracting landmarks based on the corrected information; a virtual map generation step (S70) of generating a virtual map by applying the extracted landmark information; and a location information provision step (S80) of providing location information using the virtual map.
Here, the image information acquisition step (S10) refers to an image captured by the terminal of a user or administrator, or an image captured while moving through the field with separate imaging equipment.
The acquired image is input to the local server (20) located in each region; it can be transmitted by accessing an APP installed on the terminal or an open management program.
This is because it may be difficult for the administrator to obtain information on every region in order to provide the service; a user who wants the service can therefore acquire the image information of the desired region, transmit it, and then request the service.
In the keyframe extraction step (S20) of extracting keyframes for each time zone from the acquired image information, keyframes are extracted by time zone because, when video is captured continuously through the terminal, the keyframes can shift as the terminal's position changes.
Here, the method of extracting keyframes for each time zone may apply a time interval preset by the user, or use the GPS information of the terminal acquiring the image, the distance to the base station, or triangulation based on the change in size of a specific object in the image.
For example, when the GPS information of the terminal is used, whether the terminal is moving and its direction of movement can be obtained from the time-zone GPS location information recorded together with the video during capture; from this the movement speed of the terminal can be determined, and the keyframe acquisition interval is set automatically according to the movement speed.
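As a rough illustration of the GPS-based keyframe timing just described, the sketch below estimates the terminal's ground speed from two GPS fixes recorded with the video and derives a keyframe interval from it. This is a minimal sketch: the equirectangular distance approximation, the 5 m keyframe spacing, the clamping range, and the function names are assumptions for illustration, not values given in the patent.

    import math

    def movement_speed_mps(lat1, lon1, lat2, lon2, dt_seconds):
        """Approximate ground speed (m/s) between two GPS fixes recorded with
        the video, using an equirectangular approximation."""
        r = 6371000.0  # earth radius in metres
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return r * math.hypot(x, y) / dt_seconds

    def keyframe_interval_seconds(speed_mps, metres_per_keyframe=5.0,
                                  min_interval=0.2, max_interval=5.0):
        """Pick how often to grab a keyframe so keyframes land roughly
        `metres_per_keyframe` apart on the ground, clamped to a sensible range.
        The spacing constant is an assumed tuning value."""
        if speed_mps <= 0.0:
            return max_interval
        return max(min_interval, min(max_interval, metres_per_keyframe / speed_mps))

    speed = movement_speed_mps(37.4979, 127.0276, 37.4981, 127.0279, 10.0)
    print(round(speed, 2), "m/s ->", round(keyframe_interval_seconds(speed), 2), "s per keyframe")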
When the distance to a base station is used, the installation position of the base station is determined by the frequency band of the terminal; when communication moves from one base station to another adjacent base station, the direction and elapsed time of the handover can be used to determine the direction and speed of the terminal's movement.
In addition, changes in the size and shape of a specific object in the image, together with the image capture time, can be used to determine whether the terminal is moving; once movement is detected, the movement speed and direction can be determined.
More specifically, for example, utility poles installed along a road have a fixed installation spacing, so the movement speed of the terminal can be measured from the video time taken to move from one pole to the next adjacent pole, or the change in distance can be measured from the video time using triangulation based on the change in the apparent height of a pole.
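A minimal sketch of the utility-pole idea above, assuming a known pole spacing and a pinhole-camera model for the apparent-height variant; the numeric values, the assumed calibrated focal length, and the helper names are illustrative assumptions.

    def speed_from_pole_spacing(pole_spacing_m, t_pass_first_s, t_pass_next_s):
        """Estimate terminal speed from the video timestamps at which two
        adjacent utility poles are passed, given their (assumed) fixed
        spacing in metres."""
        dt = t_pass_next_s - t_pass_first_s
        if dt <= 0:
            raise ValueError("timestamps must be increasing")
        return pole_spacing_m / dt

    def distance_from_apparent_height(real_height_m, apparent_height_px, focal_px):
        """Pinhole-camera estimate of the distance to a pole from its apparent
        height in pixels; focal_px is an assumed calibrated focal length."""
        return real_height_m * focal_px / apparent_height_px

    # Poles assumed 50 m apart, passed at 12.0 s and 18.0 s of video time.
    print(speed_from_pole_spacing(50.0, 12.0, 18.0), "m/s")               # ~8.33 m/s
    print(round(distance_from_apparent_height(10.0, 120, 800), 1), "m")   # ~66.7 m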
In the SLAM step (S30) of SLAM-processing the image information for each keyframe and storing it in chronological order, keyframes are extracted for each time zone, the image information of each time zone is SLAM-processed, and the processed information is stored in chronological order.
When the information is stored, an identification code is assigned to each piece of information, identifying the place and time.
For example, identification information is assigned in a form such as "서울-강남-테헤란로-2018.01.01 12:00 23'" (Seoul-Gangnam-Teheran-ro-2018.01.01 12:00 23').
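The sketch below builds a place-and-time identification code in the style of the single example quoted above; the exact field layout and the helper name make_identification_code are assumptions inferred from that one example.

    from datetime import datetime

    def make_identification_code(city: str, district: str, street: str,
                                 captured_at: datetime, second: int) -> str:
        """Build a place-and-time identification code in the style quoted in
        the text; the field layout is an assumption based on that example."""
        return "{}-{}-{}-{} {:02d}'".format(
            city, district, street,
            captured_at.strftime("%Y.%m.%d %H:%M"), second)

    code = make_identification_code("서울", "강남", "테헤란로",
                                    datetime(2018, 1, 1, 12, 0), 23)
    print(code)  # 서울-강남-테헤란로-2018.01.01 12:00 23'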
Following step S30, the data processing step (S40) of deep-learning the information stored after keyframe-by-keyframe SLAM processing and arranging it on a virtual mainframe SLAM-processes the continuous image information in order to find the centripetal points needed to determine the exact positions of the objects in the image.
More specifically, the SLAM-processed information acquired for each time zone is arranged on the virtual mainframe so that the overlapping portions can be identified, and the centripetal points are found through deep learning that repeats this operation.
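To make the overlap idea concrete, the sketch below overlays per-time-zone binary masks of the same object on a common grid and takes the centre of the agreed-upon overlap as a centripetal point. The patent attributes this refinement to repeated deep learning, which is not reproduced here; the vote-and-centroid rule is a simplifying assumption for illustration only.

    import numpy as np

    def centripetal_point(masks):
        """Overlay binary masks of the same object taken at different time zones
        on a common ("virtual mainframe") grid, keep the cells where most
        observations agree, and return the centroid of that overlap."""
        stack = np.stack(masks)                      # (T, H, W)
        overlap = stack.mean(axis=0) >= 0.5          # cells seen in most frames
        ys, xs = np.nonzero(overlap)
        if len(xs) == 0:
            return None
        return float(xs.mean()), float(ys.mean())    # (x, y) centre of overlap

    # Three noisy observations of the same square object, slightly shifted.
    def square(x0, y0, size=4, shape=(20, 20)):
        m = np.zeros(shape, dtype=np.uint8)
        m[y0:y0 + size, x0:x0 + size] = 1
        return m

    print(centripetal_point([square(8, 8), square(9, 8), square(8, 9)]))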
이는 영상속의 대상물인 건물 및 지형 또는 지물이 부피를 갖게 될경우 영상 촬영위치에 따라 왜곡될 수 있기 때문이다.This is because if the object and the building or terrain in the image have a volume, it may be distorted depending on the location of the image.
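The overlap-and-refine idea can be illustrated with a deliberately simplified consensus computation; this is not the patented deep-learning procedure itself, and the iteratively re-weighted mean and its parameters are assumptions.

import numpy as np

def consensus_point(estimates, iters=10, sigma=1.0):
    # estimates: (N, 2) array holding the position of the same object as observed
    # in different time slots, all expressed in the common "main frame" coordinates.
    # Observations far from the running centre (for example distorted views of a
    # bulky building) are progressively down-weighted.
    pts = np.asarray(estimates, dtype=float)
    centre = pts.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(pts - centre, axis=1)
        w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
        centre = (pts * w[:, None]).sum(axis=0) / w.sum()
    return centre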
After step S40, the processed-image-information correction step (S50), which corrects distorted information by matching the processed data obtained through deep learning against external information, obtains accurate position information by matching the centripetal-point information that was placed on the virtual main frame and obtained through deep learning in the preceding step (S40) against external information.
More specifically, the distortion that can arise during image capture is corrected by matching the obtained centripetal points of the objects against the GPS or position information of those objects derived from external information such as maps, aerial photographs, and cadastral maps.
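One conventional way to perform such a correction is to fit a similarity transform between the SLAM-derived points and the reference coordinates; the closed-form fit below is a generic Umeyama-style alignment, not a procedure stated in the patent, and all names are illustrative.

import numpy as np

def fit_similarity(src, dst):
    # Least-squares 2-D similarity transform (scale, rotation, translation) mapping
    # SLAM-derived centripetal points `src` onto reference map/GPS coordinates `dst`.
    # `src` and `dst` are (N, 2) arrays with corresponding rows.
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    cov = D.T @ S / len(src)
    U, sing, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, sign]) @ Vt
    scale = (sing[0] + sign * sing[1]) / ((S ** 2).sum() / len(src))
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def apply_correction(points, scale, R, t):
    # Re-projects distorted points into the externally referenced coordinates.
    return scale * (np.asarray(points, float) @ R.T) + t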
Next comes the landmark extraction step (S60), in which landmarks are extracted on the basis of the information corrected in step S50.
More specifically, elements whose shape or position may change over time are removed, and objects such as buildings whose position and shape change relatively little are extracted and used as landmarks.
Here, SLAM information with a high exposure frequency, irrespective of the position from which the images were taken, is extracted from the time-sequenced images, matched against external position information such as maps, aerial photographs, and GPS, and then set and applied as landmarks.
Along with landmark selection, the distinction between fixed ground objects such as mailboxes and trees and moving objects such as cars is likewise made from the difference in exposure frequency, and this distinction is also applied to the virtual map generated in the next step.
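The exposure-frequency criterion can be sketched as a simple observation count; the threshold and identifiers below are assumptions made for illustration.

def split_by_exposure_frequency(observations, total_sessions, min_ratio=0.8):
    # observations: iterable of (session_id, feature_id) pairs, one per time-slot
    # image in which a SLAM feature was observed.
    # Features seen in most sessions regardless of viewpoint are kept as landmark
    # candidates; rarely repeated features behave like moving objects (cars, etc.).
    seen_in = {}
    for session_id, feature_id in observations:
        seen_in.setdefault(feature_id, set()).add(session_id)
    landmarks = {f for f, s in seen_in.items() if len(s) / total_sessions >= min_ratio}
    transient = set(seen_in) - landmarks
    return landmarks, transient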
After step S60 comes the virtual-map generation step (S70), in which a virtual map is generated by applying the extracted landmark information.
That is, the virtual map is generated by arranging the selected landmarks on a virtual two-dimensional or three-dimensional map.
Each landmark of the virtual map stores and provides, as data, the per-time-slot SLAM information of the corresponding object taken from the images processed in step S30, the step in which the per-keyframe image information is SLAM-processed and stored in chronological order.
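A possible record layout for such a landmark entry is sketched below; the field names and types are illustrative and not taken from the patent.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class LandmarkEntry:
    # One landmark of the virtual map.
    landmark_id: str
    position: Tuple[float, float]            # corrected 2-D map coordinates
    identification_code: str                 # e.g. "Seoul-Gangnam-Teheran-ro-..."
    # Per-time-slot SLAM data for this object, keyed by a time-slot label
    # (for example "2018.01.01 12:00") and holding a descriptor vector.
    slam_by_timeslot: Dict[str, List[float]] = field(default_factory=dict)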
In the location-information provision step (S80), which provides location information using the virtual map generated in step S70, when a user or a service requests it, an image of the area concerned is input through a terminal or the like, the input image is SLAM-processed in real time, landmarks are extracted, and accurate location information is provided by comparing the extracted landmarks against the database in which the per-time-slot SLAM information for each landmark is stored.
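A bare-bones lookup against such a database might look as follows; the brute-force nearest-descriptor search and all names are assumptions made for illustration.

import numpy as np

def locate(query_descriptors, landmark_db):
    # query_descriptors: descriptor vectors extracted from the live, SLAM-processed image.
    # landmark_db: list of (position, stored_descriptors) pairs, where position is a
    # landmark's corrected map coordinate and stored_descriptors holds its
    # per-time-slot SLAM descriptors.
    best_pos, best_dist = None, float("inf")
    for position, stored_descriptors in landmark_db:
        for stored in stored_descriptors:
            for q in query_descriptors:
                d = float(np.linalg.norm(np.asarray(q, float) - np.asarray(stored, float)))
                if d < best_dist:
                    best_pos, best_dist = position, d
    return best_pos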
Here, in the information provision step (S80) that supplies location information to the user, when the image of the area is input through the user's terminal, SLAM-processed in real time, and landmarks are extracted, multiple sets of SLAM-processed information are deep-learned so that accuracy can be improved.
More specifically, as in the earlier step S40, multiple sets of SLAM-processed information are repeatedly superimposed on a single frame to find the centripetal point, the landmarks are then confirmed, and on that basis the location information and the corresponding virtual map provided by the preceding steps are used.
Through such location information, the system can be applied to the visually impaired or to any service that requires location information.
For example, in the case of a wearable vision device, an assistive tool for the visually impaired, walking accidents can be prevented by providing route guidance to a desired area or, during simple movement, by announcing the positions of terrain or ground objects that obstruct walking.
10: main server        11: image processing unit
12: communication unit        13: data storage unit
20: local server        21: image storage unit
22: communication unit        23: data storage unit
24: information provision unit        30: terminal
111: keyframe extraction unit        112: SLAM processing unit
113: data processing unit        114: data correction unit
115: landmark extraction unit        116: virtual map generation unit
211: image input unit        212: processed-image uploading unit

Claims (8)

  1. A location information system using deep learning, characterized in that it comprises:
    a main server (10) comprising an image processing unit (11) that processes externally input images and then generates a virtual map, a communication unit (12) that communicates with a local server (20) and a terminal (30), and a data storage unit (13) that stores data;
    a local server (20) connected to the main server (10), transmitting the information input for each region to the main server (10) and storing the regional information in a distributed manner; and
    a terminal (30).
  2. The location information system using deep learning according to claim 1,
    wherein the image processing unit (11) comprises: a keyframe extraction unit (111) that extracts key frames from a raw image input through the terminal (30) or an external imaging device; a SLAM processing unit (112) that SLAM-processes the image whose key frames have been extracted by the keyframe extraction unit (111); a data processing unit (113) that deep-learns the image SLAM-processed by the SLAM processing unit (112) and places it on a virtual main frame; a data correction unit (114) that corrects distorted information by matching the data processed by the data processing unit (113) against external information; a landmark extraction unit (115) that extracts landmarks on the basis of the information corrected by the data correction unit (114); and a virtual map generation unit (116) that generates a virtual map by applying the landmarks extracted by the landmark extraction unit (115).
  3. The location information system using deep learning according to claim 2,
    wherein the data processing unit (113) places the SLAM-processed images on the virtual main frame by SLAM-processing the continuously captured images for each time slot and arranging them on the virtual main frame by time slot so as to identify the overlapping portions, and finds the centripetal point through deep learning that repeats this operation.
  4. The location information system using deep learning according to claim 1,
    wherein the local server (20) comprises: an image storage unit (21) that stores the raw images input through terminals located in the corresponding region and uploads processed images for comparative analysis; a communication unit (22) that communicates with the main server (10) and the terminal (30); a data storage unit (23) that stores data; and an information provision unit (24) for providing information to the terminal (30) and the main server (10).
  5. The location information system using deep learning according to claim 4,
    wherein the image storage unit (21) comprises: an image input unit (211) for receiving and storing the raw images input from a terminal or separate imaging equipment; and a processed-image uploading unit (212) for uploading the processed image of the corresponding region after comparing the processed and stored backup data with the input raw image information.
  6. A method of providing location information using deep learning, comprising:
    an image-information acquisition step (S10) of acquiring an image from a terminal of a user or an administrator;
    a keyframe extraction step (S20) of extracting key frames for each time slot from the acquired image information;
    a SLAM step (S30) of SLAM-processing the image information of each key frame and storing it in chronological order;
    a data processing step (S40) of deep-learning the information stored after per-keyframe SLAM processing and placing it on a virtual main frame;
    a processed-image-information correction step (S50) of correcting distorted information by matching the processed data obtained through deep learning against external information;
    a landmark extraction step (S60) of extracting landmarks on the basis of the corrected information;
    a virtual-map generation step (S70) of generating a virtual map by applying the extracted landmark information; and
    a location-information provision step (S80) of providing location information using the virtual map.
  7. The method of providing location information using deep learning according to claim 6,
    wherein, in the data processing step (S40), the SLAM-processed information acquired for each time slot is placed on the virtual main frame to identify the overlapping portions, and the centripetal point is found through deep learning that repeats this operation.
  8. The method of providing location information using deep learning according to claim 7,
    wherein, in the processed-image-information correction step (S50), accurate location information is obtained by matching the centripetal-point information placed on the virtual main frame in the preceding step (S40) and obtained through deep learning against external information, the distorted information that may arise during image capture being corrected by matching the centripetal points of the obtained objects against the GPS or position information of the objects obtained from the external information, namely maps, aerial photographs, and cadastral maps.
PCT/KR2018/012092 2018-10-05 2018-10-15 Location information system using deep learning and method for providing same WO2020071573A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0118712 2018-10-05
KR1020180118712A KR102033075B1 (en) 2018-10-05 2018-10-05 A providing location information systme using deep-learning and method it

Publications (1)

Publication Number Publication Date
WO2020071573A1 true WO2020071573A1 (en) 2020-04-09

Family

ID=68421306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/012092 WO2020071573A1 (en) 2018-10-05 2018-10-15 Location information system using deep learning and method for providing same

Country Status (2)

Country Link
KR (1) KR102033075B1 (en)
WO (1) WO2020071573A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111044993B (en) * 2019-12-27 2021-11-05 歌尔股份有限公司 Laser sensor based slam map calibration method and device
KR102320628B1 (en) * 2021-01-08 2021-11-02 강승훈 System and method for precise positioning information based on machine learning
KR102400733B1 (en) * 2021-01-27 2022-05-23 김성중 Contents extension apparatus using image embedded code

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100787747B1 (en) * 2006-07-06 2007-12-24 주식회사 대우일렉트로닉스 Device and method for updating front road map image of car navigation terminal
KR20130023433A (en) * 2011-08-29 2013-03-08 연세대학교 산학협력단 Apparatus for managing map of mobile robot based on slam and method thereof
JP2016528476A (en) * 2013-04-30 2016-09-15 クアルコム,インコーポレイテッド Wide area position estimation from SLAM map
KR20180059188A (en) * 2016-11-25 2018-06-04 연세대학교 산학협력단 Method of Generating 3d-Background Map Except Dynamic Obstacles Using Deep Learning
KR20180094463A (en) * 2017-02-15 2018-08-23 한양대학교 산학협력단 Method for saving and loading of slam map

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101380852B1 (en) 2012-05-30 2014-04-10 서울대학교산학협력단 Slam system and method for mobile robots with environment picture input from user
KR101439921B1 (en) 2012-06-25 2014-09-17 서울대학교산학협력단 Slam system for mobile robot based on vision sensor data and motion sensor data fusion

Also Published As

Publication number Publication date
KR102033075B1 (en) 2019-10-16


Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18936101; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 18936101; Country of ref document: EP; Kind code of ref document: A1)