WO2020071573A1 - Location information system using deep learning and method for providing same - Google Patents

Location information system using deep learning and method for providing same

Info

Publication number
WO2020071573A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
deep learning
location information
unit
Prior art date
Application number
PCT/KR2018/012092
Other languages
English (en)
Korean (ko)
Inventor
이준혁
Original Assignee
(주)한국플랫폼서비스기술
Priority date
Filing date
Publication date
Application filed by (주)한국플랫폼서비스기술
Publication of WO2020071573A1

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing

Definitions

  • The present invention relates to a location information system using deep learning and a method for providing the same, which provide more accurate location information and services through deep learning based on the SLAM technique.
  • location information is provided to users in various ways.
  • a GPS system using a satellite navigation system is a representative one.
  • Another representative technique is SLAM (Simultaneous Localization And Map-Building), in which a device builds a map of its surroundings while simultaneously estimating its own position within that map.
  • In one prior technique, a mobile device may build a first keyframe-based SLAM map of the local environment using one or more received images.
  • The individual location of the mobile device within the local environment may then be determined based on the keyframe-based SLAM map.
  • The mobile device can transmit the first keyframe to a server and receive a first global localization response indicating a correction to the local map on the mobile device.
  • The first global localization response may include rotation, translation, and scale information.
  • On the server side, this technique receives keyframes from the mobile device and recognizes them in the server map by matching the keyframe features received from the mobile device to the server map features.
  • Republic of Korea Patent Publication No. 10-2013-0134986 (SLAM system and method of a mobile robot that receives photo input of the environment from the user, published December 10, 2013) discloses a mobile robot SLAM system comprising: a user terminal that receives information or commands about the environment from the user and transmits them to the mobile robot; a mobile robot that performs map creation by combining the environment information, movement information, and commands obtained from the user terminal with data from a data acquisition device; the data acquisition device, mounted on the robot, that obtains information about the surrounding environment or measures the movement of the mobile robot; a second processing device that performs real-time map creation, management and modification of the created map, and determination of the mobile robot's location and route based on the created map, using the data obtained from the data acquisition device and the user terminal; and a technology for controlling the driving of the mobile robot based on the information processed by the second processing device.
  • The purpose of the present invention is to provide a location information system using deep learning, and a method for providing the same, that is more precise and adaptable to changes in the external environment, in order to compensate for the shortcomings of the SLAM technology described above.
  • As a means for solving the problem, the present invention provides a location information system using deep learning, comprising: a main server (10) consisting of an image processing unit (11) that processes an externally input image and then generates a virtual map, a communication unit (12) for communicating with a local server (20) and a terminal (30), and a data storage unit (13) for storing data; the local server (20), which is connected to the main server (10), transmits information input for each region to the main server (10), and distributes and stores regional information; and the terminal (30).
  • The image processing unit (11) consists of: a key frame extraction unit (111) that extracts key frames from a raw image input through the terminal (30) or an external imaging device; a SLAM processing unit (112) that SLAM-processes the image from which key frames were extracted; a data processing unit (113) that deep-learns the SLAM-processed image on a virtual mainframe; a data correction unit (114) that corrects distorted information by matching the processed data against external information; a landmark extraction unit (115) that extracts landmarks based on the information corrected by the data correction unit (114); and a virtual map generation unit (116) that generates a virtual map by applying the landmarks extracted by the landmark extraction unit (115). The data processing unit (113) places the SLAM-processed images on the virtual mainframe: continuously captured images are SLAM-processed by time zone and arranged on the virtual mainframe for each time zone to grasp the overlapping portions, and deep learning that repeatedly executes this task finds a central reference point.
  • The local server (20) is composed of an image storage unit (21) that stores the raw image input through a terminal located in its region and uploads the processed image to the main server (10) for comparative analysis, a communication unit (22) for communicating with the main server (10) and the terminal (30), a data storage unit (23) for storing data, and an information sharing unit (24) for providing information to the terminal (30) and the main server (10).
  • The image storage unit (21) consists of an image input unit (211) that receives and stores the raw image input from the terminal or a separate photographing device, and a processed image upload unit (212) that compares the stored backup data with the input raw image information and then uploads the processed image for the corresponding region.
  • A method for providing location information using deep learning, which is another means for solving the problems of the present invention, comprises: an image information acquisition step (S10) of acquiring an image from a terminal of a user or an administrator; a key frame extraction step (S20) of extracting key frames for each time zone from the acquired image information; a SLAM processing step (S30) of SLAM-processing the image information for each key frame and storing it in chronological order; a data processing step (S40) of deep-learning the stored information after SLAM processing for each key frame and placing it on a virtual mainframe; a processed image information correction step (S50) of correcting distorted information by matching the processed data obtained through deep learning with external information; a landmark extraction step (S60) of extracting landmarks based on the corrected information; a virtual map generation step (S70) of generating a virtual map by applying the extracted landmark information; and a location information providing step (S80) of providing location information using the virtual map. The sketch below illustrates these steps end to end.
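  • For illustration only, the following is a minimal, hypothetical sketch of steps S10 through S80 as a Python pipeline. The function names, data shapes, and placeholder logic are editorial assumptions, not the patent's actual implementation.

```python
# Hypothetical end-to-end sketch of steps S10-S80; names and data shapes
# are illustrative assumptions, not the patent's implementation.
from dataclasses import dataclass, field

@dataclass
class Frame:
    time_slot: int          # time zone index of the captured frame
    features: set[str]      # features recognized after SLAM processing

@dataclass
class VirtualMap:
    landmarks: dict[str, tuple[float, float]] = field(default_factory=dict)

def acquire_images() -> list[Frame]:                      # S10
    return [Frame(t, {"building_a", f"car_{t}"}) for t in range(3)]

def extract_keyframes(frames):                            # S20
    return frames  # placeholder: one keyframe per time zone

def slam_process(keyframes):                              # S30
    return sorted(keyframes, key=lambda f: f.time_slot)   # chronological order

def find_central_points(keyframes):                       # S40
    # features present in every time zone overlap on the virtual mainframe
    return set.intersection(*(f.features for f in keyframes))

def correct_with_external(points):                        # S50
    # placeholder: match each stable feature to external (e.g. GPS) coordinates
    return {p: (37.5665, 126.9780) for p in points}

def extract_landmarks(corrected):                         # S60
    # drop time-varying objects (here crudely identified by name prefix)
    return {k: v for k, v in corrected.items() if not k.startswith("car")}

def build_virtual_map(landmarks) -> VirtualMap:           # S70
    return VirtualMap(landmarks=landmarks)

def provide_location(vmap, query_features):               # S80
    return [vmap.landmarks[k] for k in query_features if k in vmap.landmarks]

vmap = build_virtual_map(extract_landmarks(correct_with_external(
    find_central_points(slam_process(extract_keyframes(acquire_images()))))))
print(provide_location(vmap, {"building_a"}))   # -> [(37.5665, 126.978)]
```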
  • S10: image information acquisition step
  • S20: key frame extraction step
  • S30: SLAM processing step
  • In the data processing step (S40), the SLAM-processed information is placed on the virtual mainframe for each time zone to identify the overlapping portions, and the central reference point is found through deep learning that repeatedly executes these operations.
  • In the processed image information correction step (S50), accurate location information is obtained by matching the central reference point obtained through deep learning in the previous step (S40) against external information; distorted information that may occur during video shooting is corrected by matching the GPS information or location information of the object with external information such as maps, aerial photographs, and cadastral maps.
  • FIG. 1 is a configuration diagram of a location information system using the deep learning of the present invention.
  • FIG. 2 is a configuration diagram of the main server 10 according to the present invention.
  • FIG. 3 is a block diagram of a local server 20 according to the present invention.
  • FIG. 4 is a flowchart of a method for providing location information using deep learning according to the present invention.
  • Referring to the drawings, FIG. 1 is a configuration diagram of the location information system using deep learning of the present invention, FIG. 2 is a configuration diagram of the main server 10 according to the present invention, and FIG. 3 is a configuration diagram of the local server 20 according to the present invention.
  • As shown, the location information system using deep learning of the present invention is composed of the main server 10, a local server 20 that is connected to the main server 10, transmits information input for each region to the main server 10, and distributes regional information, and a terminal 30.
  • The main server 10 is composed of an image processing unit 11 that processes the image input from the outside and then generates a virtual map, a communication unit 12 that communicates with the local server 20 and the terminal 30, and a data storage unit 13 for storing data.
  • The image processing unit 11 is composed of a key frame extraction unit 111 that extracts key frames from the raw image input through the terminal 30 or an external imaging device, a SLAM processing unit 112 that SLAM-processes the image using the key frames extracted by the key frame extraction unit 111, a data processing unit 113 that deep-learns the SLAM-processed image on a virtual mainframe, a data correction unit 114 that corrects distorted information by matching the processed data with external information, a landmark extraction unit 115 that extracts landmarks based on the information corrected by the data correction unit 114, and a virtual map generation unit 116 that generates a virtual map by applying the landmarks extracted by the landmark extraction unit 115.
  • The key frame extraction unit 111 extracts key frames from the input raw image information; key frames are extracted for each time zone from the continuously captured image so that changes in the terminal's position can be corrected.
  • the SLAM processing unit 112 SLAM-processes the inputted image information based on the key frame extracted by the key frame extraction unit 111.
  • SLAM (Simultaneous Localization And Map-Building) is a technique for recognizing terrain or features by binarizing or coding the recognized image; a detailed description is omitted here since it is a known technique.
  • The data processing unit 113 arranges the SLAM-processed images on a virtual mainframe: continuously captured images are SLAM-processed by time zone and then arranged on the virtual mainframe by time zone. In this way the overlapping portion is identified, and the central reference point is found through deep learning that repeatedly executes these tasks, as the sketch below illustrates.
  • The data correction unit 114 grasps the exact location information of the target terrain or feature by matching the central reference point, obtained through deep learning and arranged on the virtual mainframe by the data processing unit 113, against external information. For this matching, GPS information or location information of the object obtained from sources such as maps, aerial photographs, and cadastral maps is used; a sketch of one such correction follows.
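  • As a hedged illustration of such matching, the sketch below removes a constant offset between SLAM-estimated positions and externally known (e.g. GPS) coordinates; the patent does not specify the actual correction algorithm, so the least-squares translation is an editorial assumption.

```python
# Positions estimated from SLAM (local frame) are matched against externally
# known coordinates, and the best constant translation is removed.
slam_positions = [(1.0, 2.0), (3.0, 4.0), (5.0, 1.0)]           # SLAM estimates
gps_positions = [(101.2, 52.1), (103.1, 54.0), (105.0, 51.2)]   # external reference

# the translation (dx, dy) minimizing squared error is the mean difference
n = len(slam_positions)
dx = sum(g[0] - s[0] for s, g in zip(slam_positions, gps_positions)) / n
dy = sum(g[1] - s[1] for s, g in zip(slam_positions, gps_positions)) / n

corrected = [(x + dx, y + dy) for x, y in slam_positions]
print(corrected)  # distorted local coordinates mapped onto the external frame
```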
  • The landmark extraction unit 115 extracts landmarks based on the information corrected by the data correction unit 114; more specifically, elements whose shape and location may change over time are removed, and buildings with relatively little change in position or shape are extracted and applied as landmarks.
  • the virtual map generation unit 116 generates a virtual map by placing the landmark extracted through the landmark extraction unit 115 on the map.
  • The communication unit 12 performs communication with the local server 20 provided for each region and with the terminal 30 for service use.
  • As a communication method, an external Internet network is used, or a separate dedicated line is used.
  • The data storage unit 13 temporarily stores the raw image input from the terminal 30 or the like, the SLAM images for each time zone into which the input raw image has been processed, and the processed image after further processing.
  • When a user requests a service through the terminal 30, if backup data exists for the corresponding area, it is retrieved and loaded onto the data storage unit 13 of the main server 10 so that the service can be provided.
  • The local server 20 is composed of an image storage unit 21 that stores the raw image input through a terminal located in the corresponding region and uploads processed images to the main server 10 for comparative analysis, a communication unit 22 for communicating with the main server 10 and the terminal 30, a data storage unit 23 for storing data, and an information sharing unit 24 for providing information to the terminal 30 and the main server 10.
  • The image storage unit 21 is composed of an image input unit 211 that receives and stores the raw image input from the terminal or a separate photographing device, and a processed image uploading unit 212 that compares the stored backup data with the input raw image information and uploads the processed image.
  • As the terminal 30, any communication means with a photographing function possessed by an administrator or a user can be used.
  • The terminal 30 communicates with the main server 10 or the local server 20 through the Internet, an external network, and a separate APP or management program for service provision can be downloaded, installed, and used; however, neither the use of the Internet as the network nor the use of a separate APP or management program is a limitation.
  • FIG. 4 is a flowchart of a method for providing location information using deep learning.
  • As shown, the method for providing location information using deep learning begins with an image information acquisition step (S10) of acquiring an image from a terminal of a user or an administrator, followed by a key frame extraction step (S20) of extracting key frames for each time zone from the acquired image information.
  • The image information acquired in step (S10) refers to an image captured by a user's or administrator's terminal, or an image captured while moving in the field with a separate imaging device.
  • The acquired image is input to the local server 20 located in each region; it can be transmitted by accessing an APP installed on the terminal or an open management program.
  • In the key frame extraction step (S20), key frames are extracted for each time zone, because the key frames may fluctuate as the position of the terminal changes while images are continuously captured through the terminal.
  • When using the GPS information of the terminal, the movement status and direction can be obtained from the GPS location information recorded for each time zone along with the video; from this the movement speed of the terminal can be grasped.
  • The keyframe acquisition interval is then set automatically according to the moving speed.
  • Alternatively, since the installation locations of base stations are determined according to the terminal's radio frequency, the moving direction and moving speed can be grasped from base station information.
  • Likewise, since telephone poles are installed at determined intervals, the moving speed of the terminal can be measured from the video time elapsed while moving from one telephone pole to an adjacent one; that is, the change in distance is measured using image time. A sketch of deriving the keyframe interval from such speed estimates follows.
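  • The sketch below illustrates one way the keyframe interval could be set automatically from an estimated movement speed; the target spacing of 5 meters per keyframe and the interval bounds are illustrative assumptions.

```python
# Speed is derived from positions logged with the video (e.g. GPS fixes or
# telephone-pole passages); faster movement yields a shorter keyframe interval.
import math

def speed_mps(p1, p2, dt_s):
    """Average speed between two (x, y) positions in meters, over dt seconds."""
    return math.dist(p1, p2) / dt_s

def keyframe_interval_s(speed, target_spacing_m=5.0, lo=0.2, hi=5.0):
    """Seconds between keyframes, aiming for one keyframe per target spacing."""
    if speed <= 0:
        return hi                                   # stationary: sample rarely
    return min(hi, max(lo, target_spacing_m / speed))

v = speed_mps((0.0, 0.0), (30.0, 40.0), dt_s=10.0)  # 50 m in 10 s -> 5 m/s
print(keyframe_interval_s(v))                       # -> 1.0 second
```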
  • In the SLAM processing step (S30), the image information for each key frame is SLAM-processed and the processed information is stored in chronological order.
  • At this time, an identification code encoding place and time is assigned to each piece of information; one hypothetical format is sketched below.
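  • Since the patent states only that the identification code is given to a place and time, the region/grid/timestamp layout below is an editorial assumption.

```python
# Hypothetical identification code encoding place and time for each stored
# piece of SLAM-processed information.
from datetime import datetime, timezone

def make_id_code(region: str, grid_x: int, grid_y: int, t: datetime) -> str:
    # e.g. "SEOUL-0042-0017-20181005T093000Z"
    return f"{region}-{grid_x:04d}-{grid_y:04d}-{t:%Y%m%dT%H%M%SZ}"

code = make_id_code("SEOUL", 42, 17,
                    datetime(2018, 10, 5, 9, 30, tzinfo=timezone.utc))
print(code)  # stored with the frame so it can be retrieved by place and time
```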
  • The data processing step (S40) of deep-learning the stored information after SLAM processing for each key frame and placing it on the virtual mainframe serves to find a central reference point for grasping the exact position of objects in the continuously captured image information.
  • That is, the SLAM-processed information is placed on the virtual mainframe for each time zone to identify the overlapping portion, and the central reference point is found through deep learning that repeatedly executes these tasks.
  • The processed image information correction step (S50) of correcting distorted information by matching the processed data obtained through deep learning with external information obtains accurate location information by matching the central reference point, obtained through deep learning and arranged on the virtual mainframe in the previous step (S40), against external information.
  • The distorted information that may occur during video shooting is corrected by matching the GPS information or location information of the object, obtained through the central point of the object, with external information such as maps, aerial photographs, and cadastral maps.
  • Next, the landmark extraction step (S60) of extracting landmarks based on the information corrected in step (S50) is performed.
  • Here, elements that may change in shape and position over time are removed, and buildings with relatively little change in position or shape are extracted and applied as landmarks.
  • That is, SLAM information with a high exposure frequency is extracted from the hourly images regardless of where the video was shot, matched with external location information such as maps, aerial photographs, and GPS, and then set as a landmark.
  • The distinction between ground fixtures such as mailboxes and trees and moving objects such as automobiles is likewise made through the difference in exposure frequency, in the same way as the landmark setting; a minimal sketch of this frequency filter follows.
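  • A minimal sketch of such an exposure-frequency filter, assuming per-hour detection sets and an illustrative 0.8 threshold (the patent specifies no threshold):

```python
# Objects recognized in a high fraction of hourly images are treated as fixed
# (e.g. buildings); low-frequency objects (cars, pedestrians) are discarded.
from collections import Counter

hourly_detections = [
    {"building_a", "mailbox", "car_17"},
    {"building_a", "mailbox", "pedestrian"},
    {"building_a", "mailbox"},
    {"building_a", "car_99"},
]

counts = Counter(obj for frame in hourly_detections for obj in frame)
n = len(hourly_detections)
landmarks = {obj for obj, c in counts.items() if c / n >= 0.8}
print(landmarks)  # {'building_a'} at 100%; mailbox at 75% falls just short
```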
  • Next, a virtual map generation step (S70) of generating a virtual map by applying the extracted landmark information is performed.
  • At this time, the SLAM information for each timeframe of the landmark's target object, stored as DATA in step (S30) in which the image information for each keyframe was SLAM-processed and stored in chronological order, is provided together with the map. One possible storage layout is sketched below.
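  • One possible (assumed) layout for storing per-time-zone SLAM data with each landmark of the virtual map; the nested structure and descriptor values are illustrative only.

```python
# Each landmark keeps its corrected position plus SLAM descriptors per time
# zone, so a later query can be compared against the right time zone.
virtual_map = {
    "building_a": {
        "position": (37.5665, 126.9780),   # corrected coordinates (S50)
        "slam_by_time_zone": {             # descriptors stored from step S30
            "2018-10-05T09": [0.12, 0.80, 0.45],
            "2018-10-05T10": [0.11, 0.82, 0.44],
        },
    },
}

# example lookup: the earliest stored time zone key for this landmark
record = virtual_map["building_a"]
print(record["position"], sorted(record["slam_by_time_zone"])[0])
```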
  • In the location information providing step (S80), location information is provided using the virtual map generated in step (S70): when a user requests the service, an image of the corresponding region is input through a terminal or the like and received by the system.
  • The input image is SLAM-processed in real time, landmarks are extracted, and accurate location information is provided through comparison with the database in which the SLAM information for each time zone of the extracted landmarks is stored.
  • When the image of the corresponding region is input through the user's terminal and SLAM-processed in real time to extract the landmark, a plurality of SLAM processing results are deep-learned to improve accuracy.
  • That is, as in step (S40), a plurality of SLAM processing results are repeatedly overlapped in one frame, the central reference point is found, the landmark is confirmed, and location information and the corresponding virtual map information are then provided through these matching steps; a sketch of such a lookup follows.
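  • A hedged sketch of the lookup in step (S80), using Euclidean nearest-neighbor matching as an illustrative stand-in for the comparison with the stored SLAM database; the descriptors and locations are invented example values.

```python
# An input image is SLAM-processed in real time, a landmark descriptor is
# extracted, and the database of stored SLAM information is searched for the
# closest match to return a location.
import math

database = {                     # landmark -> (stored descriptor, location)
    "building_a": ([0.12, 0.80, 0.45], (37.5665, 126.9780)),
    "station_b":  ([0.90, 0.10, 0.30], (37.5547, 126.9707)),
}

def locate(query_descriptor):
    best = min(database.items(),
               key=lambda kv: math.dist(kv[1][0], query_descriptor))
    name, (_, location) = best
    return name, location

print(locate([0.11, 0.81, 0.44]))  # -> ('building_a', (37.5665, 126.978))
```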
  • Through this, for example, a visually impaired person moving about can be guided along the route to a desired area or warned of the location of terrain or ground features that interfere with walking, preventing walking accidents.
  • 211: image input unit, 212: processed image uploading unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a location information system using deep learning and a method for providing the same, which provide location information and services through deep learning based on a SLAM technique. The present invention therefore provides a location information system using deep learning, comprising: a main server (10) comprising an image processing unit (11) for processing an externally input image and then generating a virtual map, a communication unit (12) for communicating with a local server (20) and a terminal (30), and a data storage unit (13) for storing data; the local server (20), connected to the main server (10), for transmitting information input for each region to the main server (10) and for distributing and storing regional information; and the terminal (30). The present invention also provides a method comprising: an image information acquisition step (S10); a key frame extraction step (S20); a SLAM processing step (S30); a data processing step (S40); a processed image information correction step (S50); a landmark extraction step (S60); a virtual map generation step (S70); and a location information providing step (S80).
PCT/KR2018/012092 2018-10-05 2018-10-15 Location information system using deep learning and method for providing same WO2020071573A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020180118712A KR102033075B1 (ko) 2018-10-05 2018-10-05 Location information system using deep learning and method for providing same
KR10-2018-0118712 2018-10-05

Publications (1)

Publication Number Publication Date
WO2020071573A1 (fr) 2020-04-09

Family

ID=68421306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/012092 WO2020071573A1 (fr) 2018-10-05 2018-10-15 Location information system using deep learning and method for providing same

Country Status (2)

Country Link
KR (1) KR102033075B1 (fr)
WO (1) WO2020071573A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111044993B (zh) * 2019-12-27 2021-11-05 歌尔股份有限公司 SLAM map calibration method and device based on a laser sensor
KR102320628B1 (ko) * 2021-01-08 2021-11-02 강승훈 Machine learning-based system for providing precise location information and method for providing same
KR102400733B1 (ko) * 2021-01-27 2022-05-23 김성중 Content expansion device using a code embedded in an image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101380852B1 (ko) 2012-05-30 2014-04-10 서울대학교산학협력단 SLAM system and method for a mobile robot that receives photo input of the environment from a user
KR101439921B1 (ko) 2012-06-25 2014-09-17 서울대학교산학협력단 SLAM system for a mobile robot fusing vision sensor information and motion sensor information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100787747B1 (ko) * 2006-07-06 2007-12-24 주식회사 대우일렉트로닉스 Apparatus and method for updating the front road map image of a vehicle navigation terminal
KR20130023433A (ko) * 2011-08-29 2013-03-08 연세대학교 산학협력단 Apparatus and method for managing the map of a SLAM-based mobile robot
JP2016528476A (ja) * 2013-04-30 2016-09-15 クアルコム,インコーポレイテッド Wide-area localization from SLAM maps
KR20180059188A (ko) * 2016-11-25 2018-06-04 연세대학교 산학협력단 Method for generating a background-oriented 3D map without dynamic obstacles using deep learning
KR20180094463A (ko) * 2017-02-15 2018-08-23 한양대학교 산학협력단 Method for saving and loading a SLAM map

Also Published As

Publication number Publication date
KR102033075B1 (ko) 2019-10-16

Similar Documents

Publication Publication Date Title
WO2020071573A1 (fr) Location information system using deep learning and method for providing same
WO2016024797A1 (fr) Tracking system and tracking method using the same
WO2014073841A1 (fr) Image-based indoor location detection method and mobile terminal using the same
WO2015014018A1 (fr) Indoor navigation and positioning method for a mobile terminal based on image recognition technology
WO2011074759A1 (fr) Method for extracting three-dimensional object information from a single image without meta-information
WO2019240452A1 (fr) Method and system for automatically collecting and updating information related to points of interest in real space
WO2019194557A1 (fr) Monitoring system and method using unmanned aerial vehicles
WO2019139243A1 (fr) Apparatus and method for updating a high-definition map for autonomous driving
WO2019054593A1 (fr) Map production apparatus using machine learning and image processing
WO2020075954A1 (fr) Positioning system and method using a combination of multimodal sensor-based location recognition results
KR101558467B1 (ko) Digital information system for correcting digital coordinates linked to a digital map according to the movement path of a GPS receiver
WO2021221334A1 (fr) Device for generating a color palette based on GPS information and lidar signals, and control method therefor
WO2020235734A1 (fr) Method for estimating the distance to and position of an autonomous vehicle using a monoscopic camera
WO2020071619A1 (fr) Apparatus and method for updating a detailed map
WO2016035993A1 (fr) Device and method for creating an indoor map using a point cloud
WO2017022994A1 (fr) Method for providing putting information on the green
WO2012091326A2 (fr) Three-dimensional real-time street view system using distinct identification information
WO2020251099A1 (fr) Method for calling a vehicle to a user's current location
WO2021125578A1 (fr) Position recognition method and system based on visual information processing
WO2015122658A1 (fr) Distance measurement method using a vision sensor database
WO2020189909A2 (fr) System and method for implementing a road facility management solution based on a 3D-VR multi-sensor system
WO2021045445A1 (fr) Driver's license examination processing device
CN113934212A (zh) Positionable smart construction site safety inspection robot
WO2019132504A1 (fr) Destination guidance apparatus and method
KR20090064673A (ko) Location measurement terminal and location measurement method using the position of a location identification tag

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18936101

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18936101

Country of ref document: EP

Kind code of ref document: A1