WO2022097765A1 - Terminal device, service server, and method for object recognition-based indoor positioning - Google Patents
Terminal device, service server, and method for object recognition-based indoor positioning
- Publication number
- WO2022097765A1 (PCT/KR2020/015298)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- indoor
- virtual space
- objects
- user
- location
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
- G01C21/383—Indoor data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/17—Image acquisition using hand-held instruments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B29/00—Maps; Plans; Charts; Diagrams, e.g. route diagram
- G09B29/003—Maps
- G09B29/006—Representation of non-cartographic information on maps, e.g. population distribution, wind direction, radiation levels, air and sea routes
Definitions
- the present invention relates to a terminal device, a service server, and a method for object recognition-based indoor positioning, and more particularly, to a terminal device, a service server, and a method capable of estimating a user's location by recognizing an object in an image captured through a photographing unit of the terminal device and matching the vectorized coordinates of the recognized object on an indoor map.
- location-based services (LBS) can provide services such as indoor navigation by measuring a user's location using Wi-Fi, beacons, and the like.
- when Wi-Fi is used to determine the user's location, however, the strength of the received Wi-Fi signal varies greatly indoors, making it difficult to provide an appropriate location-based service.
- in addition, conventional indoor location estimation techniques have the problem that expensive equipment or infrastructure must be installed.
- the present invention has been devised to address the above problems, and an object of the present invention is to provide a terminal device, a service server, and a method for object recognition-based indoor positioning capable of recognizing an object in an image captured through a photographing unit of the terminal device and estimating a user's location using the recognized object.
- a terminal device according to an embodiment includes a storage unit in which an indoor map matched with a coordinate value for each predetermined object is stored; a photographing unit; and a controller that recognizes first and second objects in an image obtained through the photographing unit, estimates the virtual space positions of the first and second objects, estimates the virtual space distance between each of the first and second objects and the user using the virtual space positions of the first and second objects and the user's virtual space position, and estimates the indoor location of the user using the virtual space distances.
- the control unit may display the user's location on the indoor map and present it through the display unit together with the image obtained through the photographing unit.
- the control unit recognizes the first and second objects using deep learning, and estimates the virtual space positions of the first and second objects using a point cloud generated in the virtual space.
- the controller may estimate the user's virtual space location using a dead reckoning algorithm.
- the controller calculates a first Euclidean distance between the virtual space location of the first object and the user's virtual space location, and a second Euclidean distance between the virtual space location of the second object and the user's virtual space location.
- the control unit obtains the indoor coordinates of the first and second objects from the indoor map, estimates the user's first and second indoor predicted positions using the first and second Euclidean distances and the indoor coordinates of the first and second objects, obtains the virtual space cross product of a first virtual vector and a second virtual vector (the vectors from the user's virtual space position to the virtual space positions of the first and second objects), obtains a first indoor cross product of the first and second real vectors from the first indoor predicted position to the indoor coordinates of the first and second objects, obtains a second indoor cross product of the first and second real vectors from the second indoor predicted position to the indoor coordinates of the first and second objects, and may estimate as the user's indoor location the indoor predicted position whose indoor cross product has the same sign as the virtual space cross product.
- the terminal device may further include a communication unit for communicating with a service server through a communication network, wherein, when an image collection application stored in the storage unit is executed, the control unit photographs the preselected objects through the photographing unit, stores the captured image of each object, and transmits the stored captured image of each object to the service server through the communication unit.
- a service server according to an embodiment includes a communication unit that receives a captured image of each object set on an indoor map, and an object recognition model generator that learns the captured images of each object received through the communication unit and generates an object recognition model for object recognition by matching each learned object with the coordinates of that object.
- when a location estimation request signal including a captured image is received from the terminal device through the communication unit, the service server recognizes first and second objects by inputting the captured image into the object recognition model, estimates the virtual space positions of the first and second objects, and estimates the virtual space distance between each of the first and second objects and the user using the virtual space positions of the first and second objects and the user's virtual space position; the service server may further include a location estimator that estimates the indoor location of the user using the virtual space distances and transmits the estimated indoor location to the terminal device.
- a method according to an embodiment includes: recognizing, by a terminal device, first and second objects in an image obtained through a photographing unit, and estimating the virtual space locations of the first and second objects; estimating, by the terminal device, the virtual space distance between each of the first and second objects and the user using the virtual space positions of the first and second objects and the user's virtual space position; and estimating, by the terminal device, the indoor location of the user using the virtual space distances.
- the terminal device recognizes the first and second objects using deep learning, and a point cloud generated in the virtual space can be used to estimate the virtual space positions of the first and second objects.
- the estimating of the virtual space distances includes: estimating, by the terminal device, the virtual space location of the user using a dead reckoning algorithm; calculating a first Euclidean distance between the virtual space location of the first object and the user's virtual space location; and calculating a second Euclidean distance between the virtual space location of the second object and the user's virtual space location.
- the estimating of the indoor location of the user includes: obtaining, by the terminal device, the indoor coordinates of the first and second objects from the indoor map; estimating, by the terminal device, the user's first and second indoor predicted positions using the first and second Euclidean distances and the indoor coordinates of the first and second objects; obtaining, by the terminal device, the virtual space cross product of a first virtual vector and a second virtual vector, which are the vectors from the user's virtual space position to the virtual space positions of the first and second objects; obtaining, by the terminal device, a first indoor cross product of the first and second real vectors from the first indoor predicted position to the indoor coordinates of the first and second objects; obtaining, by the terminal device, a second indoor cross product of the first and second real vectors from the second indoor predicted position to the indoor coordinates of the first and second objects; and estimating, by the terminal device, as the user's indoor location the indoor predicted position whose indoor cross product has the same sign as the virtual space cross product.
- according to the present invention, by estimating the user's location using an image captured by the camera of the user's terminal device in an indoor environment, the indoor location can be estimated accurately without installing expensive equipment or infrastructure.
- FIG. 1 is a conceptual diagram for explaining object recognition-based indoor positioning according to an embodiment of the present invention.
- FIG. 2 is a diagram for explaining an object recognition-based indoor positioning system according to an embodiment of the present invention.
- FIG. 3 is a block diagram showing the configuration of a terminal device according to an embodiment of the present invention.
- FIG. 4 is an exemplary diagram for explaining a location estimation screen according to an embodiment of the present invention.
- FIG. 5 is an exemplary diagram for explaining a method of estimating a user's indoor location using a cross product according to an embodiment of the present invention.
- FIG. 6 is a block diagram showing the configuration of a service server according to an embodiment of the present invention.
- FIG. 7 is a flowchart illustrating an object recognition-based indoor positioning method according to an embodiment of the present invention.
- FIG. 8 is a flowchart illustrating a method of estimating a user's location using a cross product according to an embodiment of the present invention.
- implementations described herein may be implemented as, for example, a method or process, an apparatus, a software program, a data stream, or a signal. Although discussed only in the context of a single form of implementation (eg, discussed only as a method), implementations of the discussed features may also be implemented in other forms (eg, as an apparatus or program).
- the apparatus may be implemented in suitable hardware, software and firmware, and the like.
- a method may be implemented in an apparatus such as, for example, a processor, which refers generally to processing devices including a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices such as computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end users.
- FIG. 1 is a conceptual diagram for explaining object recognition-based indoor positioning according to an embodiment of the present invention.
- a user acquires an image through the photographing unit of the terminal device 100, and the terminal device 100 recognizes preselected first and second objects from the acquired image. Then, the terminal device 100 matches the recognized first and second objects on the vectorized indoor map and estimates the user's indoor location.
- FIG. 2 is a diagram for explaining an object recognition-based indoor positioning system according to an embodiment of the present invention.
- the object recognition-based indoor positioning system includes a manager terminal 100a, a service server 200, and a user terminal 100b.
- the manager terminal 100a, the service server 200, and the user terminal 100b may be connected through various types of wireless communication networks such as Wi-Fi, 3G, and LTE.
- the manager terminal 100a maps selected objects to the indoor map for use in user location estimation. That is, the administrator may select objects to be used for user location estimation.
- the objects are selected from among static objects (eg, a store sign, a signboard, a fire hydrant, etc.), and may mainly be objects having unique characteristics in the indoor space.
- the manager maps the selected objects to the pre-made indoor map. That is, coordinate values on the indoor map are stored for each selected object.
- the indoor map is a digitized (vectorized) map produced from CAD drawings, a point cloud map, a lidar map, or an image map, and may be a map usable by both the manager terminal 100a and the user terminal 100b.
- the indoor map may include main information of the corresponding room. For example, in the case of a shopping mall, the indoor map may include a boundary line dividing the stores, a store name, and the like.
- the manager terminal 100a stores an image collection application, stores images of objects photographed through the image collection application for training a deep learning network for object recognition, and transmits the stored captured images of each object to the service server 200.
- the photographed images may be images of the object taken from various directions.
- the manager terminal 100a captures images including preselected objects in various directions, and provides the captured images for each object to the service server 200 to be used as learning data for object recognition.
- the service server 200 collects captured images of each object set on the indoor map from the manager terminal 100a, and learns the captured images for each object to generate an object recognition model.
- the service server 200 may generate the object recognition model using deep learning. Specifically, when a captured image of each object is received, the service server 200 trains a deep learning network on the image pixel coordinates of the four vertices of the smallest rectangle containing the object (hereinafter, a 'bounding box') and the object name.
- the deep learning network may be designed with various models suitable for object recognition; for example, the YOLO network may be used.
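As an illustrative sketch (not the patent's implementation), a YOLO-style detector of the kind described above returns, per recognized object, an object name and the four bounding-box vertices in image pixels; the bounding-box center used later for positioning can then be computed from those vertices. All names and pixel values below are invented:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """One recognized object: its name and the four image-pixel
    vertices of the smallest enclosing rectangle (bounding box)."""
    name: str
    corners: List[Tuple[float, float]]  # e.g. [top-left, top-right, bottom-right, bottom-left]

def bbox_center(det: Detection) -> Tuple[float, float]:
    """Pixel coordinates of the bounding-box center."""
    xs = [c[0] for c in det.corners]
    ys = [c[1] for c in det.corners]
    return (min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0

# Hypothetical detector output for one frame:
frame_detections = [
    Detection("sunshade",     [(120, 80), (260, 80), (260, 200), (120, 200)]),
    Detection("fire_hydrant", [(400, 150), (460, 150), (460, 280), (400, 280)]),
]
for det in frame_detections:
    print(det.name, bbox_center(det))
```

The center point is what the controller later matches against the point cloud, so only the box extremes matter here, not the vertex order.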
- the service server 200 estimates the location of the user terminal 100b by inputting the captured image into the object recognition model.
- the user terminal 100b stores a location estimation application; when the surrounding environment is photographed through the location estimation application, the user terminal 100b recognizes objects (eg, a sign, a fire hydrant, a picture frame, a door, etc.) from the captured image and estimates the user's location using the location coordinates and distance estimates of the recognized objects.
- the manager terminal 100a and the user terminal 100b have been described separately, but they may be the same terminal. Therefore, hereinafter, for convenience of description, the manager terminal 100a and the user terminal 100b will be referred to as the terminal device 100.
- FIG. 3 is a block diagram showing the configuration of a terminal device according to an embodiment of the present invention
- FIG. 4 is an exemplary diagram for explaining a location estimation screen according to an embodiment of the present invention
- FIG. 5 is an exemplary diagram for explaining a method of estimating a user's indoor location using a cross product according to an embodiment of the present invention.
- the terminal device 100 includes a communication unit 110 , a storage unit 120 , a photographing unit 130 , a display unit 140 , and a control unit 150 .
- the communication unit 110 is a configuration for communication with the service server 200 through a communication network, and may transmit and receive various information such as an image obtained through the photographing unit 130 .
- the communication unit 110 may be implemented in various forms, such as a short-range communication module, a wireless communication module, a mobile communication module, and a wired communication module.
- the storage unit 120 is configured to store data related to the operation of the terminal device 100 .
- the storage unit 120 may use one or more known storage media, such as ROM, PROM, EPROM, EEPROM, and RAM.
- the storage unit 120 may store an indoor map matched with a coordinate value for each pre-selected object.
- the storage unit 120 may store an image collection application capable of obtaining a preselected photographed image for each object by driving the photographing unit 130 .
- a location estimation application for estimating a current location using an image may be stored in the storage unit 120 .
- the photographing unit 130 acquires an image when the image collection application or the location estimation application is executed, and transmits the acquired image to the controller 150.
- the photographing unit 130 may be, for example, a camera.
- the display unit 140 is configured to display various information related to the operation of the terminal device 100 .
- the display unit 140 may display an image collection screen when the image collection application is executed, and may display the location estimation screen when the location estimation application is executed.
- the display unit 140 may also operate as an input unit for receiving information from a user.
- when the image collection application is executed, the control unit 150 drives the photographing unit 130, stores the image of each object photographed through the photographing unit 130, and transmits the captured image of each object to the service server 200. That is, when the collection of images in various directions for the preselected objects is completed, the controller 150 transmits the collected images and the coordinates of each object on the indoor map to the service server 200.
- the number of objects on the indoor map and the shooting directions may be set in advance by the administrator.
- when the location estimation application is executed, the control unit 150 drives the photographing unit 130, recognizes objects (eg, a signboard, a fire hydrant, a picture frame, a door, etc.) in the image captured by the photographing unit 130, and estimates the user's location using the location coordinates and distance estimates of the recognized objects.
- specifically, the controller 150 recognizes the first object and the second object from the image acquired through the photographing unit 130, estimates the virtual space positions of the first and second objects, estimates the virtual space distance between each of the first and second objects and the user using the virtual space positions of the first and second objects and the user's virtual space position, and estimates the indoor location of the user using the virtual space distances.
- the virtual space may mean a space visible on the screen.
- to this end, the control unit 150 generates a point cloud in the virtual space. The points of the point cloud each have coordinates in the virtual space. The coordinates at the moment the point cloud map starts to be generated (that is, the moment the user starts moving) may be set to [0,0]. If the user moves around the entire area of the indoor space, a point cloud map for the indoor environment is generated, and each point has coordinates in the virtual space.
- the controller 150 recognizes the predetermined first and second objects in the image obtained through the photographing unit 130, and estimates the virtual space positions of the first and second objects using the point cloud generated in the virtual space.
- the controller 150 recognizes the first object and the second object using deep learning technology, and outputs a first bounding box including the first object and a second bounding box including the second object.
- the controller 150 may recognize the first object and the second object using the YOLO network, and at least one of an object name, coordinate values, and side lengths of the bounding box may be displayed in each of the first and second bounding boxes.
- when the controller 150 recognizes the first and second objects, a recognition result such as the captured image 310 of the location estimation screen 300 shown in FIG. 4 may be output. In FIG. 4, a bounding box (A) containing an object called 'sunshade' is displayed, and the coordinates [x,y] and [x1,y1] and the two side lengths d and h may be displayed with the bounding box.
- when an image including a learned object is input to the deep learning network, the controller 150 outputs the image pixel coordinates of the four vertices of the bounding box containing the object and the object name.
- the controller 150 selects the most central point within each of the first and second bounding boxes, and estimates the virtual space positions of the first object and the second object. That is, the controller 150 may select, from among the point cloud generated on the image, the point closest to the center of each bounding box, and take the selected point's coordinates as the object's virtual space coordinates.
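The selection above can be sketched as follows, assuming (illustratively, this is not the patent's data layout) that each point-cloud point carries both its projected image-pixel position and its virtual space coordinates:

```python
import math

def object_virtual_position(points, box_center):
    """points: list of ((u, v), (x, y)) pairs, where (u, v) is the point's
    projection in image pixels and (x, y) its virtual space coordinates.
    Returns the virtual space coordinates of the point whose projection
    lies closest to the bounding-box center (u0, v0)."""
    u0, v0 = box_center
    nearest = min(points, key=lambda p: math.hypot(p[0][0] - u0, p[0][1] - v0))
    return nearest[1]

# Hypothetical points projected near a bounding box centered at (190, 140):
cloud = [((185, 138), (2.1, 3.4)), ((240, 170), (2.6, 3.9)), ((150, 100), (1.8, 3.0))]
print(object_virtual_position(cloud, (190, 140)))  # -> (2.1, 3.4)
```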
- next, the controller 150 estimates the virtual space distance between each of the first and second objects and the user, using the virtual space positions of the first and second objects and the user's virtual space position.
- the controller 150 continuously tracks the user's location, starting from [0,0] in the virtual space, through sensors (eg, a gyroscope, an accelerometer, a geomagnetic sensor, etc.) mounted in the terminal device 100.
- the controller 150 may track the user's virtual space location using a dead reckoning algorithm.
- the dead reckoning algorithm may be an algorithm for tracking the user's movement by estimating the moving distance and direction of the user based on sensors (not shown) mounted on the terminal device 100 .
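A minimal dead-reckoning update, assuming step events with an estimated step length and a heading derived from the terminal's sensors (all names and values here are illustrative, not the patent's algorithm), accumulates the user's virtual space position from the origin [0, 0]:

```python
import math

def dead_reckon(steps, start=(0.0, 0.0)):
    """steps: iterable of (step_length_m, heading_rad) pairs, e.g. from a
    pedometer and a gyroscope/geomagnetic sensor. Returns the track of
    virtual space positions, starting at the origin."""
    x, y = start
    track = [(x, y)]
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        track.append((x, y))
    return track

# Two 0.7 m steps heading east (0 rad), then two heading north (pi/2):
print(dead_reckon([(0.7, 0.0), (0.7, 0.0), (0.7, math.pi / 2), (0.7, math.pi / 2)]))
```

Dead reckoning drifts over time, which is why the patent anchors the track to recognized objects rather than relying on it alone.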
- the controller 150 calculates a first Euclidean distance between the virtual space location of the first object and the user's virtual space location, and a second Euclidean distance between the virtual space location of the second object and the user's virtual space location.
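These two distances are ordinary Euclidean distances in the virtual plane; as a small worked example (coordinates illustrative):

```python
import math

user = (1.4, 1.4)   # user's virtual space position
obj1 = (2.1, 3.4)   # first object's virtual space position
obj2 = (4.0, 0.4)   # second object's virtual space position

d1 = math.dist(user, obj1)  # first Euclidean distance
d2 = math.dist(user, obj2)  # second Euclidean distance
print(round(d1, 3), round(d2, 3))
```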
- since the first Euclidean distance and the second Euclidean distance are scalar values, the distance value in the virtual space and the distance value on the indoor map may be the same.
- then, the control unit 150 can estimate the user's actual location on the indoor map using the virtual space locations of the first and second objects, the user's virtual space location, and the Euclidean distances between the objects' virtual space locations and the user's virtual space location. In this case, the actual positions of the first and second objects on the indoor map may be preset coordinate values.
- specifically, the controller 150 obtains the indoor coordinates (actual positions) of the first and second objects from the indoor map, and may estimate the user's first and second indoor predicted locations using the first and second Euclidean distances and the indoor coordinates of the first and second objects. That is, since the user's position and the positions of the first and second objects are known in the virtual space, the controller 150 can draw a triangle: the positions of the first and second objects and the two distance values from the first and second objects to the user are known. When this virtual space figure is matched onto the indoor map, the user can be located at only two points on the indoor map. In order to select the user's location from among these two points, it is necessary to know in which direction the first and second objects lie relative to the user.
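Geometrically, the two candidate positions are the intersections of two circles: centers at the objects' indoor coordinates, radii equal to the first and second Euclidean distances. A standard two-circle intersection, sketched here with illustrative values (this is one way to realize the step, not necessarily the patent's exact computation):

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of circles (center c1, radius r1) and (c2, r2).
    Assumes the circles properly intersect; returns the two points."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (d * d + r1 * r1 - r2 * r2) / (2 * d)  # distance from c1 to the chord midpoint
    h = math.sqrt(r1 * r1 - a * a)             # half the chord length
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d
    return (mx + ox, my - oy), (mx - ox, my + oy)

# Objects at indoor coordinates (0, 0) and (6, 0), both at distance 5 from the user:
print(circle_intersections((0.0, 0.0), 5.0, (6.0, 0.0), 5.0))  # -> ((3.0, -4.0), (3.0, 4.0))
```

The two returned points correspond to the points a and b discussed below; the cross-product sign test then picks one of them.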
- to this end, the controller 150 may use the concept of the cross product (vector product).
- the cross product of the two vectors from the user to the first and second objects in the virtual space has a direction, and this direction should be the same on the indoor map. That is, the sign of the cross product of the two vectors from the user to the first and second objects in the virtual space and the sign of the cross product of the two vectors from the user to the first and second objects on the indoor map should be the same. Therefore, of the two points where the user can be located on the indoor map, the point whose cross product sign equals the virtual space cross product sign can finally be estimated as the user's location.
- specifically, the controller 150 obtains the virtual space cross product of the first virtual vector and the second virtual vector, which are the vectors from the user's virtual space position to the virtual space positions of the first and second objects. That is, the controller 150 obtains the cross product of the first virtual vector from the user's virtual space position to the virtual space position of the first object and the second virtual vector from the user's virtual space position to the virtual space position of the second object.
- in addition, the controller 150 obtains a first indoor cross product of the first real vector and the second real vector, which are the vectors from the first indoor predicted position to the indoor coordinates of the first and second objects, and a second indoor cross product of the first real vector and the second real vector from the second indoor predicted position to the indoor coordinates of the first and second objects. That is, the controller 150 obtains the first indoor cross product of the first real vector from the first indoor predicted position to the indoor coordinates of the first object and the second real vector from the first indoor predicted position to the indoor coordinates of the second object, and obtains the second indoor cross product of the first real vector from the second indoor predicted position to the indoor coordinates of the first object and the second real vector from the second indoor predicted position to the indoor coordinates of the second object.
- the controller 150 may then estimate, as the user's indoor location, the indoor predicted position whose indoor cross product has the same sign as the virtual space cross product, among the first and second indoor cross products.
- the possible user locations on the indoor map may be two points, a and b, and the user's actual location must be selected from these two points.
- to do so, the directions of the first and second objects as seen in the virtual space should be considered.
- in the virtual space, the cross product d2 × d1 of the two vectors can have a positive (+) sign.
- the cross product d2 × d1 is then computed for each of the possible user positions a and b on the indoor map, and the position whose cross-product sign matches the sign obtained in the virtual space can be determined as the user's final position.
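As a sketch, the sign test described above reduces to the z-component of a 2D cross product. All coordinates below are illustrative examples, not values from the patent:

```python
import numpy as np

def cross_sign(user, obj1, obj2):
    """Sign of the 2D cross product of the vectors user->obj2 and user->obj1."""
    d1 = np.asarray(obj1, dtype=float) - np.asarray(user, dtype=float)
    d2 = np.asarray(obj2, dtype=float) - np.asarray(user, dtype=float)
    return np.sign(d2[0] * d1[1] - d2[1] * d1[0])  # z-component of d2 x d1

# Virtual space: user at the origin, object 1 front-left, object 2 front-right.
virtual_sign = cross_sign((0.0, 0.0), (-1.0, 2.0), (1.0, 2.0))

# Indoor map: two candidate positions a and b, with the objects' mapped coordinates.
obj1_map, obj2_map = (2.0, 5.0), (4.0, 5.0)
candidates = {"a": (3.0, 3.0), "b": (3.0, 7.0)}
user_position = next(
    name for name, p in candidates.items()
    if cross_sign(p, obj1_map, obj2_map) == virtual_sign
)
# candidate a sees the objects in the same orientation as the virtual space
```

Only the sign matters here: mirrored candidate positions flip the orientation of the two objects, which is exactly what the patent uses to discard the wrong intersection point.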
- the controller 150 may recognize two objects through the photographing unit 130, and estimate the user's location using the recognized objects' location coordinates and estimated distances.
- the present invention utilizes a vectorized indoor map so that the user can accurately estimate his or her location even when only two objects are recognized through the photographing unit 130 of the terminal device 100 .
- the objects to be used for estimating the user's location are mapped on the indoor map in advance. That is, the coordinates of the objects on the indoor map are stored. Then, when two objects are recognized in the virtual space shown on the screen of the terminal device 100, the geometric figure formed by the user and the two objects on the indoor map is matched to the geometric figure formed by the user and the two objects in the virtual space, so that the user's position can be estimated.
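The two candidate points mentioned throughout come from intersecting two circles centered on the mapped objects, with radii equal to the estimated user-to-object distances. A minimal sketch with illustrative coordinates (standard circle-circle intersection geometry, not code from the patent):

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles: the candidate user positions.

    c1, c2 are the mapped (x, y) coordinates of the two recognized objects;
    r1, r2 are the estimated user-to-object distances.
    """
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # no intersection: distances inconsistent with the map
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to the chord midpoint
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half the chord length
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

# Objects 8 m apart, both estimated 5 m from the user: two mirror candidates.
points = circle_intersections((0.0, 0.0), 5.0, (8.0, 0.0), 5.0)
```

The two returned points are mirror images across the line joining the objects, which is why an extra orientation test (the cross-product sign) is needed to pick the correct one.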
- the controller 150 displays the user's location on the indoor map and displays it on the display unit 140 together with the captured image. That is, when the user's location is estimated through the location estimation application, the controller 150 may display the location estimation screen 300, on which the user's location B is shown on the captured image 310 and the indoor map 320, as shown in FIG. 4.
- the control unit 150 may include at least one arithmetic unit, where the arithmetic unit may be a general-purpose central processing unit (CPU), a programmable logic device (CPLD, FPGA) implemented for a specific purpose, an application-specific integrated circuit (ASIC), or a microcontroller chip.
- the terminal device 100 configured as described above may be an electronic device capable of photographing the surrounding environment through the photographing unit 130 and applicable to various wired and wireless environments.
- the terminal device 100 includes a personal digital assistant (PDA), a smart phone, a cellular phone, a Personal Communication Service (PCS) phone, a Global System for Mobile (GSM) phone, a Wideband CDMA (W-CDMA) phone, a CDMA-2000 phone, a Mobile Broadband System (MBS) phone, and the like.
- the terminal device 100 typically represents a small portable device, but a device including a camcorder or a laptop computer may also be referred to as a mobile communication terminal; thus, embodiments of the present invention are not particularly limited thereto.
- FIG. 6 is a block diagram showing the configuration of a service server according to an embodiment of the present invention.
- the service server 200 includes a communication unit 210 , a storage unit 220 , an object recognition model generation unit 230 , and a control unit 250 .
- the communication unit 210 receives a photographed image for each object from the terminal device 100 .
- the storage unit 220 is configured to store data related to the operation of the service server 200 .
- the storage unit 220 may store an indoor map in which preselected coordinate values for each object are stored.
- the object recognition model generation unit 230 receives photographed images of each object from the terminal device 100 through the communication unit 210, learns the received photographed images, and generates an object recognition model for object recognition.
- the object recognition model generator 230 may generate an object recognition model using deep learning.
- the object recognition model may be in a form in which coordinate values for each object are mapped. Accordingly, when an image whose location is unknown is input, the object recognition model may output the object recognized in the image and the coordinates of that object.
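A hypothetical sketch of this label-to-coordinate mapping, with the trained deep-learning detector mocked as a plain function and all labels and coordinates invented for illustration:

```python
# Preselected indoor-map coordinates stored per object (illustrative values).
OBJECT_COORDINATES = {
    "fire_extinguisher": (2.0, 5.0),
    "vending_machine": (4.0, 5.0),
}

def recognize(image):
    """Stand-in for the trained object recognition model (assumed interface)."""
    # A real model would run inference on the image; here we return fixed labels.
    return ["fire_extinguisher", "vending_machine"]

def objects_with_coordinates(image):
    """Recognize objects and attach their premapped indoor coordinates."""
    return [(label, OBJECT_COORDINATES[label]) for label in recognize(image)]

result = objects_with_coordinates(image=None)
```

The key point is that the server never localizes objects from the image alone; recognition only selects which stored map coordinates to use.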
- when the service server 200 according to the present invention receives a location estimation request signal including a photographed image from the terminal device 100 through the communication unit 210, it may further include a location estimator 240 that inputs the photographed image to the object recognition model to recognize the first and second objects, estimates the virtual space positions of the first and second objects, estimates the virtual space distance between each of the first and second objects and the user by using the virtual space positions of the first and second objects and the user's virtual space position, estimates the user's indoor location using the virtual space distances, and transmits the estimated indoor location to the terminal device. When the first and second objects are recognized, the location estimator 240 may display the recognized first and second objects on the screen of the terminal device 100.
- the object recognition model generator 230 and the location estimator 240 may each be implemented by a processor or the like required to execute a program on a computing device.
- the object recognition model generator 230 and the location estimator 240 may be implemented as physically independent components, or may be functionally separated within one processor.
- the control unit 250 is configured to control the operation of the various components of the service server 200, including the communication unit 210, the storage unit 220, the object recognition model generator 230, and the location estimator 240, and may include at least one arithmetic unit, where the arithmetic unit may be a general-purpose central processing unit (CPU), a programmable logic device (CPLD, FPGA) implemented for a specific purpose, an application-specific integrated circuit (ASIC), or a microcontroller chip.
- FIG. 7 is a flowchart illustrating an object recognition-based indoor positioning method according to an embodiment of the present invention.
- the terminal device 100 drives the photographing unit 130 to capture an image, and recognizes first and second objects in the captured image (S510).
- the terminal device 100 may recognize the first object and the second object using deep learning technology.
- the terminal device 100 estimates the virtual space positions of the first object and the second object, respectively (S520).
- the terminal device 100 may estimate the virtual space positions of the first and second objects, respectively, by using the point cloud generated in the virtual space.
- after step S520, the terminal device 100 estimates the virtual space distance between each of the first and second objects and the user by using the virtual space positions of the first and second objects and the user's virtual space position (S530).
- the terminal device 100 may track the user's virtual space location using a dead reckoning algorithm.
- the terminal device 100 calculates a first Euclidean distance between the virtual space location of the first object and the user's virtual space location, and a second Euclidean distance between the virtual space location of the second object and the user's virtual space location.
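For example, with illustrative virtual space coordinates (not values from the patent), the two Euclidean distances can be computed as:

```python
import math

# Assumed example values: virtual space positions recovered from the point cloud.
user_vs = (0.0, 0.0, 0.0)
obj1_vs = (1.0, 2.0, 2.0)   # first object's virtual space position
obj2_vs = (3.0, 0.0, 4.0)   # second object's virtual space position

def euclidean(p, q):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

d1 = euclidean(user_vs, obj1_vs)  # first Euclidean distance
d2 = euclidean(user_vs, obj2_vs)  # second Euclidean distance
```

These two distances become the circle radii used on the indoor map in the next step.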
- after step S530, the terminal device 100 estimates the user's location on the indoor map by using the virtual space locations of the first and second objects, the user's virtual space location, and the Euclidean distances between the objects' virtual space locations and the user's virtual space location (S540).
- for a detailed description of the method by which the terminal device 100 estimates the user's location on the indoor map, reference will be made to FIG. 8.
- FIG. 8 is a flowchart illustrating a method of estimating a user's location using a cross product according to an embodiment of the present invention.
- the terminal device 100 obtains the cross product of the two vectors from the user to the first and second objects in the virtual space (S610). That is, the terminal device 100 may obtain the virtual space cross product of the first virtual vector, from the user's virtual space position to the virtual space position of the first object, and the second virtual vector, from the user's virtual space position to the virtual space position of the second object.
- after step S610, the terminal device 100 obtains the cross product of the two vectors to the first and second objects centered on each candidate user position on the indoor map (S620). That is, the terminal device 100 may obtain the first indoor cross product of the first and second real vectors, which are the vectors from the first indoor predicted position to the indoor coordinates of the first and second objects, and the second indoor cross product of the first and second real vectors, which are the vectors from the second indoor predicted position to the indoor coordinates of the first and second objects.
- after step S620, the terminal device 100 estimates, among the two points where the user can be located on the indoor map, the point whose cross-product sign matches that of the virtual space as the user's location (S630). That is, the terminal device 100 may estimate, as the user's indoor location, the indoor predicted position whose indoor cross product has the same sign as the virtual space cross product, among the first and second indoor cross products.
- according to the present invention, the user's location is estimated using an image captured by the camera of the user's terminal device in an indoor environment, so that the indoor location can be accurately estimated without building expensive equipment or infrastructure.
Claims (13)
- A terminal device comprising: a storage unit storing an indoor map to which preset coordinate values for each object are matched; a photographing unit; and a controller configured to recognize first and second objects in an image acquired through the photographing unit, estimate the virtual space positions of the first and second objects, estimate the virtual space distance between each of the first and second objects and a user by using the virtual space positions of the first and second objects and the user's virtual space position, and estimate the user's indoor location by using the virtual space distances.
- The terminal device of claim 1, further comprising a display unit, wherein the controller displays the user's location on the indoor map and displays it through the display unit together with the image acquired through the photographing unit.
- The terminal device of claim 1, wherein the controller recognizes the first and second objects using deep learning, and estimates the virtual space positions of the first and second objects using a point cloud generated in the virtual space.
- The terminal device of claim 1, wherein the controller estimates the user's virtual space position using a dead reckoning algorithm.
- The terminal device of claim 1, wherein the controller calculates a first Euclidean distance between the virtual space position of the first object and the user's virtual space position, and calculates a second Euclidean distance between the virtual space position of the second object and the user's virtual space position.
- The terminal device of claim 5, wherein the controller obtains the indoor coordinates of the first and second objects from the indoor map; estimates a first indoor predicted position and a second indoor predicted position of the user using the first and second Euclidean distances and the indoor coordinates of the first and second objects; obtains a virtual space cross product, which is the cross product of a first virtual vector and a second virtual vector, the vectors from the user's virtual space position to the virtual space positions of the first and second objects; obtains a first indoor cross product, which is the cross product of a first real vector and a second real vector, the vectors from the first indoor predicted position to the indoor coordinates of the first and second objects, and a second indoor cross product, which is the cross product of a first real vector and a second real vector, the vectors from the second indoor predicted position to the indoor coordinates of the first and second objects; and estimates, as the user's indoor location, the indoor predicted position whose indoor cross product has the same sign as the virtual space cross product, among the first and second indoor cross products.
- The terminal device of claim 1, further comprising a communication unit that communicates with a service server through a communication network, wherein, when an image collection application stored in the storage unit is executed, the controller photographs preselected objects through the photographing unit, stores the photographed image of each object, and transmits the stored photographed images to the service server through the communication unit.
- A service server comprising: a communication unit that receives photographed images of objects set on an indoor map; and an object recognition model generator that learns the photographed images of the objects received through the communication unit and generates an object recognition model for object recognition by matching each object and its coordinates through the learned photographed images.
- The service server of claim 8, further comprising a location estimator that, when a location estimation request signal including a photographed image is received from a terminal device through the communication unit, inputs the photographed image to the object recognition model to recognize first and second objects, estimates the virtual space positions of the first and second objects, estimates the virtual space distance between each of the first and second objects and a user by using the virtual space positions of the first and second objects and the user's virtual space position, estimates the user's indoor location using the virtual space distances, and transmits the estimated indoor location to the terminal device.
- An object recognition-based indoor positioning service method, comprising: recognizing, by a terminal device, first and second objects in an image acquired through a photographing unit and estimating the virtual space positions of the first and second objects; estimating, by the terminal device, the virtual space distance between each of the first and second objects and a user by using the virtual space positions of the first and second objects and the user's virtual space position; and estimating, by the terminal device, the user's indoor location using the virtual space distances.
- The method of claim 10, wherein, in the estimating of the virtual space positions of the first and second objects, the terminal device recognizes the first and second objects using deep learning, and estimates the virtual space positions of the first and second objects using a point cloud generated in the virtual space.
- The method of claim 10, wherein the estimating of the virtual space distances comprises: estimating, by the terminal device, the user's virtual space position using a dead reckoning algorithm; and calculating, by the terminal device, a first Euclidean distance between the virtual space position of the first object and the user's virtual space position, and a second Euclidean distance between the virtual space position of the second object and the user's virtual space position.
- The method of claim 10, wherein the estimating of the user's indoor location comprises: obtaining, by the terminal device, the indoor coordinates of the first and second objects from the indoor map, and estimating a first indoor predicted position and a second indoor predicted position of the user using the first and second Euclidean distances and the indoor coordinates of the first and second objects; obtaining, by the terminal device, a virtual space cross product, which is the cross product of a first virtual vector and a second virtual vector, the vectors from the user's virtual space position to the virtual space positions of the first and second objects; obtaining, by the terminal device, a first indoor cross product, which is the cross product of a first real vector and a second real vector, the vectors from the first indoor predicted position to the indoor coordinates of the first and second objects, and a second indoor cross product, which is the cross product of a first real vector and a second real vector, the vectors from the second indoor predicted position to the indoor coordinates of the first and second objects; and estimating, by the terminal device, as the user's indoor location, the indoor predicted position whose indoor cross product has the same sign as the virtual space cross product.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/KR2020/015298 WO2022097765A1 (ko) | 2020-11-04 | 2020-11-04 | 객체 인식 기반 실내 측위를 위한 단말장치, 서비스 서버 및 그 방법 |
EP20960867.8A EP4242967A1 (en) | 2020-11-04 | 2020-11-04 | Terminal device for indoor positioning based on object recognition, service server, and method therefor |
US18/251,038 US20230410351A1 (en) | 2020-11-04 | 2020-11-04 | Terminal device for indoor positioning based on object recognition, service server, and method therefor |
CN202080106643.8A CN116472554A (zh) | 2020-11-04 | 2020-11-04 | 用于基于对象识别的室内定位的终端装置、服务服务器及其方法 |
JP2023547168A JP2023544913A (ja) | 2020-11-04 | 2020-11-04 | 客体認識基盤室内側位のための端末装置、サービスサーバーおよびその方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/KR2020/015298 WO2022097765A1 (ko) | 2020-11-04 | 2020-11-04 | 객체 인식 기반 실내 측위를 위한 단말장치, 서비스 서버 및 그 방법 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022097765A1 true WO2022097765A1 (ko) | 2022-05-12 |
Family
ID=81458318
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/015298 WO2022097765A1 (ko) | 2020-11-04 | 2020-11-04 | 객체 인식 기반 실내 측위를 위한 단말장치, 서비스 서버 및 그 방법 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230410351A1 (ko) |
EP (1) | EP4242967A1 (ko) |
JP (1) | JP2023544913A (ko) |
CN (1) | CN116472554A (ko) |
WO (1) | WO2022097765A1 (ko) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110025025A (ko) | 2009-09-02 | 2011-03-09 | 동국대학교 산학협력단 | 자기장 센서를 이용한 현재 위치 측정 장치 및 방법 |
KR20130108678A (ko) * | 2012-03-20 | 2013-10-07 | 삼성에스디에스 주식회사 | 위치 측정 장치 및 방법 |
KR20160003553A (ko) * | 2014-07-01 | 2016-01-11 | 삼성전자주식회사 | 지도 정보를 제공하기 위한 전자 장치 |
JP2017102861A (ja) * | 2015-12-04 | 2017-06-08 | トヨタ自動車株式会社 | 物体認識装置 |
KR20180126408A (ko) * | 2018-09-07 | 2018-11-27 | 이주형 | 사용자 단말의 위치를 판단하는 방법 |
JP2019046464A (ja) * | 2017-09-01 | 2019-03-22 | 株式会社コンピュータサイエンス研究所 | 歩道進行支援システム及び歩道進行支援ソフトウェア |
KR20210015226A (ko) * | 2019-08-01 | 2021-02-10 | 주식회사 다비오 | 객체 인식 기반 실내 측위를 위한 단말장치, 서비스 서버 및 그 방법 |
Also Published As
Publication number | Publication date |
---|---|
JP2023544913A (ja) | 2023-10-25 |
US20230410351A1 (en) | 2023-12-21 |
EP4242967A1 (en) | 2023-09-13 |
CN116472554A (zh) | 2023-07-21 |
Legal Events
- Code 121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20960867; Country of ref document: EP; Kind code of ref document: A1)
- Code WWE: Wipo information: entry into national phase (Ref document number: 2023547168; Country of ref document: JP)
- Code WWE: Wipo information: entry into national phase (Ref document number: 202080106643.8; Country of ref document: CN)
- Code WWE: Wipo information: entry into national phase (Ref document number: 18251038; Country of ref document: US)
- Code NENP: Non-entry into the national phase (Ref country code: DE)
- Code ENP: Entry into the national phase (Ref document number: 2020960867; Country of ref document: EP; Effective date: 20230605)