WO2019046962A1 - Method and system for target positioning and map update - Google Patents
Method and system for target positioning and map update
- Publication number
- WO2019046962A1 (PCT/CA2018/051101)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- map
- points
- local
- determining
- point
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
Definitions
- This disclosure relates generally to an object positioning system and method for determining the location of a moving object in a site using a plurality of anchor devices and a map of the site, and in particular, relates to a system and method for updating the positions of changed anchor devices in the site.
- IoT Internet of Things
- GPS Global Positioning System
- GPS generally requires a line-of-sight connection between a GPS signal receiver and GPS signal transmitters on satellites. Therefore, GPS systems usually do not work well in indoor environments as the GPS signal strength is weakened by the building surrounding the GPS signal receiver.
- a multiple-sensor indoor positioning system uses a plurality of sensors deployed in an indoor environment such as a building for positioning one or more moving objects.
- a multiple-sensor indoor positioning system comprises a plurality of small anchor devices such as wall-mount tags, ceiling-mount beacons, and the like, mounted on the surface of different structures and/or inside various objects in the building for facilitating object positioning.
- the configurations of the anchor devices are generally stored in the system for ensuring proper system operation.
- a challenge of the multiple-sensor indoor positioning system is that the anchor devices may be redeployed from time to time such as being moved to different locations and/or remounted with different configurations by users for various reasons and without notification.
- the stored configurations of the redeployed anchor devices have to be updated to match the actual configurations thereof after their redeployment. For example, if the position of an anchor device has been changed, the position record of the anchor device stored in the system has to be updated.
- the indoor environment itself may also change over time. For example, a new wall may be constructed for reconfiguring a room, a door may be blocked or rebuilt, and/or furniture may be relocated. Such changes of the indoor environment also significantly impact the spatial and positioning-related applications of smart devices.
- a traditional approach for solving the above-described challenge is to survey or measure the indoor environment by using professional surveying equipment such as total stations, laser range-finders, digital levels, and the like, regularly or when a change or reconfiguration to the indoor environment and/or the anchor devices is noticed.
- the principle in this approach is to measure the angles and distances from the targets (for example, changed or reconfigured anchor devices and/or building structures) to some known control points, and then use triangulation to estimate the targets' locations accordingly.
- Such a traditional survey method can provide high accuracy in the target's position and the indoor environment measurement.
- the professional surveying equipment used in surveying is usually expensive and operators require special training in order to properly use the equipment.
- the traditional survey methods are based on a pre-existing reference network primarily suitable for outdoor environments, and setting up a corresponding indoor reference network is usually difficult and time-consuming.
- Map-based localization methods have also been used for solving the indoor positioning problem.
- a map-based localization method generally determines a target's position by referencing the target on a known map.
- most indoor maps are floor plans which are effective for pedestrian navigation but usually do not have sufficient detail for small device localization. For example, such floor plans often contain few details and lack elevation information.
- SLAM Simultaneous Localization and Mapping
- SLAM (see prior-art documents [4] and [5]) is a recently-developed technique which aims to provide a cost-effective way for mapping indoor environments in 3D via consumer devices such as Kinect, Tango phone, and the like (see prior-art documents [6] to [8]).
- SLAM was originally designed for mobile robotics navigation in an unknown environment, and the derived map is represented in an arbitrary coordinate frame which is decided by a robot's initial pose in the environment. For managing a large number of devices distributed in different buildings, a unified coordinate frame is a prerequisite.
- an indoor positioning system may comprise a large number (such as hundreds) of anchor devices distributed in a building, and the positions of some devices may change after the building map is generated.
- references [9] and [10] teach a submap SLAM method which focuses on solving an incremental SLAM problem of how to join a sequence of local SLAM maps into a global SLAM map.
- as a local map determined by local SLAM may not have enough control points in the area, it is difficult or even impossible to find a transformation to the reference frame.
- the challenge is how to transform the local map into the reference frame and then how to update the reference map by the newly-determined local map.
- Embodiments herein disclose a system for determining the three-dimensional (3D) position of a moving object in a site (such as an indoor environment) and updating the spatial information of the site using computer vision and point-cloud processing methods.
- the system disclosed herein comprises one or more anchor devices deployed in the site for providing data sources related to the moving object and the site, a generalized 3D map (also denoted as a reference map or a reference 3D map hereinafter) of the site and a processing structure for determining the position of the moving object using the data obtained by the one or more anchor devices and the map.
- the reference map is first generated and all subsequent processing is established thereon. Unlike traditional floor-plan maps and georeferenced-image maps, the reference map disclosed herein comprises three types of map layers.
- the first map layer is a data-source layer which supports various data sources such as sequences of optical images, depth images, 3D point cloud, geometric models, and the like.
- the second map layer is a structure-feature layer which is extracted from the data sources and is used for representing and indexing the primary structures of the site.
- the third map layer is a description layer which records information related to the data sources such as data capturing time, device type, data precision, and the like.
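- by way of illustration only, the three map layers described above may be organized as in the following Python sketch; the class and field names are hypothetical and are not taken from this disclosure:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class ReferenceMap:
    # Layer 1 (data-source layer): raw data sources such as optical
    # images, depth images, 3D point clouds, and geometric models,
    # keyed by a source identifier.
    data_sources: Dict[str, Any] = field(default_factory=dict)
    # Layer 2 (structure-feature layer): features extracted from the
    # data sources, e.g. ceiling/wall models and intersection graphs.
    structure_features: Dict[str, Any] = field(default_factory=dict)
    # Layer 3 (description layer): per-source descriptions such as
    # data capturing time, device type, and data precision.
    descriptions: Dict[str, Dict[str, Any]] = field(default_factory=dict)
```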
- the system uses the reference map and data obtained from the anchor devices for positioning one or more moving objects in the site.
- the system first constructs the reference map of the site using previously-captured data sources such as optical images, depth images, 3D point cloud, and the like. These data sources are rectified and georeferenced by several control points, and the primary structures of the site are extracted therefrom. Then, descriptions such as the time of data collection, the precision of the collection device, and the like, are determined and recorded in the reference map.
- the system may update the reference map periodically or as needed.
- the system can determine and measure the positions of targets including at least a portion of the anchor devices and/or at least a portion of the site (such as a building), and update the reference map based on the obtained target measurement.
- a local measurement can be conducted in an area of interest, for example, by using a consumer device such as a camera, a RGB-D sensor (i.e., a sensor capturing color images with depth data), a Light Detection and Ranging (LiDAR) device, or other similar devices to collect data of one or more target devices and surrounding environment in the area of interest.
- a local 3D map (also denoted as a local map hereinafter) is constructed based on the obtained local measurement.
- as the local 3D map may not have sufficient control points, the local data sources are maintained in a local frame for the area of interest without being rectified.
- the same types of structural features and descriptions constructed for the reference map are extracted from the local 3D map.
- the target anchor devices (also denoted as target devices hereinafter) are detected in the local 3D map and their positions are determined in the local frame.
- a coarse-to-fine registration is applied to align the local 3D map with the reference 3D map, which is then used to estimate a geometric transformation from the local map to the reference map (a local-to-reference transformation) to convert the target devices' coordinates in local map to coordinates in the reference map.
- the local 3D map is also merged into the reference map by the local-to-reference transformation for updating the reference 3D map.
- a system for determining the position of a moving object in a site comprises: one or more anchor devices deployed in the site for providing data sources related to the moving object and the site; a reference map of the site; and a processing structure for determining the position of the moving object using the data obtained by the one or more anchor devices and the reference map.
- the reference map comprises: a first layer comprising the data sources; and a second layer comprising data extracted from the data sources for representing and indexing the primary structures of the site.
- the reference map further comprises a third layer comprising information related to the data sources.
- the third layer comprises characteristics of the data sources; and wherein said characteristics comprise at least one of a data capturing time, a device type, and a data precision.
- the processing structure is configured for executing a map-updating process for updating the reference map using images of at least a portion of the site and a point cloud of the site.
- the map-updating process comprises: obtaining a local measurement of at least a portion of the site; constructing a local map for the at least one portion of the site using the obtained local measurement; determining the location of one or more anchor devices in the local map; aligning the local map with the reference map; determining a geometric transformation from the local map to the reference map; converting the coordinates of said one or more anchor devices in the local map to coordinates in the reference map by using the determined geometric transformation; and merging the local map with the reference map.
- the map-updating process further comprises: determining a position of a target device in the at least one portion of the site.
- the reference map further comprises geometric structures of the site; and the geometric structures comprise geometric features extracted from a three-dimensional (3D) point cloud of the site.
- the geometric features comprise ceiling and wall models and intersection graphs of the ceiling and wall models.
- the processing structure is configured for executing a map- construction process for constructing a map using a point cloud, the map-construction process comprising: classifying points of the point cloud into at least horizontal points and vertical points; determining one or more ceilings based on the horizontal points; determining one or more walls based on the vertical points; determining intersections of the one or more walls; and storing the determined one or more ceilings and one or more walls in a database as the ceiling and wall models, and storing the determined intersections of the one or more walls in the database as the intersection graphs of the ceiling and wall models.
- said classifying the points of the point cloud into at least horizontal points and vertical points comprises: for each point of the point cloud, estimating a normal of the point from the neighbors thereof; calculating a cross-angle between the estimated normal and a vertical direction; classifying the point as a horizontal point if the calculated cross-angle is greater than a first threshold angle; and classifying the point as a vertical point if the calculated cross-angle is smaller than a second threshold angle, the second threshold angle being smaller than the first threshold angle.
- said classifying the points of the point cloud into at least horizontal points and vertical points further comprises: for each point of the point cloud, classifying the point as an unclassified point if the calculated cross-angle is between the first and second threshold angles.
- the first threshold angle is about 80 degrees and the second threshold angle is about 15 degrees.
- said estimating the normal of the point from the neighbors thereof comprises: estimating the normal of the point from the neighbors thereof using an Eigen analysis.
- said determining the one or more ceilings based on the horizontal points comprises: detecting one or more planes based on the horizontal points; for each detected plane, calculating the area thereof; for each detected plane, determining the plane as a ceiling if the area thereof is greater than an area-threshold.
- said detecting the one or more planes based on the horizontal points comprises: detecting the one or more planes based on the horizontal points using a random sample consensus (RANSAC) algorithm.
- RANSAC random sample consensus
- said determining the one or more walls based on the vertical points comprises: detecting one or more planes based on the vertical points; for each detected plane, calculating a projection-density and a connected-length thereof; for each detected plane, determining the plane as a wall if the calculated projection-density is greater than or equal to a density-threshold and the calculated connected-length is greater than or equal to a length-threshold.
- said detecting the one or more planes based on the vertical points comprises: detecting the one or more planes based on the vertical points using a RANSAC algorithm.
- said for each detected plane, calculating the projection-density and the connected-length thereof comprises calculating the projection-density of the plane by projecting points of the plane onto a predefined horizontal plane, and counting projected points in a local area; and the density-threshold is: dens_th = (2 · dc · h_0) / (p_si · p_si), wherein dc is the radius for point counting, h_0 is the expected minimal height of a wall, p_si is the point sampling interval of the raw point cloud, and "·" represents multiplication.
- said for each detected plane, calculating the projection-density and the connected-length thereof comprises: finding a maximal connective part in the plane with a predefined radius; and determining the connected-length of the plane by calculating the projection length of the maximal connective part on a predefined horizontal plane.
- said determining the intersections of the one or more walls comprises: (1) converting points of the one or more walls into voxels with a predefined size; (2) determining the connectivity of walls by voxel analysis; (3) projecting wall points onto the predefined horizontal plane and extracting the intersection points of connected walls; and (4) adding the extracted intersections as vertices and the linked walls as edges into an intersection graph.
- said aligning the local map with the reference map comprises: combining intersection graphs of the local map with intersection graphs of the reference map by intersection-graph matching; combining ceiling and wall models of the reference map with ceiling and wall models of the local 3D map; and converting the local map to the reference map.
- a method for updating a reference map of a site comprises: obtaining a local measurement of at least a portion of the site; constructing a local map for the at least one portion of the site using the obtained local measurement; determining the location of one or more anchor devices in the local map; aligning the local map with the reference map; determining a geometric transformation from the local map to the reference map; converting the coordinates of said one or more anchor devices in the local map to coordinates in the reference map by using the determined geometric transformation; and merging the local map with the reference map.
- one or more non-transitory computer-readable storage media comprising computer-executable instructions, the instructions, when executed, causing a processor to perform actions comprising: obtaining a local measurement of at least a portion of the site; constructing a local map for the at least one portion of the site using the obtained local measurement; determining the location of one or more anchor devices in the local map; aligning the local map with the reference map; determining a geometric transformation from the local map to the reference map; converting the coordinates of said one or more anchor devices in the local map to coordinates in the reference map by using the determined geometric transformation; and merging the local map with the reference map.
- FIG. 1 is a schematic diagram of a navigation system, according to some embodiments of this disclosure.
- FIG. 2 is a schematic diagram of a movable object in the navigation system shown in FIG. 1;
- FIG. 3 is a schematic diagram showing a hardware structure of a computing device of the navigation system shown in FIG. 1;
- FIG. 4 is a schematic diagram showing a functional structure of the navigation system shown in FIG. 1 for surveying and map updating;
- FIG. 5 is a schematic diagram showing a main processing flow of the system shown in FIG. 1 for surveying and map updating;
- FIG. 6 is a schematic diagram showing a device-localization processing flow of the system shown in FIG. 1;
- FIG. 7 is a flowchart showing a process of reference map construction of the system shown in FIG. 1;
- FIG. 8 is a flowchart showing a process of local map construction of the system shown in FIG. 1;
- FIG. 9 is a flowchart showing a computer vision method for detecting LEDs in RGB-D images;
- FIG. 10 is a flowchart showing a process of aligning the local map to the reference map;
- FIG. 11 is a flowchart showing a process of local-to-reference coordinate transformation;
- FIG. 12 is a schematic diagram showing a device-localization processing flow for map update.
- turning to FIG. 1, a navigation system is shown and is generally identified using reference numeral 100.
- the terms "tracking", "positioning", "navigation", "navigating", "localizing", and "localization" may be used interchangeably with a similar meaning of determining at least the position of a movable object in a site. Depending on the context, these terms may also refer to determining other navigation parameters of the movable object such as its pose, speed, heading, and/or the like.
- the navigation system 100 tracks one or more movable objects 108 in a site 102 such as a building complex.
- the movable object 108 may be autonomously movable in the site 102 (for example, a robot, a vehicle, an autonomous shopping cart, a wheelchair, a drone, or the like) or may be attached to a user and movable therewith (for example, a specialized tag device, a smartphone, a smart watch, a tablet, a laptop computer, a personal data assistant (PDA), or the like).
- PDA personal data assistant
- One or more anchor devices 104 are deployed in the site 102 and are functionally coupled to one or more computing devices 106.
- the anchor devices 104 may be any devices suitable for facilitating survey sensors (described later) of the movable object 108 to obtain observations that may be used for positioning, tracking, or navigating the movable object 108 in the site 102.
- the anchor devices 104 in some embodiments may be wireless access points or stations.
- the wireless access points or stations may be WI-FI ® stations (WI-FI is a registered trademark of Wi-Fi Alliance, Austin, TX, USA), BLUETOOTH ® stations (BLUETOOTH is a registered trademark of Bluetooth SIG Inc., Kirkland, WA, USA), or the like.
- the anchor devices 104 may be functionally coupled to the one or more computing devices 106 via suitable wired and/or wireless communication structures 114 such as Ethernet, serial cable, parallel cable, USB cable, HDMI ® cable (HDMI is a registered trademark of HDMI Licensing LLC, San Jose, CA, USA), WI-FI ® , BLUETOOTH ® , ZIGBEE ® , 3G or 4G or 5G wireless telecommunications, and/or the like.
- the movable object 108 comprises one or more survey sensors 118, for example, vision sensors such as cameras for object positioning using computer vision technologies, inertial measurement units (IMUs), received signal strength indicators (RSSIs) that measure the strength of received signals (such as BLUETOOTH ® low energy (BLE) signals, cellular signals, WI-FI ® signals, and/or the like), magnetometers, barometers, and/or the like.
- IMUs inertial measurement units
- RSSIs received signal strength indicators
- BLE BLUETOOTH ® low energy
- the movable object 108 may also interact with the anchor devices 104, such as being in wireless communication with wireless access points or stations, for object positioning.
- Such wireless communication may be in accordance with any suitable wireless communication standard such as WI-FI ® , BLUETOOTH ® , ZigBee ® , 3G or 4G or 5G wireless telecommunications or the like, and/or may be in any suitable form such as a generic wireless communication signal, a beacon signal, or a broadcast signal.
- the wireless communication signal may be in either a licensed band or an unlicensed band, and may be either a digital-modulated signal or an analog- modulated signal.
- the wireless communication signal may be an unmodulated carrier signal.
- the wireless communication signal is a signal emanating from a wireless transmitter (being one of the sensors 104 or 118) with an approximately constant time-averaged transmitting power known to a wireless receiver (being the other of the sensors 104 or 118) that measures the RSS thereof.
- the survey sensors 118 may be selected and combined as desired or necessary, based on the system design parameters such as system requirements, constraints, targets, and the like.
- the navigation system 100 may not comprise any barometers. In some other embodiments, the navigation system 100 may not comprise any magnetometers.
- GNSS Global Navigation Satellite System
- While GNSS receivers, such as GPS receivers, GLONASS receivers, Galileo positioning system receivers, and Beidou Navigation Satellite System receivers, generally work well under relatively strong signal conditions in most outdoor environments, they usually have high power consumption and high network timing requirements when compared to many infrastructure devices. Therefore, while in some embodiments the navigation system 100 may comprise GNSS receivers as survey sensors 118, at least in some other embodiments in which the navigation system 100 is used for IoT object positioning, the navigation system 100 may not comprise any GNSS receiver.
- the RSS measurements may be obtained by the anchor device 104 having RSSI functionalities (such as wireless access points) or by the movable object 108 having RSSI functionalities (such as object having a wireless transceiver).
- a movable object 108 may transmit a wireless signal to one or more anchor devices 104.
- Each anchor device 104 receiving the transmitted wireless signal measures the RSS thereof and sends the RSS measurements to the computing device 106 for processing.
- a movable object 108 may receive wireless signals from one or more anchor devices 104. The movable object 108 receiving the wireless signals measures the RSS thereof, and sends the RSS observables to the computing device 106 for processing.
- some movable objects 108 may transmit wireless signals to anchor devices 104, and some anchor devices 104 may transmit wireless signals to one or more movable objects 108.
- the receiving devices, being the anchor devices 104 and movable objects 108 that receive the wireless signals, measure the RSS thereof and send the RSS observables to the computing device 106 for processing.
- the movable objects 108 also send data collected by the survey sensors 118 to the computing device 106.
- as the system 100 may use data collected by both sensors 104 and 118, the following description does not differentiate the data received from the anchor devices 104 and the data received from the survey sensors 118. Therefore, the anchor devices 104 and the survey sensors 118 may be collectively denoted as sensors 104 and 118 hereinafter for ease of description, and the data collected from sensors 104 and 118 may be collectively denoted as reference sensor data or simply sensor data.
- the one or more computing devices 106 may be one or more stand-alone computing devices, servers, or a distributed computer network such as a computer cloud.
- one or more computing devices 106 may be portable computing devices such as laptops, tablets, smartphones, and/or the like, integrated with the movable object 108 and movable therewith.
- FIG. 3 shows a hardware structure of the computing device 106.
- the computing device 106 comprises one or more processing structures 122, a controlling structure 124, a memory 126 (such as one or more storage devices), a networking interface 128, a coordinate input 130, a display output 132, and other input modules and output modules 134 and 136, all functionally interconnected by a system bus 138.
- the processing structure 122 may be one or more single-core or multiple-core computing processors such as INTEL ® microprocessors (INTEL is a registered trademark of Intel Corp., Santa Clara, CA, USA), AMD ® microprocessors (AMD is a registered trademark of Advanced Micro Devices Inc., Sunnyvale, CA, USA), ARM ® microprocessors (ARM is a registered trademark of Arm Ltd., Cambridge, UK) manufactured by a variety of manufacturers such as Qualcomm of San Diego, California, USA, under the ARM ® architecture, or the like.
- the controlling structure 124 comprises a plurality of controllers such as graphic controllers, input/output chipsets, and the like, for coordinating operations of various hardware components and modules of the computing device 106.
- the memory 126 comprises a plurality of memory units accessible by the processing structure 122 and the controlling structure 124 for reading and/or storing data, including input data and data generated by the processing structure 122 and the controlling structure 124.
- the memory 126 may be volatile and/or non-volatile, non-removable or removable memory such as RAM, ROM, EEPROM, solid-state memory, hard disks, CD, DVD, flash memory, or the like.
- the memory 126 is generally divided into a plurality of portions for different use purposes. For example, a portion of the memory 126 (denoted herein as storage memory) may be used for long-term data storing, for example storing files or databases. Another portion of the memory 126 may be used as the system memory for storing data during processing (denoted herein as working memory).
- the networking interface 128 comprises one or more networking modules for connecting to other computing devices or networks through a network by using suitable wired or wireless communication technologies such as Ethernet, WI-FI ® , BLUETOOTH ® , ZIGBEE ® , 3G or 4G or 5G wireless mobile telecommunications technologies, and/or the like.
- parallel ports, serial ports, USB connections, optical connections, or the like may also be used for connecting to other computing devices or networks, although they are usually considered input/output interfaces for connecting input/output devices.
- the display output 132 comprises one or more display modules for displaying images, such as monitors, LCD displays, LED displays, projectors, and the like.
- the display output 132 may be a physically integrated part of the computing device 106 (for example, the display of a laptop computer or tablet), or may be a display device physically separate from but functionally coupled to other components of the computing device 106 (for example, the monitor of a desktop computer).
- the coordinate input 130 comprises one or more input modules for one or more users to input coordinate data from, for example, a touch-sensitive screen, a touch-sensitive whiteboard, a trackball, a computer mouse, a touch-pad, or other human interface devices (HID), and the like.
- the coordinate input 130 may be a physically integrated part of the computing device 106 (for example, the touch-pad of a laptop computer or the touch-sensitive screen of a tablet), or may be an input device physically separate from but functionally coupled to other components of the computing device 106 (for example, a computer mouse).
- the coordinate input 130 in some implementations, may be integrated with the display output 132 to form a touch-sensitive screen or a touch-sensitive whiteboard.
- the computing device 106 may also comprise other inputs 134 such as keyboards, microphones, scanners, cameras, and the like.
- the computing device 106 may further comprise other outputs 136 such as speakers, printers and the like.
- the system bus 138 interconnects various components 122 to 136 enabling them to transmit and receive data and control signals to/from each other.
- the navigation system 100 may be designed for robust indoor/outdoor seamless object positioning, and the processing structure 122 may use various signals of opportunity such as BLE signals, cellular signals, WI-FI ® , earth magnetic field, 3D building models, floor maps, point clouds, and/or the like, for object positioning.
- the navigation system 100 uses a reference map of the site 102 stored in a database in the memory 126 to facilitate object positioning and navigation.
- the processing structure 122 is functionally coupled to the sensors 104 and 118 and the reference map.
- the processing structure 122 executes computer-executable code stored in the memory 126 which implements an object positioning and navigation process for collecting sensor data from sensors 104 and 118, and uses the collected sensor data and the reference map for tracking the movable objects 108 in the site 102.
- the processing structure 122 also uses the collected sensor data to update the reference map.
- FIG. 4 shows a functional structure of the navigation system 100 for surveying and map updating.
- the system 100 in this aspect comprises a reference map management module 152, a three-dimensional (3D) map registration module 154 and a local data processing module 156.
- the reference map management module 152 is in charge of maintaining a reference 3D map, and comprises a reference 3D map constructor submodule 162 and a map updater submodule 164.
- the 3D map registration module 154 is responsible for aligning a local map to the reference map, and comprises a coarse-to-fine map-registration submodule 166, and a local-to-reference coordinate transformer submodule 168.
- the local data processing module 156 is for local map processing and target detection, and comprises a preprocessor submodule 172, a local 3D map constructor submodule 174, a target detector submodule 176, and a local 3D coordinate transformer submodule 178.
- FIG. 5 is a schematic diagram showing a main process 200 of the system 100.
- the reference 3D map constructor 162 uses captured reference data sources 202 to pre-construct a reference map database 204 of the site 102.
- the reference map database 204 comprises a reference 3D map which is sent (arrow 208) to the coarse-to-fine map-registration submodule 166 of the 3D map registration module 154 for alignment processing (described later).
- a user can use one or more suitable consumer devices such as cameras, RGB and depth (RGB-D) sensors, Light Detection and Ranging (LiDAR) devices and/or the like, to collect spatial information (such as local data sources 206) from a target device and its surrounding area.
- the raw local data 206 is processed by the preprocessor 172 of the local data processing module 156, and the processed data is used by the local 3D map constructor 174 to construct a local 3D map 210 representing the local area, which is sent to the coarse-to-fine map-registration submodule 166 of the 3D map registration module 154 for alignment processing (described later).
- the target detector 176 detects the targets from the local 3D map, and the local 3D coordinate transformer 178 applies a geometric transformation to the local 3D map to obtain the devices' local coordinates therein, which are sent (arrow 212) to the local-to-reference coordinate transformer 168 of the 3D map registration module 154 for processing (described later).
- the coarse-to-fine map-registration submodule 166 aligns the local 3D map 210 obtained by the local 3D map constructor 174 to the reference 3D map 208 in the reference map database 204, and performs an estimation of the geometric transformation from the local frame to the reference frame.
- the determined local-to- reference transformation is used by the local-to-reference coordinate transformer 168 to convert the local map 210 and device's local coordinates 212 to the reference frame.
- the rectified local map and device coordinates/position are sent to the map updater 164 of the reference map management module 152 to update the reference map database 204.
- FIG. 6 is a schematic diagram showing a device-localization process 240 of the system 100 for device localization in one embodiment.
- the device-localization process 240 is similar to the process 200 shown in FIG. 5 except that the map updater submodule 164 is not used and the local-to-reference coordinate transformer 168 outputs the device's position in the reference frame without being used for map updating.
2.1. Processing in the Reference map management module 152
- the reference map database is constructed from the georeferenced 3D point cloud.
- a 3D point cloud is obtained from the reference data sources 202, for example, captured by a LiDAR, an RGB-D camera, or other similar equipment.
- the 3D point cloud is rectified by a plurality of control points into a unified frame.
- a well-designed structure feature detector extracts from the 3D point cloud the geometric structures of the site 102 such as the wall/ceiling models and the intersection graphs thereof.
- the georeferenced 3D point cloud and the extracted geometric features are joined to construct the reference map database 204.
- the wall/ceiling models are a group of models each representing an individual wall or ceiling plane of the site.
- Each wall/ceiling model has a set of plane parameters and a cluster of points belonging to the plane.
- an intersection graph contains a group of vertices and edges, where each vertex represents an intersection of two adjoining walls, and each edge represents an individual wall.
- the system 100 only requires the 2D coordinates of intersections, and projects all wall points onto the X-Y plane to extract the 2D intersections.
- FIG. 7 is a flowchart showing a process 300 of reference map construction. As shown, the geometric features extraction can be accomplished in four steps 302 to 308.
- at step 302, the point cloud 342 is classified by the normals of its points.
- step 302 estimates the normal of each point from its neighbors. Many methods may be used for estimating a discrete point's normal, and in this embodiment, an Eigen analysis method such as that taught in reference [11] is employed to robustly determine a point's normal. Then, the cross-angle between each point's normal and the vertical direction is calculated. If the calculated cross-angle is greater than a first threshold angle such as 80 degrees, the point is classified as a horizontal point 344. If the calculated cross-angle is smaller than a second threshold angle such as 15 degrees, the point is classified as a vertical point 346.
- if the calculated cross-angle is between the first and second threshold angles (such as between 15 degrees and 80 degrees), the point is classified as an unclassified point. While this embodiment uses 80 degrees and 15 degrees as the first and second threshold angles, other embodiments may use other suitable angles as the first and second threshold angles.
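- the classification step may be sketched in Python (NumPy/SciPy) as follows; the neighborhood size k is an assumed parameter, and the Eigen analysis is realized via an SVD of the centered neighborhood:

```python
import numpy as np
from scipy.spatial import cKDTree

def classify_points(points, k=20, horiz_th=80.0, vert_th=15.0):
    """Split an (n, 3) cloud into horizontal, vertical, and unclassified
    points by the cross-angle between each point's normal and the
    vertical direction; threshold semantics follow the text above."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        # Eigen analysis: the right singular vector of the smallest
        # singular value approximates the local surface normal.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    up = np.array([0.0, 0.0, 1.0])
    ang = np.degrees(np.arccos(np.clip(np.abs(normals @ up), 0.0, 1.0)))
    horizontal = points[ang > horiz_th]          # horizontal points 344
    vertical = points[ang < vert_th]             # vertical points 346
    unclassified = points[(ang >= vert_th) & (ang <= horiz_th)]
    return horizontal, vertical, unclassified
```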
- the horizontal points 344 are processed at step 304 for ceiling detection.
- a suitable algorithm such as the efficient random sample consensus (RANSAC) algorithm taught in reference [12] is used to detect planes from the previously derived horizontal points 344, and the area of each detected plane is calculated. If the calculated area is smaller than a given area-threshold (such as 5 square meters (m²)), the plane is considered a fake ceiling and is filtered out. If the calculated area is greater than or equal to the area-threshold, the plane is determined to be a ceiling and is used for obtaining the ceiling & wall models 348 stored in the reference map database 204.
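- a minimal RANSAC plane detector, a simple stand-in for the efficient RANSAC of reference [12] rather than that exact algorithm, might look as follows; the distance threshold and iteration count are assumed values:

```python
import numpy as np

def ransac_plane(points, dist_th=0.05, iters=500, seed=0):
    """Fit the plane n.x + d = 0 with the most inliers; returns
    (n, d, inlier_mask). Detected planes would then be kept or filtered
    by the area-threshold (ceilings) or the density- and length-
    thresholds (walls) described in the text."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_mask, best_cnt = None, None, None, -1
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n @ a
        mask = np.abs(points @ n + d) < dist_th
        if mask.sum() > best_cnt:
            best_n, best_d, best_mask, best_cnt = n, d, mask, int(mask.sum())
    return best_n, best_d, best_mask
```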
- the vertical points 346 are processed at step 306 for wall detection.
- a suitable algorithm such as the above-described efficient RANSAC algorithm is used to detect planes from previously derived vertical points 346.
- the projection-density and connected-length of each detected plane are then calculated. If the projection-density is less than a density-threshold or the connected-length is less than a given length-threshold (such as 3 meters (m)), the plane is considered a fake wall and is filtered out.
- the detected plane is considered a wall and is used for obtaining ceiling & wall models 348 stored in the reference map database 204.
- the projection-density of each detected plane is calculated by projecting points of the plane onto a predefined horizontal plane such as the X-Y plane and counting projected points in a local area.
- the density threshold dens_th can be determined as: dens_th = (2 · dc · h_0) / (p_si · p_si), wherein dc is the radius for point counting, h_0 is the expected minimal height of a wall (for example 0.5 m), p_si is the point sampling interval of the raw point cloud, and "·" represents multiplication.
- the connected-length is calculated by finding a maximal connective part in a detected plane with a given radius (for example 0.2m) and calculating its projection length on the X-Y plane.
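- the two wall filters may be sketched as follows; the point sampling interval p_si and the use of the median projected-point count as the plane's density are assumptions, and the connected-length is approximated by the X-Y extent of the largest radius-dc connected component:

```python
import numpy as np
from scipy.spatial import cKDTree

def is_wall(plane_pts, dc=0.2, h0=0.5, p_si=0.02, len_th=3.0):
    """Check one detected vertical plane against the projection-density
    and connected-length thresholds described above."""
    xy = plane_pts[:, :2]                 # project onto the X-Y plane
    tree = cKDTree(xy)
    counts = np.array([len(tree.query_ball_point(p, dc)) for p in xy])
    dens_th = 2.0 * dc * h0 / (p_si * p_si)   # reconstructed threshold
    if np.median(counts) < dens_th:
        return False                      # fake wall: too sparse
    # maximal connective part with radius dc, found by flood fill
    seen = np.zeros(len(xy), dtype=bool)
    best_len = 0.0
    for s in range(len(xy)):
        if seen[s]:
            continue
        comp, stack = [s], [s]
        seen[s] = True
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(xy[i], dc):
                if not seen[j]:
                    seen[j] = True
                    comp.append(j)
                    stack.append(j)
        ext = xy[comp].max(axis=0) - xy[comp].min(axis=0)
        best_len = max(best_len, float(np.linalg.norm(ext)))
    return best_len >= len_th             # fake wall if too short
```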
- the detected ceilings and walls are stored in the reference map database 204 as ceiling and wall models 348. Moreover, the detected walls are also processed at step 308 for intersection detection.
- all wall points are voxelized (i.e., converted into voxels) with a given size (such as 0.2 m).
- the voxelization of wall points can be achieved in two steps. First, the 3D bounding box of the whole point cloud is partitioned into voxels (cubes). Then, the points are divided into corresponding voxels based on their coordinates.
- the connectivity of wall models is determined by voxel analysis, which can be achieved in two steps: first, points in each voxel are retrieved, and if points belonging to different wall models are found, these wall models are marked as connected. Then, the 4-connectivity neighbors, i.e., the left, right, top and bottom voxels directly connected to the current voxel, are inspected in the same way to determine the connected wall models.
- the wall points are projected onto the X-Y plane and the intersection points are extracted for each group of connected wall models. Then, the extracted intersections (vertices) and linked walls (edges) are added into an intersection graph 350, also stored in the reference map database 204.
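- a sketch of the voxel-based connectivity test and the 2D intersection computation follows; the 4-connectivity neighbor-voxel inspection described above is omitted for brevity, and the function names are illustrative:

```python
import numpy as np
from itertools import combinations

def wall_connectivity(points, wall_ids, voxel=0.2):
    """Mark two wall models as connected when their points share a voxel;
    points is (n, 3) and wall_ids gives each point's wall model index."""
    keys = np.floor(points / voxel).astype(np.int64)
    buckets = {}
    for key, wid in zip(map(tuple, keys), wall_ids):
        buckets.setdefault(key, set()).add(wid)
    connected = set()
    for walls in buckets.values():
        connected.update(combinations(sorted(walls), 2))
    return connected

def wall_intersection_2d(n1, d1, n2, d2):
    """Intersection vertex (x, y) of two connected walls with plane
    parameters n.x + d = 0, using only the X-Y components of the
    normals; fails for (near-)parallel walls."""
    A = np.array([n1[:2], n2[:2]], dtype=float)
    return np.linalg.solve(A, -np.array([d1, d2], dtype=float))
```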
- a weight of each vertex is defined based on the linked edges' lengths and the linked edges' included angles.
- the weight can be determined from the projection lengths of the linked edges and the included angles therebetween, wherein Len(e_i) is the projection length of e_i, Len_e is the maximum projection length among all connected walls of the current intersection, and ∠(e_i, e_j) is the included angle between e_i and e_j; sin∠(e_0, e_1) = 1 (i.e., walls e_0 and e_1 are perpendicular to each other) is assumed for ease of description.
- an intersection with long edges and large, even included angles tends to have a large weight.
- an intersection with a large weight implies a steady geometrical structure in the local area.
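- the exact weight formula is not recoverable from this text; one plausible form that reproduces the stated behaviour (long edges and large, even included angles yield large weights) is sketched below, as an assumption rather than the disclosed formula:

```python
import numpy as np

def vertex_weight(edge_lengths, included_angles_deg):
    """Hypothetical vertex weight: each linked edge contributes its
    projection length relative to the longest linked edge, scaled by the
    sine of its included angle (sin = 1 for perpendicular walls)."""
    lens = np.asarray(edge_lengths, dtype=float)
    sines = np.sin(np.radians(np.asarray(included_angles_deg, dtype=float)))
    return float(np.sum(lens / lens.max() * sines))
```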
- the local data processing module 156 conducts the local map construction and the target detection.
- the RGB-D sensor is used for local data acquisition, and dichromatic LEDs (typically Red and Blue, with a size of 1 centimeter (cm) by 1 cm) are used for device labeling.
- the workflow of local data processing, which comprises data collecting and data processing, is described as follows.
- the local data sources 206 can be determined by data collecting.
- LEDs are attached to the target devices and turned on before data collection. Then the target devices and their surrounding area are measured by an RGB-D sensor. Since the geometric features play an important role in system 100, the distinct geometric structures of the local area surrounding the devices of interest, such as ceilings, intersections of walls, beams, columns, and the like, are carefully captured.
- the captured data is then processed in four steps in the local data processing module 156, including preprocessing, local map construction, device detection and 2D-3D coordinate transform, by four submodules 172 to 178, respectively.
- the raw RGB-D data captured in the local area is preprocessed by the preprocessing submodule 172 to derive a local 3D point cloud and a batch of oriented RGB-D images.
- Many public SLAM tools may be used.
- a common RGB-D mapping toolkit called RTAB-Map, as taught in reference [13], is used for preprocessing.
- RTAB-Map executes an optimized SLAM algorithm to build a local 3D scene from raw RGB-D data and outputs three different results: the local 3D point cloud, RGB-D images, and an auxiliary file recording the position and orientation of each image.
- FIG. 8 is a flowchart showing a process 400 of local map construction executed at this step.
- the process 400 is similar to the process 300 shown in FIG. 7 and extracts the same types of features except that the process 400 is executed on the local point cloud and the extracted features such as ceiling and wall models 348' and intersection graphs 350' are stored in the local 3D map 402. Therefore, methods and parameter settings similar to those used in process 300 are applied to the local point cloud to detect the ceiling & wall models 348' and local intersection graph 350'.
- the target detector 176 detects devices of interest. Since dichromatic LEDs were attached to the devices of interest, the positions of the LEDs can be used to represent the devices' positions with an acceptable precision.
- a computer vision method 440 as shown in FIG. 9 is used to detect LEDs in RGB-D images.
- the bright blobs are detected (step 444) from the RGB-D images 462 by a thresholding method in Red and Blue bands simultaneously. Then, combined filters including the size constraint of blobs on one image and the spatial continuity constraint of blobs on adjacent images are used to eliminate the fake blobs (step 446).
- the blob size constraint used in the combined filters is as follows: r = f · R / d, wherein r is the expected size of an LED on an image, R is the actual size of the LED, f is the camera focal length, and d is the depth of the detected blob. If the detected blob's size is close to the expected size r within an accepted interval (such as 5 pixels), the detected blob is likely an actual LED.
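- a sketch of the blob-size filter under the pinhole relation r = f · R / d follows; the focal length default is an assumed placeholder for the calibrated RGB-D intrinsics:

```python
def plausible_led_blob(blob_px, depth_m, led_size_m=0.01,
                       focal_px=525.0, tol_px=5.0):
    """Keep a blob when its pixel size is within tol_px of the expected
    projected LED size r = f * R / d (pinhole camera model)."""
    r = focal_px * led_size_m / depth_m
    return abs(blob_px - r) <= tol_px
```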
- the spatial continuity constraint used in the combined filters projects the LED's 3D position in the local frame into each image, wherein R_i is the rotation matrix determined by the orientation of the i-th image in the local frame, t_i is the position of the i-th image in the local frame, and C_i is the camera projection matrix of the i-th image. C_i can be derived from the camera focal length f and the principal point coordinates (c_ix, c_iy) as follows: C_i = [[f, 0, c_ix], [0, f, c_iy], [0, 0, 1]].
- the spatial continuity constraint can be used to predict the LED's position in multiple images. If the blobs are detected in no less than 3 images, with coordinates near the expected position in each image, the blobs likely represent an actual LED.
- a coordinate transformation is used at the 2D-3D coordinate transform step 448 by the local 3D coordinate transformer submodule 178 to convert the 2D pixel coordinates to the local 3D frame.
- the transformation is shown in the following equation: (X_l, Y_l, Z_l)^T = R · C^{-1} · (u·d, v·d, d)^T + t, wherein (X_l, Y_l, Z_l)^T represents a device's coordinates in the local frame, (u, v, d)^T is the device's pixel coordinates and depth, and C, R and t are the camera projection matrix, the image's rotation matrix and the image's position, respectively, which can be determined from the auxiliary file 466.
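- the back-projection and the intrinsic matrix may be sketched as follows, using the conventions reconstructed above:

```python
import numpy as np

def intrinsics(f, cx, cy):
    """Camera projection matrix C from the focal length f and the
    principal point (cx, cy)."""
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])

def pixel_to_local(u, v, d, C, R, t):
    """Back-project pixel (u, v) with depth d into the local 3D frame:
    X = R @ C^{-1} @ (u*d, v*d, d)^T + t, where R and t are the image's
    rotation and position from the auxiliary file."""
    p = np.array([u * d, v * d, d], dtype=float)
    return R @ np.linalg.solve(C, p) + t
```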
- map registration is used to align the local map 402 generated by the process 400 shown in FIG. 8 with the reference map 204 generated by the process 300 shown in FIG. 7.
- a coarse-to-fine registering process 500 as shown in FIG. 10 is executed by the coarse-to-fine map-registration submodule 166 for aligning the local map with the reference map.
- a coarse registration by intersection graph matching is first conducted to combine the intersection graph 350 in the reference map database 204 and the intersection graph 350' in the local 3D map 402 with the following steps:
- the candidate is considered to correspond to O_localIG.
- fine registration is conducted to combine the ceiling & wall models 348 in the reference map database 204 and the ceiling & wall models 348' in the local 3D map 402 by using the iterative closest point (ICP) algorithm taught in reference [14] with the following steps:
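- a minimal point-to-point ICP, standing in for the algorithm of reference [14], is sketched below; it alternates nearest-neighbour correspondences with the SVD-based (Kabsch) rigid-transform estimate and returns R and t such that dst ≈ R · src + t:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Align (n, 3) source points to (m, 3) destination points."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)          # nearest-neighbour matches
        q = dst[idx]
        cs, cq = cur.mean(axis=0), q.mean(axis=0)
        H = (cur - cs).T @ (q - cq)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T               # reflection-safe rotation
        dt = cq - dR @ cs
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt        # accumulate the transform
    return R, t
```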
- FIG. 11 is a flowchart showing a process 540 performed by the local-to-reference coordinate transformer 168.
- the local-to-reference coordinate transformer submodule 168 calculates the device's position 546 in the reference frame by using the previously-determined transformation parameters 506 (see FIG. 10) and the device's local coordinates 542 with the previously-derived 3D rigid transformation 544 (see step (s2) above) as follows: (X_ref, Y_ref, Z_ref)^T = R_rigid · (X_l, Y_l, Z_l)^T + t_rigid, (7) wherein (X_ref, Y_ref, Z_ref)^T represents the device's coordinates in the reference frame, (X_l, Y_l, Z_l)^T represents the device's coordinates in the local frame, R_rigid is the rotation matrix of the previously-derived 3D rigid transformation, and t_rigid is the translation vector of the 3D rigid transformation.
- Map update is another application of the system 100.
- FIG. 12 is a schematic diagram showing a device-localization process 600 of the system 100 for map update in one embodiment.
- the device-localization process 600 is similar to the process 200 shown in FIG. 5 except that the target detector 176 and the local 3D coordinate transformer 178 are not used.
- the device-localization process 600 is also similar to the process 240 shown in FIG. 6 except that the target detector 176 and the local 3D coordinate transformer 178 are not used, and that the map updater submodule 164 of the reference map management module 152 uses the determined local map to detect the environmental changes and strengthen the reference map progressively.
- map updater submodule 164 updates the reference map by a sequence of local maps with the following steps:
- a comparison is conducted to find the changed area.
- a point-to-point comparison is used to find the closest point correspondence in the reference map.
- the map updater submodule 164 calculates the distance between the point in the rectified local map and the correspondence in the reference map. If the distance is larger than a given threshold, the map updater submodule 164 marks this point as a changed point and records this change with a timestamp in the reference map database. Later, this changed point set can be further identified by suitable object recognition methods.
- the unchanged area is determined by excluding the changed area from the local maps. As the unchanged area may introduce redundancy into the reference map, it is valuable to merge the unchanged area of the local maps with those in the reference map to improve the quality of the reference map.
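- the point-to-point comparison may be sketched as follows; the distance threshold is an assumed value:

```python
import numpy as np
from scipy.spatial import cKDTree

def split_changed(local_pts, ref_pts, dist_th=0.1):
    """Split rectified local-map points into changed and unchanged sets:
    a point whose nearest reference-map point is farther than dist_th
    is marked as changed."""
    dist, _ = cKDTree(ref_pts).query(local_pts)
    changed = dist > dist_th
    return local_pts[changed], local_pts[~changed]
```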
- the local maps are collected by various devices and processed by different methods.
- a weighted-merging method is used.
- the precision values (such as the device's precision, calibration precision, processing precision, and the like) recorded in the description layer of each 3D map are used to qualify the local map.
- a weight is calculated by the inverse precisions, and the weighted merging is applied to balance the quality of each local map and the reference map.
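- the inverse-precision weighting may be sketched as follows; it assumes corresponding points across the maps have already been matched row-by-row, which is itself a separate step:

```python
import numpy as np

def weighted_merge(point_sets, precisions):
    """Merge matched points from several maps: a map recorded with
    precision sigma contributes weight 1/sigma, so more precise sources
    dominate the merged coordinates."""
    w = 1.0 / np.asarray(precisions, dtype=float)
    stack = np.stack(point_sets)          # (n_maps, n_points, 3)
    return np.einsum('m,mnk->nk', w, stack) / w.sum()
```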
- while in the above embodiments the system 100 is for determining the 3D position of a device in an indoor environment, the system 100 may also be used for determining the 3D position of a device in an outdoor environment, or in a site mixed with indoor and outdoor environments.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Automation & Control Theory (AREA)
- Instructional Devices (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762555414P | 2017-09-07 | 2017-09-07 | |
US62/555,414 | 2017-09-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019046962A1 true WO2019046962A1 (en) | 2019-03-14 |
Family
ID=65633371
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CA2018/051101 WO2019046962A1 (en) | 2017-09-07 | 2018-09-07 | Method and system for target positioning and map update |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019046962A1 (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100776977B1 (en) * | 2006-10-11 | 2007-11-21 | 전자부품연구원 | Position tracking system using a sensor network and method for tracking object using the same |
US9125019B1 (en) * | 2014-05-01 | 2015-09-01 | Glopos Fzc | Positioning arrangement, method, mobile device and computer program |
US9292961B1 (en) * | 2014-08-26 | 2016-03-22 | The Boeing Company | System and method for detecting a structural opening in a three dimensional point cloud |
US20170082727A1 (en) * | 2015-09-20 | 2017-03-23 | Nextnav, Llc | Position estimation of a receiver using anchor points |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112179358A (en) * | 2019-07-05 | 2021-01-05 | 东元电机股份有限公司 | Map data comparison auxiliary positioning system and method thereof |
WO2021083529A1 (en) * | 2019-10-31 | 2021-05-06 | Telefonaktiebolaget Lm Ericsson (Publ) | Object handling in an absolute coordinate system |
CN111402332B (en) * | 2020-03-10 | 2023-08-18 | 兰剑智能科技股份有限公司 | AGV composite map building and navigation positioning method and system based on SLAM |
CN111402332A (en) * | 2020-03-10 | 2020-07-10 | 兰剑智能科技股份有限公司 | AGV composite mapping and navigation positioning method and system based on S L AM |
CN111862215B (en) * | 2020-07-29 | 2023-10-03 | 上海高仙自动化科技发展有限公司 | Computer equipment positioning method and device, computer equipment and storage medium |
CN111862215A (en) * | 2020-07-29 | 2020-10-30 | 上海高仙自动化科技发展有限公司 | Computer equipment positioning method and device, computer equipment and storage medium |
EP4024339A1 (en) * | 2020-12-29 | 2022-07-06 | Faro Technologies, Inc. | Automatic registration of multiple measurement devices |
CN113359154A (en) * | 2021-05-24 | 2021-09-07 | 邓良波 | Indoor and outdoor universal high-precision real-time measurement method |
CN114419268A (en) * | 2022-01-20 | 2022-04-29 | 湖北亿咖通科技有限公司 | Track edge connecting method for incremental map construction, electronic equipment and storage medium |
CN114581287A (en) * | 2022-02-18 | 2022-06-03 | 高德软件有限公司 | Data processing method and device |
CN117589153A (en) * | 2024-01-18 | 2024-02-23 | 深圳鹏行智能研究有限公司 | Map updating method and robot |
CN117589153B (en) * | 2024-01-18 | 2024-05-17 | 深圳鹏行智能研究有限公司 | Map updating method and robot |
CN118172422A (en) * | 2024-05-09 | 2024-06-11 | 武汉大学 | Method and device for positioning and imaging interest target by combining vision, inertia and laser |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019046962A1 (en) | Method and system for target positioning and map update | |
Xu et al. | An occupancy grid mapping enhanced visual SLAM for real-time locating applications in indoor GPS-denied environments | |
US9870624B1 (en) | Three-dimensional mapping of an environment | |
Acharya et al. | BIM-Tracker: A model-based visual tracking approach for indoor localisation using a 3D building model | |
Park et al. | Three-dimensional tracking of construction resources using an on-site camera system | |
US9154919B2 (en) | Localization systems and methods | |
US9222771B2 (en) | Acquisition of information for a construction site | |
JP6002126B2 (en) | Method and apparatus for image-based positioning | |
US20230236280A1 (en) | Method and system for positioning indoor autonomous mobile robot | |
Raza et al. | Comparing and evaluating indoor positioning techniques | |
US11867818B2 (en) | Capturing environmental scans using landmarks based on semantic features | |
Blaser et al. | Development of a portable high performance mobile mapping system using the robot operating system | |
AU2015330966B2 (en) | A method of setting up a tracking system | |
Feng et al. | Visual Map Construction Using RGB‐D Sensors for Image‐Based Localization in Indoor Environments | |
US11741631B2 (en) | Real-time alignment of multiple point clouds to video capture | |
Singh et al. | Ubiquitous hybrid tracking techniques for augmented reality applications | |
Shu et al. | 3D point cloud-based indoor mobile robot in 6-DoF pose localization using a Wi-Fi-aided localization system | |
Tao et al. | Automated processing of mobile mapping image sequences | |
EP4332631A1 (en) | Global optimization methods for mobile coordinate scanners | |
US11561553B1 (en) | System and method of providing a multi-modal localization for an object | |
Masiero et al. | Aiding indoor photogrammetry with UWB sensors | |
Rossmann et al. | Discussion of a self-localization and navigation unit for mobile robots in extraterrestrial environments | |
Rossmann et al. | Advanced self-localization and navigation for mobile robots in extraterrestrial environments | |
Ai et al. | Surround Mask Aiding GNSS/LiDAR SLAM for 3D Mapping in the Dense Urban Environment | |
Wongphati et al. | Bearing only FastSLAM using vertical line information from an omnidirectional camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18853430 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18853430 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/12/2020) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18853430 Country of ref document: EP Kind code of ref document: A1 |