WO2023021233A1 - Method and measurement system for three-dimensional modeling of an elevator shaft - Google Patents


Info

Publication number
WO2023021233A1
Authority
WO
WIPO (PCT)
Prior art keywords
localization
image data
mapping
measurement system
measurement
Prior art date
Application number
PCT/FI2021/050566
Other languages
English (en)
Inventor
Francois DES PALLIÈRES
Nicolas Veau
Mikael Haag
Joonas JOKELA
Original Assignee
Kone Corporation
Application filed by Kone Corporation filed Critical Kone Corporation
Priority to CN202180101594.3A priority Critical patent/CN117836233A/zh
Priority to EP21762744.7A priority patent/EP4387914A1/fr
Priority to PCT/FI2021/050566 priority patent/WO2023021233A1/fr
Publication of WO2023021233A1 publication Critical patent/WO2023021233A1/fr

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B66 - HOISTING; LIFTING; HAULING
    • B66B - ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 19/00 - Mining-hoist operation

Definitions

  • the present invention relates to a method and a measurement system related to elevators. More particularly, the invention discloses a method and measurement system for generating a 3D map of an elevator shaft and a trajectory of an elevator car traveling in the elevator shaft.
  • SLAM simultaneous localization and mapping
  • Popular approximate solution methods used for SLAM include the particle filter, extended Kalman filter (EKF), covariance intersection, and GraphSLAM.
  • SLAM algorithms are used in navigation, robotic mapping and odometry for virtual reality or augmented reality.
  • Kalman filtering is also known as linear quadratic estimation (LQE)
  • LQE linear quadratic estimation
  • An orientation filter is a filter that estimates attitude/orientation/angle of an IMU in the world frame.
  • an orientation filter fuses angular velocities, accelerations, and optionally magnetic readings from a generic IMU device into an orientation.
  • Examples of known orientation filters are a Madgwick filter and a Mahony filter.
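The fusion idea behind such orientation filters can be sketched with a much simpler one-axis complementary filter (illustrative only; the filter and all numeric values here are hypothetical, not from the patent): the gyroscope is accurate over short intervals but drifts, while the accelerometer's gravity reference is noisy but drift-free.

```python
# Hedged one-axis sketch of the fusion idea behind orientation filters such
# as Madgwick or Mahony (the real filters operate on quaternions and are
# considerably more involved; all numbers here are hypothetical).

def complementary_step(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step: integrate the gyro rate, then pull the result
    toward the accelerometer's gravity-derived angle to cancel drift."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Stationary device: the true angle is 0, but the gyro reads a constant bias.
angle = 0.0
for _ in range(2000):
    angle = complementary_step(angle, gyro_rate=0.01, accel_angle=0.0, dt=0.005)

# The accelerometer correction bounds the gyro-bias drift to a small
# steady-state offset instead of letting it grow without limit.
print(round(angle, 4))
```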
  • Inertial positioning refers to determining position of a moving object based on detected movement of the object. Movement can be detected for instance with an inertial measurement unit, IMU, that comprises inertial sensors such as accelerometers, gyroscopes and optionally magnetometers or other suitable sensors.
  • IMU inertial measurement unit
  • the IMU preferably comprises three accelerometers measuring acceleration along three mutually orthogonal axes (x, y, z) and three gyroscopes measuring angular rate about three mutually orthogonal axes (x, y, z). Inertial positioning does not require external references, but accuracy of the position determined using inertial positioning methods tends to decrease quickly due to sensor bias.
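The drift problem can be illustrated with a minimal dead-reckoning sketch (not the patent's method; the bias and sampling values are hypothetical): double-integrating even a small constant accelerometer bias produces meters of position error within seconds.

```python
import numpy as np

# Illustrative sketch: dead-reckoning position by double-integrating
# accelerometer samples. A small constant sensor bias is added to show how
# inertial-only position error grows quadratically with time.

def dead_reckon(accel, dt):
    """Integrate acceleration samples (m/s^2) into position (m)."""
    vel = np.cumsum(accel) * dt      # first integration: velocity
    pos = np.cumsum(vel) * dt        # second integration: position
    return pos

dt = 0.016                           # ~62.5 Hz sampling interval
n = 625                              # 10 seconds of samples
true_accel = np.zeros(n)             # device is actually stationary
bias = 0.05                          # hypothetical 0.05 m/s^2 accelerometer bias

pos_true = dead_reckon(true_accel, dt)
pos_biased = dead_reckon(true_accel + bias, dt)

# After 10 s the bias alone yields roughly 0.5*b*t^2 = 2.5 m of drift.
print(round(pos_biased[-1] - pos_true[-1], 2))
```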
  • Triangulation refers to determining distances from fixed reference points. Triangulation provides precise positions, but only if the objects are visible.
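As a minimal illustration of recovering a position from distances to fixed reference points (a standard linearized least-squares solve with hypothetical anchor coordinates, not the patent's method):

```python
import numpy as np

# Illustrative 2D example: recover a position from distances to three fixed
# reference points by removing the quadratic terms and solving linearly.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
target = np.array([1.0, 1.0])
d = np.linalg.norm(anchors - target, axis=1)   # "measured" ranges

# Subtracting the first range equation from the others cancels |p|^2,
# leaving a linear system A p = b.
A = 2 * (anchors[1:] - anchors[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(p, 3))                          # recovers the target position
```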
  • Patent EP3507227 Bl discloses a method and a system for measuring an elevator shaft using a measurement system having a camera system and an inertial measurement unit. A digital model of the elevator shaft is created based on the measured data.
  • An object is to provide a method and apparatus so as to solve the problem of generating an accurate three-dimensional model of an elevator shaft using a moving measurement device, while simultaneously determining trajectory of the measurement device within the shaft.
  • the objects of the present invention are achieved with a method according to claim 1.
  • the objects of the present invention are further achieved with an apparatus according to claim 11.
  • a method for modeling an elevator shaft extending in a main extension direction is provided.
  • the elevator shaft is measured with a measurement system comprising two camera devices having mutually different image resolutions, and an inertial measurement unit IMU comprising acceleration and angular rate sensors. Fields of view of the two camera devices face the main extension direction.
  • the method comprises: performing a first travel of the measurement system by moving the measurement system in a first direction along the main extension direction; during the first travel, performing first aggregated measurements by simultaneously obtaining first image data comprising a plurality of image frames using a first camera device, second image data comprising a plurality of image frames using a second camera device, and positioning data using the IMU; performing a first localization by integrating the first image data and the positioning data, wherein the first localization has a first level of accuracy; performing a second localization and mapping by integrating the second image data with output from the first localization, wherein the second localization and mapping has a second level of accuracy that is more accurate than the first level of accuracy; generating a geometry based on the second localization and mapping; and generating a 3D model of the elevator shaft based on the generated geometry.
  • the first camera device is a stereo camera with a wide field of view and a relatively low resolution
  • the second camera device has a narrow field of view and relatively high resolution
  • the second camera device has a large focal distance
  • the second camera device performs range sensing.
  • said integrating the first image data and the positioning data comprises performing a confidence estimation to define relative weights of the first image data and the positioning data, and/or the first localization comprises filtering the first image data by a first Kalman filter and filtering the positioning data by an orientation filter.
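The variance-weighted blending behind such a confidence estimation can be sketched as follows (a minimal scalar illustration, not the patent's algorithm; all numbers are hypothetical): each source is weighted inversely to its variance, which is also the scalar form of a Kalman measurement update.

```python
# Hedged illustration of confidence-weighted fusion: two position estimates
# (visual odometry and inertial odometry, hypothetical values) are combined
# with weights inversely proportional to their variances -- the optimal
# fusion of two independent Gaussian estimates.

def fuse(z_visual, var_visual, z_inertial, var_inertial):
    w = var_inertial / (var_visual + var_inertial)   # confidence weight
    fused = w * z_visual + (1.0 - w) * z_inertial
    fused_var = (var_visual * var_inertial) / (var_visual + var_inertial)
    return fused, fused_var

pos, var = fuse(z_visual=10.2, var_visual=0.04, z_inertial=10.8, var_inertial=0.16)
# The fused estimate leans toward the lower-variance (more confident) source,
# and the fused variance is smaller than either input variance.
print(round(pos, 2), round(var, 3))
```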
  • the method comprises cancelling any residual fixed objects in the second image data to avoid residual fixed objects from being included in the mapping.
  • said second localization utilizes pre-stored information on an expected motion scenario of the measurement device to improve estimation of motion of the measurement device.
  • the method comprises performing, after the first travel, a second travel of the measurement system by moving the measurement system in a second direction along the main extension direction, the second direction being opposite to the first direction; performing further aggregated measurements, first localization, and second localization and mapping during the second travel; and updating the second localization and mapping obtained during the first travel based on the further second-level localization and mapping.
  • the second localization comprises generating a pose-graph determining positions and orientations of the measurement device in world reference.
  • the localization is determined in world reference
  • the mapping comprises determining a point cloud in reference to the measurement device, and the geometry is defined in world reference.
  • the measurement device is removably attached to an elevator car.
  • a trajectory of the elevator car is defined based on the second localization, and deviation of the trajectory from optimal is used for adjusting guides of the elevator to correct its trajectory.
  • a measurement system for modeling an elevator shaft extending in a main extension direction comprises a first camera device having a field of view facing the main extension direction and configured to obtain first image data comprising a plurality of image frames, a second camera device having a field of view facing the main extension direction and configured to obtain second image data comprising a plurality of image frames, the first camera device and the second camera device having mutually different image resolutions, and an inertial measurement unit IMU comprising acceleration and angular rate sensors and configured to obtain positioning data.
  • the measurement system is configured to simultaneously obtain first image data, second image data and positioning data during a first travel, during which the measurement system is moved in a first direction along the main extension direction.
  • the measurement system further comprises a computer device or system comprising a first simultaneous localization and mapping (SLAM) module configured to perform a first localization by integrating the first image data and the positioning data, wherein the first localization has a first level of accuracy; a second SLAM module configured to perform a second localization and mapping by integrating the second image data with the first localization, wherein the second localization and mapping has a second level of accuracy that is more accurate than the first level of accuracy; a geometry processing module configured to generate a geometry based on the second localization and mapping; and a 3D modeling module configured to generate a 3D model of the elevator shaft based on the generated geometry.
  • SLAM simultaneous localization and mapping module
  • the first camera device is a stereo camera with a wide field of view and a relatively low resolution
  • the second camera device has a narrow field of view and relatively high resolution
  • the second camera device has a large focal distance
  • the second camera device performs range sensing.
  • said first SLAM module is configured to perform a confidence estimation to define relative weights of the first image data and the positioning data, and/or said first SLAM module comprises a first Kalman filter configured to filter the first image data and an orientation filter configured to filter the positioning data.
  • the second SLAM module is configured to cancel any residual fixed objects in the second image data to avoid residual fixed objects from being included in the mapping.
  • the second SLAM module is configured to utilize pre-stored information on an expected motion scenario of the measurement device to improve estimation of motion of the measurement device.
  • the measurement system is configured to simultaneously obtain further first image data, further second image data and further positioning data during a second travel, during which the measurement system is moved in a second direction along the main extension direction, the second direction being opposite to the first direction.
  • the first SLAM module is configured to perform further first localization based on the further first image data and the further positioning data.
  • the second SLAM module is configured to perform further second localization and mapping based on the further first localization and the further second image data, and to integrate the further second localization and mapping for improving the accuracy of the second localization and mapping.
  • the second SLAM module is configured to perform said second localization by generating a pose-graph determining positions and orientations of the measurement device in world reference.
  • the first SLAM module and the second SLAM module are configured to determine the localization in world reference, wherein the second SLAM module is configured to perform said mapping by determining a point cloud in reference to the measurement device, and wherein the geometry is defined in world reference.
  • the measurement device is removably attached to an elevator car.
  • the present invention is based on the idea of aggregating and integrating data from a plurality of sensors, in particular two camera sensors and an inertial measurement unit (IMU).
  • the present invention has the advantage that it enables fast and accurate analysis of an elevator shaft. No specific plumb lines or pre-installed reference strips are needed.
  • the method is easy to use because aggregation of the measurements is handled by the measurement instrument itself. High accuracy is achieved by computing exact reference points calculated with a fusion algorithm combining data received from several sensors instead of individual sensors.
  • the method is also robust, since combining motion and distance information simultaneously mitigates reference point non-visibility and sensor bias issues.
  • the method achieves advanced measurements of elevator shaft properties. Submillimeter precision can be achieved, and analysis of complex elevator shaft geometry is enabled, including verticality, wall parallelism assessment, wall scanning for imperfections and so on.
  • the method is also cost efficient, since low-cost sensors can be used instead of costly long-range telemetry sensors.
  • new sensors can be seamlessly added to upgrade the functionalities of the measuring device. Additional measurements enabled by new sensors are directly mapped on the shaft geometry in the resulting three-dimensional model and therefore can be precisely located.
  • Additional measurements may comprise, for example, one or more of landing door images, wall density, thickness, humidity, and temperature.
  • Figure 1 shows a schematic illustration of an elevator shaft
  • Figure 2 illustrates aspects of the invention
  • Figure 3 illustrates further aspects of the invention
  • Figure 4 illustrates main steps of the 3D model construction process
  • Figure 5 illustrates steps of a coarse localization process
  • Figure 6 illustrates steps of a fine localization and mapping process
  • Figure 7 illustrates steps of a geometry process
  • Figure 8 illustrates a measurement device.

Detailed description
  • localization refers to a process of determining position of an object over time, in other words a trajectory, preferably in world reference.
  • a position may be determined in various ways, for example using coordinates, such as cartesian coordinates (x, y, z) or vectors.
  • mapping refers to tracking landmarks, i.e. singular points, over a plurality of image frames, preferably in world reference.
  • main extension direction of an elevator shaft refers to the direction in which an elevator car of the completed elevator system is moved.
  • the main extension direction typically extends vertically, but it may also be tilted relative to the vertical or extend horizontally.
  • positioning data refers to data obtained using an inertial measurement unit.
  • the positioning data may comprise unprocessed data, such as acceleration and/or angular rate data, and/or processed data, such as coordinate or vector data.
  • the figure 1 shows a schematic illustration of an elevator system.
  • An elevator car (15), also referred to in short as the car (15), travels along a main extension direction (z) of an elevator shaft (10), referred to herein in short as the shaft (10).
  • a hoisting engine (16) is attached to the car (15) and to one or more counterweights (18) via traction roping (17), often referred to as the roping (17), for moving the car (15).
  • the car (15) may be moveably coupled to one or more guide rails (13). In normal operation, the car (15) travels within the shaft (10) between a pre-determined lowest and highest positions.
  • the car (15) is typically designed so that it never reaches the top of the shaft (10), but a headroom (11) remains at the top of the shaft (10) and a pit (12) remains at the bottom of the shaft (10), which the car (15) does not enter in normal operation.
  • the headroom (11) and the pit (12) typically comprise various functional elements of the elevator system.
  • the pit (12) comprises buffers (19) that soften the stop of the car and the counterweight, if either would run at high speed to the bottom of the shaft. Acceleration and traveling speed of the elevator car (15) are typically relatively well known, although these may vary based for example on load.
  • a measurement device (20) is attached to the car (15), for example on the roof thereof, illustrated by the alternative 20a, or below the floor of the car (15), as illustrated by the alternative 20b.
  • a portion of the measurement device may be located inside the elevator car and communicatively coupled to the portion of the measurement device (20a, 20b) outside the car by a wireless or wired connection.
  • the measurement device (20) is operable for collecting information about the elevator shaft during a travel of the car (15). Information collected by the measurement device (20) is then used for creating a 3D model of the shaft (10).
  • the measurement device (20) may be provided with a magnetic fixture that enables fast installation and removal of the measurement device simply by attaching it with the magnetic fixture to a metal structure in the elevator car.
  • the figure 2 illustrates some aspects of the invention. Elements inside the shaft have been omitted for clarity.
  • the optical center of the measurement device (20) points upwards in the main extension direction (z).
  • the measurement device (20) comprises two camera devices, one of which has a wide field of view, illustrated by the wider sector (201) and the other one having a narrow field of view, illustrated by a narrower sector (202).
  • the optical center of each camera device, which defines the center of the field of view of the respective camera device, preferably points upwards, at least approximately in the main extension direction.
  • wide and narrow field of view are determined in relative terms so that the narrow field of view is narrower than the wide field of view.
  • during the travel, the trajectory of the elevator car and of the measurement device (20) attached thereto in the main extension direction (z) may deviate from an intended, perfectly vertical line (z') in the world frame (world coordinates), as defined by gravitation. The tilt of the main extension direction (z) from the vertical (z') has been exaggerated for visualization. Instead of or in addition to tilting, the trajectory may also be, for example, bent. Results of the localization may be used for determining how to compensate deviations of the trajectory by adjusting guides in the shaft, so that the trajectory of the car can be optimized by bringing it into the center of the shaft and into a perfectly vertical direction.
  • Image data received from each camera device is used to track locations of a plurality of landmarks (205).
  • a single landmark is illustrated with a star.
  • a landmark can be any distinguishable point that is preferably visible in at least two image frames obtained by each of the camera devices.
  • a landmark may be a corner, an intersection, a wall imperfection or equivalent.
  • the figure 3 illustrates further aspects of the invention. Like in the figure 2, elements of the elevator within the shaft have been omitted for clarity.
  • the localization and mapping is improved by performing the localization and mapping process during two consecutive travels, in which the elevator car and the measurement device attached thereto travel between the two end points of the car's trajectory.
  • the first trajectory (301) refers to the measurement device, attached to the elevator car, traveling from the bottom of the shaft towards the top.
  • the measurement device tracks each of the plurality of landmarks (205) on three consecutive frames. This is illustrated with the arrows pointing towards the landmark (205) from three different positions of the measurement device along the respective trajectory (301, 302).
  • a second trajectory (302) is performed in the opposite direction along the main extension direction of the shaft, here downwards from the top of the shaft towards its bottom.
  • each landmark is mapped again.
  • landmarks are mapped during the second trajectory based on two image frames, and this mapping information can be used to fine tune both the localization and the map.
  • the figure 4 illustrates main steps of the 3D model construction process according to embodiments of the invention.
  • as the measurement device (20) travels through the shaft along its trajectory, from one end to the opposite end of the normal operation range of the car (15), it simultaneously uses both its cameras (21, 22) and the inertial measurement unit IMU (25) to obtain data that is preferably processed in real time. Normal data transmission and/or processing delays are allowed for real time processing, as known in the art.
  • the first camera device (21) preferably has a wide field of view for visual odometry.
  • the first camera should have at least a 180° field of view, but it may even have up to a 360° field of view.
  • when the first camera device (21) faces upwards in the vertical shaft, each image frame obtained by the first camera preferably covers at least the entire space within the shaft (10) above the current location of the first camera device (21); when the first camera device (21) faces downwards, each image frame obtained by the first camera covers at least the entire space within the shaft (10) visible below the current location of the first camera device (21).
  • Visual odometry refers to a process of determining position and orientation of a device by analyzing the associated camera images.
  • the first camera device (21) may have a relatively low resolution so that the number of pixels in each image frame is not excessive.
  • the first camera device (21) obtains first image data, preferably in form of a plurality of image frames.
  • the first camera device (21) used in an exemplary, non-limiting prototype implementation has a resolution of 848x800 pixels and obtains 30 image frames per second.
  • “low resolution” should be considered as a relative term, and when the technology advances, absolute values understood as low resolution will eventually increase.
  • simultaneously with obtaining the first image data with the first camera device (21), the inertial measurement unit IMU (25) obtains inertial measurements that are used for estimating movement of the measurement device.
  • the IMU determines position of the measurement device within the shaft based on signals received from its motion sensors. Odometry refers to use of data obtained from inertial sensors for estimating change in position over time.
  • Inertial measurements may comprise for example measurement of acceleration at least along the main extension direction (z-axis), preferably along all three cartesian axes (x, y, z), and measurement of angular rate about at least one axis, preferably about the same three axes.
  • inertial data should be obtained with relatively high frequency.
  • the term inertial data may refer to unprocessed data received from the inertial sensors, or it may refer to pre-processed or processed data in any suitable form.
  • the inertial data may comprise an array of combined acceleration and angular rate data.
  • the IMU preferably uses a high sampling frequency.
  • a sampling frequency of 62.5 Hz was used for accelerometers and a sampling rate of 200 Hz for gyroscopes. Sampling rates naturally depend on the type(s) of accelerometer(s) and gyroscope(s) used, as well as on characteristics such as the traveling speed of the measurement device within the shaft.
  • the inertial information is preferably integrated in time to achieve more accurate localization in absence of visual cues.
  • First image data obtained using the first camera device (21) and inertial data obtained by the IMU (25) are used as input data for a process step that is referred to as coarse localization, performed by a first simultaneous localization and mapping (SLAM) module (210).
  • the coarse localization runs continuously in real time during the travel.
  • the coarse localization integrates visual odometry information obtained on basis of image data obtained by the first camera device (21) and odometry information based on inertial data obtained by the IMU (25). Integration of these two pieces of information in time improves accuracy of coarse localization for example in case of absence of visual cues in some of the respective image frames.
  • the first SLAM module (210) is preferably implemented as embedded software that is executed by a processing device that is comprised in hardware of the measurement system.
  • The main purpose and output of the first SLAM module is localization of the measurement device, in other words defining the position of the measurement device over time. Since the measurement device is attached to the elevator car, localization of the measurement device allows localization of the elevator car simply by determining the relative locations of the measurement device and the elevator car and adding the corresponding predefined offset to the coordinates. It is apparent that a similar coordinate correction for determining a trajectory of any individual module or sensor device of the measurement system can be performed if and when needed.
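The coordinate correction described above amounts to adding a fixed offset vector to each localized position; a trivial sketch (the offset and positions are hypothetical values, not from the patent):

```python
import numpy as np

# Sketch of the coordinate correction: the car's trajectory is the device's
# trajectory shifted by the fixed device-to-car offset (all values hypothetical).
device_traj = np.array([[0.01, 0.02, 1.0],
                        [0.01, 0.02, 2.0]])   # device positions, world frame
offset = np.array([0.0, -0.4, -1.1])          # car reference point relative to device
car_traj = device_traj + offset               # same correction applies per sample
print(car_traj[0])
```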
  • the process yields a localization with a first level of accuracy.
  • the first localization defines position of the measuring device during each image frame obtained.
  • triangulation is used for defining the position of the measurement device based on landmarks shown in the image frames.
  • triangulation is used for determining position of landmarks based on position of the measurement device.
  • a second camera device (22) is used for obtaining second image data in form of a plurality of image frames.
  • the second camera device (22) has preferably a high resolution. Absolute values of high and low resolution are dependent on available technology and processing capabilities. When the technology advances, absolute values understood as low resolution will eventually increase. Thus, high resolution of the second camera device (22) mainly refers to a higher resolution than the low resolution of the first camera device (21).
  • the second camera device was implemented using an Intel® RealSense™ 455 depth camera, with up to 1280x720 active stereo depth resolution and up to 1280x800 RGB resolution.
  • The sampling rate of this exemplary second camera device could be adjusted up to 90 frames per second for both stereo and RGB images. The best value for the sampling rate is a design parameter.
  • the second camera device (22) may have a more restricted field of view than the first camera device (21).
  • the above-mentioned second camera device used in the prototype has a diagonal field of view over 90°, while the first camera device used in the prototype was a fish-eye type camera with a clearly wider field of view of 173°, in other words almost up to 180°.
  • the second camera device (22) thus provides image data that represents a more accurate image of a smaller field of view, which enables more accurate odometry on this smaller field of view.
  • the second camera device (22) also preferably has a large focal distance so that objects near the second camera device (22) are not reproduced clearly in the image frames.
  • the above-mentioned second camera device has a focal distance adjustable from 0.4 to over 10 meters, varying with lighting conditions. This facilitates rejection of residual objects in the frame.
  • a balustrade that is part of the elevator car is not an interesting object of the elevator shaft and should not be included in the 3D model of the elevator shaft, although it may be seen in every single image frame.
  • such residual objects are omitted from the visual odometry.
  • the second camera device (22) may have capability for range sensing.
  • the second camera device is capable of capturing a three-dimensional structure of the world from the viewpoint of the second camera device.
  • Second image data obtained using the second camera device (22) and output data of the coarse localization are used as input data for a process that is referred to as fine localization and mapping, performed by a second SLAM module (220).
  • the fine localization and mapping process runs continuously in real time during the travel.
  • the second SLAM module (220) integrates over time further visual odometry information obtained on basis of second image data obtained by the second camera device (22) and the trajectory received from the coarse localization output. Integration of these two pieces of information in time further improves accuracy of localization.
  • the second SLAM module (220) is preferably implemented as embedded software that is executed by a processing device that is comprised in hardware of the measurement system.
  • the fine localization and mapping yields a second level of localization, which is more accurate than the first level of localization.
  • Level of localization can also be referred to as accuracy of the localization.
  • the fine localization and mapping comprises defining both the position of the measuring device during each frame, in other words a trajectory of the measuring device over time, as well as mapping data of visual cues (landmarks) shown in the image frames.
  • the mapping data obtained from the second localization and mapping is a point cloud.
  • a point cloud refers to a collection of sample points from the mapped shape's surface.
  • the second camera device (22) may comprise an IMU or the IMU (25).
  • inertial data may be obtained from an IMU (25) of either of the camera devices, or the measurement device may be provided with an IMU (25) that is not comprised in either of the camera devices.
  • Geometry processing, also known as mesh processing, refers to using concepts from applied mathematics, computer science and engineering to reconstruct the point cloud received from the fine localization and mapping module into a complex 3D structure represented as a mesh.
  • the mesh is constructed into a three-dimensional model in the 3D model construction module (240) that can either construct a new 3D model or reconstruct or update an existing 3D model.
  • the 3D model is typically implemented as a polygon mesh, that is, a collection of polygons, also referred to as planes or plane segments, that are connected at their vertices and edges for defining the shape of a polyhedral object in 3D.
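As a minimal illustration of such a polygon mesh (all values hypothetical), two triangles sharing vertices and an edge can represent a flat wall panel, and the surface area is recovered directly from the faces:

```python
import numpy as np

# Sketch of the polygon-mesh representation: vertices shared between
# triangular faces, with surface area computed per face via cross products.
vertices = np.array([[0.0, 0.0, 0.0],    # a hypothetical 2 m x 2 m wall panel
                     [2.0, 0.0, 0.0],
                     [2.0, 2.0, 0.0],
                     [0.0, 2.0, 0.0]])
faces = np.array([[0, 1, 2],             # two triangles sharing the edge 0-2
                  [0, 2, 3]])

def mesh_area(v, f):
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    # Each triangle's area is half the norm of the edge cross product.
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

print(mesh_area(vertices, faces))        # total area of the panel
```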
  • each landmark is preferably tracked over three consecutive frames in which it appears, and during a second, subsequent trajectory, each landmark is preferably tracked over two consecutive frames.
  • the mapping process can be performed during a single two-way travel sequence up and down, or down and up, between the upmost and lowest positions of the elevator car within the shaft.
  • the figure 5 illustrates an exemplary implementation of the coarse localization process by the first SLAM module (210).
  • the first image data received from the first camera device (21) and the positioning data received from the IMU (25) are processed by the first SLAM module that comprises a Kalman filter (211) for filtering the image data and an orientation filter (212), such as a Madgwick filter or a Mahony filter or equivalent, for filtering orientation data.
  • a Kalman filter 211
  • an orientation filter 212
  • the first SLAM module (210) performs fusion of the first image data and IMU data.
  • as a result of the coarse localization, a coarse location of the measurement device as a function of time is obtained and forwarded as input for further processing, as indicated by the connection "A".
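The predict/update cycle of a Kalman filter such as the one used here can be sketched in scalar form (illustrative only; the states, noise values and rates are hypothetical, and the patent's filter operates on image data rather than a single coordinate):

```python
# Minimal 1D Kalman filter sketch: predict with an IMU-derived velocity,
# update with a visual position fix. All numeric values are hypothetical.

def kalman_step(x, P, u, z, q=0.01, r=0.25, dt=0.1):
    # Predict: move by velocity u; process noise q inflates uncertainty.
    x_pred = x + u * dt
    P_pred = P + q
    # Update: blend in measurement z according to the Kalman gain K.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for k in range(50):
    true_pos = 1.0 * (k + 1) * 0.1           # car ascending at 1 m/s
    x, P = kalman_step(x, P, u=1.0, z=true_pos)

# The estimate tracks the true position and the variance P settles to a
# small steady-state value.
print(round(x, 2), P < 0.2)
```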
  • the figure 6 illustrates an exemplary implementation of the fine localization and mapping performed by the second SLAM module (220).
  • the second SLAM module (220) combines information obtained from second image data received from the second camera device (22) and information received from the first SLAM module (210).
  • Intrinsic parameters of the second camera device (22) are estimated by the intrinsic parameter estimation sub-module (301).
  • The intrinsic parameters of the camera device represent the optical center and the focal length of the camera.
  • the optical center (image center) refers to the point of intersection of the lens' optical axis with the camera's sensing plane.
  • World points are transformed to camera coordinates using extrinsic parameters, which take into account rotation and translation.
  • the camera coordinates are mapped into the image plane using these intrinsic parameters.
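As a sketch of the extrinsic-then-intrinsic mapping described above, the following projects a world point into the image plane with a simple pinhole model. The parameter names (fx, fy, cx, cy for focal lengths and optical center) are conventional assumptions rather than notation from the document.

```python
import numpy as np

def project_point(X_world, R, t, fx, fy, cx, cy):
    """Pinhole projection sketch: extrinsics (R, t) transform a world point
    into camera coordinates; intrinsics (focal lengths fx, fy and optical
    center cx, cy) map the camera coordinates onto the image plane."""
    Xc = R @ X_world + t          # world frame -> camera frame (extrinsics)
    x, y, z = Xc
    u = fx * x / z + cx           # perspective divide + intrinsics
    v = fy * y / z + cy
    return np.array([u, v])
```

A point on the optical axis lands exactly on the optical center, matching the definition of the image center given above.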
  • fixed features refer to any features or objects that are shown in fixed positions in each frame of the obtained second image data.
  • Fixed features may be, for example, objects that move with the elevator car, so that their position relative to the measurement device attached to the elevator car, and thus also to the second camera device, remains fixed. Fixed features may therefore appear in all image frames obtained by the second camera device, but they are not relevant as elements of the elevator shaft and should be omitted by the fine localization and mapping module.
  • An example of a fixed feature in an elevator is a balustrade on the roof of the elevator car.
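One simple way such car-fixed features could be rejected, sketched here as an assumption rather than the document's actual method: a landmark whose pixel coordinates barely change across frames while the car is moving is treated as attached to the car (like the balustrade) and excluded from mapping.

```python
def reject_fixed_features(tracks, tol=2.0):
    """Sketch: a landmark whose pixel position barely varies across frames
    while the car is moving is assumed rigidly attached to the car
    (e.g. a roof balustrade) and is dropped from mapping.
    `tracks` maps landmark id -> list of (u, v) pixel positions;
    `tol` (pixels) is an illustrative threshold."""
    moving = {}
    for lid, pts in tracks.items():
        us = [p[0] for p in pts]
        vs = [p[1] for p in pts]
        spread = (max(us) - min(us)) + (max(vs) - min(vs))
        if spread > tol:          # keep only features that move in the image
            moving[lid] = pts
    return moving
```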
  • a plurality of points is extracted based on the second image data by the point extraction sub-module (303), thus obtaining a point cloud in which all points have a position (coordinates) in space defined on basis of the second image data.
  • The second SLAM module (220) receives as input the output of the first SLAM module (210), illustrated with the input "A". Results of the first SLAM module (210) are further subjected to a second Kalman filtering (320).
  • A motion scenario provided by a motion scenario sub-module (319) may be used as further input data for the second Kalman filtering (320).
  • The motion scenario refers to pre-stored information on an expected motion pattern of the elevator car, and thus also of the measurement device. When the measurement device is attached to the elevator car, the expected motion scenario can be obtained from the motion scenario of the elevator car. The acceleration, travel speed and deceleration of the elevator car during a journey are carefully designed and controlled, and are thus typically well known; this information can be used as a-priori knowledge to improve the speed estimate derived from IMU measurements, further improving the accuracy of the localization.
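A motion-scenario prior of this kind can be sketched as a trapezoidal speed profile with known acceleration, constant-speed and deceleration phases. The rated speed, acceleration and travel time below are illustrative placeholders, not values from the document.

```python
def expected_speed(t, v_max=1.6, a=0.8, travel_time=10.0):
    """Sketch of a motion-scenario prior: a trapezoidal speed profile
    usable as an a-priori speed estimate alongside IMU integration.
    v_max (m/s), a (m/s^2) and travel_time (s) are illustrative only."""
    t_ramp = v_max / a                     # time to reach rated speed
    if t < 0 or t > travel_time:
        return 0.0                         # car at rest outside the travel
    if t < t_ramp:                         # acceleration phase
        return a * t
    if t > travel_time - t_ramp:           # deceleration phase
        return a * (travel_time - t)
    return v_max                           # constant-speed phase
```

Such a profile can serve as the expected-motion input to the second Kalman filtering, constraining drift in the IMU-derived speed.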
  • The second SLAM module (220) preferably performs a process loop that refines the localization of the measurement device until the desired accuracy is achieved, here referred to as "fine localization".
  • This loop comprises a point tracking sub-module (321) configured to track points in the second image data. Tracking refers to following the movement track of selected points, also referred to as landmarks, shown in the image as each landmark moves between image frames. These landmarks can be any distinguishable points, such as corners, intersections or wall imperfections.
  • the second SLAM module (220) tries to track each landmark in as many image frames as possible. From the tracks of the landmarks, the system can infer the trajectory of the measurement device as well as the point cloud generated based on the mapping.
  • Point tracking enables defining a pose-graph by a pose-graph submodule (325).
  • Defining a pose-graph refers to determining the most probable positions and orientations of the measurement device based on observations made from the image frames obtained by the second camera. Over time, the probabilities of positions and orientations are reassessed and refined, improving the accuracy of the localization.
  • a loop detection sub-module (323) detects landmark overlaps, which may occur in different image frames, to resolve any ambiguities in the pose graph.
  • The pose-graph is used as additional input for the Kalman filter (320) to further improve localization of the measurement device with respect to the point cloud. As a result, the fine localization can achieve even sub-millimeter accuracy for the points and for the position of the measurement device at any time.
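The effect of a loop-closure constraint on a chain of odometry poses can be illustrated in one dimension. This is a deliberately crude stand-in for real pose-graph optimization, with all names hypothetical: when loop detection reveals an accumulated error (the final pose should coincide with the first), the error is distributed evenly along the chain.

```python
def correct_drift(odometry_steps, loop_residual):
    """Crude 1-D stand-in for pose-graph optimization: odometry gives
    relative motions between consecutive poses; the loop residual is the
    accumulated error revealed by loop detection, and it is spread evenly
    over the chain of relative motions."""
    n = len(odometry_steps)
    poses = [0.0]
    for step in odometry_steps:
        # subtract an equal share of the loop residual from every step
        poses.append(poses[-1] + step - loop_residual / n)
    return poses
```

Real pose-graph solvers minimize all constraint residuals jointly rather than spreading a single residual, but the corrective effect on the trajectory is the same in spirit.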
  • The position of the measurement device is preferably defined in the world frame, in other words in reference to the earth, while the positions of the points in the point cloud can be defined in reference to the measurement device.
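The frame convention above can be sketched as a rigid transform that re-expresses device-frame points in the world frame, given the device pose from the fine localization. The function and argument names are assumptions for illustration.

```python
import numpy as np

def points_to_world(points_device, R_wd, t_wd):
    """Sketch: express points measured relative to the measurement device
    in the world frame, given the device pose (rotation R_wd, position
    t_wd) obtained from the fine localization. `points_device` is an
    (N, 3) array of device-frame coordinates."""
    return points_device @ R_wd.T + t_wd
```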
  • The point cloud (connection point "B") and the fine localization of the measurement device (connection point "C") output by the second SLAM module (220) are provided as inputs for the geometry processing module (230).
  • Figure 7 illustrates geometry processing steps performed by an exemplary geometry processing module (230) that automatically generates a geometry by transforming the point cloud into a mesh, based on the point cloud ("B") and fine localization ("C") received from the second SLAM module (220).
  • a mesh can be generated out of a point cloud using different methods and algorithms.
  • Planes, also referred to as plane segments, are extracted from the point cloud by a plane extraction sub-module (401), and edges are extracted by an edge extraction sub-module (402).
  • the reference frame of the point cloud is preferably changed from that of the measurement device referential to the world frame, if not performed earlier.
  • The extracted plane segments are matched by a plane matching sub-module (403), and a per-frame 3D mesher sub-module (404) first processes each image frame separately to generate a mesh.
  • a multi-frame 3D mesher sub-module (405) then combines meshes from multiple frames into a single multi-frame 3D mesh.
  • The mesh generated by the geometry processing module (230) is then used as input for the 3D model construction module (240), which constructs the desired 3D model of the shaft based on the plane segments comprised in the multi-frame 3D mesh.
  • various methods are known and being developed for creating a 3D mesh on basis of a point cloud, and any alternative method known in the art may be applicable instead of the above-given exemplary implementation.
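One widely used member of the family of methods alluded to above is RANSAC-style plane extraction from a point cloud; the sketch below is an illustrative example of that technique, not the document's own algorithm, and its thresholds and iteration counts are placeholders.

```python
import numpy as np

def extract_plane(points, iters=200, thresh=0.02, rng=None):
    """RANSAC-style plane extraction sketch: repeatedly fit a plane to
    three random points and keep the candidate supported by the most
    inliers. `points` is an (N, 3) array; returns a boolean inlier mask
    for the best plane found."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                             # degenerate (collinear) sample
        n = n / norm
        d = np.abs((points - sample[0]) @ n)     # point-to-plane distances
        inliers = d < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

In a full pipeline this would be run repeatedly, removing each plane's inliers before extracting the next shaft wall.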
  • Figure 8 illustrates an exemplary measurement device (20) according to some embodiments.
  • The first camera device (21) and the second camera device (22) have their fields of view, as defined by the respective optical axes of the camera devices, in the main extension direction (z).
  • the positive z-axis refers to direction up, towards the headroom of the elevator shaft.
  • the first and second camera devices may be installed facing downwards, towards the pit of the elevator shaft.
  • the field of view (221) of the first camera device is wider than the field of view (222) of the second camera device.
  • the field of view may be defined as an angle.
  • the measurement device comprises the IMU (25), which may be implemented as part of either of the camera devices (21, 22) or it may be a module within the measurement device (20).
  • the measurement device also comprises at least one processor (26) configured to process information received from the cameras and the IMU.
  • the measurement device comprises at least one memory (27) capable of storing measurement data and/or data processed by the at least one processor.
  • the measurement unit may further comprise a communication module, preferably a wireless communication module (28), which preferably enables two-way communication.
  • the measurement device has preferably a small footprint.
  • the measurement device (20) comprises magnetic fixture means (not shown), which enables removably attaching the measurement device for example at an outer surface of an elevator car, for example on the roof or below the floor thereof.
  • Instead of magnetic fixture means, other fixture means that enable temporarily and removably attaching the measurement device in the elevator shaft may be used.
  • The fixture means preferably have no loose parts and do not require the use of hand-held tools.
  • the measurement device (20) is configured to generate the 3D model, thus being provided with sufficient memory and processing capacity to perform all steps of the 3D modeling process.
  • Alternatively, the measurement device (20) may have more limited memory and/or processing capacity, in which case it is configured to produce and wirelessly transmit raw measurement data and/or pre-processed measurement data, such as results of the localization and mapping process(es) and/or a pre-processed geometry, for further processing by an external computer or computer system that generates the geometry and/or finalizes the 3D model.
  • Sufficient processing capacity in the measurement device itself facilitates efficient real-time localization and mapping, geometry processing and/or 3D model generation.
  • Division of method step processing functionalities between the measurement device and an external computer or computer system is a design option, and not limited to the given examples.

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention relates to a method and an apparatus for modeling an elevator shaft extending in a main extension direction. The elevator shaft is measured with a measurement system comprising two camera devices and an inertial measurement unit (IMU) while the system travels in the main extension direction of the shaft. Aggregated first measurements are performed, in which a first localization is carried out by integrating first image data obtained with the first camera and the positioning data. Measurement accuracy is improved by performing a second localization and mapping, integrating the second image data with an output from the first localization. A 3D model of the elevator shaft is generated by processing the geometry received from the second localization and second mapping into a 3D model.
PCT/FI2021/050566 2021-08-20 2021-08-20 Procédé et système de mesure pour modélisation tridimensionnelle d'une cage d'ascenseur WO2023021233A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202180101594.3A CN117836233A (zh) 2021-08-20 2021-08-20 用于电梯井道三维建模的方法和测量系统
EP21762744.7A EP4387914A1 (fr) 2021-08-20 2021-08-20 Procédé et système de mesure pour modélisation tridimensionnelle d'une cage d'ascenseur
PCT/FI2021/050566 WO2023021233A1 (fr) 2021-08-20 2021-08-20 Procédé et système de mesure pour modélisation tridimensionnelle d'une cage d'ascenseur

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/FI2021/050566 WO2023021233A1 (fr) 2021-08-20 2021-08-20 Procédé et système de mesure pour modélisation tridimensionnelle d'une cage d'ascenseur

Publications (1)

Publication Number Publication Date
WO2023021233A1 true WO2023021233A1 (fr) 2023-02-23

Family

ID=77543532

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2021/050566 WO2023021233A1 (fr) 2021-08-20 2021-08-20 Procédé et système de mesure pour modélisation tridimensionnelle d'une cage d'ascenseur

Country Status (3)

Country Link
EP (1) EP4387914A1 (fr)
CN (1) CN117836233A (fr)
WO (1) WO2023021233A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170355558A1 (en) * 2016-06-10 2017-12-14 Otis Elevator Company Detection and Control System for Elevator Operations
CN109002633A (zh) * 2018-08-01 2018-12-14 陈龙雨 基于独立空间的设备网络建模方法
US10547974B1 (en) * 2019-03-19 2020-01-28 Microsoft Technology Licensing, Llc Relative spatial localization of mobile devices
US20200265598A1 (en) * 2019-02-20 2020-08-20 Dell Products, L.P. SYSTEMS AND METHODS FOR HANDLING MULTIPLE SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) SOURCES AND ALGORITHMS IN VIRTUAL, AUGMENTED, AND MIXED REALITY (xR) APPLICATIONS
US20200357181A1 (en) * 2019-04-30 2020-11-12 Carl Zeiss Ag Method for adjusting and visualizing parameters for focusing an objective lens on an object and system for implementing the method
EP3507227B1 (fr) 2016-08-30 2021-01-06 Inventio AG Procede d'analyse et systeme de mesure destine a mesurer une cage d'ascenseur


Also Published As

Publication number Publication date
EP4387914A1 (fr) 2024-06-26
CN117836233A (zh) 2024-04-05

Similar Documents

Publication Publication Date Title
CN113781582B (zh) 基于激光雷达和惯导联合标定的同步定位与地图创建方法
WO2020037492A1 (fr) Procédé et dispositif de mesure de distance
CN110411444B (zh) 一种地面下采掘移动设备用惯性导航定位系统与定位方法
US8494225B2 (en) Navigation method and aparatus
KR20190035496A (ko) 항공 비파괴 검사를 위한 포지셔닝 시스템
CN111380514A (zh) 机器人位姿估计方法、装置、终端及计算机存储介质
CN107478214A (zh) 一种基于多传感器融合的室内定位方法及系统
CN110865650B (zh) 基于主动视觉的无人机位姿自适应估计方法
CN109459039A (zh) 一种医药搬运机器人的激光定位导航系统及其方法
US11226201B2 (en) Automated mobile geotechnical mapping
TW201832185A (zh) 利用陀螺儀的相機自動校準
WO2020103049A1 (fr) Procédé et dispositif de prédiction de terrain d'un radar à micro-ondes rotatif et système et véhicule aérien sans pilote
CN111882597A (zh) 测量对象物的上表面推测方法、引导信息显示装置以及起重机
CN115371665B (zh) 一种基于深度相机和惯性融合的移动机器人定位方法
CN112506200A (zh) 机器人定位方法、装置、机器人及存储介质
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
CN113639722B (zh) 连续激光扫描配准辅助惯性定位定姿方法
WO2023021233A1 (fr) Procédé et système de mesure pour modélisation tridimensionnelle d'une cage d'ascenseur
CN116380039A (zh) 一种基于固态激光雷达和点云地图的移动机器人导航系统
CN114543786B (zh) 一种基于视觉惯性里程计的爬壁机器人定位方法
Popov et al. UAV navigation on the basis of video sequences registered by onboard camera
CN117128951B (zh) 适用于自动驾驶农机的多传感器融合导航定位系统及方法
KR102408478B1 (ko) 경로 추정 방법 및 이를 이용하는 장치
CN117739972B (zh) 一种无全球卫星定位系统的无人机进近阶段定位方法
CN117268404B (zh) 一种利用多传感器融合的无人机室内外自主导航方法

Legal Events

Date Code Title Description

  • 121: EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21762744; Country of ref document: EP; Kind code of ref document: A1)
  • WWE: WIPO information: entry into national phase (Ref document number: 202180101594.3; Country of ref document: CN)
  • WWE: WIPO information: entry into national phase (Ref document number: 2021762744; Country of ref document: EP)
  • NENP: Non-entry into the national phase (Ref country code: DE)
  • ENP: Entry into the national phase (Ref document number: 2021762744; Country of ref document: EP; Effective date: 20240320)