CN117836233A - Method and measuring system for three-dimensional modeling of elevator hoistway - Google Patents


Info

Publication number: CN117836233A
Authority: CN (China)
Prior art keywords: positioning, image data, mapping, measurement system, measurement
Legal status: Pending
Application number: CN202180101594.3A
Other languages: Chinese (zh)
Inventors: F·德斯帕里勒斯, N·沃, M·哈格, J·约凯拉
Current Assignee: Kone Corp
Original Assignee: Kone Corp
Application filed by Kone Corp
Publication of CN117836233A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66B: ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 19/00: Mining-hoist operation

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention relates to a method and a device for modeling an elevator hoistway extending in a main extension direction. The elevator hoistway is measured with a measuring system comprising two camera devices and an inertial measurement unit (IMU) while the system moves in the main extension direction of the elevator hoistway. A first aggregate measurement is performed, wherein a first positioning is performed by integrating first image data obtained with the first camera device and positioning data obtained with the IMU. A second positioning and mapping is performed by integrating the second image data with the output from the first positioning, thereby improving the accuracy of the measurement. A 3D model of the elevator hoistway is generated from the geometry received from the second positioning and mapping.

Description

Method and measuring system for three-dimensional modeling of elevator hoistway
Technical Field
The present invention relates to a method and a measuring system in connection with an elevator. More specifically, the present invention discloses a method and a measurement system for generating a 3D map of an elevator hoistway and a trajectory of an elevator car traveling in the elevator hoistway.
Background
Currently, accurate measurement of an elevator hoistway requires the use of a measurement reference such as a plumb line. However, correctly installing the measurement references and measuring from each landing height to the plumb line is difficult and laborious, and therefore also error-prone. This applies in particular to elevator modernization, where the hoistway should be measured quickly and accurately in order to provide the best possible equipment for the customer. Furthermore, in the case of new elevator buildings, measurements made in the empty hoistway before installation begins are very important. Errors in the initial measurement (which are not uncommon) can lead to very expensive repairs later, or even to problems that persist for the entire life of the elevator.
Because a hoistway is tall and narrow, no distance sensor can provide the range and accuracy required to perform correct measurements from a fixed location. The taller the elevator hoistway, the greater the problem.
In computational geometry and robotics, the term simultaneous localization and mapping (SLAM) refers to the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. Common approximate solutions for SLAM include particle filtering, extended Kalman filtering (EKF), covariance intersection, and GraphSLAM. SLAM algorithms are used for virtual or augmented reality navigation, robotic mapping, and odometry. Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables by estimating a joint probability distribution over the variables for each time frame; these estimates tend to be more accurate than those based on a single measurement alone. When SLAM is performed on image data, feature points, in other words distinguishable points appearing in the obtained image frames, are tracked.
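As a concrete illustration of the Kalman filtering idea described above, the scalar sketch below fuses a series of noisy distance readings into one estimate that is more accurate than any single reading. It is a toy example with hypothetical values, not the filter used in the described measurement system.

```python
# Minimal scalar Kalman filter for a static state, illustrating how a
# series of noisy measurements yields an estimate better than any single
# reading. All numeric values here are hypothetical.

def kalman_update(x, p, z, r):
    """One measurement-update step.

    x: current state estimate, p: estimate variance,
    z: new measurement, r: measurement noise variance.
    """
    k = p / (p + r)          # Kalman gain: weight given to the new measurement
    x = x + k * (z - x)      # blend prior estimate with the measurement
    p = (1 - k) * p          # variance shrinks after each fusion step
    return x, p

# Fuse noisy readings of a distance whose true value is 5.0 m.
x, p = 0.0, 1000.0           # vague prior: large initial variance
for z in [5.2, 4.9, 5.1, 4.8, 5.05]:
    x, p = kalman_update(x, p, z, r=0.04)
# x is now close to the true 5.0, with variance far below that of one reading
```

The gain `k` plays the same confidence-weighting role that the described system applies between its visual and inertial data streams.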
An orientation filter is a filter that estimates the pose/orientation/attitude of an IMU in the world coordinate system. In other words, the orientation filter fuses angular velocity, acceleration, and optionally magnetic readings from a generic IMU device into an orientation estimate. Examples of known orientation filters are the Madgwick filter and the Mahony filter.
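Madgwick and Mahony filters are too involved for a short example, but the underlying fusion idea (gyroscope for fast motion, accelerometer-sensed gravity for drift correction) can be sketched with a simple complementary filter. All rates, gains, and readings below are hypothetical.

```python
import math

def complementary_tilt(gyro_rate, accel, angle, dt, alpha=0.98):
    """One update of a complementary tilt filter (rotation about one axis).

    gyro_rate: angular rate in rad/s; accel: (horizontal, vertical)
    accelerometer readings in m/s^2; angle: previous tilt estimate in rad.
    The gyro term tracks fast motion; the accelerometer term, which senses
    the direction of gravity, slowly corrects the gyro's accumulating drift.
    """
    gyro_angle = angle + gyro_rate * dt           # integrate angular rate
    accel_angle = math.atan2(accel[0], accel[1])  # tilt implied by gravity
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# A stationary IMU with a small gyro bias (0.01 rad/s, hypothetical):
# pure integration would drift by 0.05 rad over 5 s, but the gravity
# reference pins the fused estimate close to zero.
angle = 0.0
for _ in range(500):                              # 5 s at 100 Hz
    angle = complementary_tilt(gyro_rate=0.01, accel=(0.0, 9.81),
                               angle=angle, dt=0.01)
```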
Typical indoor positioning solutions are based on two main principles: inertial positioning and triangulation. Inertial positioning refers to determining the position of a moving object based on the detected movement of the object. For example, the movement may be detected with an inertial measurement unit IMU comprising inertial sensors such as accelerometers, gyroscopes and optionally magnetometers or other suitable sensors. In order to be able to accurately determine the position in three-dimensional space based on IMU measurements, the IMU preferably comprises three accelerometers measuring accelerations along three mutually orthogonal axes (x, y, z) and three gyroscopes measuring angular rates about the three mutually orthogonal axes (x, y, z). Inertial positioning does not require an external reference, but the accuracy of a position determined using the inertial positioning method tends to decrease rapidly due to sensor bias.
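The rapid accuracy decrease mentioned above can be made concrete by double-integrating a slightly biased accelerometer reading; the bias and sample rate in this sketch are hypothetical.

```python
# Dead reckoning by double integration of accelerometer readings: a tiny
# constant bias grows quadratically into a position error, which is why
# pure inertial positioning degrades over time without an external reference.

dt = 0.01                      # 100 Hz sampling, hypothetical
bias = 0.005                   # constant accelerometer bias in m/s^2
velocity, position = 0.0, 0.0
for _ in range(6000):          # 60 s of a nominally stationary device
    accel = 0.0 + bias         # true acceleration is zero; only bias remains
    velocity += accel * dt     # first integration: velocity drift
    position += velocity * dt  # second integration: position drift

# position is now roughly 0.5 * bias * t^2, about 9 m of error in one minute
```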
Triangulation refers to determining position based on distances or angles to fixed reference points. Triangulation provides an accurate location, but only while the reference points are visible.
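A minimal planar version of triangulation, intersecting two bearing rays taken from known positions, can be sketched as follows; the coordinates and angles are invented for illustration and the patent does not prescribe this formulation.

```python
import math

def triangulate_2d(p1, theta1, p2, theta2):
    """Intersect two bearing rays to locate a landmark in the plane.

    p1, p2: known observer positions; theta1, theta2: bearing angles in
    radians (measured from the x-axis) at which the landmark is seen.
    """
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # zero if the rays are parallel
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom   # distance parameter along ray 1
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# A landmark at (2, 2) seen from (0, 0) at 45 degrees and from (4, 0)
# at 135 degrees is recovered exactly:
x, y = triangulate_2d((0.0, 0.0), math.pi / 4, (4.0, 0.0), 3 * math.pi / 4)
```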
Patent EP 3507227 B1 discloses a method and a system for measuring an elevator hoistway using a measurement system with a camera system and an inertial measurement unit. A digital model of the elevator hoistway is created based on the measurement data.
Disclosure of Invention
It is an object to provide a method and apparatus to solve the problem of generating an accurate three-dimensional model of an elevator hoistway using a mobile measuring device while determining the trajectory of the measuring device within the hoistway. The object of the invention is achieved by a method according to claim 1. The object of the invention is further achieved by an apparatus according to claim 11.
Preferred embodiments of the invention are disclosed in the dependent claims.
According to a first method aspect, a method for modeling an elevator hoistway extending in a main extension direction is provided. The elevator hoistway is measured with a measuring system comprising two camera devices with mutually different image resolutions, and an inertial measurement unit IMU comprising an acceleration sensor and an angular rate sensor. The fields of view of the two camera devices face the main extension direction.
The method comprises the following steps: performing a first travel of the measurement system by moving the measurement system in a first direction along the main extension direction, during the first travel, performing a first aggregate measurement by simultaneously obtaining first image data comprising a plurality of image frames using a first camera device, obtaining second image data comprising a plurality of image frames using a second camera device, and obtaining positioning data using the IMU, performing a first positioning by integrating the first image data and the positioning data, wherein the first positioning has a first level of accuracy, performing a second positioning and mapping by integrating the second image data with an output from the first positioning, wherein the second positioning and mapping has a second level of accuracy that is more accurate than the first level of accuracy, generating a geometry based on the second positioning and mapping, and generating a 3D model of the elevator hoistway based on the generated geometry.
According to a second method aspect, the first camera device is a stereoscopic camera having a wide field of view and a relatively low resolution, and the second camera device has a narrow field of view and a relatively high resolution, and/or the second camera device has a large focal length, and/or the second camera device performs range sensing.
According to a third method aspect, integrating the first image data and the positioning data comprises performing a confidence estimation to define relative weights of the first image data and the positioning data, and/or the first positioning comprises filtering the first image data by a first kalman filter and filtering the positioning data by an orientation filter.
According to a fourth method aspect, the method comprises eliminating any residual stationary objects in the second image data, so that residual stationary objects are not included in the map.
According to a fifth method aspect, the second positioning utilizes pre-stored information about an expected motion scenario of the measurement device to improve motion estimation of the measurement device.
According to a sixth method aspect, the method comprises: after the first travel, performing a second travel of the measurement system by moving the measurement system in a second direction along the main extension direction, the second direction being opposite to the first direction, performing further aggregate measurements, first positioning and second positioning and mapping during the second travel, and updating the second positioning and mapping obtained during the first travel based on the further second positioning and mapping.
According to a seventh method aspect, the second positioning comprises generating a pose graph determining the position and orientation of the measuring device in the world reference.
According to an eighth method aspect, the positioning determines a location in a world reference, the mapping comprises determining a point cloud with reference to the measurement device, and the geometry is defined in the world reference.
According to a ninth method aspect, the measuring device is removably attached to the elevator car.
According to a tenth method aspect, the trajectory of the elevator car is defined based on the second positioning, and the deviation of the trajectory from the optimum is used to adjust the guides of the elevator to correct its trajectory.
According to a first system aspect, a measurement system for modeling an elevator hoistway extending in a main extension direction is provided. The measurement system comprises a first camera device having a field of view facing the main extension direction and configured to obtain first image data comprising a plurality of image frames, a second camera device having a field of view facing the main extension direction and configured to obtain second image data comprising a plurality of image frames, the first camera device and the second camera device having mutually different image resolutions, and an inertial measurement unit IMU comprising an acceleration sensor and an angular rate sensor and configured to obtain positioning data. The measurement system is configured to obtain first image data, second image data, and positioning data simultaneously during a first travel during which the measurement system moves in a first direction along the main extension direction. The measurement system further includes a computer device or system including a first simultaneous localization and mapping module (SLAM) configured to perform a first localization by integrating the first image data and the localization data, wherein the first localization has a first level of precision, a second SLAM module configured to perform a second localization and mapping by integrating the second image data with the first localization, wherein the second localization and mapping has a second level of precision that is more accurate than the first level of precision, a geometry processing module configured to generate a geometry based on the second localization and mapping, and a 3D modeling module configured to generate a 3D model of the elevator hoistway based on the generated geometry.
According to a second system aspect, the first camera device is a stereoscopic camera having a wide field of view and a relatively low resolution, and the second camera device has a narrow field of view and a relatively high resolution, and/or the second camera device has a large focal length, and/or the second camera device performs range sensing.
According to a third system aspect, the first SLAM module is configured to perform confidence estimation to define relative weights of the first image data and the positioning data, and/or the first SLAM module comprises a first kalman filter configured to filter the first image data and an orientation filter configured to filter the positioning data.
According to a fourth system aspect, the second SLAM module is configured to eliminate any residual stationary objects in the second image data to avoid the residual stationary objects being included in the map.
According to a fifth system aspect, the second SLAM module is configured to utilize pre-stored information about an expected motion scenario of the measurement device to improve motion estimation of the measurement device.
According to a sixth system aspect, the measurement system is configured to obtain further first image data, further second image data and further positioning data simultaneously during a second travel, during which the measurement system moves in a second direction along the main extension direction, the second direction being opposite to the first direction. The first SLAM module is configured to perform a further first positioning based on the further first image data and the further positioning data. The second SLAM module is configured to perform a further second positioning and mapping based on the further first positioning and the further second image data, and to integrate the further second positioning and mapping to improve the accuracy of the second positioning and mapping.
According to a seventh system aspect, the second SLAM module is configured to perform the second positioning by generating a pose graph that determines the position and orientation of the measurement device in the world reference.
According to an eighth system aspect, the first SLAM module and the second SLAM module are configured to determine a location in a world reference, wherein the second SLAM module is configured to perform the mapping by referencing a measurement device to determine a point cloud, and wherein the geometry is defined in the world reference.
According to a ninth system aspect, the measuring device is removably attached to the elevator car.
The invention is based on the idea of aggregating and integrating data from multiple sensors, in particular two camera sensors and an Inertial Measurement Unit (IMU). As a result, not only the 3D model of the hoistway but also the exact trajectory of the measuring device is received. When the measuring device is attached to the elevator car, the trajectory of the elevator car is also received, which can be used to correct and optimize the trajectory of the elevator car.
The invention has the advantage that it enables quick and accurate analysis of the elevator hoistway. No dedicated plumb line or pre-installed reference bar is required. The method is easy to use, since the aggregation of the measurements is handled by the measuring instrument itself. High accuracy is achieved by computing accurate reference points with a fusion algorithm that combines data received from multiple sensors rather than a single sensor. The method is also robust, in that simultaneously combining motion and distance information alleviates both fiducial-point invisibility and sensor bias issues. The method enables advanced measurement of elevator hoistway properties: sub-millimeter accuracy can be achieved, and complex hoistway geometries can be analyzed, including verticality, wall parallelism, wall surface defects, and the like. The approach is also cost effective, in that low-cost sensors can be used instead of expensive long-range telemetry sensors. Furthermore, new sensors can be added seamlessly to upgrade the functionality of the measurement device. Additional measurements made by the new sensors are mapped directly onto the hoistway geometry in the resulting three-dimensional model and can thus be accurately located.
Additional measurements may include, for example, one or more of landing door images, wall density, thickness, humidity, temperature.
Drawings
The invention will be described in more detail hereinafter with reference to the preferred embodiments, with reference to the accompanying drawings, in which:
Fig. 1 shows a schematic view of an elevator hoistway.
Fig. 2 illustrates aspects of the present invention.
Fig. 3 illustrates other aspects of the invention.
Fig. 4 shows the main steps of the 3D model building process.
Fig. 5 shows the steps of the coarse positioning procedure.
Fig. 6 shows the steps of the fine positioning and mapping process.
Fig. 7 shows the steps of the geometry procedure.
Fig. 8 shows a measuring device.
Detailed Description
In this context, the term localization refers to a process of determining the position (in other words, the trajectory) of an object over time, preferably in a world reference. The location may be determined in various ways, for example using coordinates such as cartesian coordinates (x, y, z) or vectors, as is known in the art.
In this context, the term mapping refers to tracking landmarks, i.e. distinguishable feature points, over a plurality of image frames, preferably in a world reference.
In this context, the term main extension direction of the elevator hoistway refers to the direction of movement of the elevator car in the completed elevator system. The main extension direction is usually vertical, but it may also be inclined or horizontal.
In this context, the term positioning data refers to data obtained using an inertial measurement unit. The positioning data may comprise unprocessed data, such as acceleration and/or angular rate data, and/or processed data, such as coordinate or vector data.
An important feature of a simultaneous localization and mapping (SLAM) module is that it converges towards an accurate localization even when, for example, localization based on distance measurements alone diverges.
Fig. 1 presents a schematic view of an elevator system. The figure is not to scale and shows only a portion of the elements of the elevator system. An elevator car (15) (also referred to simply as car (15)) travels along a main extension direction (z) of an elevator hoistway (10) (referred to herein simply as hoistway (10)). A hoisting motor (16) is connected to the car (15) and one or more counterweights (18) by traction ropes (17), commonly referred to as ropes (17), for moving the car (15). The car (15) is movably coupled to one or more guide rails (13). In normal operation, the car (15) travels within the hoistway (10) between a predetermined lowest position and highest position. The car (15) is typically designed such that it never reaches the top of the hoistway (10), but leaves a headroom (11) at the top of the hoistway (10) and a pit (12) at the bottom of the hoistway (10); the car (15) does not enter the pit (12) during normal operation. The headroom (11) and pit (12) typically contain various functional elements of the elevator system. For example, the pit (12) includes a buffer (19) that softens the stopping of the car and counterweight if either travels to the bottom of the hoistway at high speed. The acceleration and travel speed of the elevator car (15) are generally relatively well known, although they may vary based on e.g. the load.
According to one embodiment of the invention, the measuring device (20) is attached to the car (15), for example on the roof of the car (15), as shown by alternative 20a, or below the floor of the car (15), as shown by alternative 20b. Alternatively, a part of the measuring device may be located inside the elevator car and communicatively coupled, by a wireless or wired connection, to a part of the measuring device (20a, 20b) outside the car. The measuring device (20) is operable to collect information about the elevator hoistway during travel of the car (15). The information collected by the measuring device (20) is then used to create a 3D model of the hoistway (10). The measuring device (20) can be provided with a magnetic fixture that enables quick installation and removal simply by attaching the measuring device with its magnetic fixture to a metal structure of the elevator car.
Fig. 2 illustrates some aspects of the invention. Elements within the hoistway have been omitted for clarity. In this example, the optical center of the measuring device (20) points upwards in the main extension direction (z). The measuring device (20) comprises two camera devices, one having a wide field of view, shown by the wider sector (201), and the other having a narrow field of view, shown by the narrower sector (202). The optical center of each camera device, which defines the center of that camera device's field of view, is preferably directed at least approximately upwards in the main extension direction. The terms wide field of view and narrow field of view are relative: the narrow field of view is narrower than the wide field of view. During travel, the trajectory of the elevator car, and of the measuring device (20) attached to it, along the main extension direction (z) can deviate from the intended, perfectly vertical line (z') in the world coordinate system defined by gravity. For visualization, the inclination of the main extension direction (z) with respect to the vertical (z') is exaggerated. Instead of, or in addition to, being tilted from the optimum, the trajectory may also be curved, for example. The result of the positioning can be used to determine how to compensate for the deviation of the trajectory by adjusting the guides in the hoistway, so that the trajectory of the car can be optimized by bringing it to the center of the hoistway and into a perfectly vertical direction.
The image data received from each camera device is used to track the locations of a plurality of landmarks (205). In this example, the individual landmarks are illustrated with a star. The landmark may be any distinguishable point, preferably visible in at least two image frames obtained by each camera device. For example, landmarks may be corners, intersections, wall defects, or equivalents.
Fig. 3 illustrates other aspects of the invention. As in fig. 2, elements of the elevator within the hoistway have been omitted for clarity. In some embodiments, positioning and mapping are improved by performing the positioning and mapping process during two consecutive travels, with the elevator car and the measuring device attached to it traveling between the two endpoints of its trajectory. In this example, the first trajectory (301) refers to the measurement device attached to the elevator car traveling towards the top of the hoistway. During the first trajectory (301), the measurement device tracks each of the plurality of landmarks (205) over three consecutive frames. This is illustrated by arrows pointing at a landmark (205) from three different positions of the measuring device along the respective trajectories (301, 302). After completion of the first trajectory (301), a second trajectory (302) is performed in the opposite direction along the main extension direction of the hoistway, here downwards from the top of the hoistway towards its bottom. During the second trajectory, each landmark is mapped again. In this example, landmarks are mapped during the second trajectory based on two image frames, and this mapping information may be used to fine-tune both the positioning and the mapping. This repetition of the positioning and mapping allows further fine tuning, so that the final positioning and 3D point cloud can be determined with sub-millimeter accuracy. The numbers of image frames used for positioning and mapping during each travel are intended to be exemplary and not limiting.
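The statistical benefit of observing each landmark again during the second travel can be sketched with a toy averaging example; the landmark height, noise level, and frame counts below are hypothetical and merely mirror the frame counts in the example above.

```python
import random

# Landmark sightings modeled as noisy height measurements; values hypothetical.
random.seed(42)
true_z = 7.5                   # "true" landmark height in metres
sigma = 0.004                  # per-sighting noise, 4 mm standard deviation

def sightings(n):
    return [random.gauss(true_z, sigma) for _ in range(n)]

first_travel = sightings(3)    # three frames on the way up
second_travel = sightings(2)   # two more frames on the way down

est_one_pass = sum(first_travel) / 3
est_two_pass = sum(first_travel + second_travel) / 5

# The expected error falls from sigma/sqrt(3) to sigma/sqrt(5) (about
# 1.8 mm here), approaching sub-millimetre as more frames accumulate.
```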
Fig. 4 shows the main steps of a 3D model building process according to an embodiment of the invention.
As the measuring device (20) travels through the hoistway along its trajectory from one end of the normal operating range of the car (15) to the opposite end, it uses both its cameras (21, 22) and the inertial measurement unit IMU (25) to obtain data that is preferably processed in real time. As is known in the art, real-time processing allows for normal data transmission and/or processing delays.
The first camera device (21) preferably has a wide field of view for visual odometry. For example, the first camera should have a field of view of at least 180°, but it may have a field of view of up to 360°. Thus, when the first camera device (21) faces upwards in a vertical hoistway, each image frame it obtains preferably covers at least the entire space within the hoistway (10) above the current position of the first camera device (21); when it faces downwards, each image frame covers at least the entire visible space within the hoistway (10) below its current position. As is known in the art, visual odometry refers to the process of determining the position and orientation of a device by analyzing the associated camera images. To limit the processing power required for processing the image frames, the first camera device (21) may have a relatively low resolution, so that the number of pixels in each image frame is not large. The first camera device (21) obtains first image data, preferably in the form of a plurality of image frames. The first camera device (21) used in an exemplary, non-limiting prototype of an embodiment has a resolution of 848×800 pixels and obtains 30 image frames per second. The skilled person will understand that "low resolution" is a relative term, and that as technology advances, the absolute value of low resolution will eventually increase.
An inertial measurement unit, IMU (25), obtains inertial measurements for estimating movement of the measurement device while the first image data is obtained with the first camera device (21). The IMU determines the position of the measurement device within the hoistway based on signals received from its motion sensors. Inertial odometry refers to estimating the change in position over time using data obtained from inertial sensors. The inertial measurements may comprise, for example, measurements of acceleration along at least the main extension direction (z-axis), preferably along all three Cartesian axes (x, y, z), and angular measurements about at least one axis, preferably about the same three axes. Preferably, the inertial data is obtained at a relatively high frequency. The term inertial data may refer to unprocessed data received from the inertial sensors, or to any suitable form of preprocessed or processed data. For example, the inertial data may comprise an array of combined acceleration and angular rate data. The IMU preferably uses a high sampling frequency. In the exemplary, non-limiting prototype embodiment described above, a sampling frequency of 62.5 Hz was used for the accelerometer and a sampling rate of 200 Hz for the gyroscope. The sampling rates naturally depend on the type of accelerometer and gyroscope used and on characteristics such as the travel speed of the measurement device within the hoistway. The inertial information is preferably integrated over time to achieve a more accurate localization when visual cues are absent.
The first image data obtained with the first camera device (21) and the inertial data obtained by the IMU (25) are used as input data for a processing step, called coarse positioning, performed by a first simultaneous localization and mapping (SLAM) module (210). The coarse positioning runs continuously in real time during travel. The coarse positioning integrates visual odometry information, obtained from the image data of the first camera device (21), and inertial odometry information, based on the inertial data obtained by the IMU (25). Integrating these two sources of information over time improves the accuracy of the coarse localization, for example when no visual cues are present in some of the image frames. The first SLAM module (210) is preferably implemented as embedded software executed by a processing device comprised in the hardware of the measurement system. The main purpose and output of the first SLAM module is the localization of the measuring device, in other words defining the position of the measuring device over time. Since the measuring device is attached to the elevator car, localizing the measuring device also localizes the elevator car: it suffices to determine the relative position of the measuring device and the elevator car and to add a predefined offset, based on that relative position, to the coordinates. Similar coordinate corrections for determining the trajectory of any individual module or sensor device of the measurement system may of course be performed if and when required.
For the coarse positioning, the information received in the two input data streams (i.e., the visual odometry information and the inertial odometry information) may be weighted according to confidence estimates made for the two input data streams. As a result, the process produces a localization with a first level of accuracy. Preferably, the first localization defines the position of the measuring device at each obtained image frame. In visual odometry, triangulation is used to define the location of the measurement device based on the observed landmarks. Likewise, triangulation is used to determine the locations of landmarks based on the location of the measurement device. Although the first SLAM module would be capable of both localization and mapping, preferably only the localization is used as output from the first SLAM module (210), because merging two different maps would be unnecessarily complex.
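One standard way to realize such confidence weighting is inverse-variance weighting, where the stream with the smaller estimated variance receives the larger weight. The scalar sketch below is a hypothetical illustration of the principle, not the patented weighting scheme, and all values are invented.

```python
def fuse(visual_pos, visual_var, inertial_pos, inertial_var):
    """Inverse-variance fusion of two scalar position estimates.

    The stream with the smaller variance (higher confidence) receives
    the larger weight; the fused variance is smaller than either input.
    """
    w_v = 1.0 / visual_var
    w_i = 1.0 / inertial_var
    fused = (w_v * visual_pos + w_i * inertial_pos) / (w_v + w_i)
    fused_var = 1.0 / (w_v + w_i)
    return fused, fused_var

# Visual odometry is confident (good landmarks in view); the inertial
# estimate has drifted and carries a larger variance.
pos, var = fuse(visual_pos=12.30, visual_var=0.01,
                inertial_pos=12.80, inertial_var=0.25)
# pos lies much closer to the confident visual estimate
```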
Simultaneously with the acquisition of the first image data by the first camera device (21) and the acquisition of the inertial data by the IMU (25), the second camera device (22) is used to obtain second image data in the form of a plurality of image frames. The second camera device (22) preferably has a high resolution. The absolute values of high and low resolution depend on the available technology and processing power; as technology advances, the absolute value of low resolution will eventually increase. The high resolution of the second camera device (22) therefore mainly means a resolution higher than the low resolution of the first camera device (21). In the exemplary, non-limiting prototype embodiment described above, an Intel® RealSense™ D455 depth camera was used as the second camera device, providing up to 1280×720 active stereoscopic depth resolution and up to 1280×800 RGB resolution. The sampling rate of the exemplary second camera device may be adjusted up to 90 frames per second for both stereoscopic and RGB images. The optimal value of the sampling rate is a design parameter.
The second camera device (22) may have a more limited field of view than the first camera device (21). For example, the second camera device used in the prototype described above has a diagonal field of view exceeding 90°, whereas the first camera device used in the prototype is a fish-eye type camera with a significantly wider field of view of 173°, in other words almost 180°. Thus, the second camera device (22) provides image data representing a more accurate image of a smaller field of view, which enables more accurate visual odometry over that smaller field of view. The second camera device (22) also preferably has a long focal length, such that objects in the immediate vicinity of the second camera device (22) are not sharply reproduced in the image frames. For example, the second camera device described above has a focal range adjustable between 0.4 meters and more than 10 meters, varying with the lighting conditions. This helps reject residual objects in the frames. For example, a balustrade that is part of the elevator car is not an object of interest for the elevator hoistway and should not be included in the 3D model of the elevator hoistway, even though it can be seen in every single image frame. Preferably, such residual objects are omitted from the visual odometry.
The second camera device (22) may have range sensing capabilities. In other words, the second camera device is capable of capturing a three-dimensional structure of the world from the viewpoint of the second camera device.
The second image data obtained using the second camera device (22) and the output data of the coarse positioning are used as input data for a process called fine positioning and mapping, performed by the second SLAM module (220). The fine positioning and mapping process runs continuously in real time during travel. The second SLAM module (220) integrates over time further visual odometry information, obtained based on the second image data from the second camera device (22), with the trajectory received from the coarse positioning output data. Integrating these two sources of information over time further improves the accuracy of the positioning. The second SLAM module (220) is preferably implemented as embedded software executed by a processing device comprised in the hardware of the measurement system. Thus, the fine positioning and mapping yields a second level of positioning accuracy that is more accurate than the first level. The level of positioning may also be referred to as positioning accuracy. Preferably, the fine positioning and mapping produces mapping data defining the position of the measurement device at the time of each frame (in other words, the trajectory of the measurement device over time) and the visual cues (landmarks) shown in the image frames. Preferably, the mapping data obtained from the second positioning and mapping is a point cloud, i.e., a collection of sample points from the surfaces of the mapped shapes. In some embodiments, the second camera device (22) may comprise an IMU serving as the IMU (25). Since both camera devices are part of the measurement device, and are preferably coupled to it such that their positions are fixed relative to each other, the inertial data may be obtained from an IMU (25) of either camera device, or the measurement device may be provided with an IMU (25) that is not included in either camera device.
After the second SLAM module (220) completes the data processing, the obtained point cloud is processed by a geometry processing module (230), the geometry processing module (230) determining a geometry including geometric elements such as edges and planes based on the point cloud. Geometry processing (also referred to as mesh processing) refers to reconstructing the point cloud received from the fine positioning and mapping module into a complex 3D structure represented as a mesh using concepts from applied mathematics, computer science, and engineering.
Finally, the mesh is built into a three-dimensional model in a 3D model building module (240), and the 3D model building module (240) may build a new 3D model or reconstruct or update an existing 3D model. As known in the art, a 3D model is typically implemented as a polygonal mesh, which is a collection of polygons (also called planes or planar segments), connected at their vertices and edges, for defining the shape of a polyhedral object in 3D.
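The polygonal-mesh representation described above — polygons connected at shared vertices and edges — can be illustrated with a minimal vertex/face structure. This is an illustrative sketch only; the geometry shown (a single wall panel split into two triangles) is hypothetical.

```python
import numpy as np

# Vertex array (N x 3 coordinates) and face array (triangles as vertex
# indices): one quad wall panel split into two triangles sharing an edge.
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 0, 2], [0, 0, 2]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])

def face_normal(v, f):
    """Unit normal of one triangular face, from the cross product of
    two of its edge vectors."""
    a, b, c = v[f]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

n0 = face_normal(vertices, faces[0])
```

Because both triangles lie in the same plane (y = 0), their normals agree, which is what a plane-matching step can exploit when merging faces.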
The accuracy of the point cloud, map, and 3D model may be further improved by repeating the entire positioning and mapping process while traveling through the hoistway again in the opposite direction, to fine-tune the positioning and/or the point cloud. In an exemplary embodiment, during the first travel each landmark is preferably tracked over three consecutive frames in which it appears, and during the second, subsequent travel each landmark is preferably tracked over two consecutive frames. Conveniently, the mapping process may be performed during a single bi-directional travel sequence of the elevator car, up and down or down and up, between the highest and lowest positions within the hoistway.
In the following figures, exemplary embodiments of a first SLAM module (210), a second SLAM module (220), and a geometry processing module (230) will be provided. As described herein, the sub-modules that perform the disclosed functions of these modules are preferably implemented as software, hardware, firmware, or a combination thereof.
Fig. 5 illustrates an exemplary embodiment of the coarse positioning process performed by the first SLAM module (210). The first image data received from the first camera device (21) and the positioning data received from the IMU (25) are processed by the first SLAM module, which comprises a Kalman filter (211) for filtering the image data and an orientation filter (212) for filtering the orientation data, such as a Madgwick filter, a Mahony filter, or an equivalent. The first SLAM module (210) performs fusion of the first image data and the IMU data by means of at least the Kalman filter (211) and the orientation filter (212). As a result of the coarse positioning, the coarse position of the measurement device as a function of time is obtained and forwarded as input for further processing, as indicated by connection "A".
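As a rough sketch of how a Kalman filter can fuse IMU-based prediction with camera-based position fixes along the hoistway axis, a simplified one-dimensional filter is shown below. This is an illustrative sketch under assumed noise parameters, not the actual filter of the first SLAM module (210).

```python
import numpy as np

def kalman_1d(z_vis, accel, dt, p0=0.0, v0=0.0):
    """Minimal 1-D Kalman filter along the hoistway axis: predict the
    state from IMU acceleration, correct with visual position fixes."""
    x = np.array([p0, v0])            # state: position, velocity
    P = np.eye(2)                     # state covariance (assumed)
    F = np.array([[1, dt], [0, 1]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])   # acceleration input model
    H = np.array([[1.0, 0.0]])        # only position is observed
    Q = np.eye(2) * 1e-4              # process noise (assumed)
    R = np.array([[0.01]])            # visual measurement noise (assumed)
    for z, a in zip(z_vis, accel):
        x = F @ x + B * a                      # predict from IMU
        P = F @ P @ F.T + Q
        y = z - H @ x                          # innovation from camera
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x

# Three frames of a car rising at roughly 1 m/s, sampled every 0.1 s:
state = kalman_1d(z_vis=[0.1, 0.2, 0.3], accel=[0.0, 0.0, 0.0], dt=0.1)
```

The returned state tracks the visual fixes while the IMU term carries the estimate between frames, which is the essence of the fusion performed in the module.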
FIG. 6 illustrates an exemplary embodiment of fine positioning and mapping performed by the second SLAM module (220). Basically, the second SLAM module (220) combines information obtained from the second image data received from the second camera device (22) and information received from the first SLAM module (210).
Intrinsic parameters of the second camera device (22) are estimated by an intrinsic parameter estimation sub-module (301). The intrinsic parameters of a camera device represent the optical center and focal length of the camera. The optical center (image center) refers to the intersection of the optical axis of the lens with the sensing plane of the camera. World points are transformed into camera coordinates using extrinsic parameters that account for rotation and translation; the camera coordinates are then mapped onto the image plane using the intrinsic parameters.
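The extrinsic-then-intrinsic mapping described above follows the standard pinhole camera model, which can be sketched as follows. The focal lengths and optical center below are illustrative values, not the actual calibration of the second camera device.

```python
import numpy as np

def project(points_w, R, t, fx, fy, cx, cy):
    """Pinhole model: extrinsics (R, t) move world points into camera
    coordinates; intrinsics (fx, fy, cx, cy) map them onto the image plane."""
    pts_cam = (R @ points_w.T).T + t           # world -> camera coordinates
    u = fx * pts_cam[:, 0] / pts_cam[:, 2] + cx
    v = fy * pts_cam[:, 1] / pts_cam[:, 2] + cy
    return np.stack([u, v], axis=1)

# A point 2 m straight ahead of the camera projects to the optical center:
uv = project(np.array([[0.0, 0.0, 2.0]]), np.eye(3), np.zeros(3),
             fx=600, fy=600, cx=640, cy=400)
```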
Fixed features are removed from the second image data by a fixed feature removal sub-module (302). In this context, a fixed feature refers to any feature or object shown at a fixed position in each frame of the obtained second image data. A fixed feature may be, for example, an object moving with the elevator car, so that its position relative to the measurement device attached to the elevator car, and thus also relative to the second camera device, remains fixed. Such fixed features may therefore appear in all image frames obtained by the second camera device, but they are not relevant as elements of the elevator hoistway and should be omitted in the fine positioning and mapping module. An example of a fixed feature in an elevator is a railing on the roof of the elevator car.
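One simple way to detect such fixed features — an illustrative sketch, not the disclosed sub-module — is to drop tracked features whose pixel coordinates barely change across frames, since an object moving with the car stays put in the image while true hoistway features slide through it.

```python
import numpy as np

def remove_fixed_features(tracks, tol=1.0):
    """tracks: array of shape (n_features, n_frames, 2) holding the pixel
    coordinates of each tracked feature in each frame. Features whose
    pixel position hardly moves between frames (e.g. a car-roof railing
    travelling with the camera) are classified as fixed and dropped."""
    spread = tracks.max(axis=1) - tracks.min(axis=1)  # per-feature motion
    moving = (spread > tol).any(axis=1)
    return tracks[moving]

tracks = np.array([
    [[100, 50], [100, 50], [100, 50]],   # railing: never moves in-frame
    [[200, 10], [200, 40], [200, 70]],   # hoistway wall point: slides down
], dtype=float)
kept = remove_fixed_features(tracks)
```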
After the fixed features are eliminated, a plurality of points are extracted based on the second image data by a point extraction sub-module (303), thereby obtaining a point cloud in which all points have positions (coordinates) in a space defined based on the second image data.
The second SLAM module (220) receives as input the output of the first SLAM module (210), shown as input "A". The results of the first SLAM module (210) are further processed by a second Kalman filter (320). Optionally, a motion scenario provided by a motion scenario sub-module (319) may be used as further input data for the second Kalman filter (320). A motion scenario refers to pre-stored information about the expected motion pattern of the elevator car, and thus also about the expected motion pattern of the measurement device. When the measurement device is attached to the elevator car, the expected motion scenario can be derived from the motion scenario of the elevator car. The expected acceleration, travel speed, and deceleration of the elevator car during travel are carefully designed and controlled, and are therefore generally well known; this a priori information can be used to improve the speed estimation based on IMU measurements, thereby further improving the accuracy of the positioning.
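The motion-scenario prior can be illustrated as a trapezoidal speed profile blended with the IMU-based estimate. The acceleration, rated speed, travel time, and blending weight below are assumed values for illustration, not figures from the patent.

```python
def expected_speed(t, a=1.0, v_max=1.6, t_total=10.0):
    """Trapezoidal elevator-car profile: accelerate at `a`, cruise at
    `v_max`, decelerate symmetrically; serves as a prior on car speed."""
    t_ramp = v_max / a
    if t < t_ramp:
        return a * t
    if t > t_total - t_ramp:
        return max(0.0, a * (t_total - t))
    return v_max

def fuse_with_prior(v_imu, t, weight=0.5):
    """Blend a drifting IMU speed estimate with the known profile."""
    return weight * v_imu + (1 - weight) * expected_speed(t)

v = fuse_with_prior(v_imu=1.9, t=5.0)  # mid-travel: prior pulls toward 1.6
```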
The second SLAM module (220) preferably performs a processing loop that improves the positioning of the measurement device to achieve a desired accuracy, referred to herein as "fine positioning". The loop includes a point tracking sub-module (321) configured to track points in the second image data. Tracking refers to following the motion trajectory of a selected point (also referred to as a landmark) shown in the images as the landmark moves between image frames. These landmarks may be any distinguishable points, such as corners, intersections, wall defects, and the like. The second SLAM module (220) attempts to track each landmark over as many image frames as possible. From the trajectories of the landmarks, the system may infer the trajectory of the measurement device and the point cloud generated based on the mapping. The point tracking enables a pose map to be defined by a pose map sub-module (325). The term pose map refers to defining the most likely position and orientation of the measurement device based on observations made from the image frames obtained by the second camera device. Over time, the probabilities of position and orientation are re-evaluated and refined to increase the accuracy of the positioning. A loop detection sub-module (323) detects landmark overlaps that may occur in different image frames, to resolve any ambiguity in the pose map. The pose map is used as an additional input to the Kalman filter (320) for further improving the positioning of the measurement device relative to the point cloud. Thus, the fine positioning can achieve even sub-millimeter accuracy for the points and for the position of the measurement device at any time. The position of the measurement device is preferably defined in the world coordinate system, in other words with reference to the earth, whereas the positions of points in the point cloud may be defined with reference to the measurement device.
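Recovering a landmark's position from its observations in two frames, as the measurement device moves along the hoistway, amounts to intersecting two observation rays. A least-squares midpoint sketch is shown below; the geometry is hypothetical and this is not necessarily the module's actual triangulation algorithm.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint triangulation of a landmark from two observation rays,
    each given as an origin `o` and a unit direction `d`."""
    # Solve s*d1 - t*d2 = o2 - o1 for the ray parameters that minimise
    # the distance between the two rays, then take the midpoint.
    A = np.stack([d1, -d2], axis=1)               # 3 x 2 system
    s, t = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# The camera at two heights along the travel observes the same wall point:
o1, o2 = np.array([0., 0., 0.]), np.array([0., 0., 2.])
p_true = np.array([1., 0., 5.])
d1 = (p_true - o1) / np.linalg.norm(p_true - o1)
d2 = (p_true - o2) / np.linalg.norm(p_true - o2)
p = triangulate(o1, d1, o2, d2)
```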
The second SLAM module (220) outputs a point cloud (connection point "B") and provides fine positioning of the measurement device (connection point "C") as input to the geometry processing module (230).
FIG. 7 illustrates the geometry processing steps performed by an exemplary geometry processing module (230), which automatically generates a geometry by transforming the point cloud into a mesh, based on the point cloud ("B") and the fine positioning ("C") received from the second SLAM module (220). As known in the art, meshes can be generated from point clouds using different methods and algorithms. In the exemplary embodiment, planes (also referred to as plane segments) are extracted from the point cloud by a plane extraction sub-module (401), and edges are extracted by an edge extraction sub-module (402). During the extraction of the planes and the associated edges, the reference frame of the point cloud is preferably changed from the reference frame of the measurement device to the world coordinate system, if this has not been performed before. The planes are matched by a plane matching sub-module (403), which matches the plane segments to each image frame, and each image frame is first processed separately by a per-frame 3D mesh sub-module (404) to generate a mesh. A multi-frame 3D mesh sub-module (405) then combines the meshes from multiple frames into a single multi-frame 3D mesh. The mesh generated by the geometry processing module (230) is then used as input to the 3D model building module (240), which builds the desired 3D model of the hoistway based on the plane segments included in the multi-frame 3D mesh. Various methods for creating a 3D mesh based on a point cloud are known in the art and continue to be developed, and any alternative method known in the art may be applied instead of the exemplary embodiment given above.
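Plane extraction from a point cloud can be performed, for example, with a RANSAC-style search: repeatedly fit a plane to three random points and keep the plane supported by the most inliers. The sketch below, on synthetic data, illustrates the idea; it is one possible technique, not necessarily the sub-module's actual algorithm.

```python
import numpy as np

def extract_plane(points, n_iters=200, tol=0.01, rng=np.random.default_rng(0)):
    """RANSAC plane extraction: return a boolean inlier mask for the
    best-supported plane found among planes through 3 sampled points."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                       # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - p0) @ n)   # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# A hoistway wall plane (x = 0) plus a few off-plane outlier points:
data_rng = np.random.default_rng(1)
wall = np.column_stack([np.zeros(100),
                        data_rng.uniform(0, 2, 100),
                        data_rng.uniform(0, 3, 100)])
noise = data_rng.uniform(0.5, 1.0, (10, 3))
pts = np.vstack([wall, noise])
mask = extract_plane(pts)
```

The returned mask isolates the wall points, which a subsequent step can fit exactly and convert into a plane segment of the mesh.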
Fig. 8 illustrates an exemplary measurement device (20) according to some embodiments. The first camera device (21) and the second camera device (22) have their fields of view in the main extension direction (z), defined by the respective optical axes of the camera devices. Herein, the positive z-axis refers to the direction toward the headroom of the elevator hoistway, in other words upward. Alternatively, the first and second camera devices may be mounted facing downwards, toward the pit of the elevator hoistway. The field of view (221) of the first camera device is wider than the field of view (222) of the second camera device. As known in the art, a field of view may be defined as an angle. The measurement device comprises an IMU (25), which may be implemented as part of either of the camera devices (21, 22), or which may be a module within the measurement device (20). The measurement device further includes at least one processor (26) configured to process information received from the cameras and the IMU. The measurement device comprises at least one memory (27) capable of storing measurement data and/or data processed by the at least one processor. The measurement device may further comprise a communication module, preferably a wireless communication module (28), which preferably enables bi-directional communication. In order to provide a flexible measurement device that is easy to move to any desired elevator hoistway and easy and light for a technician or operator to install, the measurement device preferably has a small footprint. According to some embodiments, the measurement device (20) comprises a magnetic fixing device (not shown) that enables removable attachment of the measurement device, e.g., at an outer surface of the elevator car, such as on top of the car or under its floor. Instead of a magnetic fixture, other fixtures capable of temporarily and removably attaching the measurement device in the elevator hoistway may be used.
To improve safety in the elevator hoistway environment, the fixture preferably has no loose parts and does not require the use of hand tools.
According to some embodiments, the measurement device (20) is configured to generate the 3D model, and is therefore provided with sufficient memory and processing power to perform all steps of the 3D modeling process. According to other embodiments, the measurement device (20) has more limited memory and/or processing power and is configured to generate and wirelessly transmit raw measurement data, pre-processed measurement data (such as the results of the positioning and mapping process), and/or a pre-processed geometry for further processing by an external computer or computer system, which generates the geometry and/or finalizes the 3D model. By utilizing the wireless communication capabilities of the measurement device, some tasks of the method can be offloaded to an external computer or computer system, which has the advantage that the measurement device itself can be made smaller and lighter and thus easier to handle. On the other hand, including processing power in the measurement device itself facilitates efficient real-time localization and mapping, geometry processing, and/or 3D model generation. Dividing the processing functions of the method steps between the measurement device and an external computer or computer system is a design option and is not limited to the examples given.
It is obvious to a person skilled in the art that as the technology advances, the basic idea of the invention can be implemented in various ways. The invention and its embodiments are thus not limited to the examples described above, but they may vary within the scope of the claims.

Claims (19)

1. A method for modeling an elevator hoistway extending in a main extension direction, wherein the elevator hoistway is measured with a measurement system, the measurement system comprising:
-two camera devices having mutually different image resolutions, wherein the fields of view of the two camera devices face the main extension direction, and
an inertial measurement unit IMU comprising an acceleration sensor and an angular rate sensor,
and wherein the method comprises:
performing a first travel of the measurement system by moving the measurement system in a first direction along the main extension direction,
-during said first travel, performing a first aggregate measurement by simultaneously obtaining first image data comprising a plurality of image frames using a first camera device, obtaining second image data comprising a plurality of image frames using a second camera device, and obtaining positioning data using said IMU;
-performing a first positioning by integrating the first image data and the positioning data, wherein the first positioning has a first level of accuracy;
-performing a second localization and mapping by integrating the second image data and the output from the first localization, wherein the second localization and mapping has a second level of accuracy which is more accurate than the first level of accuracy;
-generating a geometry based on the second positioning and mapping; and
-generating a 3D model of the elevator hoistway based on the generated geometry.
2. The method of claim 1, wherein the first camera device is a stereoscopic camera having a wide field of view and a relatively low resolution, and the second camera device has a narrow field of view and a relatively high resolution, and/or
The second camera device has a large focal length, and/or
The second camera device performs range sensing.
3. The method of claim 1 or 2, wherein integrating the first image data and the positioning data comprises performing a confidence estimation to define relative weights of the first image data and the positioning data, and/or
Wherein the first positioning comprises filtering the first image data by a first kalman filter and filtering the positioning data by an orientation filter.
4. A method according to any of the preceding claims, wherein the method comprises eliminating any residual stationary objects in the second image data to avoid residual stationary objects being included in the map.
5. The method according to any of the preceding claims, wherein the second positioning utilizes pre-stored information about an expected motion scenario of the measurement device to improve the estimation of the motion of the measurement device.
6. The method according to any of the preceding claims, wherein the method comprises:
-after the first travel, performing a second travel of the measurement system by moving the measurement system in a second direction along the main extension direction, the second direction being opposite to the first direction;
-performing a further aggregate measurement, a further first positioning, and a further second positioning and mapping during the second travel; and
-updating the second positioning and mapping obtained during the first travel based on the further second positioning and mapping.
7. The method of any of the preceding claims, wherein the second positioning comprises generating a pose map that determines a position and an orientation of the measurement device in a world reference.
8. The method of any of the preceding claims, wherein the positioning is determined in a world reference, the mapping comprising determining a point cloud with reference to the measurement device, and wherein the geometry is defined in the world reference.
9. The method of any of the preceding claims, wherein the measurement device is removably attached to an elevator car.
10. The method of claim 9, wherein a trajectory of the elevator car is defined based on the second positioning, and a deviation of the trajectory from an optimal trajectory is used to adjust guides of the elevator to correct its trajectory.
11. A measurement system for modeling an elevator hoistway extending in a main extension direction, the measurement system comprising:
-a first camera device having a field of view facing the main extension direction and configured to obtain first image data comprising a plurality of image frames;
-a second camera device having a field of view facing the main extension direction and configured to obtain second image data comprising a plurality of image frames, the first and second camera devices having mutually different image resolutions;
-an inertial measurement unit IMU comprising an acceleration sensor and an angular rate sensor and configured to obtain positioning data;
wherein the measurement system is configured to obtain first image data, second image data, and positioning data simultaneously during a first travel during which the measurement system moves in a first direction along the main extension direction, an
Wherein the measurement system further comprises a computer device or system comprising:
-a first simultaneous localization and mapping module (SLAM) configured to perform a first localization by integrating the first image data and the localization data, wherein the first localization has a first level of accuracy;
-a second SLAM module configured to perform a second localization and mapping by integrating the second image data and the first localization, wherein the second localization and mapping has a second level of accuracy that is more accurate than the first level of accuracy; and
-a geometry processing module configured to generate a geometry based on the second positioning and mapping; and
-a 3D modeling module configured to generate a 3D model of the elevator hoistway based on the generated geometry.
12. The measurement system of claim 11, wherein the first camera device is a stereo camera with a wide field of view and a relatively low resolution, and the second camera device has a narrow field of view and a relatively high resolution, and/or
The second camera device has a large focal length, and/or
The second camera device performs range sensing.
13. The measurement system of claim 11 or 12, wherein the first SLAM module is configured to perform a confidence estimation to define relative weights of the first image data and the positioning data, and/or the first SLAM module comprises a first Kalman filter configured to filter the first image data and an orientation filter configured to filter the positioning data.
14. The measurement system of any of claims 11 to 13, wherein the second SLAM module is configured to eliminate any residual stationary objects in the second image data to avoid residual stationary objects from being included in the map.
15. The measurement system of any one of claims 11 to 14, wherein the second SLAM module is configured to utilize pre-stored information about an expected motion scenario of the measurement device to improve the estimation of the motion of the measurement device.
16. The measurement system of any one of claims 11 to 15, wherein the measurement system is configured to obtain further first image data, further second image data, and further positioning data simultaneously during a second travel during which the measurement system moves in a second direction along the main extension direction, the second direction being opposite to the first direction, and
-the first SLAM module is configured to perform a further first positioning based on the further first image data and the further positioning data, and
the second SLAM module is configured to perform a further second positioning and mapping based on the further first positioning and the further second image data and to integrate the further second positioning and mapping to improve the accuracy of the second positioning and mapping.
17. The measurement system of any one of claims 11 to 16, wherein the second SLAM module is configured to perform the second positioning by generating a pose map that determines a position and an orientation of the measurement device in a world reference.
18. The measurement system of any one of claims 11 to 17, wherein the first SLAM module and the second SLAM module are configured to determine a location in the world reference, wherein the second SLAM module is configured to perform the mapping by determining a point cloud with reference to the measurement device, and wherein the geometry is defined in the world reference.
19. The measurement system of any one of claims 11 to 18, wherein the measurement device is removably attached to an elevator car.
CN202180101594.3A 2021-08-20 2021-08-20 Method and measuring system for three-dimensional modeling of elevator hoistway Pending CN117836233A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/FI2021/050566 WO2023021233A1 (en) 2021-08-20 2021-08-20 A method and a measurement system for three-dimensional modeling of an elevator shaft

Publications (1)

Publication Number Publication Date
CN117836233A true CN117836233A (en) 2024-04-05

Family

ID=77543532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180101594.3A Pending CN117836233A (en) 2021-08-20 2021-08-20 Method and measuring system for three-dimensional modeling of elevator hoistway

Country Status (3)

Country Link
EP (1) EP4387914A1 (en)
CN (1) CN117836233A (en)
WO (1) WO2023021233A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10407275B2 (en) * 2016-06-10 2019-09-10 Otis Elevator Company Detection and control system for elevator operations
US10745242B2 (en) 2016-08-30 2020-08-18 Inventio Ag Method for analysis and measurement system for measuring an elevator shaft of an elevator system
CN109002633B (en) * 2018-08-01 2019-09-03 陈龙雨 Device network modeling method based on separate space
US10922831B2 (en) * 2019-02-20 2021-02-16 Dell Products, L.P. Systems and methods for handling multiple simultaneous localization and mapping (SLAM) sources and algorithms in virtual, augmented, and mixed reality (xR) applications
US10547974B1 (en) * 2019-03-19 2020-01-28 Microsoft Technology Licensing, Llc Relative spatial localization of mobile devices
DE102019111238A1 (en) * 2019-04-30 2020-11-05 Carl Zeiss Ag Method for setting and visualizing parameters for focusing an objective on an object and system for carrying out the method

Also Published As

Publication number Publication date
WO2023021233A1 (en) 2023-02-23
EP4387914A1 (en) 2024-06-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination