CA3131587A1 - 2d and 3d floor plan generation - Google Patents
- Publication number
- CA3131587A1
- Authority
- CA
- Canada
- Prior art keywords
- image
- camera
- module
- images
- modelling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
A floorplan modelling method and system. The floorplan modelling method includes receiving 2D images of each corner of an interior space from a camera, generating a corresponding camera position and camera orientation in a 3D coordinate system in the interior space for each 2D image, generating a depth map for each 2D image to estimate depth for each pixel, generating a corresponding edge map for each 2D image, and generating a 3D point cloud for each 2D image using the corresponding depth map and parameters of the camera. The floorplan modelling method includes transforming the 3D point clouds with the corresponding edge map into a 2D space in the 3D coordinate system of the camera, regularizing the 3D point clouds into 2D boundary lines, and generating a 2D plan of the interior space from the boundary lines.
Description
TECHNICAL FIELD
[0001] Example embodiments relate to modelling floor layouts using two-dimensional images.
BACKGROUND
[0002] Generating Building Information Models (BIM) in two or three dimensions (2D/3D) from indoor views has many uses for real estate websites, indoor robot navigation, and augmented/virtual reality, among other applications. BIM often includes a global layout of an entire floor plan of the space, which typically involves multiple rooms in different arrangements.
The most accurate way to create a floor plan is to manually measure the dimensions of each room and enter all of the measurements into Computer-Aided Design (CAD) software to generate a global layout. However, measuring and compiling such measurements manually is a tedious undertaking, especially if the floor has many rooms. Consequently, such manual methods generally require significant amounts of time to accomplish.
[0003] In order to speed up the process, some known applications use RGB-depth images and/or panorama images to solve this problem. For example, some applications reconstruct an indoor scene in 3D using RGB-D monocular images and estimate the layout using vanishing points and depth features. Another application generates room layout from pictures taken from multiple views and reconstructs them using structure from motion (SfM) techniques and region classification. In another application, layouts are estimated in a cluttered indoor scene by identifying label for a pixel from RGB images, using deep fully convolutional neural networks (FCNN), and refining the layout using geometrical techniques.
[0004] While such methods provide good accuracy, they require special hardware (such as a depth camera) or a particular photo capture mode (such as panorama) in order to be implemented. Accurate use of panorama images also requires the rooms to be clear, so that the captured images have little to no occlusion. Such requirements can be restrictive, thereby generally limiting their widespread adoption.
[0005] Additional difficulties of conventional modelling of interior space systems and methods may be appreciated in view of the Detailed Description, herein below.
SUMMARY
[0006] Example embodiments relate to a modelling system and method for modelling an interior space of a room. The modelling method can use standard 2D RGB images that can be taken with a camera on a smart phone. The 2D RGB images can be extracted from a video taken from the smart phone. The modelling system and modelling method can also be referred to as a floorplan modelling system and floorplan modelling method.
[0007] An example modelling method comprises: receiving two-dimensional (2D) images of corners of an interior space captured by a camera; generating, using a positioning module, a corresponding camera position and camera orientation in a three-dimensional (3D) coordinate system in the interior space for each 2D image; generating a corresponding depth map for each 2D image by using a depth module to estimate depth for each pixel in each 2D image; generating a corresponding edge map for each 2D image by using an edge module to identify whether each pixel in each 2D image is a wall or an edge; generating, using a reconstruction module, a 3D point cloud for each 2D image using the corresponding depth map and a focal length and center coordinates of the camera; transforming, using a transformation module, the 3D point clouds with the corresponding edge map into a 2D space in the 3D coordinate system from a perspective of the camera; regularizing, using a regularization module, the 3D point clouds in the 2D space into boundary lines; and generating a 2D plan of the interior space from the boundary lines.
[0008] In another example embodiment, the transforming comprises: mapping each 3D point cloud with the corresponding edge map to identify boundary pixels and projecting them in the 2D space to generate a partial point cloud for each 3D point cloud; and assembling the partial point clouds in the 3D coordinate system from the perspective of the camera using the corresponding camera positions and camera orientations.
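The assembly step described above amounts to applying each camera's rigid-body transform (rotation plus translation) to its partial point cloud so that all points land in one shared coordinate system. A minimal numpy sketch, assuming poses are given as rotation matrices and translation vectors (the function name is illustrative, not from the embodiment):

```python
import numpy as np

def assemble(partial_clouds, rotations, translations):
    """Bring each partial point cloud (N_i x 3, camera frame) into the
    shared 3D coordinate system using the camera pose for that image.
    rotations: list of 3x3 matrices; translations: list of 3-vectors."""
    world = [pts @ R.T + t for pts, R, t in zip(partial_clouds, rotations, translations)]
    return np.vstack(world)

# Example: one single-point cloud rotated 90 degrees about the vertical (Y) axis
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])
pts = np.array([[1.0, 0.0, 0.0]])
merged = assemble([pts], [R], [np.zeros(3)])
```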
[0009] In another example embodiment, the regularizing comprises: translating each partial point cloud into boundary corner lines using a clustering algorithm; and adjusting the boundary corner lines to be perpendicular boundary lines.
[0010] In another example embodiment, the regularizing further comprises: forming a polygon with the boundary lines; and adjusting the boundary lines such that adjacent lines are collinear.
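One simple way to realize the perpendicular adjustment is to snap each boundary segment's direction to the nearest multiple of 90 degrees about its midpoint, reflecting a Manhattan-world assumption. This is only a sketch of the idea, not the embodiment's exact clustering algorithm:

```python
import math

def snap_angle(theta_deg):
    """Snap a line direction to the nearest multiple of 90 degrees
    (directions are taken modulo 180)."""
    return (round(theta_deg / 90.0) * 90) % 180

def regularize_segments(segments):
    """segments: list of ((x1, y1), (x2, y2)). Each segment is rotated
    about its midpoint so its direction is axis-aligned, keeping its length."""
    out = []
    for (x1, y1), (x2, y2) in segments:
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        length = math.hypot(x2 - x1, y2 - y1)
        theta = snap_angle(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        dx, dy = math.cos(math.radians(theta)), math.sin(math.radians(theta))
        half = length / 2.0
        out.append(((mx - dx * half, my - dy * half),
                    (mx + dx * half, my + dy * half)))
    return out

# A nearly-horizontal wall segment becomes exactly horizontal
fixed = regularize_segments([((0.0, 0.0), (4.0, 0.3))])
```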
[0011] In another example embodiment, the 2D images are RGB monocular images.
[0012] In another example embodiment, the 2D images are 2D images of each corner of the interior space, each 2D image corresponding with one corner of the interior space.
[0013] In another example embodiment, the positioning module comprises ARCore for generating the camera position and camera orientation for each 2D image.
[0014] In another example embodiment, the depth map for each 2D image is generated by an encoder-decoder architecture that extracts image features with a pre-trained DenseNet-169.
[0015] In another example embodiment, the edge map for each 2D image is generated by an encoder-decoder architecture that estimates layout with LayoutNet network.
[0016] In another example embodiment, the edge map for each 2D image is generated presuming a Manhattan world.
[0017] In another example embodiment, the method further includes identifying the focal length and center coordinates of the camera prior to generating the 3D point cloud for each 2D image.
[0018] In another example embodiment, coordinates for each pixel in each 3D point cloud are generated by:

Z = D(u,v) * S
X = (u - Cx) * Z / f
Y = (v - Cy) * Z / f

wherein X, Y are coordinates corresponding to the real world, Z is a depth coordinate, D(u,v) is a depth value corresponding to the (u, v) pixel in the depth map, S is a scaling factor of each corresponding 2D image, f is the focal length of the camera, and Cx, Cy are the center coordinates of the camera.
[0019] In another example embodiment, the method further includes detecting, using an object detecting module, a presence and a door position of a door in one or more of the 2D images; and generating a door symbol in the door position in the 2D plan of the interior space.
[0020] In another example embodiment, the generating of the door symbol in the door position is carried out using the following equations:

RatioD = dist(CBB_I, W_I) / L_WI
dist(CBB_F, W_F) = L_WF * RatioD

wherein CBB_I is a centroid of a bounding box of the door in the corresponding 2D image, dist(CBB_I, W_I) is a distance between CBB_I and the wall W_I, L_WI is a distance between two corners of the walls in the corresponding 2D image, RatioD is the ratio between dist(CBB_I, W_I) and L_WI, L_WF is a distance between the two corners of the walls in the 2D plan of the interior space, and dist(CBB_F, W_F) is a distance between a centroid of the door symbol (CBB_F) and the wall (W_F) in the 2D plan of the interior space.
[0021] In another example embodiment, the interior space is a floor with multiple rooms; wherein the generating of the boundary lines is for the multiple rooms; and wherein the generating of the 2D plan includes generating respective 2D plans of the multiple rooms and arranging the respective 2D plans on the floor.
[0022] In another example embodiment, the method further comprises generating an outer boundary by finding a convex hull for all of the multiple 2D plans.
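Finding the outer boundary as the convex hull of all corner points of the room plans can be done with, for example, Andrew's monotone-chain algorithm; a self-contained sketch (the source does not specify which hull algorithm is used):

```python
def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise
    order for a list of (x, y) corner points gathered from all room plans."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means no left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Corners of two adjoining rectangular rooms; the interior corner is dropped
hull = convex_hull([(0, 0), (4, 0), (4, 3), (0, 3), (4, 1), (7, 1), (7, 3)])
```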
[0023] In another example embodiment, the method further comprises aligning all of the multiple 2D plans with the generated outer boundary.
[0024] In another example embodiment, the method is performed by at least one processor.
[0025] In another example embodiment, the method further comprises outputting the 2D plan on a display or on another device.
[0026] Another example embodiment is a modelling system for modelling an interior space of a room, the system comprising: at least one processor; and memory containing instructions which, when executed by the at least one processor, cause the processor to perform the modelling method of any of the above.
[0027] In another example embodiment, the system further comprises a camera configured to capture the 2D images of the interior space.
[0028] In another example embodiment, the camera is a monocular, RGB camera.
[0029] In another example embodiment, the system further comprises a local processor coupled to the camera; and a local memory containing instructions which, when executed by the local processor, causes the local processor to generate the camera position and camera orientation for each 2D image captured.
[0030] In another example embodiment, the camera, the at least one processor and the memory are part of a smart phone.
[0031] In another example embodiment, the system further comprises a display for displaying the 2D plan.
Date Recue/Date Received 2021-09-22
[0032] Another example embodiment is a non-transitory memory containing instructions which, when executed by at least one processor, cause the at least one processor to perform the modelling method of any of the above.
[0033] Another example embodiment is a computer program product by a machine learning training process, the computer program product comprising instructions stored in a non-transitory computer-readable medium which, when executed by at least one processor, causes the at least one processor to perform the modelling method of any of the above.
BRIEF DESCRIPTION OF THE DRAWINGS
[0034] Reference will now be made, by way of example, to the accompanying drawings which show example embodiments, and in which:
[0035] Figure 1A shows a panoramic image of an example office space;
[0036] Figure 1B shows multiple 2D images of the office space of Figure 1A;
[0037] Figure 2A shows the panoramic image of the office space of Figure 1A with occluded corners highlighted;
[0038] Figure 2B shows corresponding 2D images of the occluded corners highlighted in Figure 2A;
[0039] Figure 3 illustrates a schematic block diagram of an example system for modelling an interior space of a room, in accordance with an example embodiment;
[0040] Figures 4A and 4B depict a step-by-step visual illustration of the use of the modelling system shown in Figure 3;
[0041] Figure 5 is a schematic illustration of how a smart phone from the system of Figure 3 collects data;
[0042] Figure 6 is an illustration of how the depth maps shown in Figure 4A are generated;
[0043] Figure 7 shows example images with corresponding depth estimation models/maps;
[0044] Figure 8 illustrates example network architecture of how edge maps shown in Figure 4A are generated with its respective inputs and outputs;
[0045] Figure 9 shows example images with corresponding edge estimation models/maps;
[0046] Figures 10A, 10B, and 10C depict an example partial indoor scene at different stages of 2D boundary line generation;
[0047] Figures 11A, 11B, and 11C depict the translation, assembling, and adjustment of boundary corner lines;
[0048] Figures 12A and 12B are illustrations of the intermediate stages of a regularization process;
[0049] Figures 13A, 13B, 13C, and 13D illustrate step by step global regularization of the floor plan of an indoor scene;
[0050] Figure 14 is a schematic representation of the network architecture of YOLO for door detection;
[0051] Figures 15A, 15B, and 15C are illustrations of the performance of the door detection and placement algorithm in a floor plan;
[0052] Figure 16 is a flowchart illustrating the steps of a modelling method for modelling an interior space of a room, in accordance with an example embodiment;
[0053] Figure 17 shows sample illustrations of images from the experimental datasets;
[0054] Figures 18A and 18B are illustrations of sets of captured images and their corresponding estimated layouts for the labs and office datasets;
[0055] Figures 19A, 19B, and 19C are illustrations of Graphical User Interfaces (GUI) for three different layout estimation applications;
[0056] Figures 20A, 20B, and 20C are illustrations of applications during scene capture in low-light environments;
[0057] Figure 21 shows bar graphs providing a comparative analysis of area error across different devices;
[0058] Figure 22 shows bar graphs comparing aspect ratio error across different devices; and
[0059] Figure 23 is a graph comparing power consumption across devices.
[0060] Similar reference numerals may have been used in different figures to denote similar components.
DETAILED DESCRIPTION
[0061] Example embodiments relate to a modelling system and modelling method for generating layouts of rooms and floors from the real world.
[0062] An example of the modelling system and modelling method can be applied to enhance Building Information Models (BIM), making BIM easier to apply, for example, in the fields of extended reality, including augmented and virtual reality applications. Rather than relying on typical data-heavy inputs, the system and method takes in standard 2D images of a space from a camera. These simple inputs are processed using the camera pose information to generate a reasonably accurate layout of the room and floor plan. By requiring far less user interaction and intervention, and less computer processing power than other known modelling systems, generating a room's or floor's layout becomes far simpler and cheaper to achieve. This simplification of the modelling process may allow building layouts to be used in more day-to-day functions. For example, additional augmenting information may readily be added to the generated layout so that it can be used as an interactive virtual 2D map.
[0063] As noted above, most existing systems and methods for 3D reconstruction of a room and floor plan typically require specific hardware such as a depth camera, a Kinect camera, or LiDAR. Although some methods exist for layout generation from monocular images, they rely on occlusion-free panoramic photos, which are very difficult to take in office or home spaces that are in use. An example of a typical panoramic image of a large office space is shown in Figure 1A, while multiple 2D images of the same space are shown in Figure 1B. It can be seen that the panoramic image of a vast space may have several occlusions, which may make 3D reconstruction therefrom difficult and inaccurate. For example, as seen in Figure 2A, most of the corners and other important edges of the office space are occluded in the panorama view because of the furniture and limitations of panorama capture. Boxes 1, 2, and 3 in Figure 2A highlight the corners of the office that are occluded. As such, it is often impossible to capture the whole room in a single panoramic image without losing important information.
[0064] However, Figure 2B shows corresponding 2D images of boxes 1, 2, and 3, which have more information about their respective corners than their panoramic counterpart. In that regard, it is understood that images of important corners and edges tend to be easier to capture with multiple 2D images. Hence, generating the room's layout from separate 2D images provides more accurate results than generation from a panoramic image, since separate 2D images encapsulate more information.
[0065] Figure 3 illustrates a schematic block diagram of an example modelling system 100 for modelling an interior space of a room using multiple 2D images, in accordance with an example embodiment. As shown in Figure 3, the modelling system 100 may include a smart phone 102, at least one processor 104, and one or more display devices 106. In some examples, the at least one processor 104 is on a separate device than the smart phone 102, such as a server or cloud server. In other examples, the at least one processor 104 is resident on the smart phone 102. Figures 4A and 4B depict a step-by-step visual illustration of the use of the modelling system 100 shown in Figure 3.
[0066] As best seen in Figures 3 and 5, the depicted smart phone 102 comprises a camera 108 and a local processor 110 coupled to the camera 108 with a local memory 112. The camera 108 may be a typical monocular camera configured to capture standard 2D RGB images. Example 2D RGB images are shown in Figure 4A, under column (a). Notably, the 2D images each include at least one corner of its respective room. The local memory 112 comprises a positioning module 114, being instructions which, when executed by the local processor 110, cause the local processor 110 to generate the camera position and camera orientation (or pose) for each 2D image captured. To that end, the smart phone 102 is further shown having an accelerometer 116, a magnetometer 118, and a gyroscope 120, which are used by the local processor 110 to track the camera motion and determine the camera pose when capturing the 2D images.
[0067] In particular, positioning module 114 may involve ARCore, a mobile augmented reality library for pose estimation, which is readily available on most Android devices or smart phones. ARCore is a library by Google which uses the phone's inertial measurement unit (IMU) sensors (i.e., accelerometer 116, magnetometer 118, and gyroscope 120), along with image feature points, to track the pose of the camera 108 utilizing a Simultaneous Localization and Mapping (SLAM) algorithm. ARCore can perform pose estimation in real-time. In that regard, to track the motion of the camera 108, an Android application (i.e., the positioning module 114) using ARCore was developed in the Unity3D environment for capturing RGB images along with the real-world location of the smart phone 102. In the present case, the positioning module 114 generates or determines the position and orientation of the camera 108 in a three-dimensional coordinate system in the interior space for each 2D image. Figure 5 is a schematic illustration of how smart phone 102 acquires images and collects data using ARCore in positioning module 114.
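A pose from such a tracker is typically a world-frame position plus an orientation quaternion; composing the two into a single 4 x 4 camera-to-world transform is standard rigid-body math (this generic sketch is not tied to ARCore's actual API or conventions):

```python
import numpy as np

def pose_matrix(position, quaternion):
    """Build a 4x4 camera-to-world transform from a position (x, y, z)
    and a unit quaternion (w, x, y, z). Generic rigid-body math, not a
    specific tracking library's convention."""
    w, x, y, z = quaternion
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = position
    return T

# Identity orientation, camera 1.5 m above the origin
T = pose_matrix((0.0, 1.5, 0.0), (1.0, 0.0, 0.0, 0.0))
```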
[0068] At least one processor 104 comprises, or is coupled to, a memory 122. Memory 122 contains instructions or a number of modules for execution by the at least one processor 104. In particular, memory 122 comprises a depth module 124, an edge module 126, a reconstruction module 128, a transformation module 130, a regularization module 132, and an object detection module 134.
[0069] The depth module 124 is configured to estimate depth for each pixel in each captured 2D image (from the camera 108) in order to generate a depth map for each 2D image. Traditionally, a device with a built-in depth camera, such as Google Tango or Microsoft Kinect, may be used for capturing point clouds directly from the scene. However, in the example modelling system 100, the input is one or more RGB images taken with a smart phone camera 108. Thus, depth perception is essential for estimating the correct dimensions of the targeted floor plan. For depth perception from RGB images, multiple methods are known to exploit feature matching techniques in multiple images of the same scene and to reconstruct a 3D model from that. However, such schemes typically require a trained user to capture the data to ensure correspondence across images.
[0070] Hence, the depth module 124 is configured to estimate depth from a single image using a pre-trained machine learning model. Depth for RGB images can be learned in a supervised manner from ground truth depth maps, and a trained neural network can be used for estimating depth for new images. In the present embodiment, the depth module 124 is a modification of the depth estimation process set out in Alhashim, I., Wonka, P.: High quality monocular depth estimation via transfer learning, arXiv preprint arXiv:1812.11941 (2018), incorporated herein by reference. In that regard, the depth module 124 comprises an encoder-decoder architecture for extracting image features with DenseNet-169, which results in high-resolution depth maps. The encoder used in the example modelling system 100 is a pre-trained truncated DenseNet-169. The decoder consists of basic blocks of convolutional layers, concatenated with successive 2x bilinear upsampling blocks, and two 3 x 3 convolutional layers, where the output filter is half the size of the input. Figure 6 depicts an illustration of how the depth map is computed by the depth module 124 from a given image. Figure 7 shows the results of the depth estimation model on example images. Example depth maps corresponding to the 2D RGB images from column (a) in Figure 4A are shown under column (b) of Figure 4A.
[0071] The edge module 126 is configured to identify whether each pixel in each 2D image (from the camera 108) is a wall or an edge, in order to generate an edge map for each 2D image.
This classification or segmentation helps in the identification of the layout of the interior space of the room. In the present embodiment, the edge module 126 is a modification of the technique proposed in Zou, C., Colburn, A., Shan, Q., Hoiem, D.: Layoutnet: Reconstructing the 3d room layout from a single rgb image, CVPR, pp. 2051-2059 (2018), incorporated herein by reference. In that regard, the edge module 126 involves an encoder-decoder architecture that estimates a 2D image's edges/boundaries with the LayoutNet network to generate an edge map for each 2D image. Figure 8 illustrates an example network architecture of LayoutNet and its inputs and respective outputs.
[0072] The encoder consists of seven convolutional layers, each with a filter size of 3 x 3 and a ReLU (Rectified Linear Unit) activation function, and a max-pooling layer follows each convolutional layer.
The decoder structure contains two branches, one for predicting boundary edge maps and the other for corner map prediction. Both decoders have a similar architecture, containing seven layers of nearest-neighbor up-sampling operations, each followed by a convolution layer with a kernel size of 3 x 3, with the final layer being a Sigmoid layer. The corner map predictor decoder additionally has skip connections from the top branch for each convolution layer. Since the FOV (field of view) of the images is relatively small, an additional predictor for predicting room type is added to improve corner prediction performance.
[0073] The example modelling system 100 presumes Manhattan or weak Manhattan scenes (i.e. scenes built with walls and edges generally or dominantly aligned with, or parallel to, the axes of a 3D Cartesian grid). Thus, the edge module 126 also takes Manhattan line segments as an additional input alongside the RGB image of the scene, which provides additional input features and improves the network's performance. Figure 9 shows predicted edge maps for example input 2D images.
Additional example edge maps that correspond to the 2D RGB images from column (a) in Figure 4A are shown under column (c) in Figure 4A. All the annotations for performance evaluation are done using the annotation tool proposed in Dutta, A., Zisserman, A.: The VIA annotation software for images, audio and video, Proceedings of the 27th ACM International Conference on Multimedia, MM '19, ACM, New York, NY, USA (2019), DOI 10.1145/3343031.3350535, URL: https://doi.org/10.1145/3343031.3350535, incorporated herein by reference.
It is worth noting that the example 2D images have occluded corners and wall edges. However, the example modelling system 100 does not require manual addition of corners in order for the corners to be identified in the corresponding edge maps.
[0074] The reconstruction module 128 is coupled to receive data from the depth module 124 and from the smart phone 102. The reconstruction module 128 is configured to generate a 3D point cloud for each 2D image using the corresponding depth map from the depth module 124 and using intrinsic parameters of the camera 108, i.e. a focal length and center coordinates of the camera 108. The reconstruction module 128 may receive the focal length and center coordinates of the camera 108 for each 2D image from the local processor 110 of the smart phone 102.
[0075] In cases where depth cameras or specialized hardware are used to capture the images and/or point clouds, this 3D reconstruction would not be required.
However, as the present modelling system uses 2D RGB images as inputs, 3D reconstruction of each scene image is required. To that end, every pixel of the RGB image is mapped to its corresponding depth map pixel (generated by the depth module 124) to create a 3D point cloud for each 2D image. In the present embodiment, each coordinate or pixel in each 3D point cloud is generated according to the equations:
Z = D(u, v) / S
X = (u - Cx) * Z / f
Y = (v - Cy) * Z / f
[0076] X, Y are coordinates corresponding to the real world, Z is the depth coordinate, D(u, v) is the depth value corresponding to the (u, v) pixel in the depth map, and S is the scaling factor of the corresponding scene, which is obtained empirically by comparing dimensions of real-world objects and point clouds. As noted above, f, Cx, and Cy are the intrinsic parameters of the camera, generated by calibration: f is the focal length of the camera 108, and Cx, Cy are the center coordinates of the camera 108. Example 3D point clouds/reconstructions corresponding to the 2D RGB images from column (a) in Figure 4A are shown under column (d) of Figure 4A. The red triangle markers show the pose (camera orientation) of the camera 108 while capturing the 2D RGB image of the scene.
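The back-projection defined by these equations can be sketched as follows. This is a minimal illustration in Python with NumPy, not the system's implementation; the depth map, focal length, and center coordinates are hypothetical placeholder values:

```python
import numpy as np

def backproject(depth, f, cx, cy, scale=1.0):
    """Map each (u, v) depth-map pixel to a 3D point with the pinhole model:
    Z = D(u, v) / S, X = (u - Cx) * Z / f, Y = (v - Cy) * Z / f."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    z = depth / scale
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)  # one 3D point per pixel

# hypothetical 4x4 depth map (constant depth of 2.0 units) and intrinsics
depth = np.full((4, 4), 2.0)
pts = backproject(depth, f=500.0, cx=2.0, cy=2.0)
```

Note that the pixel on the principal axis (u = Cx, v = Cy) lands at X = Y = 0, directly in front of the camera, as the equations require.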
[0077] The transformation module 130 is coupled to receive data from the edge module 126, the reconstruction module 128, and the positioning module 114. The transformation module 130 is configured to transform the inputted 3D point clouds (from the reconstruction module 128) with their corresponding edge maps (from the edge module 126) into a 2D space in a 3D coordinate system (i.e. in the interior space of the real-world room) from a perspective of the camera 108 based on the pose of the camera 108 (from the positioning module 114).
[0078] In other words, the transformation module 130 is configured to take the generated 3D point clouds from the reconstruction module 128 and map them with the edge maps from the edge module 126 to identify the boundary pixels in the 3D point cloud, then project them into a 2D space to generate a partial point cloud for each 3D point cloud. As noted above, the edge maps are used to classify the pixels in the 3D point clouds as either wall or edge pixels, in order to identify the room's geometry. The resulting partial point clouds are scattered 3D points of the layout; see column (e) in Figures 4A and 4B for examples. The transformation module 130 is further configured to assemble the partial point clouds into the 3D coordinate system from the perspective of the camera 108 using the corresponding camera positions and camera orientations (pose information) from the positioning module 114. Example assembled partial point clouds, assembled according to the camera 108's pose information, are shown under panel (f) of Figure 4B. The positioning module 114 (such as ARCore) is configured to extract the 3D position and trajectory of the camera 108, which is depicted by dotted arrows, as shown in panel (f) of Figure 4B. The positioning module 114 returns rotational and translational coordinates for each 2D image taken from column (a) in Figure 4A. All the captured 2D images are mapped to the local 3D coordinate system from the perspective of the camera 108. There is no requirement of rotating the coordinate system while considering the transformation.
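As a sketch of this assembly step, the snippet below rotates a partial point cloud by the camera's yaw and translates it by the camera position. This is a simplified stand-in for the full rotation-and-translation transformation, and the cloud and pose values are hypothetical:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis (one factor of a full 3D rotation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def assemble(partial_cloud, theta_z, t):
    """Rotate a partial point cloud by the camera's yaw and translate it by
    the camera position, placing it in the shared 3D coordinate system."""
    return partial_cloud @ rot_z(theta_z).T + t

# hypothetical partial cloud and ARCore-style pose: 90-degree yaw, unit shift in x
cloud = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
placed = assemble(cloud, np.pi / 2, np.array([1.0, 0.0, 0.0]))
```

With a 90-degree yaw, a point one unit ahead on the x-axis swings onto the y-axis before the translation is applied.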
[0079] Given the imprecise nature of the point clouds, they must be regularized to reduce error in the generated 2D plan layout's geometry. Thus, the transformation module 130 is coupled to the regularization module 132, which receives the partial point clouds from the transformation module 130. The regularization module 132 is configured to regularize the partial point clouds of each 2D image for every room in a scene dataset. In the present case, regularization of each room is referred to as local regularization, while regularization of the entire floor is referred to as global regularization.
[0080] Thus, for a given room, the regularization module 132 is configured to translate each partial point cloud into boundary corner lines using a clustering algorithm, such as k-means, and to adjust the translated boundary corner lines to be perpendicular boundary lines. The regularization module 132 is further configured to form a polygon with the boundary lines and adjust the boundary lines such that adjacent lines are collinear (given the Manhattan world assumption).
[0081] In the present embodiment, the regularization module 132 achieves this local regularization with Algorithm 1.
Algorithm 1 Regularize point clouds (PC)
1: ∀ Ri ∈ R ▷ R: total number of rooms
2: for i = 1 : n do ▷ n: number of point clouds
3:   Pi = 2DPointClouds
4:   K = boundary(Pi)
5:   C(c1, c2, ..., ck) = kmeans(Pi(K)) ▷ C: clusters
6:   m1, m2, m3 = mean(c1), mean(c2), mean(c3)
7:   line1 = line(m1, m2)
8:   line2 = line(m2, m3)
9:   while angle(line1, line2) <= 90 do
10:    Rotate(line2)
11:   RPi = (line1, line2) ▷ RPi: local PC
12:   TPi = (Rot(θx, θy, θz) * Tr(tx, ty)) * RPi
13: FP = polygon(TP1, TP2, ..., TPn) ▷ FP: final PC
14: for j = 1 : p do ▷ p: number of sides of polygon
15:   φ = angle(sj, sj+1) ▷ s: sides of polygon
16:   if φ > 90 or φ < 90 then
17:     angle(sj, sj+1) = 0
[0082] Algorithm 1 regularizes the local point cloud of each partial scene image for every room (Ri) in all the rooms in a scene dataset (R). Here, Pi is the point cloud of the i-th scene, where n is the total number of point clouds. Boundary points for each Pi are extracted in Pi(K). Using the k-means algorithm, clusters of the point set are made for k = 3 based on the Euclidean distance between points, where m1, m2, m3 are the cluster means (line 6). Since the Manhattan world is presumed for the scene, the lines joining the means are re-adjusted to form a right angle (line 10). Each regularized partial point cloud (RPi) is transformed (TPi) using rotation angles θx, θy, θz along the x, y, z axes and translation coordinates [tx, ty] returned by ARCore (line 12). For global regularization, using each transformed point cloud, a polygon (FP) is formed (line 13) with p number of sides (s). For each pair of adjacent sides, the angle between them (φ) is checked, and if the sides are not perpendicular, they are made collinear (line 17), presuming the world to be Manhattan.
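The clustering of boundary points into three groups (Algorithm 1, lines 5-6) might be sketched as below. This uses a minimal hand-rolled Lloyd's-algorithm k-means rather than any particular library, and the boundary points (two perpendicular wall segments meeting at a corner) are hypothetical:

```python
import numpy as np

def kmeans(points, k=3, iters=20, seed=0):
    """Minimal Lloyd's algorithm: alternate nearest-center assignment
    and center recomputation (a stand-in for line 5 of Algorithm 1)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # keep the old center if a cluster ever empties out
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels

# hypothetical boundary points: two wall segments meeting at the origin
wall_a = np.stack([np.linspace(0, 1, 20), np.zeros(20)], axis=1)  # along +x
wall_b = np.stack([np.zeros(20), np.linspace(0, 1, 20)], axis=1)  # along +y
centers, labels = kmeans(np.vstack([wall_a, wall_b]), k=3)
```

The three cluster means (m1, m2, m3 in the text) then define the two candidate boundary lines line(m1, m2) and line(m2, m3).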
[0083] Figures 10A, 10B, and 10C depict an example partial indoor scene showing different stages of the 2D layout generation. Figure 10A shows the 2D RGB image in consideration. Figure 10B is its 3D reconstruction in the form of a 3D point cloud. Figure 10C is the partial point cloud extracted from the 3D reconstructed point cloud in Figure 10B by the transformation module 130. Figure 10C shows the 2D projection of the partial point cloud, where m1, m2, m3 are the means of the three clusters extracted. Figure 10C also shows the lines joining m1, m2, m3, thereby regularizing the particular partial point cloud from the projected set of points into boundary corner lines.
[0084] Figures 11A, 11B, and 11C depict the translation, assembly, and adjustment of the boundary corner lines. Figure 11A shows the coordinate system in the real world (XW, YW, ZW) and in ARCore with the smart phone 102 (XA, YA, ZA). ARCore transformations have to be rotated about the ZA axis to align the coordinate systems. Each set of boundary corner lines is adjusted to be perpendicular (forming perpendicular boundary lines), then rotated and translated with the transformation in view of the camera 108's pose information to form a polygon (see Figure 11B) with boundary lines. As the present system presumes a Manhattan world, the angles between adjacent boundary lines are assessed. If the angles are found to be non-zero (i.e. the lines are not collinear), the lines are adjusted to be collinear. Figure 11C shows the regularized boundary lines forming a 2D plan (or 2D layout) for a set of rooms, which agrees with the real-world dimensions.
See also panel (g) in Figure 4B for another example of regularized boundary lines forming a 2D plan layout for a set of rooms.
[0085] The regularization module 132 may be further configured to regularize, not just individual rooms, but multiple rooms arranged on a floor. To that end, the regularization may include generating an outer boundary by finding a convex hull for all of the (2D plan layouts of the) rooms and then aligning all of the rooms within the outer boundary generated.
[0086] In the present embodiment, the regularization module 132 achieves this global regularization with Algorithms 2 and 3. Algorithm 2 depicts the process of finding the outer boundary of all the regularized layouts, and Algorithm 3 depicts their post-processing to align them along the outer boundary polygon.
Algorithm 2 Finding the points inside the boundary polygon
1: for i = 1 : n do
2:   Li = (Pi, ∞)
3:   if intersection(Li, Chull) == even then ▷ Chull: convex hull forming boundary
4:     Pi ∈ Poutside ▷ Poutside: pool of points outside the boundary
5:   else
6:     Pi ∈ Pinside ▷ Pinside: pool of points inside the boundary
[0087] Algorithm 2 identifies the points of each room polygon that lie inside or on the outer boundary polygon, so that individual room polygons may be aligned with the outer boundary. Points that are supposed to be on the outer boundary but lie inside it are identified using this algorithm. In Algorithm 2, line 2, a line Li is traced from each point Pi to ∞, and line 3 checks whether line Li intersects the boundary of the convex hull Chull an even or an odd number of times. If the intersection has happened zero or an even number of times, then the point is considered to be outside the outer boundary. Otherwise, the point is considered to be inside or on the outer boundary.
[0088] Figure 12A illustrates an example of such a process. For each room polygon, points close to the outer boundary line are identified, and a line is drawn from each such point to infinity. If the line intersects the outer boundary polygon zero or an even number of times (such as for points A, F, and G), then that point is outside the boundary polygon. Otherwise, the point is considered to be inside or on the polygon outer boundary (such as points B, C, D, and E). The purpose of using Algorithm 2 is to find the points lying inside the outer boundary and use them for further post-processing. If a point is identified to be inside the boundary polygon, then, using Algorithm 3, it is aligned to the outer boundary line.
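The even-odd ray-casting test described above can be sketched as follows (a standard textbook implementation, here casting a horizontal ray toward +∞; the square boundary polygon is hypothetical):

```python
def inside_even_odd(point, polygon):
    """Even-odd ray casting: trace a ray from `point` to +infinity and count
    crossings with the polygon edges. An odd count means the point is inside."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # does the horizontal ray cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1

# hypothetical outer boundary: a 4x4 square
square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
```

A point exactly on a polygon edge may land on either side of this test depending on floating-point rounding, which is consistent with Algorithm 2 grouping "inside or on the boundary" together.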
Algorithm 3 Aligning the points of each polygon to the boundary
1: ∀ Pi ∈ polygon
2: L1 = Pi ⊥ CF
3: L2 = CF
4: find equation of each line
5: y - YC = mL2 * (x - XC)
6: mL1 = (YPi - YA) / (XPi - XA)
7: mL2 = (YC - YF) / (XC - XF)
8: mL1 * mL2 = -1 ▷ perpendicularity condition
9: XA = XC + (YA - YC) / mL2 ▷ substituting the known values to find the unknowns
10: YA = YC + mL2 * (XA - XC)
11: XPi = XA ▷ replacing the points of the polygon with respective points on the boundary
12: YPi = YA
[0089] Algorithm 3 shows the process of aligning the points of room polygons that are found to be inside the outer boundary polygon. Figure 12B shows the example of the polygon P1P2P3P4, which is required to be aligned with the outer boundary line CF.
Points P1 and P4 are found to be inside the outer boundary polygon and need to be aligned with line CF. Hence, they are replaced with points A and B, respectively. Algorithm 3 finds the locations of points A and B on line CF and replaces P1 with A and P4 with B by dropping a perpendicular line P1A onto CF and using properties of perpendicular line segments to identify the coordinates of A and B. Algorithm 3 computes the slopes of both line segments (Algorithm 3, lines 6 and 7) and uses the property of slopes of perpendicular line segments to identify (XA, YA) and (XB, YB) (Algorithm 3, line 8). Once identified, Algorithm 3 replaces P1 and P4 with A and B (Algorithm 3, lines 11 and 12).
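The corner-snapping step can also be written with a vector projection, which avoids the special cases of vertical lines in the slope equations. This is an alternative formulation to Algorithm 3's slope-based derivation, and the coordinates below are hypothetical:

```python
import numpy as np

def foot_of_perpendicular(p, c, f):
    """Foot A of the perpendicular dropped from point p onto line CF:
    project the vector p - c onto the direction of CF."""
    d = f - c
    t = np.dot(p - c, d) / np.dot(d, d)  # scalar position of A along CF
    return c + t * d

# hypothetical boundary line CF along the x-axis and a corner P1 just inside it
c, f = np.array([0.0, 0.0]), np.array([10.0, 0.0])
p1 = np.array([3.0, 0.4])
a = foot_of_perpendicular(p1, c, f)  # the polygon corner P1 is replaced by a
```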
[0090] Figures 13A, 13B, 13C, and 13D depict the global regularization phases of generating a floor plan from its 2D projection layouts to the final 2D floor plan. Figure 13A depicts the 2D layout of each room's partial point clouds on a floor after processing by the transformation module 130 and regularization module 132. Figure 13B shows the regularized 2D plan layouts of each room, depicting the global relationship between them. In Figure 13C, the outer boundary for all of the rooms is generated by finding a convex hull for all the polygons and lines. Figure 13D shows the further refined and post-processed floor plan from the 2D plan layouts.
While the floor plans may be displayed and used in their present state, the modelling system 100 may optionally further include the object detection module 134.
[0091] The object detection module 134 may be coupled to receive data from the regularization module 132 and may be configured to detect objects in the 2D images (e.g. doors in the present embodiment) and mark the objects in the 2D floor plan generated by the regularization module 132.
[0092] Indoor object detection, such as the detection of doors, windows or other objects in indoor environments from images or videos, is a widely explored area. Known solutions include using object detection networks such as YOLO, Faster-RCNN, SSD, etc. However, a dataset containing doors or windows that is specific to indoor scenes is not commonly available, and it is challenging to generate a sufficiently diverse dataset of doors in indoor environments to train or fine-tune existing networks. Hence, the example modelling system 100 uses the DoorDetect dataset 136 from Arduengo, M., Torras, C., Sentis, L.: Robust and adaptive door operation with a mobile manipulator robot, arXiv:1902.09051v2 [cs.RO] 13 Sep 2019, incorporated herein by reference. The example object detection module 134 relies on a YOLO object detection network trained on the DoorDetect dataset 136 to detect doors in the indoor scenes to complete the floor plans. YOLO's detection network has 24 convolutional layers followed by 2 fully connected layers (see Figure 14, for example). Each alternating convolutional layer reduces the feature space from its preceding layer. The network is pre-trained with the ImageNet-1000 class dataset. The DoorDetect dataset 136 contains 1213 images with annotated objects in an indoor environment. The door images contain various doors, such as entrance doors, cabinet doors, refrigerator doors, etc. The mAP of YOLO on the DoorDetect dataset came out to be 45%.
[0093] Figure 15A shows a door that is detected by the object detection module 134 in an example 2D image illustration. Figure 15B shows the door placement (with a door symbol) in the floor plan generated by the regularization module 132. Figure 15C shows the parameters used for the door placement. The door placement is carried out by the object detection module 134 using the following equations:
RatioD = dist(CBBI, WI) / LWI
dist(CBBF, WIF) = LWF * RatioD
where CBBI is a centroid of a bounding box of door detection (returned by door detection) in the corresponding 2D image, dist(CBBI, WI) is the distance between CBBI and the wall WI, LWI is the distance between two corners of the walls in the corresponding 2D image, and RatioD is the ratio between them.
[0094] RatioD is the ratio used for marking the doors in the generated floor plans with reference to the corresponding 2D images of the scene. Each individual image with a door is marked with a respective door symbol in its corresponding floor plan.
In the present case, LWF is the distance between two corners of the walls in the corresponding 2D floor plan, and dist(CBBF, WIF) is the distance between the centroid of the door symbol (CBBF) and the wall (WIF) in the corresponding 2D floor plan, which is an unknown quantity and is identified using RatioD to mark the doors in the floor plan. The axis of the door is kept perpendicular to the wall the door belongs to. RatioD is scale invariant for the generated floor plan and remains the same in the 2D image and its corresponding 2D layout.
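A minimal sketch of this ratio-based placement, treating positions along a wall as 1D coordinates for simplicity; all measurement values are hypothetical:

```python
def place_door(c_bbi, w_i, l_wi, l_wf):
    """Scale-invariant door placement: RatioD = dist(CBBI, WI) / LWI measured
    in the image, then dist(CBBF, WIF) = LWF * RatioD in the floor plan."""
    ratio_d = abs(c_bbi - w_i) / l_wi  # fraction of the wall covered in the image
    return l_wf * ratio_d              # distance along the same wall in the plan

# hypothetical: door centroid 300 px from the wall corner on an 800 px wall,
# and the corresponding floor-plan wall is 4.0 m long
dist_plan = place_door(c_bbi=300.0, w_i=0.0, l_wi=800.0, l_wf=4.0)
```

Because only the ratio is carried over, the placement is unaffected by the absolute scale of either the image or the generated plan.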
[0095] Modelling system 100 may further include one or more display devices 106 for displaying the room and floor plan layouts generated by the regularization module 132 or the object detection module 134. In some examples, the display device 106 may form part of smart phone 102, or the display device 106 may be separate from smart phone 102.
[0096] Reference is now made to Figure 16, which is a flowchart illustrating an example modelling method 1600 for modelling an interior space of a room using standard 2D RGB images as inputs. The modelling method 1600 may be performed using modelling system 100 as described above, or a different system with similar capabilities.
[0097] At 1602, the modelling method includes receiving 2D images of corners of the interior space, where the 2D images were taken by a camera. In some cases, the camera may be part of a smart phone 102. In the present embodiment, the 2D images received are monocular RGB images of each corner of the interior space. For example, if the room is rectangular, the 2D images received may be four images, where each image is a picture of a different corner of the rectangular room. See Figure 4A, column (a).
[0098] At 1604, the position and orientation from a perspective of the camera in a 3D coordinate system in the interior space may be generated for each 2D image, for example using a positioning module with ARCore. The position and orientation of the camera are collectively known as the pose of the camera.
[0099] At 1606, a depth map for each 2D image may be generated, for example by using a depth module, by estimating the depth of each pixel in each 2D image. The depth map for each 2D image may be generated with encoder-decoder architecture that extracts image features with a pre-trained DenseNet-169. See Figure 4A, column (b).
[00100] At 1608, an edge map for each 2D image may be generated, for example by using an edge module, by identifying whether each pixel is a wall or an edge in each 2D image. The edge map for each 2D image may be generated with encoder-decoder architecture that estimates layout with LayoutNet network. The edge map for each 2D image may further be generated presuming a Manhattan world. See Figure 4A, column (c).
[00101] At 1610, a 3D point cloud for each 2D image may be generated, for example with a reconstruction module, using the corresponding depth map generated at 1606 and the focal length and center coordinates of the camera. In that regard, coordinates for each pixel in each 3D point cloud may be generated by the following equations:

Z = D(u, v) / S
X = (u - Cx) * Z / f
Y = (v - Cy) * Z / f

wherein X, Y are coordinates corresponding to the real world, Z is the depth coordinate, D(u, v) is the depth value corresponding to the (u, v) pixel in the depth map, S is the scaling factor of the corresponding scene, f is the focal length of the camera, and Cx, Cy are the center coordinates of the camera. See Figure 4A, column (d).
point cloud may be generated by the following equations:
Z= D,,, v S
X= (u ¨ Cx) * z f Y = (v ¨ C1) * z f Date Recue/Date Received 2021-09-22 wherein X, Y are coordinates corresponding to the real world, Z is the depth coordinate, D11,, is the depth value corresponding to the (u, v) pixel in the depth map, S is the scaling factor of the corresponding scene, f is the focal length of the camera, and G ,G are the center coordinates of the camera. See Figure 4A, column (d).
[00102] Optionally, prior to 1610, the camera may be calibrated to determine the intrinsic parameters of the camera, i.e. to determine the focal length and center coordinates of the camera for each of the 2D images.
[00103] At 1612, the 3D point clouds generated at 1610 may be transformed with the corresponding edge map (generated at 1608) into a 2D space in the 3D
coordinate system from the perspective of the camera, for example using a transformation module. For example, in some embodiments, at 1614, each 3D point cloud may be mapped with the corresponding edge map (generated at 1608) to identify boundary pixels. The identified boundary pixels may then be projecting into a 2D space to generate a partial point cloud for each 3D point cloud. See Figure 4A, column (e).
coordinate system from the perspective of the camera, for example using a transformation module. For example, in some embodiments, at 1614, each 3D point cloud may be mapped with the corresponding edge map (generated at 1608) to identify boundary pixels. The identified boundary pixels may then be projecting into a 2D space to generate a partial point cloud for each 3D point cloud. See Figure 4A, column (e).
[00104] The partial point clouds may then be assembled in the 3D coordinate system from the perspective of the camera using the corresponding camera positions and orientations (that were generated at 1604). See Figure 4B, panel (f).
[00105] At 1616, the transformed 3D point clouds in the 2D space may be regularized into boundary lines, for example using a regularization module. In that regard, the point clouds may undergo at least local regularization at 1618, and optionally global regularization at 1620.
[00106] At 1618, each partial point cloud may be translated into boundary corner lines using a clustering algorithm and adjusted to be perpendicular boundary lines (as the present modelling method assumes a Manhattan world). See Figure 10C. The perpendicular boundary lines may then be joined together to form a polygon with boundary lines (see Figure 11B) and further adjusted such that adjacent lines are collinear. See Figure 11C and panel (g) of Figure 4B.
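The perpendicular adjustment under the Manhattan-world assumption can be illustrated by snapping each boundary line's direction to the nearest multiple of 90 degrees. This is a simplified sketch; the patent does not specify the exact clustering or adjustment algorithm.

```python
def snap_to_manhattan(angle_deg: float) -> float:
    """Snap a boundary-line direction (in degrees) to the nearest multiple
    of 90 degrees, per the Manhattan-world assumption at step 1618.
    Line directions are equivalent modulo 180 degrees."""
    return (round(angle_deg / 90.0) * 90.0) % 180.0
```

For example, a fitted wall line at 87 degrees snaps to 90 degrees, and one at 4 degrees snaps to 0 degrees.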
If a floor has multiple rooms and a floor plan showing a global arrangement of the multiple rooms is desired, the modelling method 1600 may be performed for each of the multiple rooms.
Date Recue/Date Received 2021-09-22
[00107] If multiple rooms are involved, when their partial point clouds are assembled at 1612, the partial point clouds will be assembled in the 3D coordinate system from the perspective of the camera, notably, using the corresponding camera positions and orientations (that were generated at 1604). In other words, the pose information for each 2D image, and collectively for the images taken of each room, allows the various partial point clouds of each of the multiple rooms to be arranged relative to one another as is reflected in the real world. See Figure 13A, for example.
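Assembling the partial point clouds using the camera poses amounts to a rigid transform of each camera-frame cloud into the shared 3D coordinate system. The sketch below assumes the pose is given as a rotation matrix and translation vector, as is typical for AR frameworks such as ARCore; names are illustrative.

```python
import numpy as np

def to_world(points, R, t):
    """Transform camera-frame points into the shared 3D coordinate system
    using the camera pose generated at 1604.

    points : (N, 3) partial point cloud in the camera frame
    R      : (3, 3) rotation matrix of the camera pose
    t      : (3,) translation vector of the camera pose
    """
    # p_world = R @ p_camera + t, applied row-wise
    return points @ R.T + t
```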
[00108] At 1620 then, as described above, the regularized boundary lines for the multiple rooms would be outputted into the form of multiple 2D plan layouts arranged on the floor (see Figure 13B). An outer boundary around all of the 2D layouts may be generated by finding a convex hull for all of the multiple 2D layouts (see Figure 13B). All of the multiple 2D layouts may then be aligned with the generated outer boundary (see Figure 13D).
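Finding the outer boundary as a convex hull of the 2D layout points can be sketched with Andrew's monotone-chain algorithm. This is an illustrative implementation; the patent does not prescribe a specific hull algorithm.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2D points given as (x, y)
    tuples. Returns the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o): >0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                     # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):           # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Last point of each half is the first point of the other half
    return lower[:-1] + upper[:-1]
```

A unit square with an interior point yields only the four corner vertices as the hull.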
[00109] Optionally, at 1622, the presence and placement of an object may be detected in one or more of the 2D images, for example with an object detection module. In the present embodiment, the object may be a door. Of course, other objects may be detected according to the present modelling method. Some examples of such objects include restrooms (e.g. toilets, showers, baths, etc.), stairwells, windows, or kitchens (e.g. fridge, stove, etc.). If a door is detected, a door symbol may be included in the corresponding position in the 2D layout of the room or floor generated at 1616.
[00110] The door placement may be carried out at 1622 using the following equations:

Ratio_D = dist(C_BBI, W_I) / L_WI

dist(C_BBF, W_IF) = L_WF * Ratio_D

wherein C_BBI is a centroid of a bounding box of door detection (returned by door detection) in the corresponding 2D image, dist(C_BBI, W_I) is a distance between C_BBI and W_I (wall), L_WI is a distance between two corners of the walls in the corresponding 2D image, Ratio_D is the ratio between them, L_WF is a distance between two corners of walls in the corresponding 2D layout of the room, and dist(C_BBF, W_IF) is a distance between the centroid of the door symbol (C_BBF) and wall (W_IF) in the corresponding 2D layout of the room.
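The door-placement equations above can be expressed as a small helper; the function and parameter names are illustrative.

```python
def place_door(dist_image: float, l_wi: float, l_wf: float) -> float:
    """Map a detected door from the 2D image onto the 2D layout wall.

    dist_image : dist(C_BBI, W_I), distance from the door bounding-box
                 centroid to the wall corner in the 2D image
    l_wi       : L_WI, distance between the two wall corners in the image
    l_wf       : L_WF, distance between the same two corners in the layout
    Returns dist(C_BBF, W_IF), where the door symbol sits along the
    layout wall.
    """
    ratio_d = dist_image / l_wi        # Ratio_D
    return l_wf * ratio_d              # L_WF * Ratio_D
```

For example, a door halfway along a wall in the image (2.0 of 4.0) is placed halfway along the corresponding 10.0-unit layout wall, at 5.0.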
[00111] At 1624, the regularized boundary lines, for example with door symbols, may be outputted to form the 2D layout of the room and/or floor. The 2D layout may be displayed on the smart phone, or on any other suitable display.
[00112] In some example experiments, two alternate hardware platforms were used:
Google Pixel 2 XL and Samsung A50. Both of these mobile phones were utilized to deploy the data collection application (i.e. ARCore, to determine the camera position and camera orientation) and to capture the 2D images for all of the locations. For depth estimation accuracy analysis on the dataset, structural similarity and peak SNR metrics were used. Also, metrics such as pixel error and corner error were used for layout estimation accuracy analysis on the dataset.
[00113] For evaluating the proposed layout estimation system's performance, area and aspect ratio error metrics were used in quantitative analysis. Qualitative analysis was also done to depict the proposed system's robustness over existing Android and iOS based mobile applications. The performance of the present system has also been compared for the two hardware platforms mentioned above.
[00114] Experiments were performed with three sets of images. The first dataset is the right wing of the ground floor of the Computer Science Department building at IIT Jodhpur, which contains Classrooms. The second dataset is the left wing of the same floor, which contains Labs. The third dataset is the first floor of the same building, which contains Offices. Figure 17 shows sample images from the collected images for each category. It can be seen that the images in the dataset can contain zero to moderate or heavy occlusion in differently illuminated environments.
[00115] Depth estimation analysis was performed, as in Table 1.
[00116] Table 1: Performance analysis of depth estimation on our dataset

Method                  Classrooms   Labs     Offices
Structural similarity   0.8433       0.7528   0.8174
Peak SNR                22.46        17.55    20.7808
[00117] Table 1 shows the performance analysis of the depth estimation step in the present method. Ground truth depth maps for all the images in our dataset were generated using a Kinect XBOX 360 depth camera. The performance evaluation is done on two metrics, Structural Similarity (SS) and peak SNR (PSNR), which are defined as:
SS(x, y) = ((2 μ_x μ_y + C_1)(2 σ_xy + C_2)) / ((μ_x^2 + μ_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2))   (6)

PSNR = 20 log10(MAX_I) - 10 log10(MSE)   (7)
[00118] In Eq. 6, μ_x and μ_y are the mean intensity terms, while σ_x and σ_y are the standard deviations of the two image signals x and y; C_1 and C_2 are included to avoid instability when summations of mean intensities are close to zero. For PSNR, MSE is the mean square error between the reference image and the generated image, and MAX_I is the maximum possible pixel value of the image. Lower values of SS and PSNR indicate low quality of generated images as compared to the reference ground truth image. It can be seen that the images in the Labs dataset perform worse than the other datasets, given the lowest values in terms of Structural Similarity and PSNR, because of the presence of a variety of occlusion-creating surfaces, which create irregular planes and a limited field of view, making depth estimation a challenging task. As shown in Figure 17, for the Labs scene images, the corners of partial scenes are highly occluded because of laboratory equipment, fixed sitting spaces and various other immovable heavy indoor objects.
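Eq. 7 can be implemented directly; the sketch below assumes 8-bit images, so MAX_I defaults to 255.

```python
import numpy as np

def psnr(reference, generated, max_i=255.0):
    """Peak SNR between a reference depth map and a generated one (Eq. 7):
    PSNR = 20 log10(MAX_I) - 10 log10(MSE)."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(generated, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * np.log10(max_i) - 10 * np.log10(mse)
```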
[00119] Corner and edge estimation analysis was performed, as in Table 2.
[00120] Table 2: Corner and edge estimation analysis on our dataset

Scene       Corner error (%)   Pixel error (%)
Classroom   1.04               3.38
Labs        1.30               4.15
Offices     1.12               3.67
[00121] Table 2 shows the present system's performance on estimating the corners and edges of a room. The annotations for the layouts were generated using the tool proposed in the Dutta paper noted above. The evaluation was done on two parameters, pixel error P and corner error C. Pixel error identifies the classification accuracy of each pixel between the estimated layout and the ground truth, averaged over all the images in a dataset.
P = (1/n) Σ Similarity(Pixel_E, Pixel_GT)   (8)

Similarity = { Pixel ∈ EdgeMap,  if Pixel > 0
             { Pixel ∉ EdgeMap,  otherwise   (9)

where n is the total number of images in a dataset, and Pixel_E and Pixel_GT are the pixels in the estimated and ground truth images. Corner error C calculates the L2 distance between the estimated corner and the ground truth corner of a room, normalized by the image diagonal and averaged over all the images in a dataset. Here, Corner_E and Corner_GT are the estimated and ground truth corners.

C = Dist_L2(Corner_E, Corner_GT)   (10)
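The corner error of Eq. 10 can be sketched as follows, assuming the estimated and ground truth corners are matched pairs of (x, y) coordinates; the normalization by the image diagonal follows the description above.

```python
import math

def corner_error(est_corners, gt_corners, img_w, img_h):
    """Corner error (Eq. 10): mean L2 distance between estimated and
    ground truth corners, normalized by the image diagonal."""
    diag = math.hypot(img_w, img_h)
    dists = [math.dist(e, g) for e, g in zip(est_corners, gt_corners)]
    return sum(dists) / (len(dists) * diag)
```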
[00122] It can be seen that the Labs and Offices image datasets are more challenging than the Classrooms dataset because of more occluded corners and edges with complex designs of furniture and other experimental setups.
[00123] Comparative studies were made. Figures 18A and 18B show the generated layouts for two collected image datasets: Figure 18A shows the resultant layout for the Labs dataset, and Figure 18B shows the resultant layout for the Offices dataset. The layout for the Labs dataset also includes the formation of a corridor in the final layout. In each figure, the left panels show the input stream of RGB images for the respective scenes.
[00124] A comparative study was performed with applications such as Magic Plan, Tape Measure, Google Measure app, and AR Plan3D Ruler with the given ground truth measurements for each dataset. For every category of images, the ground truth measurement was done by manually measuring each room's dimensions in each dataset and evaluating the area and aspect ratio, respectively. Quantitative evaluation was done on mean absolute % error for area and aspect ratio for each dataset.
Mean Absolute % Error (E) = (1/R) Σ_{r=1..R} |x_r - x_GT| / x_GT   (11)

wherein R is the total number of rooms in a dataset, x_r is the area/aspect ratio of room r, and x_GT is the ground truth area/aspect ratio for the same.
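Eq. 11 can be sketched as follows. The factor of 100 is an assumption here, since the metric is reported as a percentage in Table 3 but the scaling is not explicit in the equation.

```python
def mean_abs_pct_error(estimates, ground_truths):
    """Mean absolute % error (Eq. 11) over R rooms, for either the area
    or the aspect ratio of each room."""
    assert len(estimates) == len(ground_truths) and estimates
    r = len(estimates)
    return (100.0 / r) * sum(abs(x - gt) / gt
                             for x, gt in zip(estimates, ground_truths))
```

For example, a single room estimated at 110 square units against a ground truth of 100 gives an error of 10%.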
[00125] Table 3: Quantitative evaluation of the estimated layouts for different scenes (S) and methods (M)

Scene →              Classrooms            Offices               Labs
Methods ↓            Area   Aspect Ratio   Area   Aspect Ratio   Area   Aspect Ratio
Present System       3.12   2.21           3.25   2.65           5.59   3.07
Magic Plan           4.53   3.34           3.67   1.81           5.52   3.03
Tape Measure         3.55   3.58           8.26   1.71           6.93   1.21
Google Measure app   7.27   4.06           6.65   2.93           6.02   3.07
AR Plan3D Ruler      3.15   5.20           4.40   1.62           4.39   2.87
[00126] Table 3 depicts the quantitative evaluation of the estimated layouts for the different scene datasets and other Android and iOS applications. Results show that the present modelling system and modelling method performs best in terms of mean error % (E) in area and aspect ratio for the Classrooms dataset and in area error for the Offices dataset. For the Labs dataset, AR Plan3D Ruler performed best in terms of area error and Tape Measure performed best in aspect ratio error.
[00127] Table 4: Qualitative comparison of GRIHA and state-of-the-art

Method               User Interaction   Manual Intervention
Present System       4 Nos.             Not required
Magic Plan           Continuous Scan    Add corners
Tape Measure         Continuous Scan    Add corners
Google Measure app   Continuous Scan    Add corners
AR Plan3D Ruler      Continuous Scan    Add corners, height
[00128] Table 4 depicts the qualitative comparison between the present modelling system and modelling method and other applications. Here, the number of user interactions and the amount of manual intervention required were considered in the comparison. In terms of user interaction, the present modelling system and modelling method can use only four interactions, i.e., images of four corners of a room, while other applications require a continuous scan and movement in the entire room. In terms of manual intervention, the present modelling system and modelling method does not necessarily require anything "after clicking" of the pictures, whereas the other applications require manually adding the corners and height of the room. The present modelling system and modelling method's only requirement is to "click" or take images, while other applications require time and manual calibration to understand the environment and features. Due to this continuous scanning and higher level of manual intervention, techniques like Magic Plan yield more accurate results than the present modelling system and modelling method. However, in the other existing applications, if some object occludes the corner, the user must add the corner themselves. A slight user error can heavily affect the accuracy of the layout. The accuracy of the existing applications also suffers when there are limited salient features in different scene frames while scanning.
[00129] Robustness analysis was performed. Figures 19A, 19B, and 19C show illustrations of three other publicly available mobile applications while collecting the dataset for evaluation. Figure 19A is a GUI for AR Plan3D Ruler, Figure 19B is a GUI for Magic Plan, and Figure 19C is a GUI for Tape Measure. Layout estimation using these applications requires a lot of manual intervention. For example, for AR Plan3D and Magic Plan, the rooms' corners have to be added manually, which is a very error-prone process. The person holding the mobile device has to be cautious and accurate while adding the corners.
Otherwise, there will be an error in the measurement of the edges of the room. Also, Figure 19C shows that if the edges or corners of the room are occluded by furniture or other room fixtures, measuring the edges with these applications is impossible, since the wall edges are invisible and have to be scanned through the furniture only.
[00130] However, in the present modelling system and modelling method, these issues have been addressed, making the present modelling system and method more robust than the other existing mobile applications. The present modelling system and modelling method do not require any manual intervention. Hence, the possibility of introducing manual error is ruled out.
Also, the present modelling system and modelling method does not require the mobile device to be run through all of the room's edges, making the present system and method easier for a user to use and robust in an occluded environment. The existing applications require some time after their launch and need a manual/automatic calibration of AR sensors by rotation and device scanning against the plane of the ground or a wall. The automatic calibration by plane detection becomes difficult or may take longer when the room's lighting condition is poor or there is no difference in the colour of the wall and the ground. However, this is not a requirement in the present modelling system and modelling method. The user is only required to click/select images of the room, making it more robust in different lighting and interior environments.
[00131] Different light conditions and environments affect the quality of images and final results of layout generation. In existing methods, differently illuminated environments play a key role in the functioning of the method. In poor illumination, different applications discussed in the previous section are not able to extract visual features. The existing applications require scanning of the entire scene with a camera and require high contrast edges and curved surfaces to detect feature points. If the captured images do not have enough feature points, then different key points and features are not detected. In poorly illuminated images, there is a lack of contrast between two portions of a scene. Due to inconsistent indoor lighting, existing applications often are not able to capture feature points and do not start functioning. In contrast, the present modelling system and method does not require illumination or high contrast surfaces in the captured images.
[00132] Figures 20A, 20B, and 20C are illustrations of various scenes taken in low light environments using different mobile applications. Figure 20A was taken with Google Measure, under low illumination, on a smooth surface, which could not be scanned because key points could not be detected. Figure 20B was taken with the Magic Plan application in low light on a consistent surface, which could not be scanned for the same reason. However, Figure 20C is part of a dataset collected using the present modelling system and method, which was taken in low illumination and has a consistent surface of smooth walls. The image in Figure 20C successfully contributed to the generation of the final layout using the present system and method.
[00133] Figures 21 and 22 show the mean absolute error comparative analysis for area and aspect ratio across the two devices used. These plots indicate the robustness and platform independence of the present modelling system and method. The relative performance of the present modelling system and method is similar for both devices in terms of area error and aspect ratio error. Figure 23 shows the comparative analysis of power consumption in mAh with a growing number of query images across devices. It can be seen that the Samsung Galaxy A50 consumes more power for the proposed system and is less efficient than the Google Pixel 2 XL. Energy consumption is measured in terms of the battery used on each mobile device, which was recorded manually, from the start of data collection, with each query image collected.
[00134] Overall, the present modelling system and method can generate a reasonably accurate layout in terms of the error in area and aspect ratio, while requiring far less user interaction and intervention than existing applications.
[00135] The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual requirements to achieve the objectives of the solutions of the embodiments.
[00136] In addition, functional units in the example embodiments may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
[00137] When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of example embodiments may be implemented in the form of a software product. The software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the example embodiments. The foregoing storage medium includes any medium that can store program code, such as a Universal Serial Bus (USB) flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In an example, the software product can be an inference model generated from a machine learning training process.
[00138] In the described methods or block diagrams, the boxes may represent events, steps, functions, processes, modules, messages, and/or state-based operations, etc.
While some of the example embodiments have been described as occurring in a particular order, some of the steps or processes may be performed in a different order provided that the result of the changed order of any given step will not prevent or impair the occurrence of subsequent steps.
Furthermore, some of the messages or steps described may be removed or combined in other embodiments, and some of the messages or steps described herein may be separated into a number of sub-messages or sub-steps in other embodiments. Even further, some or all of the steps may be repeated, as necessary.
Elements described as methods or steps similarly apply to systems or subcomponents, and vice-versa. Reference to such words as "sending" or "receiving" could be interchanged depending on the perspective of the particular device.
[00139] The described embodiments are considered to be illustrative and not restrictive.
Example embodiments described as methods would similarly apply to systems or devices, and vice-versa.
[00140] The various example embodiments are merely examples and are in no way meant to limit the scope of the example embodiments. Variations of the innovations described herein will be apparent to persons of ordinary skill in the art, such variations being within the intended scope of the example embodiments. In particular, features from one or more of the example embodiments may be selected to create alternative embodiments comprised of a sub-combination of features which may not be explicitly described. In addition, features from one or more of the described example embodiments may be selected and combined to create alternative example embodiments composed of a combination of features which may not be explicitly described.
Features suitable for such combinations and sub-combinations would be readily apparent to persons skilled in the art. The subject matter described herein intends to cover all suitable changes in technology.
Claims (26)
1. A modelling method comprising:
receiving two-dimensional (2D) images of corners of an interior space captured by a camera;
generating, using a positioning module, a corresponding camera position and camera orientation in a three-dimensional (3D) coordinate system in the interior space for each 2D image;
generating a corresponding depth map for each 2D image by using a depth module to estimate depth for each pixel in each 2D image;
generating a corresponding edge map for each 2D image by using an edge module to identify whether each pixel in each 2D image is a wall or an edge;
generating, using a reconstruction module, a 3D point cloud for each 2D image using the corresponding depth map and a focal length and center coordinates of the camera;
transforming, using a transformation module, the 3D point clouds with the corresponding edge map into a 2D space in the 3D coordinate system from a perspective of the camera;
regularizing, using a regularization module, the 3D point clouds in the 2D space into boundary lines; and
generating a 2D plan of the interior space from the boundary lines.
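The sequence of modules in claim 1 can be sketched as a composition of callables. The function and parameter names below are illustrative assumptions only; the claim does not prescribe any particular interface:

```python
def generate_2d_plan(images, estimate_pose, estimate_depth, estimate_edges,
                     to_point_cloud, project_to_2d, regularize):
    """Sketch of the claim-1 pipeline. Each argument after `images`
    stands in for one claimed module (hypothetical interfaces)."""
    poses  = [estimate_pose(img) for img in images]    # positioning module
    depths = [estimate_depth(img) for img in images]   # depth module
    edges  = [estimate_edges(img) for img in images]   # edge module
    clouds = [to_point_cloud(d) for d in depths]       # reconstruction module
    partial = project_to_2d(clouds, edges, poses)      # transformation module
    return regularize(partial)                         # regularization -> 2D plan
```

Passing plain functions for each module keeps the pipeline testable in isolation; a real system would substitute the positioning, depth, edge, reconstruction, transformation, and regularization components described in the claims.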
2. The modelling method of claim 1, wherein the transforming comprises:
mapping each 3D point cloud with the corresponding edge map to identify boundary pixels and projecting them in the 2D space to generate a partial point cloud for each 3D point cloud; and assembling the partial point clouds in the 3D coordinate system from the perspective of the camera using the corresponding camera positions and camera orientations.
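One way to realize the mapping-and-projection step of claim 2 is to mask each image's point cloud with its edge map, move the surviving points into the shared coordinate system with the camera pose, and drop the vertical axis. Here `R` and `t` are assumed to be the rotation matrix and translation vector recovered by the positioning module; this is a sketch, not the claimed implementation:

```python
import numpy as np

def partial_point_cloud(cloud, edge_mask, R, t):
    """Keep only boundary ('edge') points from one image's 3D point
    cloud (N x 3), transform them into the shared coordinate system
    using the camera pose (R, t), and project onto the floor plane
    to obtain a partial 2D point cloud."""
    pts = cloud[edge_mask.reshape(-1)]   # boundary pixels only
    world = pts @ R.T + t                # camera frame -> world frame
    return world[:, [0, 2]]              # drop the vertical (Y) axis
```

The partial clouds from all images can then be assembled in the shared frame, as the claim describes.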
3. The modelling method of claim 2, wherein the regularizing comprises:
translating each partial point cloud into boundary corner lines using a clustering algorithm; and adjusting the boundary corner lines to be perpendicular boundary lines.
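The perpendicularity adjustment in claim 3 can be approximated by snapping each boundary corner line's orientation to the nearest multiple of 90 degrees. This helper is a hypothetical sketch of that adjustment, not the claimed algorithm itself:

```python
import math

def snap_perpendicular(angle_rad):
    """Snap a boundary-line orientation (radians) to the nearest
    multiple of 90 degrees, so that adjacent boundary lines become
    perpendicular or parallel."""
    quarter = math.pi / 2
    return round(angle_rad / quarter) * quarter
```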
4. The modelling method of claim 3, wherein the regularizing further comprises:
forming a polygon with the boundary lines; and adjusting the boundary lines such that adjacent lines are collinear.
5. The modelling method of claim 1, wherein the 2D images are RGB monocular images.
6. The modelling method of claim 1, wherein the 2D images are 2D images of each corner of the interior space, each 2D image corresponding with one corner of the interior space.
7. The modelling method of claim 1, wherein the positioning module comprises ARCore for generating the camera position and camera orientation for each 2D image.
8. The modelling method of claim 1, wherein the depth map for each 2D image is generated by an encoder-decoder architecture that extracts image features with a pre-trained DenseNet-169.
9. The modelling method of claim 1, wherein the edge map for each 2D image is generated by an encoder-decoder architecture that estimates layout with LayoutNet network.
10. The modelling method of claim 1, wherein the edge map for each 2D image is generated presuming a Manhattan world.
11. The modelling method of claim 1, further comprising identifying the focal length and center coordinates of the camera prior to generating the 3D point cloud for each 2D image.
12. The modelling method of claim 1, wherein coordinates for each pixel in each 3D point cloud are generated by:

Z = D(u,v) / S

X = (u − c_x) * Z / f

Y = (v − c_y) * Z / f

wherein X, Y are coordinates corresponding to a real world, Z is a depth coordinate, D(u,v) is a depth value corresponding to the (u, v) pixel in the depth map, S is a scaling factor of each corresponding 2D image, f is the focal length of the camera, and c_x, c_y are the center coordinates of the camera.
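Under the pinhole model of claim 12, the back-projection can be written in a few lines of NumPy. The function name and the row/column convention (v indexes rows, u indexes columns) are assumptions for illustration:

```python
import numpy as np

def depth_to_point_cloud(depth, f, cx, cy, scale=1.0):
    """Back-project a depth map into a 3D point cloud with the
    pinhole camera model of claim 12.

    depth  : (H, W) array of per-pixel depth values D(u, v)
    f      : focal length of the camera
    cx, cy : center coordinates (principal point) of the camera
    scale  : scaling factor S of the corresponding 2D image
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]   # v indexes rows, u indexes columns
    z = depth / scale           # Z = D(u,v) / S
    x = (u - cx) * z / f        # X = (u - c_x) * Z / f
    y = (v - cy) * z / f        # Y = (v - c_y) * Z / f
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```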
13. The modelling method of claim 1, further comprising:
detecting, using an object detecting module, a presence and a door position of a door in one or more of the 2D images; and generating a door symbol in the door position in the 2D plan of the interior space.
14. The modelling method of claim 13, wherein the generating the door symbol in the door position is carried out using the following equations:
Ratio_D = dist(CBB_I, W_I) / L_WI

dist(CBB_F, W_F) = L_WF * Ratio_D

wherein CBB_I is a centroid of a bounding box of the door in the corresponding 2D image, dist(CBB_I, W_I) is a distance between CBB_I and W_I (wall), L_WI is a distance between two corners of the walls in the corresponding 2D image, Ratio_D is the ratio between dist(CBB_I, W_I) and L_WI, L_WF is a distance between the two corners of the walls in the 2D plan of the interior space, and dist(CBB_F, W_F) is a distance between a centroid of the door symbol (CBB_F) and the wall (W_F) in the 2D plan of the interior space.
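The distance-ratio transfer in claim 14 amounts to one division and one multiplication. A minimal sketch, with illustrative function and argument names:

```python
def door_distance_in_plan(dist_door_to_wall_img, wall_len_img, wall_len_plan):
    """Transfer a detected door position from image space to the 2D plan:
    Ratio_D = dist(CBB_I, W_I) / L_WI, then
    dist(CBB_F, W_F) = L_WF * Ratio_D."""
    ratio_d = dist_door_to_wall_img / wall_len_img
    return wall_len_plan * ratio_d
```

For example, a door centroid halfway along a wall in the image stays halfway along the corresponding wall in the plan, whatever the plan's scale.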
15. The modelling method of claim 1, wherein the interior space is a floor with multiple rooms;
wherein the generating of the boundary lines is for the multiple rooms;
wherein the generating of the 2D plan includes generating respective 2D plans of the multiple rooms and arranging the respective 2D plans on the floor.
16. The modelling method of claim 15, further comprising generating an outer boundary by finding a convex hull for all of the multiple 2D plans.
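Claim 16's outer boundary can be computed with any convex hull routine; Andrew's monotone chain is one self-contained choice, shown here run over 2D corner points. The claim does not specify an algorithm, so this is only one possible realization:

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull over 2D points (tuples).
    Returns hull vertices in counter-clockwise order; interior and
    collinear points are discarded."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints are shared, drop repeats
```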
17. The modelling method of claim 16, further comprising aligning all of the multiple 2D
plans with the generated outer boundary.
18. The modelling method of claim 1, wherein the method is performed by at least one processor.
19. The modelling method of claim 1, further comprising outputting the 2D
plan on a display or on another device.
20. A modelling system comprising:
at least one processor; and memory containing instructions which, when executed by the at least one processor, cause the at least one processor to:
receive two-dimensional (2D) images of corners of an interior space captured by a camera;
generate, using a positioning module, a corresponding camera position and camera orientation in a three-dimensional (3D) coordinate system in the interior space for each 2D
image;
generate a corresponding depth map for each 2D image by using a depth module to estimate depth for each pixel in each 2D image;
generate a corresponding edge map for each 2D image by using an edge module to identify whether each pixel in each 2D image is a wall or an edge;
generate, using a reconstruction module, a 3D point cloud for each 2D image using the corresponding depth map and a focal length and center coordinates of the camera;
transform, using a transformation module, the 3D point clouds with the corresponding edge map into a 2D space in the 3D coordinate system from a perspective of the camera;
regularize, using a regularization module, the 3D point clouds in the 2D space into boundary lines; and generate a 2D plan of the interior space from the boundary lines.
21. The modelling system of claim 20, further comprising a camera configured to capture the 2D images of the interior space.
22. The modelling system of claim 21, wherein the camera is a monocular, RGB camera.
23. The modelling system of claim 21, further comprising a local processor coupled to the camera; and a local memory containing instructions which, when executed by the local processor, cause the local processor to generate the camera position and camera orientation for each 2D
image captured.
24. The modelling system of claim 21, wherein the camera, the at least one processor, and the memory are part of a smart phone.
25. The modelling system of claim 20, further comprising a display for displaying the 2D
plan.
26. A non-transitory memory containing instructions which, when executed by at least one processor, cause the at least one processor to:
receive two-dimensional (2D) images of corners of an interior space captured by a camera;
generate, using a positioning module, a corresponding camera position and camera orientation in a three-dimensional (3D) coordinate system in the interior space for each 2D
image;
generate a corresponding depth map for each 2D image by using a depth module to estimate depth for each pixel in each 2D image;
generate a corresponding edge map for each 2D image by using an edge module to identify whether each pixel in each 2D image is a wall or an edge;
generate, using a reconstruction module, a 3D point cloud for each 2D image using the corresponding depth map and a focal length and center coordinates of the camera;
transform, using a transformation module, the 3D point clouds with the corresponding edge map into a 2D space in the 3D coordinate system from a perspective of the camera;
regularize, using a regularization module, the 3D point clouds in the 2D space into boundary lines; and generate a 2D plan of the interior space from the boundary lines.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3131587A CA3131587A1 (en) | 2021-09-22 | 2021-09-22 | 2d and 3d floor plan generation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3131587A CA3131587A1 (en) | 2021-09-22 | 2021-09-22 | 2d and 3d floor plan generation |
Publications (1)
Publication Number | Publication Date |
---|---|
CA3131587A1 true CA3131587A1 (en) | 2023-03-22 |
Family
ID=85685078
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA3131587A Pending CA3131587A1 (en) | 2021-09-22 | 2021-09-22 | 2d and 3d floor plan generation |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA3131587A1 (en) |
- 2021-09-22: CA CA3131587A patent/CA3131587A1/en, status: active Pending