CN115393469A - House type graph generation method, device, equipment and medium - Google Patents
- Publication number
- CN115393469A (application CN202211003231.9A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- space
- house
- panoramic
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T11/00—2D [Two Dimensional] image generation › G06T11/20—Drawing from basic elements, e.g. lines or circles › G06T11/206—Drawing of charts or graphs
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T3/00—Geometric image transformations in the plane of the image › G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/10—Image acquisition modality › G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The embodiment of the application provides a house type graph generation method, apparatus, device, and medium. By acquiring a panorama and a point cloud plane map corresponding to each space object in a target house, the position information corresponding to the door body and/or window body in each space object and the contour line corresponding to the wall of each space object can be determined. Contour lines in the point cloud plane map that do not meet requirements are then corrected according to the relative pose relationship between the devices that captured the panorama and the point cloud plane map and the relative positional relationship between the space objects in the panorama, and the door bodies and/or window bodies of the space objects in the panorama are mapped onto the point cloud plane map. In this way, the house type graph can be generated automatically, which improves generation efficiency, and detail information such as door bodies and window bodies in the house can be mapped into the house type graph, so that the generated house type graph is richer and more complete and the house information can be understood quickly and accurately from it.
Description
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to a method, an apparatus, a device, and a medium for generating a house layout.
Background
In existing house type graph generation schemes, three-dimensional point cloud data and a panorama corresponding to each room in a house are usually acquired; the three-dimensional point cloud data of the rooms are fused according to the panoramas, that is, the point cloud data of adjacent rooms are stitched together, and a three-dimensional point cloud model of the whole house is obtained from the stitched data. A horizontal cross-section of the resulting three-dimensional point cloud model is then taken to obtain a two-dimensional plan of the whole house.
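As a rough illustration of this prior-art cross-section step, a horizontal slice can be taken by keeping only the points whose height falls inside a thin slab. The function and threshold values below are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def cross_section(points: np.ndarray, z_min: float, z_max: float) -> np.ndarray:
    """Keep the (x, y) coordinates of points whose height lies in [z_min, z_max].

    points: (N, 3) array of fused point-cloud coordinates for the whole house.
    Returns an (M, 2) array that can be rasterized into a 2D plan image.
    """
    mask = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
    return points[mask, :2]

# Example: slice a toy cloud at mid-wall height (1.0 m - 1.2 m).
cloud = np.array([[0.0, 0.0, 0.1],   # floor point, discarded
                  [1.0, 2.0, 1.1],   # wall point, kept
                  [3.0, 1.0, 2.6]])  # ceiling point, discarded
plan = cross_section(cloud, 1.0, 1.2)
print(plan)  # [[1. 2.]]
```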
However, the house type graph generated by existing schemes cannot adequately reflect the detail information in the house, so existing house type graph generation schemes need improvement.
Disclosure of Invention
Various aspects of the present application provide a method, an apparatus, a device, and a medium for generating a house type graph, which map detail information such as the door body and window body of each space object in a house into the house type graph while automatically generating it, so as to obtain a house type graph with richer and more complete content.
The embodiment of the application provides a house type graph generation method, which includes the following steps: acquiring a panorama and a point cloud plane map corresponding to each space object in a target house, wherein the panorama contains the wall, door body and/or window body of each space object, and the point cloud plane map contains the contour line corresponding to the wall of each space object; determining the relative positional relationship between the space objects according to the panoramas corresponding to the space objects; correcting the contour line positions of the corresponding walls in the point cloud plane map according to the relative positional relationship; mapping the position information corresponding to the door body and/or window body of each space object in the panorama onto the corresponding contour line in the point cloud plane map, according to the relative pose relationship between the devices that respectively acquired the panorama and the point cloud plane map, to obtain a point cloud plane map containing door body contours and window body contours; and marking the mapped door body contours and window body contours on the point cloud plane map, and displaying the marked point cloud plane map as the house type graph corresponding to the target house.
In an optional embodiment, acquiring the panorama and the point cloud plane map corresponding to each space object in the target house includes: acquiring panoramic data and point cloud data corresponding to each space object in the target house; performing three-dimensional live-action space rendering on the panoramic data to obtain the panorama corresponding to each space object; generating a three-dimensional point cloud model corresponding to the target house according to the point cloud data; and performing planar projection on the three-dimensional point cloud model to obtain the point cloud plane map corresponding to each space object.
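A minimal sketch of the planar-projection step: the height axis is dropped and the remaining (x, y) coordinates are rasterized into a binary occupancy image. The grid resolution and axis conventions below are assumptions for illustration.

```python
import numpy as np

def project_to_plan(points: np.ndarray, resolution: float = 0.05):
    """Drop the height axis and rasterize the remaining (x, y) coordinates
    into a binary occupancy image representing the point cloud plane map."""
    xy = points[:, :2]
    origin = xy.min(axis=0)                                 # lower-left corner
    idx = np.floor((xy - origin) / resolution).astype(int)  # grid cell per point
    shape = idx.max(axis=0) + 1
    img = np.zeros(shape, dtype=np.uint8)
    img[idx[:, 0], idx[:, 1]] = 1                           # mark occupied cells
    return img, origin

# Three wall points at the same height, 0.1 m apart.
pts = np.array([[0.00, 0.00, 1.4],
                [0.10, 0.00, 1.4],
                [0.00, 0.10, 1.4]])
img, origin = project_to_plan(pts)
```

With a 0.05 m grid the three points occupy three distinct cells of a 3x3 image.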
In an optional embodiment, before performing the planar projection on the three-dimensional point cloud model, the method further includes: determining first panoramic pixel coordinates corresponding to the wall body in each space object according to the panoramic data; determining a first three-dimensional point cloud coordinate corresponding to the first panoramic pixel coordinate in the three-dimensional point cloud model according to the mapping relation between the panoramic pixel coordinate and the three-dimensional point cloud coordinate; and if a first space position without point cloud data exists in the space position corresponding to the first three-dimensional point cloud coordinate in the three-dimensional point cloud model, supplementing point cloud data at the first space position.
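The supplementing step can be sketched as follows: wall coordinates derived from the panorama (the "first three-dimensional point cloud coordinates") are checked against the scanned model, and any coordinate with no nearby scanned point is added. The tolerance value and the brute-force nearest-neighbour search are illustrative assumptions, not the patent's method.

```python
import numpy as np

def supplement_wall_points(model_pts: np.ndarray,
                           wall_pts: np.ndarray,
                           tol: float = 0.05) -> np.ndarray:
    """Append wall coordinates that have no scanned point within `tol`,
    filling 'vacancies' left by occluded walls."""
    # Distance from each panorama-derived wall point to its nearest model point.
    d = np.linalg.norm(wall_pts[:, None, :] - model_pts[None, :, :],
                       axis=2).min(axis=1)
    missing = wall_pts[d > tol]
    return np.vstack([model_pts, missing])

model = np.array([[0.0, 0.0, 0.0]])
wall = np.array([[0.0, 0.0, 0.01],   # already covered by a scanned point
                 [1.0, 1.0, 1.0]])   # occluded wall point, gets added
filled = supplement_wall_points(model, wall)
```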
In an optional embodiment, performing a planar projection on the three-dimensional point cloud model includes: determining target three-dimensional point cloud coordinates corresponding to walls in the space objects in the three-dimensional point cloud model according to the mapping relation between the panoramic pixel coordinates and the three-dimensional point cloud coordinates; and determining a projection contour of the target three-dimensional point cloud coordinate on the point cloud plane map, and taking the projection contour as a contour line corresponding to the wall of each space object on the point cloud plane map.
In an optional embodiment, correcting the contour line position of the corresponding wall in the point cloud plane map according to the relative position relationship includes: determining the corresponding target position of the contour line of each space object on the point cloud plane graph according to the relative position relation; and adjusting the contour lines of the point cloud plane map corresponding to the space objects to the target positions so as to enable the contour lines of the space objects to correspond to the wall positions of the space objects.
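One simple way to realize this correction, offered only as an illustrative sketch rather than the patent's method, is to cluster nearly-equal wall coordinates along each axis and snap every member to the cluster mean, so that the contour lines of adjacent space objects line up:

```python
def snap_values(values, tol=0.1):
    """Cluster 1-D wall coordinates that lie within `tol` of each other and
    snap every member to the cluster mean, so contour lines align."""
    clusters = {}
    for v in sorted(values):
        placed = False
        for key in list(clusters):
            if abs(v - key) <= tol:   # close enough: same wall line
                clusters[key].append(v)
                placed = True
                break
        if not placed:
            clusters[v] = [v]
    out = {}
    for members in clusters.values():
        mean = sum(members) / len(members)
        for v in members:
            out[v] = mean              # every member snaps to the mean
    return out

# Two walls at 0.0 and 0.05 snap together; the wall at 3.0 is untouched.
snapped = snap_values([0.0, 0.05, 3.0], tol=0.1)
```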
In an optional embodiment, mapping position information corresponding to a door body and/or a window body of each spatial object in a panorama onto a corresponding contour line in a point cloud plan according to a relative pose relationship between devices respectively obtaining the panorama and the point cloud plan, includes: mapping the panoramic pixel coordinates corresponding to the door body and/or the window body of each space object in the panoramic image into a spherical space according to the corresponding relation between the panoramic pixel coordinates and the spherical coordinates to obtain corresponding spherical coordinates; mapping the spherical coordinates corresponding to the door body and/or the window body into corresponding three-dimensional point cloud coordinates according to the relative pose relationship between the devices respectively acquiring the panoramic image and the point cloud plane image and the mapping relationship between the spherical coordinates and the three-dimensional point cloud coordinates; and carrying out plane projection on the three-dimensional point cloud coordinates corresponding to the door body and/or the window body, and mapping the outline of the door body and/or the window body to the corresponding outline line in the point cloud plane map.
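The chain of mappings in this step can be sketched as below. The equirectangular pixel-to-sphere convention and the pose parameters (R, t) are assumptions for illustration; the patent does not fix a particular convention.

```python
import numpy as np

def pixel_to_sphere(u, v, width, height):
    """Map an equirectangular panorama pixel to a unit direction vector.
    Convention (an assumption): u spans longitude [-pi, pi), v spans
    latitude from +pi/2 at the top row to -pi/2 at the bottom row."""
    lon = 2.0 * np.pi * (u / width) - np.pi
    lat = np.pi / 2.0 - np.pi * (v / height)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def sphere_to_cloud(direction, depth, R, t):
    """Transform a ray (direction * depth) from the camera frame into the
    point-cloud frame using the relative pose (R, t) between the devices."""
    return R @ (direction * depth) + t

# Centre pixel of a 1000x500 panorama looks straight ahead along +x.
direction = pixel_to_sphere(500, 250, 1000, 500)
point = sphere_to_cloud(direction, depth=2.0, R=np.eye(3), t=np.zeros(3))
```

Planar projection of the resulting three-dimensional coordinates (dropping the height axis, as in the earlier step) then places the door or window contour on the point cloud plane map.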
In an alternative embodiment, the target premises includes an open space therein, the method further comprising: determining target position information corresponding to the open space according to the panoramic image; and generating a corresponding contour line on the point cloud plane map according to the target position information.
The embodiment of the present application further provides a house type graph generation apparatus, including: an acquisition module, configured to acquire a panorama and a point cloud plane map corresponding to each space object in a target house, wherein the panorama contains the wall, door body and/or window body of each space object, and the point cloud plane map contains the contour line corresponding to the wall of each space object; a determination module, configured to determine the relative positional relationship between the space objects according to the panoramas corresponding to the space objects; a correction module, configured to correct the contour line positions of the corresponding walls in the point cloud plane map according to the relative positional relationship; a mapping module, configured to map the position information corresponding to the door body and/or window body of each space object in the panorama onto the corresponding contour line in the point cloud plane map, according to the relative pose relationship between the devices that respectively acquired the panorama and the point cloud plane map, so as to obtain a point cloud plane map containing door body contours and window body contours; and a marking module, configured to mark the mapped door body contours and window body contours on the point cloud plane map, and display the marked point cloud plane map as the house type graph corresponding to the target house.
An embodiment of the present application further provides a computer device, including: a processor and a memory storing a computer program, wherein the processor implements the steps of any of the methods described above when executing the computer program.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the methods described above.
In the embodiment of the application, by acquiring a panorama and a point cloud plane map corresponding to each space object in a target house, the position information corresponding to the door body and/or window body in each space object and the relative positional relationship between the space objects can be determined from the panorama, and the contour line corresponding to the wall of each space object can be determined from the point cloud plane map. Then, according to the relative pose relationship between the devices that captured the panorama and the point cloud plane map of each space object, and the relative positional relationship between the space objects in the panorama, contour lines in the point cloud plane map that do not meet requirements are corrected, the door bodies and/or window bodies of the space objects in the panorama are mapped onto the corresponding contour lines in the point cloud plane map, and the mapped door body contours and window body contours are marked to obtain the house type graph corresponding to the target house. In this way, the house type graph can be generated automatically, which improves generation efficiency, and detail information such as door bodies and window bodies in the house can be mapped into the house type graph, so that the generated house type graph is richer and more complete and the house information can be understood quickly and accurately from it.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a house layout generating method according to an embodiment of the present application;
fig. 2a is a schematic diagram of a three-dimensional point cloud model corresponding to a target house according to an embodiment of the present disclosure;
fig. 2b is a plan view of a point cloud lacking point cloud data according to an embodiment of the present disclosure;
fig. 2c is a point cloud plan after supplementing point cloud data according to an embodiment of the present disclosure;
fig. 2d is a plan view of a point cloud with a contour line according to an embodiment of the present disclosure;
fig. 2e is a plan view of another point cloud with a contour line according to the embodiment of the present disclosure;
FIG. 2f is a partial perspective view of a computer system supporting an editing function according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a house layout generating apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to solve the problem that generated house layout information is not rich enough in an existing house layout generation scheme, an embodiment of the present application provides a house layout generation method, and fig. 1 is a flowchart of the house layout generation method provided by the embodiment of the present application, and as shown in fig. 1, the method includes:
s1, acquiring a panorama and a point Yun Pingmian image corresponding to each space object in a target house, wherein the panorama comprises a wall body, a door body and/or a window body of each space object, and the point cloud plane image comprises a contour line corresponding to the wall body of each space object;
s2, determining the relative position relation between the space objects according to the panoramic image corresponding to the space objects;
s3, correcting the contour line position of the corresponding wall body in the point cloud plane graph according to the relative position relation;
s4, according to the relative pose relation between the devices respectively acquiring the panoramic image and the point cloud plane image, mapping the position information corresponding to the door body and/or the window body of each space object in the panoramic image to the corresponding contour line in the point cloud plane image to obtain a point cloud plane image containing the contour of the door body and the contour of the window body;
and S5, marking the door body outline and the window body outline which are mapped on the point cloud plane map, and taking the marked point cloud plane map as a house type map corresponding to the target house for displaying.
In the embodiment of the present application, the specific manner of obtaining the panorama and the point cloud plane map of the target house is not limited. In an optional example, a panoramic camera may first be used to photograph the target house to obtain the corresponding panorama, and a laser scanning device may then be used to laser-scan the target house to obtain the corresponding point cloud plane map. In another optional embodiment, a panoramic camera and a laser scanning device may be used simultaneously to photograph and laser-scan the target house to obtain the panorama and the point cloud plane map. Further optionally, when the panoramic camera and the laser scanning device perform panoramic photographing and laser scanning of the target house, they may do so according to a preset relative pose relationship; the embodiment of the application does not limit the specific values of this preset relative pose relationship.
In the embodiment of the application, the obtained panorama contains the wall, door body and/or window body of each space object, and the obtained point cloud plane map contains the contour line corresponding to the wall of each space object. On this basis, once the panorama and the point cloud plane map of each space object in the target house are obtained, the wall, door body and/or window body of each space object can be determined from the panorama, and the contour line corresponding to the wall of each space object can be determined from the point cloud plane map. When the laser scanning device scans each space object, point clouds may be missing or misidentified, so the point cloud plane map may contain contour lines that do not match the actual walls. In this case, the relative positional relationship between the space objects can be determined according to the panoramas corresponding to the space objects, and the contour line positions of the corresponding walls in the point cloud plane map can be corrected according to this relationship, so that the contour lines of the space objects on the point cloud plane map correspond to the wall positions of the target house. For example, two adjacent bedrooms of the same size may exist in the target house, yet the contour lines corresponding to their walls on the point cloud plane map are not aligned; or a contour line corresponding to a wall corner on the point cloud plane map may not form a right angle.
To address such problems, when correcting the contour line positions of the corresponding walls in the point cloud plane map, the target position of the contour line of each space object on the point cloud plane map can be determined according to the relative positional relationship of the space objects in the panorama; the contour lines of the space objects on the point cloud plane map are then adjusted to these target positions, so that they correspond to the wall positions of the target house.
Further, in order to obtain a house type graph with richer information, the position information corresponding to the door body and/or the window body of each space object in the panoramic graph can be mapped onto the corresponding contour line in the point cloud plane graph according to the relative pose relationship between the devices respectively obtaining the panoramic graph and the point cloud plane graph, so as to obtain the point cloud plane graph containing the door body contour and the window body contour; and marking the door body outline and the window body outline which are mapped on the point cloud plane map, and taking the marked point cloud plane map as a house type map corresponding to the target house. Through the method, the house type graph can be automatically generated, the generation efficiency of the house type graph is improved, and the detail information such as door bodies, windows and the like in the house can be mapped into the house type graph, so that the generated house type graph information is richer and more complete, and the house information can be rapidly and accurately known through the house type graph.
The specific details of each step of the house layout generation method will be described in detail below.
In the embodiment of the present application, a specific manner of generating a panoramic view and a point cloud plan of each spatial object is not limited, and optionally, the panoramic view and the point cloud plan corresponding to each spatial object may be directly generated in the process of shooting and scanning each spatial object by the panoramic device and the laser scanning device. In another optional embodiment, panoramic data and point cloud data corresponding to each spatial object may also be obtained first, and then three-dimensional live-action space rendering is performed on the panoramic data to obtain a panoramic image corresponding to each spatial object; generating a three-dimensional point cloud model corresponding to the target house according to the point cloud data; and further, carrying out plane projection on the three-dimensional point cloud model to obtain a point cloud plane graph corresponding to each space object. Further optionally, when the point cloud plan corresponding to each space object is generated, a contour line of each space object on the point cloud plan can be generated according to the point cloud data corresponding to the wall of each space object, and the contour line is used for identifying the house type corresponding to the target house.
Because the target house may contain furniture and other objects, when the laser scanning device laser-scans each space object there may be blind areas where the wall cannot be scanned, or point cloud data that is misidentified. Therefore, before the three-dimensional point cloud model is projected onto a plane, the erroneous point cloud data can be corrected with the help of the panorama corresponding to each space object.
The following describes an example of an error condition that may exist in point cloud data with reference to the drawings.
Fig. 2a is a three-dimensional point cloud model obtained by scanning each space object with a laser scanning device. As shown in fig. 2a, the overall contour corresponding to the wall of each space object can be identified from the three-dimensional point cloud model, but the model cannot completely reflect the real information in each space object; that is, the point cloud data obtained by the laser scanning device may be incomplete. For example, when each space object is scanned, a part of a wall blocked by furniture such as a table, chair, wardrobe, or bed may not be scanned. When the three-dimensional point cloud model is generated from such point cloud data, the spatial positions corresponding to that part of the wall may have no point cloud data, or point cloud data belonging to the scanned furniture may be mistaken for point cloud data of the wall, making the point cloud data erroneous and the generated three-dimensional point cloud model inaccurate. As a result, the point cloud plane map projected from the three-dimensional point cloud model may not match the true house type structure of the space objects. For example, point cloud data in the model that should be projected onto the point cloud plane map is not projected; or a spatial position without point cloud data in the model nevertheless has a corresponding projection area on the point cloud plane map; or the contour lines on the point cloud plane map do not match the projection contour of the spatial positions corresponding to the point cloud data, and so on.
Possible error cases are described in detail below with reference to the accompanying drawings.
Case 1: projection information of 'missing' point cloud data on point cloud plane graph。
Fig. 2b is a point cloud plane map obtained by performing planar projection on the three-dimensional point cloud model. As shown in fig. 2b, at least one vacant part exists in the point cloud plane map, which does not match the actual house type; the vacant parts in fig. 2b may arise because the laser scanning device could not obtain point cloud data for walls blocked by other objects. When the three-dimensional point cloud model is generated from the scanned point cloud data, a "vacancy" therefore appears, and the point cloud plane map projected from the model inherits it. To handle this situation, before generating the contour line of each space object on the point cloud plane map from the point cloud data of its wall, the first panoramic pixel coordinates corresponding to the wall of each space object can be determined from its panorama; the first three-dimensional point cloud coordinates corresponding to those pixel coordinates in the three-dimensional point cloud model are determined from the mapping relationship between panoramic pixel coordinates and three-dimensional point cloud coordinates; and it is then checked whether point cloud data exists at the spatial positions corresponding to the first three-dimensional point cloud coordinates. If a first spatial position without point cloud data exists among them, point cloud data is supplemented at that position in the three-dimensional point cloud model, and the supplemented point cloud data is projected onto the point cloud plane map to correct the wall contour there.
Based on the above, other 'vacant' portions of the wall corresponding to each space object in the three-dimensional point cloud model can be corrected, and after point cloud data of the wall corresponding to each space object in the three-dimensional point cloud model is supplemented, a contour line of each space object on the point cloud plane map is generated according to the point cloud data corresponding to the wall of each space object. Optionally, when generating a contour line of each space object on the point cloud plane graph, determining point cloud data conforming to the shape of the wall, that is, first point cloud data corresponding to the wall of each space object, according to the three-dimensional point cloud coordinates corresponding to each point cloud data in the three-dimensional point cloud model; and generating a contour line of each space object on the point cloud plane drawing according to the projection contour of the first point cloud data on the point cloud plane drawing.
Fig. 2c is a point cloud plan obtained by projection from the three-dimensional point cloud model after the point cloud data has been supplemented. As shown in fig. 2c, after supplementation the projected point cloud plan matches the actual house layout. On this basis, contour lines corresponding to the walls of the space objects can be generated on the point cloud plan; as shown in fig. 2d, these contour lines clearly reflect the layout of the target house. As can be seen from the point cloud plan in fig. 2d, the target house includes three space objects: the space object corresponding to the upper rectangle, and the space objects corresponding to the two lower rectangles on the left and right. Because the laser scanning device scans different space objects from different point locations, and the acquired point cloud data contains slight errors, the contour lines obtained by projecting the point cloud data of walls between adjacent space objects onto the point cloud plan may be inaccurate. For example, as shown in fig. 2d, the upper rectangular space object and the two lower rectangular space objects are adjacent, and there should be only one wall between them.
However, the point cloud plan in fig. 2d shows separate contour lines between the upper rectangular space object and the two lower rectangular space objects, which does not match the actual situation. Therefore, before generating the contour line of each space object on the point cloud plan from the projection contour of the first point cloud data corresponding to its wall, the target projection contour of the first point cloud data of each space object on the point cloud plan may be determined; if the target projection contours of the space objects correspond to the position of the same wall, the position of each target projection contour is adjusted according to the relative positional relationship of the space objects so that the target projection contours are aligned, and the contour line of each space object on the point cloud plan is then generated from the aligned target projection contours.
The embodiment of the present application does not limit the specific manner of correcting the position of the contour line of the corresponding wall in the point cloud plan according to the relative positional relationship of the space objects. Optionally, the position of each target projection contour can be adjusted automatically according to the relative positional relationship of the space objects when the three-dimensional point cloud model is projected. In another optional embodiment, the point cloud plan provided in the embodiment of the present application supports an editing function: for a target contour line that does not correspond to a wall position, a worker may fine-tune its form and/or position according to actual requirements so that the adjusted target contour line corresponds to the wall position of each space object. On this basis, in response to an adjustment operation performed by the worker on a target contour line on the point cloud plan, the form and/or position of the target contour line is adjusted so that it corresponds to the wall position, and the contour line corresponding to the wall position of each space object is generated from the point cloud data at the adjusted projection contour position. Fig. 2e is a schematic diagram of contour lines generated from the adjusted point cloud data of each space object; as shown in fig. 2e, after adjustment the wall separating the upper rectangular space object from the lower rectangular space objects corresponds to a single contour line, which meets the actual requirements.
Case 2: "redundant" point cloud data projection information exists on the point cloud plan.
As noted above, there may be errors when the laser scanning device scans each space object to acquire its point cloud data, so when the corresponding three-dimensional point cloud model is generated from the acquired data, a spatial position that is not a wall may be identified as a wall; that is, the generated three-dimensional point cloud model may contain "redundant" point cloud data. Therefore, in addition to the above embodiment, which corrects wall positions that were not scanned by supplementing point cloud data, the embodiment of the present application also provides a correction by point cloud data deletion for the case in which the laser scanning device identifies a non-wall spatial position as a wall.
Optionally, when the point cloud data is corrected for the case where a non-wall spatial position is identified as a wall, a second panoramic pixel coordinate corresponding to non-wall regions in each space object may be determined from the panorama, and a second three-dimensional point cloud coordinate corresponding to the second panoramic pixel coordinate in the three-dimensional point cloud model may be determined from the mapping relationship between panoramic pixel coordinates and three-dimensional point cloud coordinates, in order to determine whether point cloud data in wall form exists at the spatial position corresponding to the second three-dimensional point cloud coordinate. If point cloud data in wall form exists at that spatial position, the point cloud form at the second spatial position corresponding to the wall-form point cloud data is adjusted, and the contour line obtained by projecting the wall-form point cloud data onto the point cloud plan is deleted.
The embodiment of the present application does not limit the specific manner of adjusting the point cloud form at the second spatial position. Optionally, the three-dimensional point cloud coordinates corresponding to the second spatial position may be adjusted directly, for example from coordinates corresponding to a wall form to coordinates corresponding to a ground form; that is, the point cloud data originally in wall form at the second spatial position is deleted, thereby correcting the point cloud data. In another optional embodiment, the worker may determine, from the spatial position of the corresponding wall in the panorama, whether wall-form point cloud data should exist at the second spatial position in the three-dimensional point cloud model, and directly delete the contour line corresponding to the projection of the second spatial position on the point cloud plan when it should not.
Further, when the worker deletes the contour line corresponding to the projection of the second spatial position on the point cloud plan, the three-dimensional point cloud coordinates of the corresponding point cloud data can be adjusted to coordinates corresponding to the ground form, based on the projection relationship between the two-dimensional coordinates of the deleted contour line and the three-dimensional point cloud coordinates; that is, the point cloud data originally in wall form at the second spatial position is deleted, so that the point cloud data is corrected.
Case 3: projection information of point cloud data with 'error' on point cloud plane map。
In the embodiment of the present application, in addition to supplementing "vacant" portions and deleting "redundant" portions as in the above embodiments, correction of the point cloud data in the three-dimensional point cloud model also covers the adjustment of "erroneous" portions. An "erroneous" portion means that point cloud data exists at the corresponding spatial position in the three-dimensional point cloud model, but that data does not meet the actual requirements. For example, if a wall in a space object is straight but the corresponding point cloud form in the three-dimensional point cloud model is curved, the point cloud data corresponding to the wall is erroneous; likewise, if the point cloud form corresponding to the wall is straight but its contour line is curved, the point cloud data corresponding to the wall contour is erroneous. In such cases the point cloud data corresponding to the "erroneous" portion needs to be corrected. Taking as an example the situation where a target contour line on the point cloud plan does not correspond to the wall position of a space object, the worker can adjust the target contour line on the point cloud plan.
In an optional embodiment, the worker can select a target contour line that does not correspond to the wall contour position on the point cloud plan and move it. In response to the selection of the target contour line, the initial position of the selected target contour line on the point cloud plan can be determined; further, in response to a moving operation performed on the selected target contour line, the target position at the moment the moving operation terminates can be acquired, and the selected target contour line is moved from the initial position to the target position to adjust its position. Here, the moving operation moves the point cloud data corresponding to at least one end of the contour line, together with the point cloud data between the two ends, from the initial position to the target position.
In another optional embodiment, the worker may also adjust the form of a target contour line whose point cloud form does not correspond to the wall form. In response to a form adjustment operation performed on any selected target contour line, the initial form of the adjusted part of the target contour line is acquired, the target form of the adjusted part when the adjustment operation is completed is acquired, the adjusted part is changed from the initial form to the target form, and the overall form of the selected target contour line after adjustment is taken as its corresponding form. The form adjustment operation moves the point cloud data between the two ends of the contour line without moving the point cloud data at the two ends themselves, for example adjusting an originally curved contour line into a straight one, but is not limited thereto.
In another optional embodiment, if the contour lines generated on the point cloud plan from the wall point cloud data in the three-dimensional point cloud model are "missing", that is, no contour line exists at the projection position on the point cloud plan corresponding to the spatial position of a wall in the three-dimensional point cloud model, the worker may add a contour line on the point cloud plan at the missing position. Optionally, the worker may select, as a target position, a wall position in the target house that has no corresponding contour line on the point cloud plan, and perform a contour line adding operation for that target position; in response to the contour line adding operation, a contour line is added at the corresponding target position on the point cloud plan. The embodiment of the present application does not limit the specific manner of adding a contour line at the target position. Optionally, a contour line may be displayed directly on the point cloud plan to supplement the missing contour line once the target position selected by the worker is obtained and the adding operation is performed; alternatively, the worker may select a contour line component from a toolbar for editing the point cloud plan and drag it onto the point cloud plan to supplement the missing contour line.
Further optionally, when the new contour line is displayed automatically on the point cloud plan, the embodiment of the present application does not limit its initial position: it may be displayed at any position within a preset range of the target position, or directly at the target position. When it is displayed directly at the target position, both ends of the newly added contour line may be connected automatically to the ends of existing contour lines on the point cloud plan. Optionally, the worker can first drag the selected contour line component to any position on the point cloud plan, and then move and/or adjust the form of the dragged component to place it at the target position. In another optional embodiment, both ends of the contour line component may have a snapping function: when the worker moves the contour line onto the point cloud plan and within a preset range of an existing contour line, the end of the component nearest the existing contour line's end is connected to it automatically, which helps improve accuracy when adding contour lines.
Based on the above, after the point cloud data is corrected, the three-dimensional point cloud model may be projected onto a plane to obtain the corresponding point cloud plan, which contains contour lines corresponding to the walls of the space objects. Optionally, the panoramic pixel coordinates corresponding to the wall of each space object may be determined from the panorama of that space object; the target three-dimensional point cloud coordinates corresponding to the wall in the three-dimensional point cloud model are then determined from the mapping relationship between panoramic pixel coordinates and three-dimensional point cloud coordinates, and the projection contour of those target coordinates on the point cloud plan is taken as the contour of the wall of each space object on the point cloud plan. However, since the doors, windows, and other objects of each space object are installed in the walls, the doors and windows are projected onto the point cloud plan along with the walls when the wall point cloud data is projected. The door and window outlines therefore cannot be distinguished from the wall contour lines alone, so once the contour lines corresponding to the walls are obtained, the contours corresponding to the doors and/or windows of each space object need to be marked on the point cloud plan.
The embodiment of the present application does not limit the specific manner of marking the contours corresponding to the doors and/or windows of the space objects on the point cloud plan. Optionally, the position information corresponding to the door and/or window of each space object may be determined from its panorama, so as to determine the contour line on the point cloud plan corresponding to the wall to which the door and/or window belongs, and the contour corresponding to the door and/or window is marked on the determined contour line. However, the panorama may not conform to the actual dimensions of each space object; for example, wall lines, door frames, and window frames that are straight in the space object may be rendered as curves in the panorama. To solve this problem, the panorama provided in the embodiment of the present application supports a function for correcting the contours of the wall lines, door frames, and window frames of each space object.
The embodiment of the present application does not limit the specific manner of correcting the contours of wall lines, door frames, and window frames. Optionally, when the panorama is generated, these contours may be corrected from the panoramic data of the same wall line, door frame, or window frame photographed at different point locations. In another optional embodiment, the panorama supports an editing function, and the worker can select and adjust unsatisfactory wall lines, door frames, and window frames in the panorama to obtain a satisfactory panorama. On this basis, if a target door and/or target window in the panorama has a horizontal width that does not meet the requirements, the worker may select it in the panorama and perform a horizontal width adjustment operation on it. In response to that operation, the horizontal width of the target door and/or target window is adjusted, so that its position information is mapped onto the corresponding contour line in the point cloud plan according to the adjusted horizontal width.
Optionally, when the position information corresponding to the target door and/or target window is mapped onto the corresponding contour line in the point cloud plan, the panoramic pixel coordinates corresponding to the door and/or window of each space object may first be identified from its panorama. The panoramic pixel coordinates of the door and/or window are then mapped into a spherical space according to the correspondence between panoramic pixel coordinates and spherical coordinates, yielding the corresponding spherical coordinates. The spherical coordinates of the door and/or window are next mapped into the corresponding three-dimensional point cloud coordinates according to the relative pose relationship between the devices that respectively acquired the panorama and the point cloud plan, and the mapping relationship between spherical coordinates and three-dimensional point cloud coordinates. Finally, the three-dimensional point cloud coordinates corresponding to the doors and/or windows of the space objects are projected onto the plane, so that their contours are mapped onto the corresponding contour lines in the point cloud plan.
Next, the detailed process of mapping the position information corresponding to the door and window onto the corresponding contour line in the point cloud plan is described.
In the embodiment of the present application, the pixel coordinate at the upper left corner of the panorama is taken as the origin. Assuming that the width and height of the panorama are W and H respectively, and the pixel coordinate of each pixel point is Pixel(x, y), the longitude Lon and latitude Lat of the spherical coordinate mapped from each panoramic pixel coordinate are:
Lon=(x/W-0.5)*360;
Lat=(0.5-y/H)*180;
Further, the origin O1(0,0,0) of the spherical coordinate system is established. Assuming that the radius of the spherical coordinate system is R, the spherical coordinates (X, Y, Z) mapped from each panoramic pixel coordinate are:
X=R*cos(Lon)*cos(Lat);
Y=R*sin(Lat);
Z=R*sin(Lon)*cos(Lat);
Further, when the laser scanning device scans the door and the window, the mapping from the spherical coordinate system to the three-dimensional point cloud coordinate system is carried out according to the relation P = Q(X + x0, Y + y0, Z + z0), in which the spherical coordinates undergo a rotation and a translation; here (x0, y0, z0) is the origin O2 of the three-dimensional point cloud coordinate system, rotationY is the rotation angle of the laser scanning device about the Y axis of the world coordinate system, and Q is the quaternion obtained from a system quaternion function for that rotation.
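The pixel-to-sphere-to-point-cloud chain above can be sketched as follows. This is a hedged illustration: the formulas for Lon, Lat, and (X, Y, Z) are taken directly from the text, while the quaternion Q is replaced here by an explicit rotation about the Y axis, and all function names are assumptions.

```python
import math

def pixel_to_sphere(x, y, W, H, R=1.0):
    """Panoramic pixel (origin at top-left) -> spherical (X, Y, Z), per the formulas above."""
    lon = math.radians((x / W - 0.5) * 360.0)
    lat = math.radians((0.5 - y / H) * 180.0)
    return (R * math.cos(lon) * math.cos(lat),
            R * math.sin(lat),
            R * math.sin(lon) * math.cos(lat))

def sphere_to_point_cloud(p, origin, rotation_y_deg):
    """Spherical -> three-dimensional point cloud coordinates: rotate by rotationY
    about the Y axis (standing in for the quaternion Q) and translate by the
    point cloud origin O2 = (x0, y0, z0)."""
    a = math.radians(rotation_y_deg)
    X, Y, Z = p
    rX = X * math.cos(a) + Z * math.sin(a)
    rZ = -X * math.sin(a) + Z * math.cos(a)
    x0, y0, z0 = origin
    return (rX + x0, Y + y0, rZ + z0)
```

With zero rotation and a zero origin, the image centre (x = W/2, y = H/2) maps to Lon = Lat = 0 and hence to the point (R, 0, 0).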
Optionally, when determining the three-dimensional point cloud coordinates corresponding to a door contour or window contour, the set of spherical coordinates mapped from the three-dimensional point cloud coordinates of the wall contour in the model may be used as a reference coordinate set; the intersection point of the ray from the origin O1 through the point P in the spherical coordinate system with any spherical coordinate in the reference coordinate set is determined, and the three-dimensional point cloud coordinate corresponding to that intersection point is taken as the coordinate of the door or window contour. Of course, the spherical coordinates corresponding to known objects in each space object may also be used as the reference coordinate set; for example, with the spherical coordinates corresponding to the ground as the reference set, the intersection point of the ray from the origin O1 through the point P with the plane of the ground can be determined, and the three-dimensional point cloud coordinate corresponding to that intersection point is taken as the coordinate of the door or window contour.
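The ground-plane variant of this intersection can be sketched directly (a hypothetical illustration; the plane height and the names are assumptions): the ray from the spherical origin O1 through P is scaled until its Y component reaches the ground height.

```python
def ray_ground_intersection(p_sphere, ground_y):
    """Intersect the ray from the spherical origin O1 = (0, 0, 0) through
    p_sphere with the horizontal ground plane Y = ground_y.
    Returns None when the ray is parallel to the plane or points away from it."""
    X, Y, Z = p_sphere
    if Y == 0.0:
        return None          # ray parallel to the ground plane
    t = ground_y / Y         # scale factor along the ray
    if t <= 0.0:
        return None          # the plane lies behind the ray
    return (t * X, ground_y, t * Z)
```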
Through the above mapping process, the panoramic pixel coordinates corresponding to the door and window contours in each space object can be mapped into the three-dimensional point cloud model. However, when photographing and scanning the space objects, the panoramic camera and the laser scanning device may photograph and scan multiple times in different space objects, or photograph and scan the same position from different point locations. In the conversion from panoramic pixel coordinates to three-dimensional point cloud coordinates, different mapping results may therefore be obtained for the same position, for example the same door contour. For this reason, the embodiment of the present application also provides a reverse mapping from three-dimensional point cloud coordinates to panoramic pixel coordinates, in order to verify the result of the preceding forward mapping and ensure the accuracy of the mapping result.
Based on the above, when mapping from three-dimensional point cloud coordinates back to spherical coordinates, the spherical coordinate P(X, Y, Z) corresponding to a three-dimensional point cloud coordinate can be determined using the following formula, where (x0, y0, z0) is the origin O2 of the three-dimensional point cloud coordinate system:
P=-Q(X-x0,Y-y0,Z-z0);
Further, the panoramic pixel coordinate (x, y) corresponding to the spherical coordinate P(X, Y, Z) can be determined by the following formulas:
Lon=Atan2(Z,X);
Lat=Asin(Y/R);
x=(Lon/360+0.5)*W;
y=(0.5-Lat/180)*H;
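The reverse mapping above exactly inverts the forward pixel-to-longitude/latitude formulas given earlier, which allows a round-trip check against known directions — a minimal sketch (the function name is an assumption):

```python
import math

def sphere_to_pixel(X, Y, Z, W, H, R=1.0):
    """Spherical (X, Y, Z) -> panoramic pixel (x, y), per the reverse formulas above."""
    lon = math.degrees(math.atan2(Z, X))
    lat = math.degrees(math.asin(Y / R))
    x = (lon / 360.0 + 0.5) * W
    y = (0.5 - lat / 180.0) * H
    return x, y
```

For example, the direction (R, 0, 0) has Lon = Lat = 0 and maps back to the image centre (W/2, H/2).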
It should be noted that determining the positions of the door and window contours in each space object in the embodiments of the present application is not limited to the foregoing manner. Because doors and windows are both installed in the walls of the space objects, under the same coordinate system the spatial position corresponding to any wall contains the spatial positions of the doors and/or windows installed in it. On this basis, in the embodiment of the present application, the panoramic pixel coordinates corresponding to the door and window contours of each space object may be mapped to spherical coordinates, and the three-dimensional point cloud coordinates corresponding to the walls in the three-dimensional point cloud model may likewise be mapped to spherical coordinates; by comparing the two mapping results in the spherical coordinate system, the target spatial positions with the same mapping result can be determined. The corresponding three-dimensional point cloud coordinates can then be determined from the wall and the door or window corresponding to the target spatial position.
Further optionally, if the door and window contours in the panorama do not meet the actual requirements, as shown in fig. 2f, the worker may also adjust them manually. Since the door and window contours correspond, after projection, to line segments on the point cloud plan whose lengths are the horizontal widths of the door and window, the manual adjustment performed on the door and window contours in the panorama is a horizontal width adjustment of the door and window. Further, in response to an adjustment operation performed on the door and window contours in the panorama, the corresponding three-dimensional point cloud coordinates can be updated synchronously from the panoramic pixel coordinates of the adjusted horizontal widths, so that a point cloud plan containing the door and/or window contours is obtained after the two-dimensional plane projection of the three-dimensional point cloud model.
In the embodiment of the present application, the target house may further include an open space. Besides mapping the panoramic pixel coordinates of the door and window contours into the three-dimensional point cloud model, the embodiment of the present application also supports mapping the panoramic pixel coordinates corresponding to the contour of an open space in the target house into the three-dimensional point cloud model, so that a contour line marking the open space contour is generated on the point cloud plan obtained by plane projection of the model.
Optionally, when determining the three-dimensional point cloud coordinates corresponding to the contour of the open space, a third panoramic pixel coordinate corresponding to the open space in the target house is determined from the panorama; a third three-dimensional point cloud coordinate corresponding to the third panoramic pixel coordinate in the model is then determined from the mapping relationship between panoramic pixel coordinates and three-dimensional point cloud coordinates, and when the third three-dimensional point cloud coordinate is projected onto the plane, a first projection area corresponding to it on the point cloud plan is determined. Of course, this is not limiting: the worker may also determine the spatial position of the open space from the panorama and directly select the first projection area corresponding to the open space on the point cloud plan based on the projection relationship. On this basis, the worker can perform a first marking operation on the first projection area to mark the contour corresponding to the open space on the point cloud plan; in response to the first marking operation on the first projection area, the contour corresponding to the open space is generated on the point cloud plan according to the position at which the first marking operation was performed.
In the embodiment of the present application, before the house layout drawing is generated, in order that the layout of the target house and the function of each space object can be understood clearly and intuitively from the drawing, the embodiment of the present application further supports marking the type of each space object on the point cloud plan, so as to obtain a house layout drawing with more complete and richer information. Optionally, when mapping the type of each space object onto the point cloud plan, a fourth panoramic pixel coordinate corresponding to each space object may be determined from the panorama; a fourth three-dimensional point cloud coordinate corresponding to the fourth panoramic pixel coordinate in the model is then determined from the mapping relationship between panoramic pixel coordinates and three-dimensional point cloud coordinates, together with a second projection area corresponding to the fourth three-dimensional point cloud coordinate on the point cloud plan. On this basis, the worker can perform a second marking operation on the second projection area to mark the type of each space object on the point cloud plan; in response to the second marking operation, the type corresponding to each space object is marked on the point cloud plan.
Based on the above, once the corrected and marked point cloud plan is obtained, the point cloud data corresponding to it can be updated and output to the model making device, so that the model making device can produce the three-dimensional model and the house layout model corresponding to the target house from the updated point cloud data, for application in online scenarios such as house leasing, buying and selling, and house decoration.
In the embodiment of the application, by acquiring the panorama and the point cloud plan corresponding to each space object in the target house, the position information of the door body and/or window body in each space object and the relative position relationship between the space objects can be determined from the panorama, and the contour line corresponding to the wall of each space object can be determined from the point cloud plan. Then, according to the relative pose relationship between the capture devices that took the panorama and the point cloud plan of each space object, and the relative position relationship between the space objects in the panorama, contour lines in the point cloud plan that do not meet the requirements are corrected, the door bodies and/or windows of the space objects in the panorama are mapped onto the corresponding contour lines in the point cloud plan, and the mapped door and window contour lines are marked to obtain the house type map of the target house. In this way, the house type map can be generated automatically, which improves generation efficiency; moreover, detail information such as doors and windows can be mapped into the house type map, so that the generated map carries richer and more complete information and the house can be understood quickly and accurately from it.
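The overall flow summarized above can be sketched as a small orchestration function. Every name here is hypothetical; each placeholder function stands in for one stage of the described embodiment (contour correction, pose-based mapping of doors and windows):

```python
def generate_house_type_map(panoramas, point_cloud_plan, poses):
    """Hypothetical sketch of the pipeline: correct wall contours, then
    map door/window positions from each panorama onto the plan.

    panoramas: {room: {"doors": [...], "windows": [...], "neighbors": [...]}}
    point_cloud_plan: {room: contour polyline}
    poses: {room: relative pose between panorama and point cloud capture}
    """
    plan = dict(point_cloud_plan)              # start from the wall contours
    for room, pano in panoramas.items():
        # 1) correct the contour using relative room positions from the panorama
        contour = correct_contour(plan[room], pano["neighbors"])
        # 2) map door/window positions onto the corrected contour via the pose
        plan[room] = {
            "walls": contour,
            "doors": [map_to_plan(d, poses[room]) for d in pano["doors"]],
            "windows": [map_to_plan(w, poses[room]) for w in pano["windows"]],
        }
    return plan                                # the marked plan is the house type map

def correct_contour(contour, neighbors):       # placeholder correction stage
    return contour

def map_to_plan(feature, pose):                # placeholder pose-based mapping stage
    return feature
```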
It should be noted that the execution subject of each step of the methods provided in the above embodiments may be the same device, or different devices may execute different steps. For example, the execution subject of steps S1 to S5 may be device A; alternatively, the execution subject of step S1 may be device A, and the execution subject of steps S2 to S5 may be device B; and so on.
In addition, some of the flows described in the above embodiments and drawings contain operations that appear in a specific order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein, or in parallel. Sequence numbers such as S1 and S2 merely distinguish the operations and do not by themselves represent any execution order, and the flows may include more or fewer operations, executed sequentially or in parallel. It should also be noted that descriptions such as "first" and "second" herein are used to distinguish different messages, devices, modules, and the like; they neither represent a sequential order nor require that "first" and "second" be of different types.
Based on the foregoing, an embodiment of the present application further provides a house type map generation apparatus, which may be implemented as a virtual apparatus, such as an application, in a Communication Controller Unit (CCU). As shown in fig. 3, the apparatus includes: an acquisition module 301, a determination module 302, a correction module 303, a mapping module 304, and a marking module 305. The acquisition module 301 is configured to acquire a panorama and a point cloud plan corresponding to each space object in a target house, where the panorama includes a wall, a door body, and/or a window of each space object, and the point cloud plan includes a contour line corresponding to the wall of each space object; the determining module 302 is configured to determine the relative position relationship between the space objects according to the panorama corresponding to each space object; the correction module 303 is configured to correct the contour line position of the corresponding wall in the point cloud plan according to the relative position relationship; the mapping module 304 is configured to map, according to the relative pose relationship between the devices that respectively acquired the panorama and the point cloud plan, the position information corresponding to the door body and/or window body of each space object in the panorama onto the corresponding contour line in the point cloud plan, to obtain a point cloud plan including door contours and window contours; and the marking module 305 is configured to mark the door contours and window contours mapped onto the point cloud plan, and display the marked point cloud plan as the house type map corresponding to the target house.
In an optional embodiment, the obtaining module 301, when obtaining the panorama and the point cloud plan corresponding to each spatial object in the target house, is configured to: acquiring panoramic data and point cloud data corresponding to each space object in a target house; rendering the panoramic data in a three-dimensional live-action space to obtain a panoramic image corresponding to each space object; generating a three-dimensional point cloud model corresponding to the target house according to the point cloud data; the determining module 302 is configured to perform plane projection on the three-dimensional point cloud model to obtain a point cloud plane map corresponding to each space object.
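A minimal illustration of the plane-projection step described above (the patent does not specify a projection convention; assuming y is the vertical axis, so the x-z plane becomes the point cloud plan): drop the height coordinate and rasterize the remaining coordinates into plan cells:

```python
import numpy as np

def project_point_cloud(points, cell=0.1):
    """Orthographic top-down projection of an (N, 3) point cloud.
    Returns the set of occupied square plan cells, a crude stand-in
    for the point cloud plan of a space object."""
    xz = points[:, [0, 2]]                     # discard the height axis (y)
    cells = np.floor(xz / cell).astype(int)    # quantize to grid cells
    return {tuple(c) for c in cells}
```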
In an alternative embodiment, the determining module 302 is further configured to, before performing the planar projection on the three-dimensional point cloud model: determining a first panoramic pixel coordinate corresponding to a wall body in each space object according to the panoramic data; determining a first three-dimensional point cloud coordinate corresponding to the first panoramic pixel coordinate in the three-dimensional point cloud model according to the mapping relation between the panoramic pixel coordinate and the three-dimensional point cloud coordinate; and if the first space position without the point cloud data exists in the space position corresponding to the first three-dimensional point cloud coordinate in the three-dimensional point cloud model, supplementing the point cloud data at the first space position.
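The patent leaves the fill strategy for the first spatial position open. One plausible sketch is linear interpolation between the nearest populated wall samples (an assumption: missing samples are interior, with known samples on both sides along the wall):

```python
import numpy as np

def fill_missing_wall_points(samples):
    """samples: list of 3D wall points or None, ordered along the wall.
    Missing entries (None) are filled by linearly interpolating between
    the nearest known neighbors -- a hypothetical hole-filling strategy
    for supplementing point cloud data at the first spatial position."""
    pts = list(samples)
    known = [i for i, p in enumerate(pts) if p is not None]
    for i, p in enumerate(pts):
        if p is None:
            left = max(j for j in known if j < i)
            right = min(j for j in known if j > i)
            t = (i - left) / (right - left)    # interpolation weight
            pts[i] = tuple((1 - t) * np.array(pts[left]) + t * np.array(pts[right]))
    return pts
```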
In an alternative embodiment, the determining module 302, when performing the planar projection on the three-dimensional point cloud model, is configured to: determining target three-dimensional point cloud coordinates corresponding to walls in each space object in the three-dimensional point cloud model according to the mapping relation between the panoramic pixel coordinates and the three-dimensional point cloud coordinates; and determining a projection contour of the target three-dimensional point cloud coordinate on the point cloud plane graph, and taking the projection contour as a corresponding contour line of the wall of each space object on the point cloud plane graph.
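How the projection contour is extracted from the projected wall points is likewise unspecified. As a stand-in, the convex hull of the 2D projections (Andrew's monotone chain, self-contained below) yields a closed wall outline for convex rooms; concave rooms would need something like an alpha shape instead:

```python
def projection_contour(points_2d):
    """Convex hull of projected wall points via Andrew's monotone chain.
    Adequate for convex room footprints; an illustrative stand-in for
    the projection contour on the point cloud plan."""
    pts = sorted(set(points_2d))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means clockwise or collinear
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]             # counter-clockwise outline
```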
In an optional embodiment, when the correcting module 303 corrects the position of the contour line of the corresponding wall in the point cloud plane map according to the relative position relationship, it is configured to: determining the corresponding target position of the contour line of each space object on the point cloud plane graph according to the relative position relation; and adjusting the contour lines of the point cloud plane map corresponding to the space objects to the target positions so as to enable the contour lines of the space objects to correspond to the wall positions of the space objects.
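The contour adjustment can be sketched as a pure translation: shift every vertex by the offset between the contour's centroid and the target position derived from the relative position relationship (a simplifying assumption; the embodiment may also involve rotation):

```python
def move_contour_to(contour, target):
    """Translate a space object's contour so its centroid lands on the
    target plan position, aligning the contour with the actual wall."""
    n = len(contour)
    cx = sum(x for x, _ in contour) / n
    cy = sum(y for _, y in contour) / n
    dx, dy = target[0] - cx, target[1] - cy    # required offset
    return [(x + dx, y + dy) for x, y in contour]
```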
In an optional embodiment, when mapping the position information corresponding to the door body and/or the window body of each spatial object in the panorama to the corresponding contour line in the point cloud plane map according to the relative pose relationship between the devices that respectively obtain the panorama and the point cloud plane map, the mapping module 304 is configured to: mapping the panoramic pixel coordinates corresponding to the door body and/or the window body of each space object in the panoramic image into a spherical space according to the corresponding relation between the panoramic pixel coordinates and the spherical coordinates to obtain corresponding spherical coordinates; mapping the spherical coordinates corresponding to the door body and/or the window body into corresponding three-dimensional point cloud coordinates according to the relative pose relationship between the devices respectively acquiring the panoramic image and the point cloud plane image and the mapping relationship between the spherical coordinates and the three-dimensional point cloud coordinates; and carrying out plane projection on the three-dimensional point cloud coordinates corresponding to the door body and/or the window body, and mapping the outline of the door body and/or the window body to a corresponding outline line in the point cloud plane graph.
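The chain of mappings in this step (panoramic pixel → spherical coordinate → three-dimensional point cloud coordinate → plane projection) can be written out explicitly for an equirectangular panorama. The per-pixel depth and the 4×4 relative pose matrix are assumed inputs, and the axis conventions (y up) are illustrative:

```python
import numpy as np

def pano_pixel_to_plan(u, v, width, height, depth, pose):
    """Map a door/window pixel in an equirectangular panorama to the plan.

    1) pixel -> spherical: longitude/latitude from normalized pixel coords
    2) spherical -> camera-frame 3D point at the given depth
    3) camera frame -> point cloud frame via the relative pose (4x4 matrix)
    4) plane projection: keep x and z
    """
    theta = 2.0 * np.pi * (u / width) - np.pi          # longitude
    phi = np.pi * (v / height) - np.pi / 2.0           # latitude
    ray = np.array([np.cos(phi) * np.sin(theta),
                    np.sin(phi),
                    np.cos(phi) * np.cos(theta)])      # unit direction on the sphere
    cam = depth * ray                                  # 3D point, camera frame
    world = (pose @ np.append(cam, 1.0))[:3]           # point cloud frame
    return world[[0, 2]]                               # projection onto the plan
```

Applying this per door/window corner pixel traces the feature's outline onto the corresponding wall contour line in the point cloud plan.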
In an alternative embodiment, the target premises includes an open space therein, and the mapping module 304 is further configured to: determining target position information corresponding to the open space according to the panoramic image; and generating a corresponding contour line on the point cloud plane graph according to the target position information.
It should be noted that, for specific functions and implementation processes of each module in the apparatus, reference may be made to the method embodiment described above, and details are not described herein again.
An embodiment of the present application further provides a computer device. Fig. 4 is a schematic structural diagram of the computer device; as shown in fig. 4, the computer device includes a processor 41 and a memory 42 storing a computer program. There may be one or more processors 41 and one or more memories 42.
The memory 42 is mainly used for storing computer programs, which can be executed by the processor 41, so that the processor 41 controls the computer device to realize corresponding functions and complete corresponding actions or tasks. In addition to storing computer programs, the memory 42 may be configured to store other various data to support operations on the computer device. Examples of such data include instructions for any application or method operating on a computer device.
The memory 42 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
In the embodiment of the present application, the implementation form of the processor 41 is not limited; it may be, for example, but not limited to, a CPU, a GPU, or an MCU. The processor 41 may be regarded as the control system of the computer device and may be configured to execute the computer program stored in the memory 42 to control the computer device to implement corresponding functions and complete corresponding actions or tasks. It should be noted that, depending on the implementation form and scenario of the computer device, the functions, actions, or tasks to be implemented may differ; accordingly, the computer program stored in the memory 42 may vary, and by executing different computer programs the processor 41 may control the computer device to implement different functions and complete different actions or tasks.
In some alternative embodiments, as shown in fig. 4, the computer device may further include: a display 43, a power supply component 44, and a communication component 45. Only some components are shown schematically in fig. 4, which does not mean that the computer device includes only those components; depending on application requirements, it may include other components. For example, where voice interaction is needed, as shown in fig. 4, the computer device may further include an audio component 46. Which components the computer device includes may depend on its product form and is not limited herein.
In the embodiment of the present application, the display 43 is configured to display a graphical user interface on which a three-dimensional live-action space corresponding to a multi-floor house is displayed; the three-dimensional live-action space includes at least one space object in each floor space, and adjacent floor spaces are connected to each other.
In the embodiment of the present application, when executing the computer program in the memory 42, the processor 41 is configured to: acquire a panorama and a point cloud plan corresponding to each space object in a target house, where the panorama includes a wall, a door body, and/or a window body of each space object, and the point cloud plan includes a contour line corresponding to the wall of each space object; determine the relative position relationship between the space objects according to the panorama corresponding to each space object; correct the contour line position of the corresponding wall in the point cloud plan according to the relative position relationship; map, according to the relative pose relationship between the devices that respectively acquired the panorama and the point cloud plan, the position information corresponding to the door body and/or window body of each space object in the panorama onto the corresponding contour line in the point cloud plan, to obtain a point cloud plan containing door contours and window contours; and mark the door contours and window contours mapped onto the point cloud plan, and display the marked point cloud plan as the house type map corresponding to the target house.
In an alternative embodiment, the processor 41, when obtaining the panorama and the point cloud plan corresponding to each spatial object in the target house, is configured to: acquiring panoramic data and point cloud data corresponding to each space object in a target house; rendering the panoramic data in a three-dimensional live-action space to obtain a panoramic image corresponding to each space object; generating a three-dimensional point cloud model corresponding to the target house according to the point cloud data; and carrying out plane projection on the three-dimensional point cloud model to obtain a point cloud plane graph corresponding to each space object.
In an alternative embodiment, the processor 41 is further configured to, before performing the planar projection on the three-dimensional point cloud model: determining a first panoramic pixel coordinate corresponding to a wall body in each space object according to the panoramic data; determining a first three-dimensional point cloud coordinate corresponding to the first panoramic pixel coordinate in the three-dimensional point cloud model according to the mapping relation between the panoramic pixel coordinate and the three-dimensional point cloud coordinate; and if the first space position without the point cloud data exists in the space position corresponding to the first three-dimensional point cloud coordinate in the three-dimensional point cloud model, supplementing the point cloud data at the first space position.
In an alternative embodiment, the processor 41, when performing the planar projection on the three-dimensional point cloud model, is configured to: determining target three-dimensional point cloud coordinates corresponding to wall bodies in all space objects in the three-dimensional point cloud model according to the mapping relation between the panoramic pixel coordinates and the three-dimensional point cloud coordinates; and determining a projection contour of the target three-dimensional point cloud coordinate on the point cloud plane graph, and taking the projection contour as a corresponding contour line of the wall of each space object on the point cloud plane graph.
In an alternative embodiment, the processor 41, when correcting the position of the contour line of the corresponding wall in the point cloud plan according to the relative position relationship, is configured to: determining the corresponding target position of the contour line of each space object on the point cloud plane graph according to the relative position relation; and adjusting the contour lines of the point cloud plane map corresponding to the space objects to the target positions so as to enable the contour lines of the space objects to correspond to the wall positions of the space objects.
In an optional embodiment, when the processor 41 maps the position information corresponding to the door body and/or the window body of each spatial object in the panorama onto the corresponding contour line in the point cloud plane map according to the relative pose relationship between the devices that respectively acquire the panorama and the point cloud plane map, the processor is configured to: mapping the panoramic pixel coordinates corresponding to the door body and/or the window body of each space object in the panoramic image into a spherical space according to the corresponding relation between the panoramic pixel coordinates and the spherical coordinates to obtain corresponding spherical coordinates; mapping the spherical coordinates corresponding to the door body and/or the window body into corresponding three-dimensional point cloud coordinates according to the relative pose relationship between the devices respectively acquiring the panoramic image and the point cloud plane image and the mapping relationship between the spherical coordinates and the three-dimensional point cloud coordinates; and carrying out plane projection on the three-dimensional point cloud coordinates corresponding to the door body and/or the window body, and mapping the outline of the door body and/or the window body to a corresponding outline line in the point cloud plane graph.
In an alternative embodiment, the target premises includes an open space therein, and processor 41 is further configured to: determining target position information corresponding to the open space according to the panoramic image; and generating a corresponding contour line on the point cloud plane graph according to the target position information.
It should be noted that, for specific functions of the processor in the computer device, reference may be made to the method embodiments described above, and details are not described herein again.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program which, when executed, implements the steps executable by the computer device in the foregoing method embodiments.
The communication component in the above embodiments is configured to facilitate wired or wireless communication between the device in which it is located and other devices. That device can access a wireless network based on a communication standard, such as WiFi, a mobile communication network such as 2G, 3G, 4G/LTE, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display in the above embodiments includes a screen, which may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation.
The power supply assembly of the above embodiments provides power to various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component in the above embodiments may be configured to output and/or input audio signals. For example, the audio component includes a microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory or transmitted via the communication component. In some embodiments, the audio component further includes a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (10)
1. A house layout generating method is characterized by comprising the following steps:
acquiring a panorama and a point cloud plane map corresponding to each space object in a target house, wherein the panorama comprises a wall body, a door body, and/or a window body of each space object, and the point cloud plane map comprises a contour line corresponding to the wall body of each space object;
determining the relative position relation between the space objects according to the panoramic image corresponding to the space objects;
correcting the contour line position of the corresponding wall in the point cloud plane map according to the relative position relation;
according to the relative pose relationship between the devices respectively acquiring the panoramic image and the point cloud plane image, mapping the position information corresponding to the door body and/or the window body of each space object in the panoramic image to the corresponding contour line in the point cloud plane image to obtain the point cloud plane image containing the door body contour and the window body contour;
and marking the door body outline and the window body outline which are mapped on the point cloud plane map, and taking the marked point cloud plane map as a house type map corresponding to the target house for displaying.
2. The method of claim 1, wherein acquiring a panorama and a point cloud plane map corresponding to each space object in the target house comprises:
acquiring panoramic data and point cloud data corresponding to each space object in a target house;
performing three-dimensional live-action space rendering on the panoramic data to obtain a panoramic image corresponding to each space object; and
generating a three-dimensional point cloud model corresponding to the target house according to the point cloud data;
and carrying out plane projection on the three-dimensional point cloud model to obtain a point cloud plane graph corresponding to each space object.
3. The method of claim 2, further comprising, prior to the planar projecting the three-dimensional point cloud model:
determining first panoramic pixel coordinates corresponding to the wall in each space object according to the panoramic data;
determining a first three-dimensional point cloud coordinate corresponding to the first panoramic pixel coordinate in the three-dimensional point cloud model according to the mapping relation between the panoramic pixel coordinate and the three-dimensional point cloud coordinate;
and if a first space position without point cloud data exists in the space position corresponding to the first three-dimensional point cloud coordinate in the three-dimensional point cloud model, supplementing point cloud data at the first space position.
4. The method of claim 2, wherein the planar projecting the three-dimensional point cloud model comprises:
determining target three-dimensional point cloud coordinates corresponding to walls in the space objects in the three-dimensional point cloud model according to the mapping relation between the panoramic pixel coordinates and the three-dimensional point cloud coordinates;
and determining a projection contour of the target three-dimensional point cloud coordinate on the point cloud plane map, and taking the projection contour as a contour line corresponding to the wall of each space object on the point cloud plane map.
5. The method of claim 1, wherein correcting the position of the contour line of the corresponding wall in the point cloud plane map according to the relative position relationship comprises:
determining the corresponding target position of the contour line of each space object on the point cloud plane graph according to the relative position relation;
and adjusting the contour lines of the point cloud plane map corresponding to the space objects to the target positions so as to enable the contour lines of the space objects to correspond to the wall positions of the space objects.
6. The method according to claim 1, wherein mapping position information corresponding to a door body and/or a window body of each space object in the panorama onto a corresponding contour line in the point cloud plan according to a relative pose relationship between devices respectively acquiring the panorama and the point cloud plan comprises:
mapping the panoramic pixel coordinates corresponding to the door body and/or the window body of each space object in the panoramic image into a spherical space according to the corresponding relation between the panoramic pixel coordinates and the spherical coordinates to obtain corresponding spherical coordinates;
mapping the spherical coordinates corresponding to the door body and/or the window body into corresponding three-dimensional point cloud coordinates according to the relative pose relationship between the devices respectively acquiring the panoramic image and the point cloud plane image and the mapping relationship between the spherical coordinates and the three-dimensional point cloud coordinates;
and carrying out plane projection on the three-dimensional point cloud coordinates corresponding to the door body and/or the window body, and mapping the outline of the door body and/or the window body to the corresponding outline line in the point cloud plane map.
7. The method of any one of claims 1-6, wherein the target premises includes an open space therein, the method further comprising:
determining target position information corresponding to the open space according to the panoramic image;
and generating a corresponding contour line on the point cloud plane map according to the target position information.
8. A house layout generating apparatus, comprising:
the acquisition module is used for acquiring a panoramic image and a point cloud plane image corresponding to each space object in a target house, wherein the panoramic image comprises a wall body, a door body and/or a window body of each space object, and the point cloud plane image comprises a contour line corresponding to the wall body of each space object;
the determining module is used for determining the relative position relation among the space objects according to the panoramic image corresponding to the space objects;
the correction module is used for correcting the contour line position of the corresponding wall body in the point cloud plane map according to the relative position relation;
the mapping module is used for mapping the position information corresponding to the door body and/or the window body of each space object in the panoramic image to the corresponding contour line in the point cloud plane image according to the relative pose relationship between the devices respectively acquiring the panoramic image and the point cloud plane image to obtain the point cloud plane image comprising the door body contour and the window body contour;
and the marking module is used for marking the door body outline and the window body outline which are mapped on the point cloud plane map, and taking the marked point cloud plane map as a house type map corresponding to the target house for displaying.
9. A computer device, comprising a processor and a memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-7.
10. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic terminal device, enable the electronic terminal device to perform the steps of the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211003231.9A CN115393469A (en) | 2022-08-19 | 2022-08-19 | House type graph generation method, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115393469A true CN115393469A (en) | 2022-11-25 |
Family
ID=84120648
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211003231.9A Pending CN115393469A (en) | 2022-08-19 | 2022-08-19 | House type graph generation method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115393469A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117496181A (en) * | 2023-11-17 | 2024-02-02 | 杭州中房信息科技有限公司 | OpenCV-based house type graph identification method, storage medium and equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6951595B2 (en) | Housing data collection and model generation methods | |
US11704833B2 (en) | Monocular vision tracking method, apparatus and non-transitory computer-readable storage medium | |
CN115393467A (en) | House type graph generation method, device, equipment and medium | |
US9495802B2 (en) | Position identification method and system | |
JP6879891B2 (en) | Methods and systems for completing point clouds using plane segments | |
US9710971B2 (en) | Information processing device, position designation method and storage medium | |
US10872467B2 (en) | Method for data collection and model generation of house | |
US9390488B2 (en) | Guiding method and information processing apparatus | |
US8023727B2 (en) | Environment map generating apparatus, environment map generating method, and environment map generating program | |
CN114663618B (en) | Three-dimensional reconstruction and correction method, device, equipment and storage medium | |
JP6500355B2 (en) | Display device, display program, and display method | |
US9996947B2 (en) | Monitoring apparatus and monitoring method | |
WO2021035891A1 (en) | Augmented reality technology-based projection method and projection device | |
CN115330966A (en) | Method, system, device and storage medium for generating house type graph | |
CN114494487B (en) | House type graph generation method, device and storage medium based on panorama semantic stitching | |
US20220130064A1 (en) | Feature Determination, Measurement, and Virtualization From 2-D Image Capture | |
CN114972579B (en) | House type graph construction method, device, equipment and storage medium | |
CN115375860B (en) | Point cloud splicing method, device, equipment and storage medium | |
KR101875047B1 (en) | System and method for 3d modelling using photogrammetry | |
JP2023546739A (en) | Methods, apparatus, and systems for generating three-dimensional models of scenes | |
CN115393469A (en) | House type graph generation method, device, equipment and medium | |
CN115330652A (en) | Point cloud splicing method and device and storage medium | |
JP2022507714A (en) | Surveying sampling point planning method, equipment, control terminal and storage medium | |
CN115439576B (en) | House pattern generation method, device, equipment and medium for terminal equipment | |
WO2017024954A1 (en) | Method and device for image display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||